title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
TypeError when running Pythonshell on my Mac | 39,248,573 | <p>I run a python script in node.js using python shell.<br>
On my Windows system everything runs fine.<br>
But running the same thing on my Macbook gets me this error:</p>
<pre><code>Error: TypeError: can't multiply sequence by non-int of type 'float'
at PythonShell.parseError (/Users/johannes/Dropbox/Javascript Projects/DNA Assembler/Algorithmus_Visualisierung/node_modules/python-shell/index.js:183:17)
at terminateIfNeeded (/Users/johannes/Dropbox/Javascript Projects/DNA Assembler/Algorithmus_Visualisierung/node_modules/python-shell/index.js:98:28)
at ChildProcess.<anonymous> (/Users/johannes/Dropbox/Javascript Projects/DNA Assembler/Algorithmus_Visualisierung/node_modules/python-shell/index.js:88:9)
at emitTwo (events.js:87:13)
at ChildProcess.emit (events.js:172:7)
at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
----- Python Traceback -----
File "Assembler.py", line 224, in <module>
reads, n, nUnique, nComplements = readReads(path, verbose = True)
File "Assembler.py", line 20, in readReads
reads = random.sample(reads, round(len(reads) * subset))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/random.py", line 326, in sample
result = [None] * k
</code></pre>
<p>Does someone have an idea what might be going wrong here?<br>
I'm looking for a solution that does not involve changing a lot in my .py, because everything runs fine on Windows. </p>
| 0 | 2016-08-31T11:36:29Z | 39,316,147 | <p>Solved by changing a float value to an int.</p>
<p>(Though it would be interesting to know why this only occurred on the Mac.)</p>
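<p>A likely explanation (an assumption based on the traceback, which shows Python 2.7 on the Mac): on Python 2, <code>round()</code> returns a <code>float</code>, and <code>random.sample</code> needs an integer sample size, so <code>result = [None] * k</code> fails with exactly this <code>TypeError</code>. On Python 3, which the Windows machine may have been running, <code>round()</code> already returns an <code>int</code>. Casting explicitly works on both:</p>

```python
import random

# Illustrative stand-ins for the variables in Assembler.py.
reads = list(range(100))
subset = 0.5

# int(round(...)) works on Python 2, where round() returns a float,
# and on Python 3, where round() already returns an int.
k = int(round(len(reads) * subset))
sample = random.sample(reads, k)
```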
| 0 | 2016-09-04T10:53:59Z | [
"javascript",
"python",
"node.js",
"osx",
"typeerror"
] |
Kivy Python "Invalid Indentation (too many levels)? | 39,248,740 | <p>When I take the label "id:meaning" out of the code, it runs fine. I can't see where the indentation is incorrect. Can anyone help, please?</p>
<p><a href="http://i.stack.imgur.com/tjTDQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/tjTDQ.png" alt="ERROR MESSAGE"></a></p>
<p><a href="http://i.stack.imgur.com/Qo0rG.png" rel="nofollow"><img src="http://i.stack.imgur.com/Qo0rG.png" alt="SECTION OF CODE WITH ERROR"></a></p>
<p><strong>Solved:</strong> Click the Format tab in IDLE, then click Toggle Tabs. It fixed the spacing issue.</p>
| 0 | 2016-08-31T11:43:55Z | 39,248,866 | <p>Delete all the spaces and line feeds before and after lines 44, 45 and 46, and be sure to insert the right indentation characters.</p>
<p>Sometimes these errors are not clear enough: it happens that the problem is actually in the line before or after the one indicated.</p>
| 1 | 2016-08-31T11:49:37Z | [
"python",
"kivy",
"kivy-language"
] |
Kivy Python "Invalid Indentation (too many levels)? | 39,248,740 | <p>When I take the label "id:meaning" out of the code, it runs fine. I can't see where the indentation is incorrect. Can anyone help, please?</p>
<p><a href="http://i.stack.imgur.com/tjTDQ.png" rel="nofollow"><img src="http://i.stack.imgur.com/tjTDQ.png" alt="ERROR MESSAGE"></a></p>
<p><a href="http://i.stack.imgur.com/Qo0rG.png" rel="nofollow"><img src="http://i.stack.imgur.com/Qo0rG.png" alt="SECTION OF CODE WITH ERROR"></a></p>
<p><strong>Solved:</strong> Click the Format tab in IDLE, then click Toggle Tabs. It fixed the spacing issue.</p>
| 0 | 2016-08-31T11:43:55Z | 39,248,875 | <p>Python (and the kv language) needs consistent indentation; the convention is 4 spaces, and tabs and spaces must never be mixed.</p>
<p>Try to trim excess spaces and tabs from the ends of lines, and remove empty lines at the end of the file. Also ensure the last line ends with a newline.</p>
| 1 | 2016-08-31T11:49:58Z | [
"python",
"kivy",
"kivy-language"
] |
Django REST custom authentication | 39,248,825 | <p>Here's my issue, guys. I have a store user who can log in to his site and view all kinds of data related to his store. In his store he has, let's say, 10 Android billing devices that send bills to him. Now I need to create a custom authentication in Django REST framework that will authenticate the Android devices with their <code>id_number</code> and <code>type</code> fields (not the store user's username & password), assign a token to each of them individually, and allow them to POST, GET, PUT etc. Is that possible?</p>
<pre><code>class Store(models.Model):
    user = models.ForeignKey(User, default=1)

class AndroidDevice(models.Model):
    id_number = models.CharField()
    type = models.CharField()
    store = models.ForeignKey(Store)
</code></pre>
| 2 | 2016-08-31T11:48:08Z | 39,254,144 | <p>yes... it is possible</p>
<p><strong>Note:</strong> use a <a href="http://www.django-rest-framework.org/api-guide/authentication/#custom-authentication" rel="nofollow">custom authentication</a> class and create your own token model that is related to your AndroidDevice model instead of the User model, or whatever else you need (multiple tokens, tokens that expire, etc.). Read the DRF source code to get inspired :)</p>
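<p>For illustration, a minimal sketch of what such a class could look like (the <code>DeviceToken</code> model, its fields, and the <code>X-Device-Token</code> header are assumptions for this sketch, not part of DRF):</p>

```python
# Hedged sketch only: DeviceToken is a hypothetical model with `key` and
# `device` (ForeignKey to AndroidDevice) fields; adapt names to your schema.
from rest_framework import authentication, exceptions

class DeviceTokenAuthentication(authentication.BaseAuthentication):
    def authenticate(self, request):
        token = request.META.get('HTTP_X_DEVICE_TOKEN')
        if not token:
            return None  # no header: let other authenticators run
        try:
            device_token = DeviceToken.objects.select_related('device').get(key=token)
        except DeviceToken.DoesNotExist:
            raise exceptions.AuthenticationFailed('Invalid device token')
        # DRF expects a (user, auth) pair; the store owner stands in as the user
        return (device_token.device.store.user, device_token)
```

<p>Hooked into <code>authentication_classes</code> on your views (or <code>DEFAULT_AUTHENTICATION_CLASSES</code> in settings), this would let each device authenticate individually and then POST, GET, PUT, etc.</p>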
<p>Hope this helps</p>
| 0 | 2016-08-31T15:57:57Z | [
"python",
"django",
"authentication",
"django-rest-framework",
"custom-authentication"
] |
Correct format for IMAP append command in Python? (Yahoo mail) | 39,248,898 | <p>The following Python function works for outlook, gmail and my shared hosting exim server but when sending mail through yahoo.com it returns this error:</p>
<pre><code>APPEND command error: BAD ['[CLIENTBUG] Additional arguments found after last expected argument']. Data: FHDJ4 APPEND inbox.sent "31-Aug-2016 12:30:45 +0100" {155}
</code></pre>
<p>For comparison, outlook returns:</p>
<pre><code>('OK', ['[APPENDUID 105 2] APPEND completed.'])
</code></pre>
<p>Gmail returns:</p>
<pre><code> ('OK', ['[APPENDUID 14 2] (Success)'])
</code></pre>
<p>and Exim returns:</p>
<pre><code>('OK', ['[APPENDUID 1472211409 44] Append completed (0.788 + 0.076 secs).'])
</code></pre>
<p>My function uses imaplib2, the arguments passed to it are all strings, and self.username is the sending email address as address@domain.com</p>
<p>My function is:</p>
<pre><code>def send_mail(self, to_addrs, subject, msgtext, verbose=False):
# build message to send
msg = email.message.Message()
msg.set_unixfrom('pymotw')
msg['From'] = self.username
msg['To'] = to_addrs
msg['Subject'] = subject
msg.set_payload(msgtext)
if verbose: print("Sending Mail:\n ", msg)
# connect and send message
server = self.connect_smtp()
server.ehlo()
server.login(self.username, self.password)
server.sendmail(self.username, to_addrs, str(msg))
server.quit()
print("Saving mail to sent")
sentbox_connection = self.imap_connection
print(sentbox_connection.select('inbox.sent'))
print(sentbox_connection.append('inbox.sent', None, imaplib2.Time2Internaldate(time.time()) , str(msg)))
</code></pre>
<p>I've tried generating the msg variable with this line instead:</p>
<pre><code> msg = "From: %s\r\n" % self.username + "To: %s\r\n" % to_addrs + "Subject: %s\r\n" % subject + "\r\n" + msgtext
</code></pre>
<p>and appending the message using "" instead of None like so:</p>
<pre><code> print(sentbox_connection.append('inbox.sent', None, imaplib2.Time2Internaldate(time.time()) , str(msg)))
</code></pre>
<p>Can you tell me what I'm doing wrong? Or if Yahoo has a specific way of handling append commands that I need to account for?</p>
<p>Edit: To clarify, sending the mail works OK for all smtp servers, but appending the sent mail to inbox.sent fails for yahoo</p>
| 0 | 2016-08-31T11:51:09Z | 39,252,246 | <p>I've resolved this. I noticed the message text did not end with CRLF. Other mail servers were appending this server-side and accepting the command; Yahoo does not. The below now works.</p>
<p>I've amended the message payload line to:</p>
<pre><code>msg.set_payload("%s \r\n" % msgtext)  # Yahoo is strict with CRLF at end of IMAP command
</code></pre>
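<p>The same idea as a tiny self-contained helper (the function name is made up for illustration; it is independent of any mail server):</p>

```python
# Hypothetical helper illustrating the fix: normalise the payload so it
# always ends with CRLF before handing the message to IMAP APPEND.
def ensure_crlf_terminated(text):
    """Append CRLF if the message text doesn't already end with it."""
    return text if text.endswith("\r\n") else text + "\r\n"

body = ensure_crlf_terminated("Hello world")
```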
| 0 | 2016-08-31T14:23:04Z | [
"python",
"python-3.x",
"yahoo-mail"
] |
importing simpy in python - unnamed module error | 39,248,925 | <p>I am trying to import <code>simpy</code> package in python, however I get the unnamed module error. I am on a Mac OSX and have <code>anaconda</code> installed. I installed it using <code>pip install simpy</code> command. These outputs could also be helpful:</p>
<pre><code>$ which python
//anaconda/bin/python
$ pip list
simpy (3.0.10)
$ conda list
simpy 3.0.10 <pip>
</code></pre>
<p>(If relevant, I am using PyCharm as IDE)</p>
| 0 | 2016-08-31T11:52:19Z | 39,249,070 | <p>The error could be caused by your project configuration in PyCharm pointing to a different Python interpreter, maybe Python 3 or a virtualenv. </p>
<p>Check it by going through your project settings.</p>
<p>Make sure you are using the same interpreter in PyCharm as the one where you installed your module. </p>
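<p>One quick way to confirm which interpreter is actually running (these attributes are standard library, no assumptions needed):</p>

```python
import sys

# Run this in PyCharm's Python console: the printed path should match the
# interpreter where `pip install simpy` placed the package
# (//anaconda/bin/python in the question).
print(sys.executable)
print(sys.path)  # simpy's site-packages directory must appear here
```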
| 3 | 2016-08-31T11:59:31Z | [
"python",
"python-3.x",
"anaconda",
"python-import",
"simpy"
] |
Writing variable sized arrays to Pandas cells | 39,249,030 | <p>I have a large data set and I want to do a convolution calculation using multiple rows that match a criteria. I need to calculate a vector for each row first, and I thought it would be more efficient to store my vector in a dataframe column so I could try and avoid a for loop when I do the convolution. Trouble is, the vectors are variable length and I can't figure out how to do it.</p>
<p>Here's a summary of my data:</p>
<pre><code>Date State Alloc P
2012-01-01 AK 3 0.5
2012-01-01 AL 4 0.3
…
</code></pre>
<p>Each state has a different Alloc and P value. There's a row for every date and state, and my dataframe is over 15,000 rows long.</p>
<p>For each entry, I want a vector that looks like this:</p>
<pre><code>[P, np.zeros(Alloc), 1-P]
</code></pre>
<p>I can't figure out how to set a new column like this. I've tried statements like:</p>
<pre><code>df['Test'] = [df['P'], np.zeros(df['Alloc']), 1 - df['P']]
</code></pre>
<p>but they don't work.</p>
<p>Does anyone have any ideas?</p>
<p>Thanks ☺</p>
| 2 | 2016-08-31T11:57:15Z | 39,251,861 | <p>Try:</p>
<pre><code>def get_vec(x):
return [x.P] + np.zeros(x['Alloc']).tolist() + [1 - x.P]
df.apply(get_vec, axis=1)
0 [0.5, 0.0, 0.0, 0.0, 0.5]
1 [0.3, 0.0, 0.0, 0.0, 0.0, 0.7]
dtype: object
</code></pre>
<hr>
<pre><code>df['Test'] = df.apply(get_vec, axis=1)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/DDE45.png" rel="nofollow"><img src="http://i.stack.imgur.com/DDE45.png" alt="enter image description here"></a></p>
| 2 | 2016-08-31T14:05:31Z | [
"python",
"pandas"
] |
Writing variable sized arrays to Pandas cells | 39,249,030 | <p>I have a large data set and I want to do a convolution calculation using multiple rows that match a criteria. I need to calculate a vector for each row first, and I thought it would be more efficient to store my vector in a dataframe column so I could try and avoid a for loop when I do the convolution. Trouble is, the vectors are variable length and I can't figure out how to do it.</p>
<p>Here's a summary of my data:</p>
<pre><code>Date State Alloc P
2012-01-01 AK 3 0.5
2012-01-01 AL 4 0.3
…
</code></pre>
<p>Each state has a different Alloc and P value. There's a row for every date and state, and my dataframe is over 15,000 rows long.</p>
<p>For each entry, I want a vector that looks like this:</p>
<pre><code>[P, np.zeros(Alloc), 1-P]
</code></pre>
<p>I can't figure out how to set a new column like this. I've tried statements like:</p>
<pre><code>df['Test'] = [df['P'], np.zeros(df['Alloc']), 1 - df['P']]
</code></pre>
<p>but they don't work.</p>
<p>Does anyone have any ideas?</p>
<p>Thanks ☺</p>
| 2 | 2016-08-31T11:57:15Z | 39,391,032 | <p>So here's the answer. <a href="http://stackoverflow.com/users/2336654/pirsquared">piRSquared</a> was almost right, but not quite. There are several parts here.</p>
<p>The apply method partially works. It passes a row to the function and you can do a calculation as shown above. The problem is, you get a "ValueError: Shape of passed values is..." error message. The number of columns returned doesn't match the number of columns in the dataframe. My guess is this is because the return value is a list and Pandas isn't interpreting the result correctly. </p>
<p>The workaround is to do the apply on a single column. This single column should contain the P value and Alloc value. Here are the steps:</p>
<p>Create the merged column:</p>
<pre><code>df['temp'] = df[['P','Alloc']].values.tolist()
</code></pre>
<p>Write a function:</p>
<pre><code>def array_p(x): return [x[0]] + [0]*int(x[1]) + [1 - x[0]]
</code></pre>
<p>(int is needed because the previous line gives floats. I didn't need np.zeros)</p>
<p>Apply the function: </p>
<pre><code>df['Array'] = df['temp'].apply(array_p)
</code></pre>
<p>This works, but obviously involves more steps than it should. If anyone can provide a better answer, I'd love to hear it.</p>
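<p>Putting the three steps together as a self-contained sketch (with illustrative data standing in for the real 15,000-row frame):</p>

```python
import pandas as pd

# Illustrative stand-in for the real dataframe.
df = pd.DataFrame({'P': [0.5, 0.3], 'Alloc': [3, 4]})

# Step 1: merge P and Alloc into a single column of [P, Alloc] pairs.
df['temp'] = df[['P', 'Alloc']].values.tolist()

# Step 2: build [P, 0, ..., 0, 1 - P]; int() is needed because
# .values coerced Alloc to float.
def array_p(x):
    return [x[0]] + [0] * int(x[1]) + [1 - x[0]]

# Step 3: apply over the merged column.
df['Array'] = df['temp'].apply(array_p)
```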
| 0 | 2016-09-08T12:32:32Z | [
"python",
"pandas"
] |
Exit a multiprocessing script | 39,249,050 | <p>I am trying to exit a multiprocessing script when an error is thrown by the target function, but instead of quitting, the parent process just hangs.</p>
<p>This is the test script I use to replicate the problem:</p>
<pre><code>#!/usr/bin/python3.5
import time, multiprocessing as mp
def myWait(wait, resultQueue):
startedAt = time.strftime("%H:%M:%S", time.localtime())
time.sleep(wait)
endedAt = time.strftime("%H:%M:%S", time.localtime())
name = mp.current_process().name
resultQueue.put((name, wait, startedAt, endedAt))
# queue initialisation
resultQueue = mp.Queue()
# process creation arg: (process number, sleep time, queue)
proc = [
mp.Process(target=myWait, name = ' _One_', args=(2, resultQueue,)),
mp.Process(target=myWait, name = ' _Two_', args=(2, resultQueue,))
]
# starting processes
for p in proc:
p.start()
for p in proc:
p.join()
# print results
results = {}
for p in proc:
name, wait, startedAt, endedAt = resultQueue.get()
print('Process %s started at %s wait %s ended at %s' % (name, startedAt, wait, endedAt))
</code></pre>
<p>This works perfectly, I can see the parent script spawning two child processes in <code>htop</code> but when I want to force the parent script to exit if an error is thrown in the <code>myWait</code> target function the parent process just hangs and doesn't even spawn any child process. I have to <code>ctrl-c</code> to kill it.</p>
<pre><code>def myWait(wait, resultQueue):
try:
# do something wrong
except:
raise SystemExit
</code></pre>
<p>I have tried every way to exit the function (e.g. <code>exit()</code>, <code>sys.exit()</code>, <code>os._exit()</code>...) to no avail.</p>
| 5 | 2016-08-31T11:58:37Z | 39,253,759 | <p>Firstly, your code has a major issue: you're trying to join the processes before flushing the content of the queues, if any, which can result in a deadlock. See the section titled 'Joining processes that use queues' here: <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming" rel="nofollow">https://docs.python.org/3/library/multiprocessing.html#multiprocessing-programming</a></p>
<p>Secondly, the call to <code>resultQueue.get()</code> will block until it receives some data, which never happens
if an exception is raised from the <code>myWait</code> function and no data has been pushed into the queue before then. So make it non-blocking, and have it check for data in a loop until it finally receives something or the process turns out to have died.</p>
<p>Here's a quick'n'dirty fix to give you the idea:</p>
<pre><code>#!/usr/bin/python3.5
import multiprocessing as mp
import queue
import time
def myWait(wait, resultQueue):
raise Exception("error!")
# queue initialisation
resultQueue = mp.Queue()
# process creation arg: (process number, sleep time, queue)
proc = [
mp.Process(target=myWait, name = ' _One_', args=(2, resultQueue,)),
mp.Process(target=myWait, name = ' _Two_', args=(2, resultQueue,))
]
# starting processes
for p in proc:
p.start()
# print results
results = {}
for p in proc:
while True:
if not p.is_alive():
break
try:
name, wait, startedAt, endedAt = resultQueue.get(block=False)
print('Process %s started at %s wait %s ended at %s'
% (name, startedAt, wait, endedAt))
break
except queue.Empty:
pass
for p in proc:
p.join()
</code></pre>
<p>The function <code>myWait</code> will throw an exception but both processes will still join and the program will exit nicely.</p>
| 2 | 2016-08-31T15:37:48Z | [
"python",
"multiprocessing",
"python-3.5"
] |
Exit a multiprocessing script | 39,249,050 | <p>I am trying to exit a multiprocessing script when an error is thrown by the target function, but instead of quitting, the parent process just hangs.</p>
<p>This is the test script I use to replicate the problem:</p>
<pre><code>#!/usr/bin/python3.5
import time, multiprocessing as mp
def myWait(wait, resultQueue):
startedAt = time.strftime("%H:%M:%S", time.localtime())
time.sleep(wait)
endedAt = time.strftime("%H:%M:%S", time.localtime())
name = mp.current_process().name
resultQueue.put((name, wait, startedAt, endedAt))
# queue initialisation
resultQueue = mp.Queue()
# process creation arg: (process number, sleep time, queue)
proc = [
mp.Process(target=myWait, name = ' _One_', args=(2, resultQueue,)),
mp.Process(target=myWait, name = ' _Two_', args=(2, resultQueue,))
]
# starting processes
for p in proc:
p.start()
for p in proc:
p.join()
# print results
results = {}
for p in proc:
name, wait, startedAt, endedAt = resultQueue.get()
print('Process %s started at %s wait %s ended at %s' % (name, startedAt, wait, endedAt))
</code></pre>
<p>This works perfectly, I can see the parent script spawning two child processes in <code>htop</code> but when I want to force the parent script to exit if an error is thrown in the <code>myWait</code> target function the parent process just hangs and doesn't even spawn any child process. I have to <code>ctrl-c</code> to kill it.</p>
<pre><code>def myWait(wait, resultQueue):
try:
# do something wrong
except:
raise SystemExit
</code></pre>
<p>I have tried every way to exit the function (e.g. <code>exit()</code>, <code>sys.exit()</code>, <code>os._exit()</code>...) to no avail.</p>
| 5 | 2016-08-31T11:58:37Z | 39,266,552 | <p>You should use <code>multiprocessing.Pool</code> to manage your processes for you, then use <code>Pool.imap_unordered</code> to iterate over the results in the order they are completed. As soon as you get the first exception, you can stop the pool and its child processes (this is done automatically when you exit the <code>with Pool() as pool</code> block). For example:</p>
<pre><code>from multiprocessing import Pool
import time
def my_wait(args):
name, wait = args
if wait == 2:
raise ValueError("error!")
else:
startedAt = time.strftime("%H:%M:%S", time.localtime())
time.sleep(wait)
endedAt = time.strftime("%H:%M:%S", time.localtime())
return name, wait, startedAt, endedAt
if __name__ == "__main__":
try:
with Pool() as pool:
args = [["_One_", 2], ["_Two_", 3]]
for name, wait, startedAt, endedAt in pool.imap_unordered(my_wait, args):
print('Task %s started at %s wait %s ended at %s' % (name,
startedAt, wait, endedAt))
except ValueError as e:
print(e)
</code></pre>
<p>This method is not well suited to long-running, low-workload tasks, as it will only run as many tasks in parallel as the number of child processes it is managing (but this is something you can set). It's also not great if you need to run different functions.</p>
| 1 | 2016-09-01T08:34:33Z | [
"python",
"multiprocessing",
"python-3.5"
] |
Extract a part of fasta sequence based on bp coordinates | 39,249,121 | <p>I have a huge fasta file, but I need to extract only a part of it, if I know the start and end base-pair coordinates of my sequence. Also, it should be in fasta format with a length of 60 bp per line. This is my try; please let me know if it looks OK, and any suggestions to improve it are welcome.</p>
<pre><code>from Bio import SeqIO
inFile = open('full_chr.fa','r')
fw=open("part.fa",'w')
line_width = 60
for record in SeqIO.parse(inFile,'fasta'):
fw.write(">" + record.id + "\n")
fww = (str(record.seq[600130000:602000000]) + '\n')
for i in xrange(0,len(fww),line_width):
fw.write(str(fww[i:i+line_width]) + '\n')
fw.close()
</code></pre>
| 0 | 2016-08-31T12:02:26Z | 39,249,843 | <p>It's as easy as:</p>
<pre><code>from Bio import SeqIO
record = SeqIO.read("Chromosome.fas", "fasta")
with open("output.fas", "w") as out:
SeqIO.write(record[100:500], out, "fasta")
</code></pre>
<p>The <code>SeqIO.write</code> already uses a 60 character length wrapping. If you want to manipulate the line wrap use the <code>FastaWriter</code> object. This is an example for 80 bp lines:</p>
<pre><code>from Bio import SeqIO
from Bio.SeqIO.FastaIO import FastaWriter
record = SeqIO.read("Chromosome.fas", "fasta")
with open("output.fas", "w") as out:
writer = FastaWriter(out, wrap=80)
writer.write_header()
writer.write_record(record[100:500])
writer.write_footer()
</code></pre>
| 1 | 2016-08-31T12:35:47Z | [
"python",
"bioinformatics",
"biopython",
"fasta"
] |
command to generate html report using nosetests over jenkins | 39,249,155 | <p>nosetests --processes=2 --process-timeout=1800 -v --nocapture -a=attr --attr=api src/tests/externalapi/test_accounts_api.py --with-html</p>
<p>doesn't work</p>
<p>The console says:</p>
<pre><code>nosetests: error: no such option: --with-html
Build step 'Execute shell' marked build as failure
</code></pre>
| -1 | 2016-08-31T12:03:43Z | 39,249,682 | <p>You can publish test results by adding the post-build action named "Publish JUnit test result report" in the configure section.
You will need to generate an <code>*.xml</code> file with Nose and tell Jenkins the name of that file; Jenkins will then be able to interpret it and display it in a nice-looking form.</p>
<p>The option you want instead of <code>--with-html</code> is <code>--with-xunit</code>.</p>
<p>Documentation for this option is available <a href="http://nose.readthedocs.io/en/latest/plugins/xunit.html" rel="nofollow">here</a>.</p>
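<p>For example, the original command could become the following (the <code>--xunit-file</code> name is an assumption; Jenkins only needs to be told the same file name in the post-build action):</p>

```shell
# Same options as before, but emitting JUnit-style XML that Jenkins'
# "Publish JUnit test result report" action can parse.
nosetests --processes=2 --process-timeout=1800 -v --nocapture \
    -a=attr --attr=api src/tests/externalapi/test_accounts_api.py \
    --with-xunit --xunit-file=nosetests.xml
```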
| 0 | 2016-08-31T12:28:31Z | [
"python",
"jenkins",
"nose"
] |
When use pip to install flask-bcrypt, one error is :UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 49: ordinal not in range(128) | 39,249,166 | <p>Today, when I install flask-bcrypt used:</p>
<pre><code>pip install flask-bcrypt
</code></pre>
<p>This error happend:</p>
<pre><code>Command /home/sf/python/venv/bin/python2 -c "import setuptools,
tokenize;__file__='/tmp/pip-build-duYRO6/bcrypt/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-hlvpv8-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/sf/python/venv/include/site/python2.7 failed with error code 1 in /tmp/pip-build-duYRO6/bcrypt
Traceback (most recent call last):
File "/home/sf/python/venv/bin/pip", line 11, in <module>
sys.exit(main())
File "/home/sf/python/venv/local/lib/python2.7/site-packages/pip/__init__.py", line 248, in main
return command.main(cmd_args)
File "/home/sf/python/venv/local/lib/python2.7/site-packages/pip/basecommand.py", line 161, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe6 in position 49: ordinal not in range(128)
</code></pre>
| 0 | 2016-08-31T12:04:10Z | 39,249,585 | <p>When you type the following commands into a Python shell,</p>
<pre><code>>>> import sys
>>> sys.getdefaultencoding()
</code></pre>
<p>it will output this:</p>
<pre><code>'ascii'
</code></pre>
<p>So, I modified</p>
<pre><code>/etc/python2.7/sitecustomize.py
</code></pre>
<p>added:</p>
<pre><code>import sys
sys.setdefaultencoding('utf-8')
</code></pre>
<p>Now the default encoding changes to:</p>
<pre><code>'utf-8'
</code></pre>
<p>That error was solved, but then another error appeared:</p>
<pre><code>Command /home/sf/python/venv/bin/python2 -c "import setuptools,
tokenize;__file__='/tmp/pip-build-6LgjpC/bcrypt/setup.py';
exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n',
'\n'), __file__, 'exec'))" install --record /tmp/pip-3qxBJi-record/install-
record.txt --single-version-externally-managed --compile --install-headers
/home/sf/python/venv/include/site/python2.7 failed with error code 1 in
/tmp/pip-build-6LgjpC/bcrypt
Storing debug log for failure in /home/sf/.pip/pip.log
</code></pre>
<p>You can see this: <a href="http://stackoverflow.com/questions/28363514/when-i-try-to-install-flask-bcrypt-it-throws-me-error-command-x86-64-linux-gnu">when I try to install Flask-bcrypt it throws me error: command 'x86_64-linux-gnu-gcc' failed with exit status 1</a></p>
<p>I just installed:</p>
<pre><code>apt-get install libffi-dev
</code></pre>
<p>Finally, I succeeded in installing flask-bcrypt.</p>
| -1 | 2016-08-31T12:24:14Z | [
"python",
"python-2.7",
"pip"
] |
ImportError: No module named impyla | 39,249,170 | <p>I have installed impyla and its dependencies following <a href="https://github.com/cloudera/impyla" rel="nofollow">this</a> guide. The installation seems to be successful, as I can now see the folder <strong>"impyla-0.13.8-py2.7.egg"</strong> in the Anaconda folder (64-bit Anaconda 4.1.1 version).</p>
<p>But when I import impyla in python, I get the following error:</p>
<pre><code>>>> import impyla
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named impyla
</code></pre>
<p><strong>I have installed 64-bit Python 2.7.12</strong></p>
<p><strong>Can anybody please explain why I am facing this error?</strong> I am new to Python and have been spending a lot of time on different blogs, but I don't see much information there yet. Thanks in advance for your time.</p>
| 0 | 2016-08-31T12:04:20Z | 39,250,176 | <p>Usage is a little bit different than you mentioned: the package installs as <code>impyla</code>, but the module you import is named <code>impala</code> (from <a href="https://github.com/cloudera/impyla" rel="nofollow">https://github.com/cloudera/impyla</a>)</p>
<p>Impyla implements the Python DB API v2.0 (PEP 249) database interface (refer to it for API details):</p>
<pre><code>from impala.dbapi import connect
conn = connect(host='my.host.com', port=21050)
cursor = conn.cursor()
cursor.execute('SELECT * FROM mytable LIMIT 100')
print cursor.description # prints the result set's schema
results = cursor.fetchall()
</code></pre>
<p>The Cursor object also exposes the iterator interface, which is buffered (controlled by cursor.arraysize):</p>
<pre><code>cursor.execute('SELECT * FROM mytable LIMIT 100')
for row in cursor:
process(row)
</code></pre>
<p>You can also get back a pandas DataFrame object</p>
<pre><code>from impala.util import as_pandas
df = as_pandas(cursor)
# carry df through scikit-learn, for example
</code></pre>
| 2 | 2016-08-31T12:51:11Z | [
"python",
"hadoop",
"thrift",
"impala"
] |
Detect if Django function is running in a celery worker | 39,249,172 | <p>I have a post_save hook that triggers a task to run in celery. The task also updates the model, which causes the post_save hook to run. The catch is I do not want to <code>.delay()</code> the call in this instance, I just want to run it synchronously because it's already being run in a worker.</p>
<p>Is there an environmental variable or something else I can use to detect when the code is being run in celery?</p>
<p>To clarify: I'm aware that Celery tasks can still be called as normal functions, that's exactly what I'm trying to take advantage of. I want to do something like this:</p>
<pre><code>if os.environ['is_celery']:
my_task(1, 2, 3)
else:
my_task.delay(1, 2, 3)
</code></pre>
| 1 | 2016-08-31T12:04:26Z | 39,259,111 | <p>Usually you'd have <code>common.py, production.py, test.py and local.py/dev.py</code>. You could just add a <code>celery_settings.py</code> with the following content:</p>
<pre><code>from production import *
IS_CELERY = True
</code></pre>
<p>Then in your <code>celery.py</code> (I'm assuming you have one) you'll do</p>
<pre><code>os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.celery_settings")
</code></pre>
<p>Then in your script you can now do:</p>
<pre><code>if getattr(settings, 'IS_CELERY', None):
my_task(1, 2, 3)
else:
my_task.delay(1, 2, 3)
</code></pre>
| 2 | 2016-08-31T21:14:28Z | [
"python",
"django",
"celery"
] |
Stack Bar Grapth with plot / pandas | 39,249,233 | <p>I am not being able to generate a stacked graph with the error below:</p>
<pre><code>TypeError: There is no Line2D property "stacked"
</code></pre>
<p>My csv has the format:</p>
<pre><code>feb-17,1,2,3
apr-17,2,4,3
may-18,3,5,3
oct-20,4,1,1
dec-21,5,1,1
</code></pre>
<p>I would expect to see something like:</p>
<p><a href="http://i.stack.imgur.com/KiqgL.png" rel="nofollow"><img src="http://i.stack.imgur.com/KiqgL.png" alt="enter image description here"></a></p>
<p>My code is:</p>
<pre><code>import pandas
import matplotlib.pyplot as plt
df = pandas.read_csv('datacsv.csv', delimiter=',',
index_col=0,
parse_dates=[0], dayfirst=True,
names=['Date','Black','Red', 'Yellow'])
df.plot(Stacked=True, marker='.',markersize=8, title ="My graph", fontsize = 10, color=['b','g','c'], figsize=(15, 15))
imagepathprotocol = "image.png"
plt.savefig(imagepathprotocol)
</code></pre>
<p>Any idea what can be done? Thanks a lot.</p>
| 0 | 2016-08-31T12:07:41Z | 39,249,308 | <p>Yes, change the type - </p>
<pre class="lang-py prettyprint-override"><code>df.plot.bar(stacked=True, title="My graph", fontsize=10, color=['b','g','c'], figsize=(15, 15))
</code></pre>
<p>By default it tries to plot a 2D line, which indeed does not have a <code>stacked</code> option.</p>
<p>See documentation <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#bar-plots" rel="nofollow">here</a></p>
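<p>A self-contained version of the fix, using the sample data from the question (the Agg backend is chosen so it renders without a display; the output file name is illustrative):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, no display needed
import matplotlib.pyplot as plt
import pandas as pd
from io import StringIO

# In-memory stand-in for datacsv.csv.
csv = StringIO("feb-17,1,2,3\napr-17,2,4,3\nmay-18,3,5,3\n")
df = pd.read_csv(csv, index_col=0, names=['Date', 'Black', 'Red', 'Yellow'])

ax = df.plot.bar(stacked=True, title="My graph", fontsize=10,
                 color=['b', 'g', 'c'], figsize=(15, 15))
plt.savefig("image.png")
```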
| 0 | 2016-08-31T12:11:39Z | [
"python",
"plot"
] |
Stack Bar Grapth with plot / pandas | 39,249,233 | <p>I am not being able to generate a stacked graph with the error below:</p>
<pre><code>TypeError: There is no Line2D property "stacked"
</code></pre>
<p>My csv has the format:</p>
<pre><code>feb-17,1,2,3
apr-17,2,4,3
may-18,3,5,3
oct-20,4,1,1
dec-21,5,1,1
</code></pre>
<p>I would expect to see something like:</p>
<p><a href="http://i.stack.imgur.com/KiqgL.png" rel="nofollow"><img src="http://i.stack.imgur.com/KiqgL.png" alt="enter image description here"></a></p>
<p>My code is:</p>
<pre><code>import pandas
import matplotlib.pyplot as plt
df = pandas.read_csv(datacsv.csv, delimiter=',',
index_col=0,
parse_dates=[0], dayfirst=True,
names=['Date','Black','Red', 'Yellow'])
df.plot(Stacked=True, marker='.',markersize=8, title ="My graph", fontsize = 10, color=['b','g','c'], figsize=(15, 15))
imagepathprotocol = "image.png"
plt.savefig(imagepathprotocol)
</code></pre>
<p>Any idea what can be done? Thanks a lot.</p>
| 0 | 2016-08-31T12:07:41Z | 39,250,056 | <p>try the following code </p>
<pre><code>df.plot(stacked=True,kind ='bar',title ="My graph",fontsize =10,color=['b','g','c'], figsize=(15, 15))
</code></pre>
<p>You need to specify the bar kind and delete the marker/markersize arguments, since bar plots do not accept them.</p>
| 1 | 2016-08-31T12:45:20Z | [
"python",
"plot"
] |
Using django ORM in my celery app giving error | 39,249,312 | <p>I am developing a standalone celery application which uses the Django ORM to access my database and perform operations on my data. I followed the answer to this question: <a href="http://stackoverflow.com/questions/937742/use-django-orm-as-standalone">Use Django ORM as standalone</a></p>
<p>But when I run my celery worker, it gives me the following error:</p>
<blockquote>
<p>django.core.exceptions.ImproperlyConfigured: Requested setting
LOGGING_CONFIG, but settings are not configured. You must either
define the environment variable DJANGO_SETTINGS_MODULE or call
settings.configure() before accessing settings.</p>
</blockquote>
<p>Here is my full application code and directory structure:</p>
<pre><code>standAlone/
----init.py
----settings.py
DATABASE_ENGINE = "django.db.backends.mysql"
DATABASE_NAME = "my_database_name"
DATABASE_USER = "my_database_username"
DATABASE_PASSWORD = "my_database_password"
DATABASE_HOST = "my_host"
DATABASE_PORT = "3306",
INSTALLED_APPS = ("myApp")
----manage.py
#!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myApp.settings")
try:
from django.core.management import execute_from_command_line
except ImportError:
# The above import may fail for some other reason. Ensure that the
# issue is really that Django is missing to avoid masking other
# exceptions on Python 2.
try:
import django
except ImportError:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
)
raise
execute_from_command_line(sys.argv)
----tasks.py
import django
django.setup()
from celery import Celery
from django.conf import settings
settings.configure(
DATABASE_ENGINE = "django.db.backends.mysql",
DATABASE_NAME = "myDatabase",
DATABASE_USER = "myUsername",
DATABASE_PASSWORD = "myPassword",
DATABASE_HOST = "my host",
DATABASE_PORT = "3306",
INSTALLED_APPS = ("myApp,")
)
from django.db import models
from myApp.models import *
app = Celery('tasks', broker='redis://ip_address')
@app.task(name="task1")
def task1():
#my task code
@app.task(name="task2")
def task2():
#my task code
----/myApp
--------/init.py
--------/models.py
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
from django.utils import timezone
# my models go here
</code></pre>
<p>Now when I run the following command:</p>
<pre><code>celery -A tasks worker &
</code></pre>
<p>I get this error:</p>
<pre><code>django.core.exceptions.ImproperlyConfigured: Requested setting LOGGING_CONFIG, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
</code></pre>
<p>Can anyone suggest where I am going wrong?</p>
<p><strong>UPDATE:</strong></p>
<p>Here is the full traceback:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/bin/celery", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/celery/__main__.py", line 30, in main
main()
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 793, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 309, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 469, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 489, in find_app
return find_app(app, symbol_by_name=self.symbol_by_name)
File "/usr/local/lib/python2.7/dist-packages/celery/app/utils.py", line 235, in find_app
sym = symbol_by_name(app, imp=imp)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 492, in symbol_by_name
return symbol_by_name(name, imp=imp)
File "/usr/local/lib/python2.7/dist-packages/kombu/utils/__init__.py", line 96, in symbol_by_name
module = imp(module_name, package=package, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/utils/imports.py", line 101, in import_from_cwd
return imp(module, package=package)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/root/standAlone/tasks.py", line 2, in <module>
django.setup()
File "/usr/local/lib/python2.7/dist-packages/django/__init__.py", line 22, in setup
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 53, in __getattr__
self._setup(name)
File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 39, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting LOGGING_CONFIG, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
</code></pre>
<p><strong>UPDATE 2:</strong></p>
<p>Now that I have applied the answer provided, it is giving me an error saying</p>
<blockquote>
<p>no module named m</p>
</blockquote>
<p>with the following traceback:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/bin/celery", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/celery/__main__.py", line 30, in main
main()
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 81, in main
cmd.execute_from_commandline(argv)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/celery.py", line 793, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 309, in execute_from_commandline
argv = self.setup_app_from_commandline(argv)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 469, in setup_app_from_commandline
self.app = self.find_app(app)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 489, in find_app
return find_app(app, symbol_by_name=self.symbol_by_name)
File "/usr/local/lib/python2.7/dist-packages/celery/app/utils.py", line 235, in find_app
sym = symbol_by_name(app, imp=imp)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/base.py", line 492, in symbol_by_name
return symbol_by_name(name, imp=imp)
File "/usr/local/lib/python2.7/dist-packages/kombu/utils/__init__.py", line 96, in symbol_by_name
module = imp(module_name, package=package, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/utils/imports.py", line 101, in import_from_cwd
return imp(module, package=package)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/root/standAlone/tasks.py", line 13, in <module>
django.setup()
File "/usr/local/lib/python2.7/dist-packages/django/__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/usr/local/lib/python2.7/dist-packages/django/apps/config.py", line 90, in create
module = import_module(entry)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named m
</code></pre>
<p><strong>UPDATE 3</strong></p>
<p>It turns out that I needed to change </p>
<pre><code>INSTALLED_APPS = ("myApp")
</code></pre>
<p>to </p>
<pre><code>INSTALLED_APPS = ("myApp",)
</code></pre>
<p>and everything works then! Without the trailing comma, <code>("myApp")</code> is just a parenthesized string, not a tuple.</p>
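<p>The underlying gotcha, sketched in isolation — it is the trailing comma, not the parentheses, that makes a one-element tuple:</p>

```python
# ("myApp") is just a grouped expression, so it evaluates to the string itself
not_a_tuple = ("myApp")
a_tuple = ("myApp",)   # the trailing comma makes it a tuple

print(type(not_a_tuple).__name__)  # str
print(type(a_tuple).__name__)      # tuple
```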
| 3 | 2016-08-31T12:11:53Z | 39,249,431 | <p>You need to configure your settings before you call <code>django.setup()</code>. Simply switching the statements should fix the issue:</p>
<pre><code># tasks.py
import django
from celery import Celery
from django.conf import settings
settings.configure(
DATABASE_ENGINE = "django.db.backends.mysql",
DATABASE_NAME = "myDatabase",
DATABASE_USER = "myUsername",
DATABASE_PASSWORD = "myPassword",
DATABASE_HOST = "my host",
DATABASE_PORT = "3306",
INSTALLED_APPS = ("myApp",) # Move the comma out of the quotes
)
django.setup()
</code></pre>
| 3 | 2016-08-31T12:17:39Z | [
"python",
"django",
"django-models",
"orm"
] |
Python print pdf file with win32print | 39,249,360 | <p>I'm trying to print a PDF file from Python with the win32print module, but the only thing I can print successfully is raw text.</p>
<pre><code>hPrinter = win32print.OpenPrinter("\\\\Server\Printer")
filename = "test.pdf"
try:
hJob = win32print.StartDocPrinter(hPrinter, 1, ('PrintJobName',None,'RAW'))
try:
win32api.ShellExecute(0, "print", filename, None, ".", 0)
win32print.StartPagePrinter(hPrinter)
win32print.WritePrinter(hPrinter, "test") #Instead of raw text is there a way to print PDF File ?
win32print.EndPagePrinter(hPrinter)
finally:
win32print.EndDocPrinter(hPrinter)
finally:
win32print.ClosePrinter(hPrinter)
</code></pre>
<p>So instead of printing raw text, I need to print the "test.pdf" file.</p>
<p>I also tried <strong><code>win32api.ShellExecute(0, "print", filename, None, ".", 0)</code></strong>, but it is not working; after some tests (getprinter, getdefault, setprinter, setdefaultprinter) the job does not seem to get attached to the printer, so I can't get it working this way.</p>
<p>This is the code I used:</p>
<pre><code>win32print.SetDefaultPrinter(hPrinter)
win32api.ShellExecute(0, "print", filename, None, ".", 0)
</code></pre>
| 0 | 2016-08-31T12:13:39Z | 39,249,850 | <p>You can try</p>
<pre><code>win32print.SetDefaultPrinter("\\\\Server\Printer")
</code></pre>
<p>This method accepts the printer name as a string, not the printer handle returned by <code>win32print.OpenPrinter</code> that you tried to pass it.</p>
| 0 | 2016-08-31T12:35:56Z | [
"python",
"pdf",
"printing"
] |
Numerical computation of the Spherical harmonics expansion | 39,249,594 | <p>I'm trying to understand the Spherical harmonics expansion in order to solve a more complex problem but the result I'm expecting from a very simple calculation is not correct. I have no clue why this is happening.</p>
<hr>
<p><strong>A bit of theory:</strong> It is well known that a function on the surface of a sphere (<img src="http://latex.codecogs.com/gif.latex?%5Cmathbb%7BR%7D%5E%7B3%7D" alt="\mathbb{R}^{3}">) can be defined as an infinite sum of some constant coefficients <img src="https://chart.googleapis.com/chart?cht=tx&chl=f_%7Bm%7D%5E%7Bl%7D" alt=""> and the spherical harmonics <img src="https://chart.googleapis.com/chart?cht=tx&chl=Y_%7Bm%7D%5E%7Bl%7D(%5CTheta,%5Cvarphi)" alt=""> : </p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=f(%5CTheta,%5Cvarphi)=%5Csum_%7Bl=0%7D%5E%7B%5Cinfty%7D%5Csum_%7Bm=-l%7D%5E%7Bl%7Df_%7Bm%7D%5E%7Bl%7DY_%7Bm%7D%5E%7Bl%7D(%5CTheta,%5Cvarphi)" alt=""></p>
<p>The spherical harmonics are defined as : </p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=Y_%7Bm%7D%5E%7Bl%7D(%5CTheta,%5Cvarphi)=%5Csqrt%7B%5Cfrac%7B(2l%2b1)%5Ccdot(l-m)!%7D%7B4%5Cpi%5Ccdot(l%2bm)!%7D%7DP_%7Bm%7D%5E%7Bl%7D(cos%5CTheta)e%5E%7Bim%5Cvarphi%7D" alt=""></p>
<p>where <img src="https://chart.googleapis.com/chart?cht=tx&chl=P_%7Bm%7D%5E%7Bl%7D" alt=""> are the associated Legendre polynomials.</p>
<p>An finally, the constant coefficients <img src="https://chart.googleapis.com/chart?cht=tx&chl=f_%7Bm%7D%5E%7Bl%7D" alt=""> can be calculated (similarly to the Fourier transform) as follow:</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=f_%7Bm%7D%5E%7Bl%7D=%5Coint_%7B%5COmega%7D%5E%7B%7Df(%5CTheta,%5Cvarphi)Y_%7Bm%7D%5E%7Bl%7D(%5CTheta,%5Cvarphi)d%5COmega" alt=""></p>
<hr>
<p><strong>The problem:</strong> Let's assume we have a sphere centered in <img src="https://chart.googleapis.com/chart?cht=tx&chl=(0,0,0)" alt=""> where the function on the surface is equal to <img src="https://chart.googleapis.com/chart?cht=tx&chl=1" alt=""> for all points <img src="https://chart.googleapis.com/chart?cht=tx&chl=(%5CTheta,%5Cvarphi)" alt="">. We want to calculate the constant coefficients and then calculate back the surface function by approximation. Since <img src="https://chart.googleapis.com/chart?cht=tx&chl=f(%5CTheta,%5Cvarphi)=1" alt=""> the calculation of the constant coefficients reduces to :</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=f_%7Bm%7D%5E%7Bl%7D=%5Csum_%7B%5COmega%7DY_%7Bm%7D%5E%7Bl%7D(%5CTheta,%5Cvarphi)" alt=""></p>
<p>which numerically (in Python) can be approximated using:</p>
<pre><code>def Ylm(l,m,theta,phi):
return scipy.special.sph_harm(m,l,theta,phi)
def flm(l,m):
phi, theta = np.mgrid[0:pi:101j, 0:2*pi:101j]
return Ylm(l,m,theta,phi).sum()
</code></pre>
<p>Then, by computing a band limited sum over <img src="https://chart.googleapis.com/chart?cht=tx&chl=l" alt=""> I'm expecting to see <img src="https://chart.googleapis.com/chart?cht=tx&chl=f'(%5CTheta,%5Cvarphi)%5Crightarrow1" alt=""> when <img src="https://chart.googleapis.com/chart?cht=tx&chl=l%5Crightarrow%5Cinfty" alt=""> for any given point <img src="https://chart.googleapis.com/chart?cht=tx&chl=(%5CTheta,%5Cvarphi)" alt="">.</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=f'(%5CTheta,%5Cvarphi)=%5Csum_%7Bl=0%7D%5E%7BL%7D%5Csum_%7Bm=-l%7D%5E%7Bl%7Df_%7Bm%7D%5E%7Bl%7DY_%7Bm%7D%5E%7Bl%7D(%5CTheta,%5Cvarphi)" alt=""></p>
<pre><code>L = 20
f = 0
theta0, phi0 = 0.0, 0.0
for l in xrange(0,L+1):
for m in xrange(-l,l+1):
f += flm(l,m)*Ylm(l,m,theta0,phi0)
print f
</code></pre>
<p>but for <img src="https://chart.googleapis.com/chart?cht=tx&chl=L=20" alt=""> it gives me <img src="https://chart.googleapis.com/chart?cht=tx&chl=(12860.5407362%2b0j)" alt=""> and not <img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Csim(1%2b0j)" alt="">. For <img src="https://chart.googleapis.com/chart?cht=tx&chl=L=40" alt=""> it gives me <img src="https://chart.googleapis.com/chart?cht=tx&chl=(28156.0642429%2b0j)" alt=""></p>
<p>I know this seems more like a mathematics problem, but the formulas should be correct; the problem seems to be in my computation. It could be a really stupid mistake, but I cannot spot it. Any suggestions?</p>
<p>Thanks </p>
| 1 | 2016-08-31T12:24:37Z | 39,250,424 | <p>The spherical harmonics are orthonormal with the inner product</p>
<pre><code><f|g> = Integral( f(theta,phi)*conj(g(theta,phi))*sin(theta)*dphi*dtheta)
</code></pre>
<p>So you should calculate the coefficients by</p>
<pre><code> clm = Integral( f(theta,phi) * conj(Ylm(theta,phi)) * sin(theta)*dphi*dtheta)
</code></pre>
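<p>A minimal numerical check of this fix, using only the constant <code>l=0, m=0</code> harmonic (<code>Y00 = 1/(2*sqrt(pi))</code>) so that plain NumPy suffices. With the <code>sin(theta)</code> weight included, the reconstruction of <code>f = 1</code> comes out right; the phi integral is done analytically here because the integrand is constant, and the grid sizes are arbitrary:</p>

```python
import numpy as np

Y00 = 1.0 / (2.0 * np.sqrt(np.pi))  # the l=0, m=0 spherical harmonic (a constant)

n_theta, n_phi = 200, 400
theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta   # midpoint rule over [0, pi]
dtheta = np.pi / n_theta
dphi = 2.0 * np.pi / n_phi

# coefficient of f = 1, now WITH the sin(theta) surface element
c00 = np.sum(Y00 * np.sin(theta)) * dtheta * (n_phi * dphi)

f_reconstructed = c00 * Y00   # only the l=0 term contributes when f = 1
print(c00, f_reconstructed)   # ~3.5449 (= 2*sqrt(pi)) and ~1.0
```

Without the <code>sin(theta)</code> factor, the sum over the grid grows with the number of sample points, which is exactly the huge values the question reports.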
| 1 | 2016-08-31T13:02:04Z | [
"python",
"math"
] |
python 3D numpy array time index | 39,249,639 | <p>Is there a way to index a 3 dimensional array using some form of time index (datetime etc.) on the 3rd dimension?</p>
<p>My problem is that I am doing time series analysis on several thousand radar images and I need to get, for example, monthly averages. However, if I simply average over every 31 arrays along the 3rd dimension, it becomes inaccurate due to shorter months, missing data, etc.</p>
| 0 | 2016-08-31T12:26:43Z | 39,250,087 | <p>You can use the pandas module. It supports indexing by date/datetime range, and it also supports multi-indexing, which allows you to work with multidimensional data in a 2D manner.</p>
<pre><code>>>> rng = pd.date_range('1/1/2016', periods=100, freq='D')
>>> rng[:5]
DatetimeIndex(['2016-01-01', '2016-01-02', '2016-01-03', '2016-01-04', '2016-01-05'], dtype='datetime64[ns]', freq='D')
>>> ts = pd.Series(np.random.randn(len(rng)), index=rng)
>>> ts.head()
2016-01-01 0.119762
2016-01-02 -0.010990
2016-01-03 0.226537
2016-01-04 -0.087559
2016-01-05 0.484426
Freq: D, dtype: float64
>>> ts.resample('M').mean()
2016-01-31 -0.171578
2016-02-29 0.055878
2016-03-31 -0.243225
2016-04-30 -0.015087
Freq: M, dtype: float64
</code></pre>
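<p>The same idea extends to a 3-D image stack: keep the frames in a NumPy array and use a pandas <code>DatetimeIndex</code> only to group the frame <em>positions</em> by calendar month, so short months and gaps are averaged correctly. (The shapes below are arbitrary stand-ins for real radar images.)</p>

```python
import numpy as np
import pandas as pd

# hypothetical stack of daily 2-D radar images: shape (n_days, height, width)
dates = pd.date_range("2016-01-01", periods=90, freq="D")
frames = np.random.rand(len(dates), 4, 4)

# map each frame's position to its date, then group the positions by month
pos = pd.Series(np.arange(len(dates)), index=dates)
monthly_means = {
    str(month): frames[grp.values].mean(axis=0)
    for month, grp in pos.groupby(pos.index.to_period("M"))
}
print(sorted(monthly_means))  # ['2016-01', '2016-02', '2016-03']
```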
<p>Check the detailed info below:</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.html</a>
<a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/timeseries.html</a>
<a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/advanced.html</a></p>
| 1 | 2016-08-31T12:46:46Z | [
"python",
"arrays",
"datetime",
"numpy",
"multidimensional-array"
] |
Why is remote_addr missing from the Flask documentation? | 39,249,655 | <p><a href="http://stackoverflow.com/a/3760309">This question</a> answers how to find the IP of your visitors in Flask. The code works. However, <code>request.remote_addr</code> seems to be absent from the <a href="http://flask.pocoo.org/docs/0.11/api/#incoming-request-data" rel="nofollow">Flask documentation</a>. Why is this?</p>
| 0 | 2016-08-31T12:27:21Z | 39,250,636 | <p>Because the <a href="http://werkzeug.pocoo.org/docs/0.11/wrappers/#werkzeug.wrappers.BaseRequest" rel="nofollow"><code>Request</code></a> object comes from Werkzeug, where it <em>is</em> <a href="http://werkzeug.pocoo.org/docs/0.11/wrappers/#werkzeug.wrappers.BaseRequest.remote_addr" rel="nofollow">documented</a>. Flask's docs just copy over some of the more commonly used information, but they do link directly to the Werkzeug docs too.</p>
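<p>For completeness, a minimal sketch showing the attribute in use (inside a Flask test request context, with a made-up client address):</p>

```python
from flask import Flask, request

app = Flask(__name__)

# simulate a request coming from a specific client address
with app.test_request_context("/", environ_base={"REMOTE_ADDR": "203.0.113.7"}):
    addr = request.remote_addr
    print(addr)  # 203.0.113.7
```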
| 4 | 2016-08-31T13:13:13Z | [
"python",
"flask",
"documentation"
] |
Python/Tkinter - Using multiple classes, causes infinite loop when inheriting | 39,249,702 | <p>I am doing a bigger tkinter application and because the size of the application is quite big I have tried to divide the application to different classes and methods for making it clearer.</p>
<p>The problem is that, because the subclasses I have made inherit from the main class, their initialization causes an infinite loop.</p>
<p>Here is an example code:</p>
<pre><code>import tkinter as tk
import random
class main_window(object):
def __init__(self):
self.root = tk.Tk()
self.cv = tk.Canvas(self.root, height = 400, width = 400)
self.cv.pack()
self.button1 = tk.Button(text="draw line", command=method().draw_line)
self.Solve_button_window = self.cv.create_window(5, 5, anchor=tk.NW, window=self.button1)
self.root.mainloop()
class method(main_window):
def __init__(self):
super().__init__() #Sublclassing from main_window class causes an infinite loop
self.point1 = 0
self.point2 = 0
def draw_line(self):
self.point1 = random.randrange(10, 400)
self.point2 = random.randrange(10, 400)
self.cv.create_line(self.point1, self.point2)
main_window()
</code></pre>
<p>How can I change this code so that the subclasses won't cause an infinite loop?</p>
<p>Thanks for the help</p>
| 2 | 2016-08-31T12:29:30Z | 39,250,239 | <p>Your <code>method</code> class shouldn't inherit from the <code>main_window</code> class. Just create an instance of <code>main_window</code> and pass it into the <code>__init__</code> of <code>method</code>.</p>
<p>BTW, <code>method</code> is a confusing name for a class. Also, class names in Python are conventionally written in CamelCase.</p>
<hr>
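<p>The cycle can be shown in isolation (hypothetical class names — this is the shape of the bug, not the original code): the parent's constructor creates the child, and the child's <code>super().__init__()</code> re-enters the parent's constructor, which creates another child, until Python's recursion limit is hit.</p>

```python
# Stripped-down sketch (hypothetical names) of the cycle in the original code.
class Parent:
    def __init__(self):
        self.child = Child()        # every Parent builds a Child ...

class Child(Parent):
    def __init__(self):
        super().__init__()          # ... and every Child builds another Child

try:
    Parent()
except RecursionError:
    print("RecursionError: maximum recursion depth exceeded")
```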
<p>Here's a modified version of your code that illustrates what I mean.</p>
<pre><code>import tkinter as tk
import random
class MainWindow(object):
def __init__(self):
self.root = tk.Tk()
self.cv = tk.Canvas(self.root, height = 400, width = 400)
self.cv.pack()
draw_stuff = DrawStuff(self)
self.button1 = tk.Button(text="draw line", command=draw_stuff.draw_line)
self.Solve_button_window = self.cv.create_window(5, 5, anchor=tk.NW, window=self.button1)
self.root.mainloop()
class DrawStuff(object):
def __init__(self, mainwin):
self.mainwin = mainwin
self.cv = mainwin.cv
def draw_line(self):
self.point1 = (random.randrange(10, 400), random.randrange(10, 400))
self.point2 = (random.randrange(10, 400), random.randrange(10, 400))
self.cv.create_line(self.point1, self.point2)
MainWindow()
</code></pre>
| 2 | 2016-08-31T12:54:19Z | [
"python",
"inheritance",
"tkinter",
"infinite-loop",
"subclassing"
] |
Tensorflow : ValueError Duplicate feature column key found for column | 39,249,704 | <p>I was trying to get my hands dirty with Tensorflow and following
<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py" rel="nofollow">Wide and Deep Learning</a> example code. I modified certain imports for it to work with python 3.4 on centos 7. </p>
<p>Highlights of the changes are:</p>
<pre><code> -import urllib
+import urllib.request
</code></pre>
<p>...</p>
<pre><code> -urllib.urlretrieve
+urllib.request.urlretrieve
</code></pre>
<p>...</p>
<p>On running the code, I am getting the following error:</p>
<pre><code> Training data is downloaded to /tmp/tmpw06u4_xl
Test data is downloaded to /tmp/tmpjliqxhwh
model directory = /tmp/tmpcyll7kck
WARNING:tensorflow:Setting feature info to {'education': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'capital_gain': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'capital_loss': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'hours_per_week': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'gender': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'occupation': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'native_country': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'race': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'age': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'education_num': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'marital_status': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'workclass': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'relationship': TensorSignature(dtype=tf.string, shape=None, is_sparse=True)}
WARNING:tensorflow:Setting targets info to TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False)
Traceback (most recent call last):
File "wide_n_deep_tutorial.py", line 213, in <module>
tf.app.run()
File "/usr/lib/python3.4/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "wide_n_deep_tutorial.py", line 209, in main
train_and_eval()
File "wide_n_deep_tutorial.py", line 202, in train_and_eval
m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 240, in fit
max_steps=max_steps)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 550, in _train_model
train_op, loss_op = self._get_train_ops(features, targets)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 182, in _get_train_ops
logits = self._logits(features, is_training=True)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 260, in _logits
dnn_feature_columns = self._get_dnn_feature_columns()
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 224, in _get_dnn_feature_columns
feature_column_ops.check_feature_columns(self._dnn_feature_columns)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/layers/python/layers/feature_column_ops.py", line 318, in check_feature_columns
f.name))
ValueError: Duplicate feature column key found for column: education_embedding. This usually means that the column is almost identical to another column, and one must be discarded.
</code></pre>
<p>Do I have to change some variable, or is this a Python 3 problem? How can I get going forward with this tutorial?</p>
| 3 | 2016-08-31T12:29:33Z | 39,268,045 | <p><strong>Final update</strong> I had this problem with the recommended 0.10rc0 branch, but after reinstalling using the master (no branch on git clone) this problem went away. I checked the source code and they fixed it. Python 3 now gets the same results as Python 2 for wide_n_deep mode, after fixing the urllib.request thing you already mentioned.</p>
<p>For anyone coming later and still using 0.10rc0 branch, feel free to read on:</p>
<p>Had the same problem, and did some debugging. Looks like a bug in tensorflow/contrib/layers/python/layers/feature_column.py in the _EmbeddingColumn class. The key(self) property is plagued by this bug:
<a href="https://bugs.python.org/issue24931" rel="nofollow">https://bugs.python.org/issue24931</a></p>
<p>So instead of coming out with a nice unique key, we get the following key for all _EmbeddingColumn instances:
'_EmbeddingColumn()'</p>
<p>This causes the feature_column_ops.py's check_feature_columns() function to determine that the second _EmbeddingColumn instance is a duplicate, since the keys of all of them are the same.</p>
<p>I'm kind of a Python noob, and I can't figure out how to monkey patch a property. So I fixed this problem by creating a subclass at the top of the wide_n_deep tutorial file:</p>
<pre><code># EmbeddingColumn for Python 3.4 has a problem with key property
# can't monkey patch a property, so subclass it and make a method to create the
# subclass to use instead of "embedding_column"
from tensorflow.contrib.layers.python.layers.feature_column import _EmbeddingColumn
class _MonkeyEmbeddingColumn(_EmbeddingColumn):
# override the key property
@property
def key(self):
return "{}".format(self)
def monkey_embedding_column(sparse_id_column,
dimension,
combiner="mean",
initializer=None,
ckpt_to_load_from=None,
tensor_name_in_ckpt=None):
return _MonkeyEmbeddingColumn(sparse_id_column, dimension, combiner, initializer, ckpt_to_load_from, tensor_name_in_ckpt)
</code></pre>
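<p>As an aside — since a property lives on the class, it <em>can</em> be replaced without subclassing by assigning a new <code>property</code> object to the class attribute. A generic sketch with made-up names (not TensorFlow's actual classes):</p>

```python
class Original:
    @property
    def key(self):
        return "_EmbeddingColumn()"   # stand-in for the buggy, non-unique key

# assigning a new property on the class replaces it for all instances
Original.key = property(lambda self: "key-{}".format(id(self)))

a, b = Original(), Original()
print(a.key != b.key)  # True: every instance now gets a distinct key
```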
<p>Then find the calls like this:</p>
<pre><code>tf.contrib.layers.embedding_column(workclass, dimension=8)
</code></pre>
<p>and replace "tf.contrib.layers." with "monkey_" so you now have:</p>
<pre><code> deep_columns = [
monkey_embedding_column(workclass, dimension=8),
monkey_embedding_column(education, dimension=8),
monkey_embedding_column(marital_status,
dimension=8),
monkey_embedding_column(gender, dimension=8),
monkey_embedding_column(relationship, dimension=8),
monkey_embedding_column(race, dimension=8),
monkey_embedding_column(native_country,
dimension=8),
monkey_embedding_column(occupation, dimension=8),
age,
education_num,
capital_gain,
capital_loss,
hours_per_week,
]
</code></pre>
<p>So now it uses the MonkeyEmbeddingColumn class with the modified key property (works like all the other key properties from feature_column.py). This lets the code run to completion, but I'm not 100% sure it's correct as it reports the accuracy as:</p>
<pre><code>accuracy: 0.818316
</code></pre>
<p>As this is slightly worse than the wide-only training, I wonder if it has this accuracy in Python 2 or if my fix is lowering the accuracy by causing a training problem.</p>
<p><strong>Update</strong> I installed in Python 2 and the wide_n_deep gets over 0.85 accuracy, so this "fix" lets the code run but seems to be doing the wrong thing. I'll debug and see what Python 2 gets for these values and see if it can be fixed properly in Python 3. I'm curious too.</p>
| 1 | 2016-09-01T09:42:06Z | [
"python",
"machine-learning",
"tensorflow",
"python-3.4",
"deep-learning"
] |
Tensorflow : ValueError Duplicate feature column key found for column | 39,249,704 | <p>I was trying to get my hands dirty with Tensorflow and following
<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py" rel="nofollow">Wide and Deep Learning</a> example code. I modified certain imports for it to work with python 3.4 on centos 7. </p>
<p>Highlights of the changes are:</p>
<pre><code> -import urllib
+import urllib.request
</code></pre>
<p>...</p>
<pre><code> -urllib.urlretrieve
+urllib.request.urlretrieve
</code></pre>
<p>...</p>
<p>On running the code, I am getting the following error:</p>
<pre><code> Training data is downloaded to /tmp/tmpw06u4_xl
Test data is downloaded to /tmp/tmpjliqxhwh
model directory = /tmp/tmpcyll7kck
WARNING:tensorflow:Setting feature info to {'education': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'capital_gain': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'capital_loss': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'hours_per_week': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'gender': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'occupation': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'native_country': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'race': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'age': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'education_num': TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False), 'marital_status': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'workclass': TensorSignature(dtype=tf.string, shape=None, is_sparse=True), 'relationship': TensorSignature(dtype=tf.string, shape=None, is_sparse=True)}
WARNING:tensorflow:Setting targets info to TensorSignature(dtype=tf.int64, shape=TensorShape([Dimension(32561)]), is_sparse=False)
Traceback (most recent call last):
File "wide_n_deep_tutorial.py", line 213, in <module>
tf.app.run()
File "/usr/lib/python3.4/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "wide_n_deep_tutorial.py", line 209, in main
train_and_eval()
File "wide_n_deep_tutorial.py", line 202, in train_and_eval
m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 240, in fit
max_steps=max_steps)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 550, in _train_model
train_op, loss_op = self._get_train_ops(features, targets)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 182, in _get_train_ops
logits = self._logits(features, is_training=True)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 260, in _logits
dnn_feature_columns = self._get_dnn_feature_columns()
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/learn/python/learn/estimators/dnn_linear_combined.py", line 224, in _get_dnn_feature_columns
feature_column_ops.check_feature_columns(self._dnn_feature_columns)
File "/usr/lib/python3.4/site-packages/tensorflow/contrib/layers/python/layers/feature_column_ops.py", line 318, in check_feature_columns
f.name))
ValueError: Duplicate feature column key found for column: education_embedding. This usually means that the column is almost identical to another column, and one must be discarded.
</code></pre>
<p>Do I have to change some variable, or is this a Python 3 problem? How can I get going forward with this tutorial?</p>
| 3 | 2016-08-31T12:29:33Z | 39,359,007 | <p>First of all, as Jesse suggested, use master (no branch on git clone) when downloading the tutorial source code from the tensorflow website.
Second, you only need to pass the first two parameters to the class. You will get some warnings on the way; just ignore them and wait for the program to finish.
So use the following code on top of the tutorial source code: or download the
modified code from here: <a href="https://raw.githubusercontent.com/henaras/tensorflow/66bebf4c03e15686f91c692be6efee671107bf32/tensorflow/examples/learn/wide_n_deep_tutorial.py" rel="nofollow">Modified Version</a></p>
<pre><code>from tensorflow.contrib.layers.python.layers.feature_column import _EmbeddingColumn
class _MonkeyEmbeddingColumn(_EmbeddingColumn):
# override the key property
@property
def key(self):
return "{}".format(self)
def monkey_embedding_column(sparse_id_column,
dimension,
combiner="mean",
initializer=None,
ckpt_to_load_from=None,
tensor_name_in_ckpt=None):
return _MonkeyEmbeddingColumn(sparse_id_column, dimension)
</code></pre>
| 0 | 2016-09-06T23:03:06Z | [
"python",
"machine-learning",
"tensorflow",
"python-3.4",
"deep-learning"
] |
Regex for a calculator input | 39,249,894 | <p>I'm trying to create a regular expression to decide if an input is a valid input for my very basic calculator.</p>
<p>Basically </p>
<ul>
<li>nested parentheses are not allowed (meaning '(1+(2+3))' will get rejected)</li>
<li>after a bracket must come a number (or the expression ends)</li>
</ul>
<p>Few examples for valid expressions:</p>
<pre><code>['1+19+2', '(5-06-9)/98*7', '(14/2-393+01)+3', '(00-0*7)', '(3*17-9+5-6)', '(6+1/6)+6/9', '(6/8)/2', '(6/1)', '(08+5+8)-7/19', '(9938/2)']
</code></pre>
<p>Basically what I thought of doing is splitting the problem into:</p>
<ol>
<li>an expression is inside brackets</li>
<li>an expression is not inside a brackets</li>
</ol>
<p>And the regex will accept anything that is <code>1[+-*/]2</code>.</p>
<p>I'm having trouble putting it all together, though...</p>
| -1 | 2016-08-31T12:38:01Z | 39,250,208 | <p>With <code>re.match</code>. The pattern structure is simple: <code>operand (?: operator operand )*$</code> where <code>operand</code> can be a number or an expression between brackets. <em>(Note that with <code>re.match</code> you don't need to anchor the pattern at the start of the string.)</em> Result:</p>
<pre><code> (?:[0-9]+|\([0-9]+(?:[-+*/][0-9]+)*\))(?:[-+*/](?:[0-9]+|\([0-9]+(?:[-+*/][0-9]+)*\)))*$
#^-------------operand----------------^ ^-------------operand----------------^
</code></pre>
<p>It's a bit long but not complicated.</p>
<p>Feel free to edit the pattern to support other kinds of numbers (decimal, relative, sci notation...)</p>
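A quick sanity check of the pattern against a few of the example inputs (a minimal sketch; the pattern string is just the one above with the annotation line removed):

```python
import re

# operand (?: operator operand )*$ -- operand is a number or a bracketed expression
PATTERN = r'(?:[0-9]+|\([0-9]+(?:[-+*/][0-9]+)*\))(?:[-+*/](?:[0-9]+|\([0-9]+(?:[-+*/][0-9]+)*\)))*$'

def is_valid(expr):
    # re.match already anchors at the start; the trailing $ anchors the end
    return re.match(PATTERN, expr) is not None

print(is_valid('1+19+2'))         # True
print(is_valid('(5-06-9)/98*7'))  # True
print(is_valid('(1+(2+3))'))      # False -- nested parentheses
```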
| 1 | 2016-08-31T12:52:45Z | [
"python",
"regex",
"expression",
"calculator"
] |
Turtle freehand drawing | 39,249,934 | <p>I'm working on a simple drawing program that combines Tkinter and Turtle modules. </p>
<p>I would like to add an option so that the user can draw anything using just the mouse, similar to the pen tool in Paint. I tried many things but could not figure out how I can do it. How can I make the turtle draw anything (like the pen tool in Paint) on the canvas using the mouse?</p>
<pre><code>from tkinter import *
import turtle
sc=Tk()
sc.geometry("1000x1000+100+100")
fr4=Frame(sc,height=500,width=600,bd=4,bg="light green",takefocus="",relief=SUNKEN)
fr4.grid(row=2,column=2,sticky=(N,E,W,S))
#Canvas
canvas = Canvas(fr4,width=750, height=750)
canvas.pack()
#Turtle
turtle1=turtle.RawTurtle(canvas)
turtle1.color("blue")
turtle1.shape("turtle")
points=[]
spline=0
tag1="theline"
def point(event):
canvas.create_oval(event.x, event.y, event.x+1, event.y+1, fill="red")
points.append(event.x)
points.append(event.y)
return points
def canxy(event):
print (event.x, event.y)
def graph(event):
global theline
canvas.create_line(points, tags="theline")
def toggle(event):
global spline
if spline == 0:
canvas.itemconfigure(tag1, smooth=1)
spline = 1
elif spline == 1:
canvas.itemconfigure(tag1, smooth=0)
spline = 0
return spline
canvas.bind("<Button-1>", point)
canvas.bind("<Button-3>", graph)
canvas.bind("<Button-2>", toggle)
sc.mainloop()
</code></pre>
| 1 | 2016-08-31T12:39:27Z | 39,278,539 | <p>The following code will let you freehand draw with the turtle. You'll need to integrate with the rest of your code:</p>
<pre><code>import tkinter
import turtle
sc = tkinter.Tk()
sc.geometry("1000x1000+100+100")
fr4 = tkinter.Frame(sc, height=500, width=600, bd=4, bg="light green", takefocus="", relief=tkinter.SUNKEN)
fr4.grid(row=2, column=2, sticky=(tkinter.N, tkinter.E, tkinter.W, tkinter.S))
# Canvas
canvas = tkinter.Canvas(fr4, width=750, height=750)
canvas.pack()
# Turtle
turtle1 = turtle.RawTurtle(canvas)
turtle1.color("blue")
turtle1.shape("turtle")
def drag_handler(x, y):
turtle1.ondrag(None) # disable event inside event handler
turtle1.goto(x, y)
turtle1.ondrag(drag_handler) # reenable event on event handler exit
turtle1.ondrag(drag_handler)
sc.mainloop()
</code></pre>
| 1 | 2016-09-01T18:27:12Z | [
"python",
"turtle-graphics",
"tkinter-canvas"
] |
Scrapy: Looping through search results only returns first item | 39,249,954 | <p>I'm scraping a site by going through the search page, then looping through all results within. However, it only seems to be returning the first result for each page. I also don't think it's hitting the start page's results either.</p>
<p>Secondly, the price is returning as some sort of Unicode (£ symbol) - how can I remove it altogether just leaving the price?</p>
<pre><code> 'regular_price': [u'\xa38.59'],
</code></pre>
<p>Here is the HTML:
<a href="http://pastebin.com/F8Lud0hu" rel="nofollow">http://pastebin.com/F8Lud0hu</a></p>
<p>Here's the spider:</p>
<pre><code>import scrapy
import random
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.selector import Selector
from cdl.items import candleItem
class cdlSpider(CrawlSpider):
name = "cdl"
allowed_domains = ["www.xxxx.co.uk"]
start_urls = ['https://www.xxxx.co.uk/advanced_search_result.php']
rules = [
Rule(LinkExtractor(
allow=['advanced_search_result\.php\?sort=2a&page=\d*']),
callback='parse_listings',
follow=True)
]
def parse_listings(self, response):
sel = Selector(response)
urls = sel.css('a.product_img')
for url in urls:
url = url.xpath('@href').extract()[0]
return scrapy.Request(url,callback=self.parse_item)
def parse_item(self, response):
candle = candleItem()
n = response.css('.prod_info_name h1')
candle['name'] = n.xpath('.//text()').extract()[0]
if response.css('.regular_price'):
candle['regular_price'] = response.css('.regular_price').xpath('.//text()').extract()
else:
candle['was_price'] = response.css('.was_price strong').xpath('.//text()').extract()
candle['now_price'] = response.css('.now_price strong').xpath('.//text()').extract()
candle['referrer'] = response.request.headers.get('Referer', None)
candle['url'] = response.request.url
yield candle
</code></pre>
| 1 | 2016-08-31T12:40:25Z | 39,250,038 | <p>To remove the £, just replace it with an empty string like this:</p>
<pre><code>pricewithpound = u'\xa38.59'
price = pricewithpound.replace(u'\xa3', '')
</code></pre>
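If a numeric price is ultimately wanted, the cleaned string can also be converted to a float (a small illustrative sketch of the same idea):

```python
pricewithpound = u'\xa38.59'  # u'£8.59'
price = float(pricewithpound.replace(u'\xa3', ''))
print(price)  # 8.59
```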
<p>To investigate the scrapy issue, can you please provide the HTML source?</p>
| 1 | 2016-08-31T12:44:40Z | [
"python",
"scrapy"
] |
Scrapy: Looping through search results only returns first item | 39,249,954 | <p>I'm scraping a site by going through the search page, then looping through all results within. However, it only seems to be returning the first result for each page. I also don't think it's hitting the start page's results either.</p>
<p>Secondly, the price is returning as some sort of Unicode (£ symbol) - how can I remove it altogether just leaving the price?</p>
<pre><code> 'regular_price': [u'\xa38.59'],
</code></pre>
<p>Here is the HTML:
<a href="http://pastebin.com/F8Lud0hu" rel="nofollow">http://pastebin.com/F8Lud0hu</a></p>
<p>Here's the spider:</p>
<pre><code>import scrapy
import random
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.selector import Selector
from cdl.items import candleItem
class cdlSpider(CrawlSpider):
name = "cdl"
allowed_domains = ["www.xxxx.co.uk"]
start_urls = ['https://www.xxxx.co.uk/advanced_search_result.php']
rules = [
Rule(LinkExtractor(
allow=['advanced_search_result\.php\?sort=2a&page=\d*']),
callback='parse_listings',
follow=True)
]
def parse_listings(self, response):
sel = Selector(response)
urls = sel.css('a.product_img')
for url in urls:
url = url.xpath('@href').extract()[0]
return scrapy.Request(url,callback=self.parse_item)
def parse_item(self, response):
candle = candleItem()
n = response.css('.prod_info_name h1')
candle['name'] = n.xpath('.//text()').extract()[0]
if response.css('.regular_price'):
candle['regular_price'] = response.css('.regular_price').xpath('.//text()').extract()
else:
candle['was_price'] = response.css('.was_price strong').xpath('.//text()').extract()
candle['now_price'] = response.css('.now_price strong').xpath('.//text()').extract()
candle['referrer'] = response.request.headers.get('Referer', None)
candle['url'] = response.request.url
yield candle
</code></pre>
| 1 | 2016-08-31T12:40:25Z | 39,254,815 | <p>Yes it's returning only the first result because of your <code>parse_listing</code> method (you're returning the first url and you should be yielding it). I would do something like:</p>
<pre><code>def parse_listings(self, response):
for url in response.css('a.product_img::attr(href)').extract():
yield Request(url, callback=self.parse_item)
</code></pre>
<p>In that case I would even do something like:</p>
<pre><code>class CdlspiderSpider(CrawlSpider):
name = 'cdlSpider'
allowed_domains = ['www.xxxx.co.uk']
start_urls = ['https://www.xxxx.co.uk/advanced_search_result.php']
rules = [
Rule(LinkExtractor(allow='advanced_search_result\.php\?sort=2a&page=\d*')),
Rule(LinkExtractor(restrict_css='a.product_img'), callback='parse_item')
]
def parse_item(self, response):
...
if response.css('.regular_price'):
candle['regular_price'] = response.css('.regular_price::text').re_first(r'\d+\.?\d*')
else:
candle['was_price'] = response.css('.was_price strong::text').re_first(r'\d+\.?\d*')
candle['now_price'] = response.css('.now_price strong::text').re_first(r'\d+\.?\d*')
...
return candle
</code></pre>
| 2 | 2016-08-31T16:36:55Z | [
"python",
"scrapy"
] |
python time lags holidays | 39,250,003 | <p>In pandas, I have two data frames. One containing the Holidays of a particular country from <a href="http://www.timeanddate.com/holidays/austria" rel="nofollow">http://www.timeanddate.com/holidays/austria</a> and another one containing a date column. I want to calculate the <code>#days</code> after a holiday.</p>
<pre><code>def compute_date_diff(x, y):
difference = y - x
differenceAsNumber = (difference/ np.timedelta64(1, 'D'))
return differenceAsNumber.astype(int)
for index, row in holidays.iterrows():
secondDF[row['name']+ '_daysAfter'] = secondDF.dateColumn.apply(compute_date_diff, args=(row.day,))
</code></pre>
<p>However, this</p>
<ul>
<li>calculates the wrong difference, e.g. greater than a year, in case <code>holidays</code> contains data for more than a year.</li>
<li>is pretty slow.</li>
</ul>
<p>How could I <strong>fix the flaw</strong> and <strong>increase performance</strong>? Is there a parallel apply? Or what about <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#holidays-holiday-calendars" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/timeseries.html#holidays-holiday-calendars</a>
As I am new to pandas, I am unsure how to obtain the current date/index of the date object whilst iterating in apply. As far as I know, I cannot loop the other way round, e.g. over all my rows in <code>secondDF</code>, as it was impossible for me to generate feature columns whilst iterating via <code>apply</code>.</p>
| 0 | 2016-08-31T12:42:44Z | 39,250,999 | <p>To do this, join both data frames using a common column and then try this code</p>
<pre><code>import pandas
import numpy as np
df = pandas.DataFrame(columns=['to','fr','ans'])
df.to = [pandas.Timestamp('2014-01-24'), pandas.Timestamp('2014-01-27'), pandas.Timestamp('2014-01-23')]
df.fr = [pandas.Timestamp('2014-01-26'), pandas.Timestamp('2014-01-27'), pandas.Timestamp('2014-01-24')]
df['ans']=(df.fr-df.to) /np.timedelta64(1, 'D')
print df
</code></pre>
<p>output </p>
<pre><code> to fr ans
0 2014-01-24 2014-01-26 2.0
1 2014-01-27 2014-01-27 0.0
2 2014-01-23 2014-01-24 1.0
</code></pre>
| 0 | 2016-08-31T13:28:52Z | [
"python",
"date",
"pandas",
"difference"
] |
python time lags holidays | 39,250,003 | <p>In pandas, I have two data frames. One containing the Holidays of a particular country from <a href="http://www.timeanddate.com/holidays/austria" rel="nofollow">http://www.timeanddate.com/holidays/austria</a> and another one containing a date column. I want to calculate the <code>#days</code> after a holiday.</p>
<pre><code>def compute_date_diff(x, y):
difference = y - x
differenceAsNumber = (difference/ np.timedelta64(1, 'D'))
return differenceAsNumber.astype(int)
for index, row in holidays.iterrows():
secondDF[row['name']+ '_daysAfter'] = secondDF.dateColumn.apply(compute_date_diff, args=(row.day,))
</code></pre>
<p>However, this</p>
<ul>
<li>calculates the wrong difference, e.g. greater than a year, in case <code>holidays</code> contains data for more than a year.</li>
<li>is pretty slow.</li>
</ul>
<p>How could I <strong>fix the flaw</strong> and <strong>increase performance</strong>? Is there a parallel apply? Or what about <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#holidays-holiday-calendars" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/timeseries.html#holidays-holiday-calendars</a>
As I am new to pandas, I am unsure how to obtain the current date/index of the date object whilst iterating in apply. As far as I know, I cannot loop the other way round, e.g. over all my rows in <code>secondDF</code>, as it was impossible for me to generate feature columns whilst iterating via <code>apply</code>.</p>
| 0 | 2016-08-31T12:42:44Z | 39,263,368 | <p>I settled for something entirely different:
Now only the number of days relative to the nearest holiday will be calculated.</p>
<p>my function:</p>
<pre><code>def get_nearest_holiday(holidays, pivot):
    return min(holidays, key=lambda x: abs(x - pivot))
# this needs to be converted to an int, but at least the nearest holiday is found efficiently
</code></pre>
<p>is called as a lambda expression on a per-row basis</p>
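A sketch of how this could be applied per row with pandas (the column and variable names here are illustrative, not from the original code):

```python
import pandas as pd

def get_nearest_holiday(holidays, pivot):
    # pick the holiday closest in time to the given date
    return min(holidays, key=lambda x: abs(x - pivot))

holidays = pd.to_datetime(['2016-01-01', '2016-12-25'])
dates = pd.Series(pd.to_datetime(['2016-01-10', '2016-12-20']))

nearest = dates.apply(lambda d: get_nearest_holiday(holidays, d))
days_diff = (dates - nearest).dt.days  # negative if the nearest holiday lies in the future
print(days_diff.tolist())  # [9, -5]
```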
| 0 | 2016-09-01T05:32:10Z | [
"python",
"date",
"pandas",
"difference"
] |
Format date string in a file using Python | 39,250,152 | <p>I get csv files from my client which contain a variable number of columns. Some of these columns may contain date strings, but their order is not defined, for example:</p>
<pre><code>column1str|column2dt|column3str|column4int|column5int|column6dt
ab c1|10/20/2010|1234|10.02|530.55|30-01-2011
ab c2|10/10/2010|12346|11.03|531|05-05-2012
abc3|10/10/2010|122|12|532.44|11-09-2008
abc4|10/11/2010|110|13|533|01-11-2013
abc5|10/10/2010|11111|14|534|30-02-2012
</code></pre>
<p>I get the format of date string from client as input, in the above input there are two formats of date string <code>MM/dd/yyyy</code> and <code>dd-MM-yyyy</code>. </p>
<p>I want to convert all the dates to a particular format, <code>dd-MM-yyyyTHH:mmZ</code>, in the file itself. I know how to convert a date string to the desired format when the input date format is given. The challenge I am facing is how I can replace the date string at a particular column in the file.</p>
| 2 | 2016-08-31T12:49:57Z | 39,250,454 | <p>try this:</p>
<pre><code>import pandas as pd
data=pd.read_csv('so.txt',delimiter='|',parse_dates=['column2dt','column6dt'])
</code></pre>
| 2 | 2016-08-31T13:03:27Z | [
"python",
"datetime"
] |
Format date string in a file using Python | 39,250,152 | <p>I get csv files from my client which contain a variable number of columns. Some of these columns may contain date strings, but their order is not defined, for example:</p>
<pre><code>column1str|column2dt|column3str|column4int|column5int|column6dt
ab c1|10/20/2010|1234|10.02|530.55|30-01-2011
ab c2|10/10/2010|12346|11.03|531|05-05-2012
abc3|10/10/2010|122|12|532.44|11-09-2008
abc4|10/11/2010|110|13|533|01-11-2013
abc5|10/10/2010|11111|14|534|30-02-2012
</code></pre>
<p>I get the format of date string from client as input, in the above input there are two formats of date string <code>MM/dd/yyyy</code> and <code>dd-MM-yyyy</code>. </p>
<p>I want to convert all the dates to a particular format, <code>dd-MM-yyyyTHH:mmZ</code>, in the file itself. I know how to convert a date string to the desired format when the input date format is given. The challenge I am facing is how I can replace the date string at a particular column in the file.</p>
| 2 | 2016-08-31T12:49:57Z | 39,250,948 | <p>First, read this reference for Python <code>datetime.strptime()</code> format strings:
<a href="https://docs.python.org/3.5/library/datetime.html#strftime-strptime-behavior" rel="nofollow">https://docs.python.org/3.5/library/datetime.html#strftime-strptime-behavior</a></p>
<p>And this for CSV parsing: <a href="https://docs.python.org/3.5/library/csv.html" rel="nofollow">https://docs.python.org/3.5/library/csv.html</a></p>
<p>My answer will use standard Python only. As a valid alternative you could use a specialized data analysis library such as pandas as already suggested.</p>
<p>Your <code>MM/dd/yyyy</code> would be <code>%m/%d/%Y</code> in strptime format (which is actually the C standard format), and <code>dd-MM-yyyy</code> would be <code>%d-%m-%Y</code>.</p>
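As a quick check of those two format strings against the desired output format (the sample values come from the question's file):

```python
from datetime import datetime

OUTPUT_DATE_FORMAT = '%d-%m-%YT%H:%MZ'

d1 = datetime.strptime('10/20/2010', '%m/%d/%Y')  # MM/dd/yyyy column
d2 = datetime.strptime('30-01-2011', '%d-%m-%Y')  # dd-MM-yyyy column

print(d1.strftime(OUTPUT_DATE_FORMAT))  # 20-10-2010T00:00Z
print(d2.strftime(OUTPUT_DATE_FORMAT))  # 30-01-2011T00:00Z
```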
<p>Now I'm not sure if you want the dates to be "autodiscovered" by your python script or if you want to be able to specify the appropriate columns and formats by hand. So I will suggest a script for both:</p>
<p>This will convert all dates in the columns names and input formats specified in the INPUT_DATE_FORMATS map:</p>
<pre><code>from datetime import datetime
import csv
# file that will be read as input
INPUT_FILENAME = 'yourfile.csv'
# file that will be produced as output (with properly formatted dates)
OUTPUT_FILENAME = 'newfile.csv'
INPUT_DATE_FORMATS = {'column2dt': '%m/%d/%Y', 'column6dt': '%d-%m-%Y'}
OUTPUT_DATE_FORMAT = '%d-%m-%YT%H:%MZ'
with open(INPUT_FILENAME, 'rt') as finput:
reader = csv.DictReader(finput, delimiter='|')
with open(OUTPUT_FILENAME, 'wt') as foutput:
writer = csv.DictWriter(foutput, fieldnames=reader.fieldnames, delimiter='|') # you can change delimiter if you want
for row in reader: # read each entry one by one
for header, value in row.items(): # read each field one by one
date_format = INPUT_DATE_FORMATS.get(header)
if date_format:
parsed_date = datetime.strptime(value, date_format)
row[header] = parsed_date.strftime(OUTPUT_DATE_FORMAT)
writer.writerow(row)
</code></pre>
<p>This will try yo parse each field in the input file with all formats specificied in INPUT_DATE_FORMATS and will write a new file with all those dates formatted with OUTPUT_DATE_FORMAT:</p>
<pre><code>from datetime import datetime
import csv
# file that will be read as input
INPUT_FILENAME = 'yourfile.csv'
# file that will be produced as output (with properly formatted dates)
OUTPUT_FILENAME = 'newfile.csv'
INPUT_DATE_FORMATS = ('%m/%d/%Y', '%d-%m-%Y')
OUTPUT_DATE_FORMAT = '%d-%m-%YT%H:%MZ'
with open(INPUT_FILENAME, 'rt') as finput:
reader = csv.DictReader(finput, delimiter='|')
with open(OUTPUT_FILENAME, 'wt') as foutput:
writer = csv.DictWriter(foutput, fieldnames=reader.fieldnames, delimiter='|') # you can change delimiter if you want
for row in reader: # read each entry one by one
for header, value in row.items(): # read each field one by one
for date_format in INPUT_DATE_FORMATS: # try to parse a date
try:
parsed_date = datetime.strptime(value, date_format)
row[header] = parsed_date.strftime(OUTPUT_DATE_FORMAT)
except ValueError:
pass
writer.writerow(row)
</code></pre>
| 1 | 2016-08-31T13:26:35Z | [
"python",
"datetime"
] |
Overlay two images without losing the intensity of the color using OpenCV and Python | 39,250,226 | <p>How can I overlay two images without losing the intensity of the colors of the two images?</p>
<p>I have Image1 and Image2:</p>
<ol>
<li><a href="http://i.stack.imgur.com/kidOJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/kidOJ.png" alt="enter image description here"></a> 2. <a href="http://i.stack.imgur.com/4hCe5.png" rel="nofollow"><img src="http://i.stack.imgur.com/4hCe5.png" alt="enter image description here"></a></li>
</ol>
<p>I tried using 0.5 for alpha and beta, but it gives me a merged image with half the color intensity:</p>
<pre><code>dst = cv2.addWeighted(img1,0.5,img2,0.5,0)
</code></pre>
<p><a href="http://i.stack.imgur.com/7dLsU.png" rel="nofollow"><img src="http://i.stack.imgur.com/7dLsU.png" alt="enter image description here"></a></p>
<p>but when i try using 1 to both alpha and beta channels, it only gives me the merged regions.</p>
<pre><code> dst = cv2.addWeighted(img1,1,img2,1,0)
</code></pre>
<p><a href="http://i.stack.imgur.com/Q2BNO.png" rel="nofollow"><img src="http://i.stack.imgur.com/Q2BNO.png" alt="enter image description here"></a></p>
<p>I have to get an output that looks like this.</p>
<p><a href="http://i.stack.imgur.com/lKu57.png" rel="nofollow"><img src="http://i.stack.imgur.com/lKu57.png" alt="enter image description here"></a></p>
| 1 | 2016-08-31T12:53:47Z | 39,251,633 | <p>Actually the <code>dst</code> is created according the following formula:</p>
<pre><code>dst = src1*alpha + src2*beta + gamma
</code></pre>
<p>When you multiply your image, which is a 3D array, by alpha, you are multiplying all of its items. For example, for a blue pixel you have <code>[255, 0, 0]</code> and for a white one <code>[255, 255, 255]</code>. When you add the matrices together, if you want to see the original color of your images you have to convert the white pixels to black, whose values are all zero. So you can simply find the white pixels with advanced numpy indexing, <code>img1[img1[:, :, 1:].all(axis=-1)]</code>, and then convert them to zero.</p>
<pre><code>import cv2
img1 = cv2.imread('img1.png')
img2 = cv2.imread('img2.png')
img1[img1[:, :, 1:].all(axis=-1)] = 0
img2[img2[:, :, 1:].all(axis=-1)] = 0
dst = cv2.addWeighted(img1, 1, img2, 1, 0)
cv2.imshow('sas', dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>Result:</p>
<p><a href="http://i.stack.imgur.com/wWqUf.png" rel="nofollow"><img src="http://i.stack.imgur.com/wWqUf.png" alt="enter image description here"></a></p>
| 1 | 2016-08-31T13:55:12Z | [
"python",
"image",
"opencv"
] |
python matrix creation without using numpy or anything and max sum of rows and columns | 39,250,315 | <p>Want to create a matrix like below in python</p>
<pre><code>[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
</code></pre>
<p>but I'm not sure how to do it. I tried a list comprehension like below:</p>
<pre><code>[[y+x for y in range(4)] for x in range(4)]
</code></pre>
<p>which gives output like below</p>
<pre><code>[[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
</code></pre>
<p>I also want to print the row with the maximum sum and the column with the maximum sum.</p>
<p>Thanks in advance </p>
| 2 | 2016-08-31T12:57:28Z | 39,250,378 | <p>The range function accepts 3 arguments <code>start</code>, <code>end</code> and <code>step</code>. You can use one range with step 4 and another one for creating the nested lists using the outer range.</p>
<pre><code>>>> [[i for i in range(i, i+4)] for i in range(1, 17, 4)]
[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
</code></pre>
<p>here is another way with one <code>range</code>:</p>
<pre><code>>>> main_range = range(1, 5)
>>> [[i*j + (i-1)*(4-j) for j in main_range] for i in main_range]
[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
</code></pre>
<p>And here is a Numpythonic approach:</p>
<pre><code>>>> n = 4
>>> np.split(np.arange(1, n*n + 1), np.arange(n ,n*n, n))
[array([1, 2, 3, 4]), array([5, 6, 7, 8]), array([ 9, 10, 11, 12]), array([13, 14, 15, 16])]
</code></pre>
| 3 | 2016-08-31T13:00:07Z | [
"python",
"matrix"
] |
python matrix creation without using numpy or anything and max sum of rows and columns | 39,250,315 | <p>Want to create a matrix like below in python</p>
<pre><code>[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
</code></pre>
<p>but I'm not sure how to do it. I tried a list comprehension like below:</p>
<pre><code>[[y+x for y in range(4)] for x in range(4)]
</code></pre>
<p>which gives output like below</p>
<pre><code>[[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
</code></pre>
<p>I also want to print the row with the maximum sum and the column with the maximum sum.</p>
<p>Thanks in advance </p>
| 2 | 2016-08-31T12:57:28Z | 39,250,395 | <pre><code>[[x * 4 + y + 1 for y in range(4)] for x in range(4)]
</code></pre>
<p>which is equivalent to:</p>
<pre><code>[[(x << 2) + y for y in range(1, 5)] for x in range(4)]
</code></pre>
<p>Here's a little benchmark:</p>
<pre><code>import timeit
def f1():
return [[x * 4 + y + 1 for y in range(4)] for x in range(4)]
def f2():
return [[(x << 2) + y for y in range(1, 5)] for x in range(4)]
def f3():
a = range(1, 5)
return [[(x << 2) + y for y in a] for x in range(4)]
N = 5000000
print timeit.timeit('f1()', setup='from __main__ import f1', number=N)
print timeit.timeit('f2()', setup='from __main__ import f2', number=N)
print timeit.timeit('f3()', setup='from __main__ import f3', number=N)
# 13.683984791
# 13.4605276559
# 9.65608339037
# [Finished in 36.9s]
</code></pre>
<p>Where we can conclude both f1 & f2 methods give almost the same performance. So it'd be a good choice to calculate the inner <code>range(1,5)</code> only once like f3</p>
| 4 | 2016-08-31T13:00:56Z | [
"python",
"matrix"
] |
python matrix creation without using numpy or anything and max sum of rows and columns | 39,250,315 | <p>Want to create a matrix like below in python</p>
<pre><code>[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
</code></pre>
<p>but I'm not sure how to do it. I tried a list comprehension like below:</p>
<pre><code>[[y+x for y in range(4)] for x in range(4)]
</code></pre>
<p>which gives output like below</p>
<pre><code>[[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
</code></pre>
<p>I also want to print the row with the maximum sum and the column with the maximum sum.</p>
<p>Thanks in advance </p>
| 2 | 2016-08-31T12:57:28Z | 39,250,552 | <p>try this,</p>
<pre><code>In [1]: [range(1,17)[n:n+4] for n in range(0, len(range(1,17)), 4)]
Out[1]: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
</code></pre>
| 1 | 2016-08-31T13:08:34Z | [
"python",
"matrix"
] |
python matrix creation without using numpy or anything and max sum of rows and columns | 39,250,315 | <p>Want to create a matrix like below in python</p>
<pre><code>[[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
</code></pre>
<p>but I'm not sure how to do it. I tried a list comprehension like below:</p>
<pre><code>[[y+x for y in range(4)] for x in range(4)]
</code></pre>
<p>which gives output like below</p>
<pre><code>[[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
</code></pre>
<p>I also want to print the row with the maximum sum and the column with the maximum sum.</p>
<p>Thanks in advance </p>
| 2 | 2016-08-31T12:57:28Z | 39,251,076 | <p>This might be a quick fix</p>
<pre><code>[([x-3,x-2,x-1,x]) for x in range(1,17) if x%4 ==0]
</code></pre>
<p>But what do you mean by the maximum sum of a column and a row?</p>
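If "maximum sum" means the row and the column whose elements add up to the largest total, a plain-Python sketch (no numpy) could look like this:

```python
matrix = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]

max_row = max(matrix, key=sum)        # row with the largest sum
max_col = max(zip(*matrix), key=sum)  # zip(*matrix) yields the columns

print(max_row)        # [13, 14, 15, 16]
print(list(max_col))  # [4, 8, 12, 16]
```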
| 0 | 2016-08-31T13:31:52Z | [
"python",
"matrix"
] |
C-numpy: Setting the data type for fixed-width strings? | 39,250,355 | <p>I'm working with some data that is represented in C as strings. I'd like to return a numpy array based on this data. However, I'd like the array to have dtype='SX' where X is a number determined at runtime.</p>
<p>So far I am copying the data in C like so:</p>
<pre><code> buffer_len_alt = (MAX_WIDTH)*(MAX_NUMBER_OF_ITEMS);
output_buffer = (char *) calloc(sizeof(char), buffer_len_alt);
column = PyArray_SimpleNewFromData(1, &buffer_len_alt, NPY_BYTE, output_buffer);
if (column == NULL){
return (PyObject *) NULL;
}
/* Put strings of length MAX_WIDTH in output_buffer */
return column;
</code></pre>
<p>As you can see, I am telling PyArray_SimpleNewFromData, that 'column' is a 1D array of bytes, so when the pointer we called 'column' becomes the python object 'col' we see this:</p>
<pre><code>print(col)
>> array([48, 0, 0, 50, 48, 48, 48, 0, 0, 50, 48, 48, 50, 48, 48, 48, 0, 0], dtype=int8)
print(col.view('S3'))
>> array([b'0', b'200', b'0', b'200', b'200', b'0'], dtype='|S3')
</code></pre>
<p>The 'b' prefix tells me they are still interpreted as byte-arrays, but I want to instead have the strings "0", "200", etc. In this example the strings are digits but that is not always the case.</p>
<p>I know I can individually call b'200'.decode(format) to turn each individual bytes-object into a string, but the whole point of writing a C extension to numpy was to get all the loops running in C. The old chararray interface (now deprecated?) also provided an array.decode method that would decode every sequence in an array, but again the objects returned by the numpy-C interface are just plain ndarrays.</p>
<p><strong>Question</strong>
What typenum should I pass to SimpleNewFromData instead of NPY_BYTE so that python receives the array with the correct type information (e.g. dtype='S5') ? </p>
<p>Alternatively, if no typenum achieves this with SimpleNewFromData, then perhaps I need to use SimpleNewFromDescr, but I don't know how to set the PyArray_Descr parameters correctly, and the documentation is really spotty on this, so I'd greatly appreciate any form of guidance. </p>
| 0 | 2016-08-31T12:59:07Z | 39,254,725 | <p>I'm not familiar with the <code>C</code> part of your code, but it appears that you are confusing the representation of byte strings and unicode strings. The <code>b'200'</code> display indicates that you are working in Py3, where unicode the default string type.</p>
<p>In a Py3 session:</p>
<p>The raw bytes:</p>
<pre><code>In [482]: x=np.array([48, 0, 0, 50, 48, 48, 48, 0, 0, 50, 48, 48, 50, 48, 48, 48, 0, 0], dtype=np.int8)
</code></pre>
<p>viewed a 3 byte strings. In a PY2 session the <code>b</code> would not be used. But the view is the same.</p>
<pre><code>In [483]: x.view('S3')
Out[483]:
array([b'0', b'200', b'0', b'200', b'200', b'0'],
dtype='|S3')
</code></pre>
<p>A <code>view</code> does not change the data buffer, but <code>astype</code> can convert the elements as needed, and make a new array with a new data buffer.</p>
<pre><code>In [484]: x.view('S3').astype('U3')
Out[484]:
array(['0', '200', '0', '200', '200', '0'],
dtype='<U3')
In [485]: x.view('S3').astype('U3').view(np.uint8)
Out[485]:
array([48, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 50, 0, 0, 0, 48,
0, 0, 0, 48, 0, 0, 0, 48, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 50, 0, 0, 0, 48, 0, 0, 0, 48, 0, 0, 0, 50, 0, 0,
0, 48, 0, 0, 0, 48, 0, 0, 0, 48, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], dtype=uint8)
</code></pre>
<p>The unicode version has 72 bytes in its buffer, 4 bytes per character.</p>
<p><code>np.char</code> is still around, but mostly to apply string methods to <code>S</code> and <code>U</code> type arrays. <code>np.char.decode</code> does the same thing as the <code>astype</code>.</p>
<pre><code>In [489]: np.char.decode(x.view('S3'))
Out[489]:
array(['0', '200', '0', '200', '200', '0'],
dtype='<U3')
</code></pre>
| 1 | 2016-08-31T16:30:48Z | [
"python",
"numpy",
"python-c-api"
] |
Count occurences in DataFrame | 39,250,504 | <p>I've a Dataframe in this format:</p>
<pre><code>| Department | Person | Power | ... |
|------------|--------|--------|-----|
| ABC | 1234 | 75 | ... |
| ABC | 1235 | 25 | ... |
| DEF | 1236 | 50 | ... |
| DEF | 1237 | 100 | ... |
| DEF | 1238 | 25 | ... |
| DEF | 1239 | 50 | ... |
</code></pre>
<p>What I now want to get is the number of occurrences of each value in the power column. How can I get this from my DataFrame?</p>
<pre><code>| Department | 100 | 75 | 50 | 25 |
|------------|-----|-----|-----|-----|
| ABC | 0 | 1 | 0 | 1 |
| DEF | 1 | 0 | 2 | 1 |
</code></pre>
| 2 | 2016-08-31T13:05:44Z | 39,250,558 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>value_counts</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sort_index.html" rel="nofollow"><code>sort_index</code></a>, then generate <code>DataFrame</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_frame.html" rel="nofollow"><code>to_frame</code></a> and last transpose by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow"><code>T</code></a>:</p>
<pre><code>print (df.Power.value_counts().sort_index(ascending=False).to_frame().T)
100 75 50 25
Power 1 1 2 2
</code></pre>
<p>EDIT by comment:</p>
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>crosstab</code></a>:</p>
<pre><code>print (pd.crosstab(df.Department, df.Power).sort_index(axis=1, ascending=False))
Power 100 75 50 25
Department
ABC 0 1 0 1
DEF 1 0 2 1
</code></pre>
<p>Faster another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>print (df.groupby(['Department','Power'])
.size()
.unstack(fill_value=0)
.sort_index(axis=1, ascending=False))
Power 100 75 50 25
Department
ABC 0 1 0 1
DEF 1 0 2 1
</code></pre>
<p>If need <code>groupby</code> by columns <code>Department</code> and <code>Person</code>, add column <code>Person</code> to <code>groupby</code> to second position (thank you <a href="http://stackoverflow.com/questions/39250504/count-occurences-in-dataframe/39250558?noredirect=1#comment65838428_39250558">piRSquared</a>):</p>
<pre><code>print (df.groupby(['Department','Person', 'Power'])
.size()
.unstack(fill_value=0)
.sort_index(axis=1, ascending=False))
Power 100 75 50 25
Department Person
ABC 1234 0 1 0 0
1235 0 0 0 1
DEF 1236 0 0 1 0
1237 1 0 0 0
1238 0 0 0 1
1239 0 0 1 0
</code></pre>
<p>EDIT1 by comment:</p>
<p>If need add another missing values, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a>:</p>
<pre><code>print (df.groupby(['Department','Power'])
.size()
.unstack(fill_value=0)
.reindex(columns=[100,75,50,25,0], fill_value=0))
Power 100 75 50 25 0
Department
ABC 0 1 0 1 0
DEF 1 0 2 1 0
</code></pre>
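<p>For a quick sanity check without pandas, the same cross-tabulation can be sketched with only the standard library's <code>collections.Counter</code> (the sample rows below are transcribed from the question's table):</p>

```python
from collections import Counter

# (department, power) pairs from the question's example table
rows = [("ABC", 75), ("ABC", 25), ("DEF", 50),
        ("DEF", 100), ("DEF", 25), ("DEF", 50)]

counts = Counter(rows)  # maps (department, power) -> number of occurrences
powers = sorted({p for _, p in rows}, reverse=True)   # [100, 75, 50, 25]
for dept in sorted({d for d, _ in rows}):
    print(dept, [counts[(dept, p)] for p in powers])
# ABC [0, 1, 0, 1]
# DEF [1, 0, 2, 1]
```

<p>This is only illustrative; for real DataFrames the <code>crosstab</code>/<code>groupby</code> solutions are the idiomatic route.</p>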
| 3 | 2016-08-31T13:08:46Z | [
"python",
"pandas",
"pivot-table",
"reshape",
"crosstab"
] |
Count occurrences in DataFrame | 39,250,504 | <p>I have a DataFrame in this format:</p>
<pre><code>| Department | Person | Power | ... |
|------------|--------|--------|-----|
| ABC | 1234 | 75 | ... |
| ABC | 1235 | 25 | ... |
| DEF | 1236 | 50 | ... |
| DEF | 1237 | 100 | ... |
| DEF | 1238 | 25 | ... |
| DEF | 1239 | 50 | ... |
</code></pre>
<p>What I now want to get is the count of occurrences for each value in the <code>Power</code> column. How can I get this from my DataFrame?</p>
<pre><code>| Department | 100 | 75 | 50 | 25 |
|------------|-----|-----|-----|-----|
| ABC | 0 | 1 | 0 | 1 |
| DEF | 1 | 0 | 2 | 1 |
</code></pre>
| 2 | 2016-08-31T13:05:44Z | 39,251,156 | <p>or it can be done this way:</p>
<pre><code>>>> df.groupby(['Department','Power']).count().unstack().fillna(0)
Person
Power 25 50 75 100
Department
ABC 1.0 0.0 1.0 0.0
DEF 1.0 2.0 0.0 1.0
</code></pre>
| 1 | 2016-08-31T13:35:02Z | [
"python",
"pandas",
"pivot-table",
"reshape",
"crosstab"
] |
Python: find smallest missing positive integer in ordered list | 39,250,520 | <p>I need to find the first missing number in a list. If there is no number missing, the next number should be the last +1.</p>
<p>It should first check to see if the first number is > 1, and if so then the new number should be 1.</p>
<p>Here is what I tried. The problem is here: <code>if next_value - items > 1:</code>
results in an error because at the end and in the beginning I have a <code>None</code>. </p>
<pre><code>list = [1,2,5]
vlans3=list
for items in vlans3:
if items in vlans3:
index = vlans3.index(items)
previous_value = vlans3[index-1] if index -1 > -1 else None
next_value = vlans3[index+1] if index + 1 < len(vlans3) else None
first = vlans3[0]
last = vlans3[-1]
#print ("index: ", index)
print ("prev item:", previous_value)
print ("-cur item:", items)
print ("nxt item:", next_value)
#print ("_free: ", _free)
#print ("...")
if next_value - items > 1:
_free = previous_value + 1
print ("free: ",_free)
break
print ("**************")
print ("first item:", first)
print ("last item:", last)
print ("**************")
</code></pre>
<hr>
<p>Another method:</p>
<pre><code>L = vlans3
free = ([x + 1 for x, y in zip(L[:-1], L[1:]) if y - x > 1][0])
</code></pre>
<p>results in a correct number if there is a gap between the numbers, but if no space left error occurs: <code>IndexError: list index out of range</code>. However I need to specify somehow that if there is no free space it should give a new number (last +1). But with the below code it gives an error and I do not know why.</p>
<pre><code>if free = []:
print ("no free")
else:
print ("free: ", free)
</code></pre>
| 0 | 2016-08-31T13:07:00Z | 39,251,396 | <p>First avoid using reserved word <code>list</code> for variable.
Second use try:except to quickly and neatly avoid this kind of issues.</p>
<pre><code>def free(l):
if l == []: return 0
if l[0] > 1: return 1
if l[-1] - l[0] + 1 == len(l): return l[-1] + 1
for i in range(len(l)):
try:
if l[i+1] - l[i] > 1: break
except IndexError:
break
return l[i] + 1
</code></pre>
| 0 | 2016-08-31T13:44:41Z | [
"python",
"python-3.x"
] |
Python: find smallest missing positive integer in ordered list | 39,250,520 | <p>I need to find the first missing number in a list. If there is no number missing, the next number should be the last +1.</p>
<p>It should first check to see if the first number is > 1, and if so then the new number should be 1.</p>
<p>Here is what I tried. The problem is here: <code>if next_value - items > 1:</code>
results in an error because at the end and in the beginning I have a <code>None</code>. </p>
<pre><code>list = [1,2,5]
vlans3=list
for items in vlans3:
if items in vlans3:
index = vlans3.index(items)
previous_value = vlans3[index-1] if index -1 > -1 else None
next_value = vlans3[index+1] if index + 1 < len(vlans3) else None
first = vlans3[0]
last = vlans3[-1]
#print ("index: ", index)
print ("prev item:", previous_value)
print ("-cur item:", items)
print ("nxt item:", next_value)
#print ("_free: ", _free)
#print ("...")
if next_value - items > 1:
_free = previous_value + 1
print ("free: ",_free)
break
print ("**************")
print ("first item:", first)
print ("last item:", last)
print ("**************")
</code></pre>
<hr>
<p>Another method:</p>
<pre><code>L = vlans3
free = ([x + 1 for x, y in zip(L[:-1], L[1:]) if y - x > 1][0])
</code></pre>
<p>results in a correct number if there is a gap between the numbers, but if no space left error occurs: <code>IndexError: list index out of range</code>. However I need to specify somehow that if there is no free space it should give a new number (last +1). But with the below code it gives an error and I do not know why.</p>
<pre><code>if free = []:
print ("no free")
else:
print ("free: ", free)
</code></pre>
| 0 | 2016-08-31T13:07:00Z | 39,251,409 | <p>To get the smallest integer that is not a member of <code>vlans3</code>:</p>
<pre><code>ints_list = range(min(vlans3), max(vlans3) + 1)
missing_list = [x for x in ints_list if x not in vlans3]
first_missing = min(missing_list)
</code></pre>
<p>However you want to return 1 if the smallest value in your list is greater than 1, and the last value + 1 if there are no missing values, so this becomes:</p>
<pre><code>ints_list = [1] + list(range(min(vlans3), max(vlans3) + 2))
missing_list = [x for x in ints_list if x not in vlans3]
first_missing = min(missing_list)
</code></pre>
| 1 | 2016-08-31T13:45:01Z | [
"python",
"python-3.x"
] |
Multi-Label Classifier in Tensorflow | 39,250,828 | <p>I want to develop a multi-label classifier with TensorFlow; by that I mean there are multiple labels, each containing multiple classes. To illustrate, you can imagine a situation like: </p>
<ul>
<li>label-1 classes: lighting raining, raining, partial raining, no raining </li>
<li>label-2 classes: sunny, partly cloudy, cloudy, very cloudy. </li>
</ul>
<p>I want to classify these two labels with a neural network. For now, I used a different class label for every (label-1, label-2) pair, which means I have 4 x 4 = 16 different labels. </p>
<p>By training my model with </p>
<pre><code># current loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))
# prediction is softmaxed
loss = cross_entropy + regul * schema['regul_ratio'] # regul things is for regularization
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
</code></pre>
<p>However, I think that multi-label training would work better in this situation. </p>
<ul>
<li>My features will be [n_samples, n_features]</li>
<li>My labels will be [n_samples, n_classes, 2]</li>
</ul>
<p>n_samples of [x1, x2, x3, x4 ...] # features</p>
<p>n_samples of [[0, 0, 0, 1], [0, 0, 1, 0]] # no raining and cloudy </p>
<p>How can I make a softmax probability distribution predictor with TensorFlow? Is there any working example for a multi-label problem like this one? What will my loss tensor look like?</p>
| 1 | 2016-08-31T13:21:33Z | 39,254,757 | <p>Why not just have your network produce two different outputs?</p>
<p>network -> prediction1 and prediction2</p>
<p>where prediction1 and prediction2 are both [#,#,#,#] but what I describe below works even if they're different sizes.</p>
<p>Then just run </p>
<pre><code>loss1 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction1, labels_1))
loss2 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction2, labels_2))
loss = loss1 + loss2
</code></pre>
| 2 | 2016-08-31T16:32:52Z | [
"python",
"machine-learning",
"tensorflow",
"classification",
"multilabel-classification"
] |
Python change xml attribute | 39,250,857 | <p>Python 2
Is it possible to change an XML file using Python when it contains</p>
<pre><code><Label name="qotl_type_label" position="910,980" font="headline_light" />
</code></pre>
<p>Can I search by the <code>name</code> attribute and then change <code>position</code>?</p>
| 0 | 2016-08-31T13:22:46Z | 39,250,940 | <p>You can use the built-in <a href="https://docs.python.org/2/library/xml.etree.elementtree.html" rel="nofollow"><code>xml.etree.ElementTree</code></a> module to parse the XML, locate the <code>Label</code> element and change the <code>position</code> attribute via <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.attrib" rel="nofollow"><code>.attrib</code></a> property:</p>
<pre><code>>>> import xml.etree.ElementTree as ET
>>>
>>> s = '<root><Label name="qotl_type_label" position="910,980" font="headline_light" /></root>'
>>>
>>> root = ET.fromstring(s)
>>> label = root.find(".//Label[@name='qotl_type_label']")
>>> label.attrib['position'] = 'new,position'
>>> ET.tostring(root)
'<root><Label font="headline_light" name="qotl_type_label" position="new,position" /></root>'
</code></pre>
<p>Note that the order of attributes is not preserved, attributes are unordered <em>by definition</em>.</p>
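<p>If the change also needs to be written back to disk (the question implies editing a file), a possible round trip looks like the sketch below; the in-memory buffer stands in for a real file, which is an assumption for illustration:</p>

```python
import xml.etree.ElementTree as ET
import io

# parse from a string (stand-in for tree = ET.parse("layout.xml"))
xml_text = '<root><Label name="qotl_type_label" position="910,980" font="headline_light"/></root>'
tree = ET.ElementTree(ET.fromstring(xml_text))

label = tree.getroot().find(".//Label[@name='qotl_type_label']")
label.set('position', '100,200')   # same effect as label.attrib['position'] = ...

out = io.BytesIO()                 # with a real file: tree.write("layout.xml")
tree.write(out)
print(out.getvalue().decode())     # serialized output now contains position="100,200"
```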
| 1 | 2016-08-31T13:26:17Z | [
"python",
"python-2.7"
] |
Think Python 4.1: Why does printing bob give me an object rather than an instance? | 39,250,869 | <p>All other answers on stackoverflow about instances vs. objects refer to classes, which the book hasn't covered yet.
This is the code:</p>
<pre><code>world= TurtleWorld()
bob= Turtle()
print bob
wait_for_user()
</code></pre>
<p>and this is the result:</p>
<p>It shows the turtle in a box, and prints</p>
<pre><code><swampy.TurtleWorld.Turtle object at 0x03C3D730>
</code></pre>
<p>The book says it should say 'instance', not object. What's the difference, and what am I doing wrong?</p>
| 0 | 2016-08-31T13:23:31Z | 39,251,082 | <p>You have an instance; an instance is a <em>kind</em> of Python object. If you didn't have an instance, you'd see something different:</p>
<pre><code><class 'swampy.TurtleWorld.Turtle'>
</code></pre>
<p>Now, in Python 2 using old-style classes, it would indeed have said <code>instance</code>; I can create such an object by <em>not</em> inheriting from <code>object</code>:</p>
<pre><code>>>> class Turtle: pass
...
>>> Turtle()
<__main__.Turtle instance at 0x10199fe18>
</code></pre>
<p>However, the current <a href="http://www.greenteapress.com/thinkpython/swampy/" rel="nofollow">swampy release</a> uses <em>new-style classes</em>; where the class inherits from <code>object</code>:</p>
<pre><code>>>> from swampy.TurtleWorld import Turtle
>>> Turtle.__mro__
(<class 'swampy.TurtleWorld.Turtle'>, <class 'swampy.World.Animal'>, <type 'object'>)
</code></pre>
<p>New-style classes do things a little different, and some things have been renamed and unified. I think you are using the first edition of Think Python, based on code not using new-style classes. </p>
<p>You can probably complete your tutorial with only minor cosmetic changes like these showing up. </p>
<p>However, you may want to find a newer, more recent book; Think Python has a 2nd edition using Python 3, for example.</p>
| 1 | 2016-08-31T13:32:06Z | [
"python",
"python-2.7",
"oop"
] |
maya component id data | 39,250,993 | <p>Does anyone know how to obtain the data in Maya called the component ID of the vertices?</p>
<p>I know how to get the vert number but the component ID on the vert is something that changes as the model has been changed. </p>
<p>It seems that there is data in the vertex but just can't find any command to extract it. Any help would be helpful. </p>
<p>I even tried using the maya api but this also just seem to give me the vertices index number and not the actual ID (which is not a sequence as the vertices indexes)</p>
<p>Thanks</p>
| 0 | 2016-08-31T13:28:43Z | 39,253,808 | <p>Try this: </p>
<pre><code>import maya.OpenMaya as om
sel = om.MSelectionList()
om.MGlobal.getActiveSelectionList(sel)
dag = om.MDagPath()
comp = om.MObject()
sel.getDagPath(0, dag, comp)
itr = om.MItMeshFaceVertex(dag, comp)
print '| %-15s| %-15s| %-15s' % ('Face ID', 'Object VertID', 'Face-relative VertId')
while not itr.isDone():
print '| %-15s| %-15s| %-15s' % (itr.faceId(), itr.vertId(), itr.faceVertId())
itr.next()
</code></pre>
<p>There are many solutions; I found this one. Source: <a href="http://tech-artists.org/forum/showthread.php?3649-OpenMaya-getting-the-current-componet-selection" rel="nofollow">Link</a></p>
| 0 | 2016-08-31T15:40:27Z | [
"python",
"maya"
] |
maya component id data | 39,250,993 | <p>Does anyone know how to obtain the data in Maya called the component ID of the vertices?</p>
<p>I know how to get the vert number but the component ID on the vert is something that changes as the model has been changed. </p>
<p>It seems that there is data in the vertex but just can't find any command to extract it. Any help would be helpful. </p>
<p>I even tried using the maya api but this also just seem to give me the vertices index number and not the actual ID (which is not a sequence as the vertices indexes)</p>
<p>Thanks</p>
| 0 | 2016-08-31T13:28:43Z | 39,379,215 | <p>Maya components do not have persistent identities; the 'vertex id' is just the index of one entry in the vertex table (or the tables for faces, normals, etc). That's why its so easy to mess up a model with construction history if you go 'upstream' and change things that affect the model's component count or topology.</p>
<p>You can attach persistent data to a vertex using the <a href="http://download.autodesk.com/global/docs/maya2014/en_us/index.html?url=files/Blind_data_What_is_blind_data_.htm,topicNumber=d30e168928" rel="nofollow">PolyBlindData</a> system, which attaches arbitrary info to faces, vertices or edges. You could attach data to a particular vertex and the data would <em>probably</em> survive, though the same considerations which can mess up things like vert colors or UVs when construction history changes upstream will also mess with blind data.</p>
| 0 | 2016-09-07T21:23:19Z | [
"python",
"maya"
] |
How can I copy a DOM Element from an existing Tree to a new Tree with minidom? | 39,251,026 | <p>I have to process a large XML file. I need a copy of the file, which contains specific elements of the original file.
I tried to generate the new DOM document like this:</p>
<pre><code>import xml.dom.minidom as dom
xml = dom.parse("somefile.xml")
tree = dom.Document()
copy = dom.Element("xmlcopy")
tree.appendChild(copy)
header = xml.getElementsByTagName("header")[0]
tree.appendChild(header)
</code></pre>
<p>At this point I already get "AttributeError: ownerDocument".</p>
<pre><code>headercopy = header.cloneNode(True)
</code></pre>
<p>Didn't change anything. I'm new to Python, so I only know the basics.
How can I copy elements to the new document?</p>
| 0 | 2016-08-31T13:30:11Z | 39,251,435 | <p>Use <code>importNode</code>: <code>tree.appendChild(tree.ownerDocument.importNode(header, TRUE))</code>.</p>
| 0 | 2016-08-31T13:46:12Z | [
"python",
"xml",
"dom"
] |
How to read the status of the Power-LED on Raspberry Pi 3 with Python | 39,251,079 | <p>How to read the status of the Power-LED (or the rainbow square - if you like that better) on Raspberry Pi 3 to detect a low-voltage condition in a Python script? Since the wiring of the Power-LED has changed since Raspberry Pi 2, it seems that GPIO 35 can no longer be used for that purpose.</p>
<hr>
<p><strong>Update:</strong></p>
<p>Since it seems to be non-trivial to detect a low-power condition in code on Raspberry Pi 3, I solved it with a quick hardware hack. I soldered a wire between the output of the APX803 (the power-monitoring device used on the Pi 3) and GPIO26, and that way I can simply read GPIO26 to get the power status. Works like a charm.</p>
| 1 | 2016-08-31T13:31:56Z | 39,251,247 | <p>Because of Pi3's BT/wifi support Power LED is controlled directly from the GPU through a GPIO expander. </p>
<p>I believe that there's no way to do what you want</p>
| 2 | 2016-08-31T13:38:59Z | [
"python",
"raspberry-pi3"
] |
How to read the status of the Power-LED on Raspberry Pi 3 with Python | 39,251,079 | <p>How to read the status of the Power-LED (or the rainbow square - if you like that better) on Raspberry Pi 3 to detect a low-voltage condition in a Python script? Since the wiring of the Power-LED has changed since Raspberry Pi 2, it seems that GPIO 35 can no longer be used for that purpose.</p>
<hr>
<p><strong>Update:</strong></p>
<p>Since it seems to be non-trivial to detect a low-power condition in code on Raspberry Pi 3, I solved it with a quick hardware hack. I soldered a wire between the output of the APX803 (the power-monitoring device used on the Pi 3) and GPIO26, and that way I can simply read GPIO26 to get the power status. Works like a charm.</p>
| 1 | 2016-08-31T13:31:56Z | 39,251,786 | <p>I have dug into the topic and found a very good <a href="https://github.com/raspberrypi/linux/issues/1332" rel="nofollow">discussion</a>. On the RPi 3, control of the power LED from GPIO has been removed, as the OP mentioned.</p>
<p>The probable reason:</p>
<blockquote>
<p>The power LED is different on Pi3. It is controlled from GPU through a
GPIO expander which is configured as an input. It may not be possible
to drive it from the arm (certainly it is not currently). <strong>Pi3's
BT/wifi support required a number of additional GPIOs compared to Pi2</strong>.
So, the ACT and PWR leds had to move to make that possible.</p>
</blockquote>
<p>Also:</p>
<blockquote>
<p>Since the B+ the functionality of PWR LED has doubled, power indicator
and as well as an under-voltage indicator. Since GPIOs are in short
supply, as well as being an indicator it is also the under-voltage
detector - there is some external circuitry that can detect the GPIO
direction so that it is either an input with an automatic LED or an
output. This circuitry is only present on the red LED. <strong>The ability to
change the "PWR" LED pin to an output, bypasses the under-voltage detection, which is permitted but
not recommended</strong>.</p>
<p>These GPIOs may never be available as fully
featured GPIOs because they are too slow to use in many circumstances.
There is a simple GPIO driver that can efficiently control the ACT
LED, but more effort is needed to make it work with the PWR LED and
change its direction to an output. <strong>This is something we may address,
but it isn't high on the list of priorities at the moment</strong>.</p>
</blockquote>
<hr>
<p><strong>The possible way out</strong>: I didn't find any way of using Python to access the GPU, but I found some discussions regarding accessing the GPU configuration. I'm sharing them, as looking into them could help you replicate equivalent functionality in Python.</p>
<blockquote>
<p>There is also the mailbox service that would allow userspace to <strong>fiddle
with the outputs on the expander</strong>, but not set direction - see
<a href="https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=109137&start=100#p990152" rel="nofollow">https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=109137&start=100#p990152</a>
and <a href="https://github.com/6by9/rpi3-gpiovirtbuf" rel="nofollow">https://github.com/6by9/rpi3-gpiovirtbuf</a></p>
</blockquote>
<p><a href="https://github.com/raspberrypi/linux/issues/1332#issuecomment-231744888" rel="nofollow">Also</a>:</p>
<blockquote>
<p>For the complete list of GPIO allocations, look at the dt-blob.dts
that configures the GPU side -
<a href="https://github.com/raspberrypi/firmware/blob/master/extra/dt-blob.dts#L1175" rel="nofollow">https://github.com/raspberrypi/firmware/blob/master/extra/dt-blob.dts#L1175</a>
Documentation is at
<a href="https://www.raspberrypi.org/documentation/configuration/pin-configuration.md" rel="nofollow">https://www.raspberrypi.org/documentation/configuration/pin-configuration.md</a></p>
</blockquote>
<p>If I get something helpful, I'll post it as update 2.</p>
| 0 | 2016-08-31T14:01:56Z | [
"python",
"raspberry-pi3"
] |
Pyqt Terminal hangs after executing close window command | 39,251,122 | <p>I have read lots of threads online, but still I could not find the solution. My question should be very simple: how to close a Pyqt window WITHOUT clicking a button or using a timer.
The code I tried is pasted below</p>
<pre><code>from PyQt4 import QtGui, QtCore
import numpy as np
import progressMeter_simple
import sys
import time
import pdb
class ProgressMeter(progressMeter_simple.Ui_Dialog, QtGui.QDialog):
def __init__(self):
QtGui.QDialog.__init__(self)
progressMeter_simple.Ui_Dialog.__init__(self)
self.setupUi(self)
self.progressBar.setRange(0, 0)
QtGui.QApplication.processEvents()
def termination(self):
time.sleep(5)
self.close()
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
Dialog = ProgressMeter()
Dialog.show()
Dialog.termination()
sys.exit(app.exec_())
</code></pre>
<p>My Pyqt GUI is designed using Qt designer, and it is nothing but a progress bar that keeps moving from left to right (busy indication).</p>
<p>However, when I run the code above, the terminal still hangs after the Pyqt window is closed. Ctrl+C also couldn't kill the process.
In short, how can I properly close/terminate a Pyqt window without clicking a button or using a timer?</p>
| 0 | 2016-08-31T13:33:35Z | 39,255,543 | <p>It's not working because you're calling GUI methods on the dialog (<code>close()</code>) outside of the event loop. The event loop doesn't start until you call <code>app.exec_()</code>.</p>
<p>If you really want to close the dialog <em>immediately</em> after it opens without using a <code>QTimer</code>, you can override the <a href="http://pyqt.sourceforge.net/Docs/PyQt4/qwidget.html#showEvent" rel="nofollow"><code>showEvent()</code></a> method and call <code>termination()</code> from there, which gets called when the dialog is first displayed.</p>
<pre><code>class ProgressMeter(progressMeter_simple.Ui_Dialog, QtGui.QDialog):
def __init__(self):
QtGui.QDialog.__init__(self)
progressMeter_simple.Ui_Dialog.__init__(self)
self.setupUi(self)
self.progressBar.setRange(0, 0)
def showEvent(self, event):
super(ProgressMeter, self).showEvent(event)
self.termination()
</code></pre>
| 0 | 2016-08-31T17:23:13Z | [
"python",
"pyqt",
"pyqt4"
] |
mpi4py: Spawn processes as Python threads for easier debugging | 39,251,169 | <p>To use mpi4py, the standard approach is to use <code>mpiexec</code> to start a program using multiple MPI processes. For example <code>mpiexec -n 4 python3.5 myprog.py</code>.</p>
<p>Now, that makes debugging difficult, because one can not straight-forwardly use the Python interpreter plus maybe an IDE debugger using the Python interpreter. However, it is no problem to debug a multi-threaded application.</p>
<p>So my idea would be: <strong>Instead of using mpiexec to spwan the processes, I have a Python script that will spwan several threads, each of them will act as an MPI process, all happening within the Python interpreter.</strong> So the use of <code>mpiexec</code> woud not be necessary and I could debug my application like any other multi-threaded Python program. Would that be possible, and how?</p>
<p>(In general, I'd be very happy to find some good example collections or tutorials for mpi4py, there's not very much available.)</p>
| 0 | 2016-08-31T13:35:43Z | 39,260,892 | <p>You do not need to implement any hardcore idea. <code>mpiexec</code> is already spawning process for you. Assuming your using the pudb Python debugger you could do the following:</p>
<p><code>mpiexec -np 4 xterm -e python2.7 -m pudb.run helloword.py</code></p>
<p>The <code>-e</code> option of <code>xterm</code> is specifying which program <code>xterm</code> is going to execute.</p>
<p>PS: I have not test it with Python 3.5 but a similar solution will work</p>
| 0 | 2016-09-01T00:14:58Z | [
"python",
"multithreading",
"mpi",
"mpi4py"
] |
Nested lists to nested dicts | 39,251,223 | <p>I've a list with master keys and a list of list of lists, where the first value of each enclosed list (like <code>'key_01'</code>) shall be a sub key for the corresponding values (like <code>'val_01', 'val_02'</code>). The data is shown here:</p>
<pre><code>master_keys = ["Master_01", "Master_02", "Master_03"]
data_long = [[['key_01','val_01','val_02'],['key_02','val_03','val_04'], ['key_03','val_05','val_06']],
[['key_04','val_07','val_08'], ['key_05','val_09','val_10'], ['key_06','val_11','val_12']],
[['key_07','val_13','val_14'], ['key_08','val_15','val_16'], ['key_09','val_17','val_18']]]
</code></pre>
<p>I would like these lists to be combined into a dictionary of dictionaries, like this:</p>
<pre><code>master_dic = {
"Master_01": {'key_01':['val_01','val_02'],'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06']},
"Master_02": {'key_04': ['val_07','val_08'], 'key_05': ['val_09','val_10'], 'key_06': ['val_11','val_12']},
"Master_03": {'key_07': ['val_13','val_14'], 'key_08': ['val_15','val_16'], 'key_09': ['val_17','val_18']}
}
</code></pre>
<p>What I've got so far is the sub dict:</p>
<pre><code>import itertools
master_dic = {}
servant_dic = {}
keys = []
values = []
for line in data_long:
for item in line:
keys.extend(item[:1])
values.append(item[1:])
servant_dic = dict(itertools.izip(keys, values))
</code></pre>
<p>Which puts out a dictionary, as expected.</p>
<pre><code>servant_dic = {
'key_06': ['val_11','val_12'], 'key_04': ['val_08','val_07'], 'key_05': ['val_09','val_10'],
'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06'], 'key_01': ['val_01','val_02']
}
</code></pre>
<p>The problem is, that if I want to add the master_keys to this dictionary, so I get the wanted result, I'd have to do this in a certain order, which would be possible, if each line had a counter like this:</p>
<pre><code>enumerated_dic =
{
0: {'key_01':['val_01','val_02'],'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06']},
1: {'key_04': ['val_07','val_08'], 'key_05': ['val_09','val_10'], 'key_06': ['val_11','val_12']},
2: {'key_07': ['val_13','val_14'], 'key_08': ['val_15','val_16'], 'key_09': ['val_17','val_18']}
}
</code></pre>
<p>I'd love to do this with <code>enumerate()</code>, while each line of the <code>servant_dic</code> is build, but can't figure out how. Since afterwards, i could simply replace the counters 0, 1, 2 etc. with the <code>master_keys</code>. </p>
<p>Thanks for your help.</p>
| 4 | 2016-08-31T13:37:51Z | 39,251,450 | <pre><code>master_keys = ["Master_01", "Master_02", "Master_03"]
data_long = [[['key_01','val_01','val_02'],['key_02','val_03','val_04'], ['key_03','val_05','val_06']],
[['key_04','val_07','val_08'], ['key_05','val_09','val_10'], ['key_06','val_11','val_12']],
[['key_07','val_13','val_14'], ['key_08','val_15','val_16'], ['key_09','val_17','val_18']]]
_dict = {}
for master_key, item in zip(master_keys, data_long):
_dict[master_key] = {x[0]: x[1:] for x in item}
print _dict
</code></pre>
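<p>Since the question specifically asked about <code>enumerate</code>, here is a sketch that first builds the numbered intermediate dictionary and then swaps in the master keys (a roundabout route compared to <code>zip</code>, shown only because the question mentions it; the shortened sample data is illustrative):</p>

```python
master_keys = ["Master_01", "Master_02"]
data_long = [[['key_01', 'val_01', 'val_02'], ['key_02', 'val_03', 'val_04']],
             [['key_03', 'val_05', 'val_06']]]

# step 1: counters 0, 1, 2 ... as temporary keys
enumerated = {i: {row[0]: row[1:] for row in rows}
              for i, rows in enumerate(data_long)}

# step 2: replace each counter with the matching master key
master_dic = {master_keys[i]: sub for i, sub in enumerated.items()}

print(master_dic["Master_02"])  # {'key_03': ['val_05', 'val_06']}
```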
| 6 | 2016-08-31T13:46:54Z | [
"python",
"list",
"dictionary",
"enumerate"
] |
Nested lists to nested dicts | 39,251,223 | <p>I've a list with master keys and a list of list of lists, where the first value of each enclosed list (like <code>'key_01'</code>) shall be a sub key for the corresponding values (like <code>'val_01', 'val_02'</code>). The data is shown here:</p>
<pre><code>master_keys = ["Master_01", "Master_02", "Master_03"]
data_long = [[['key_01','val_01','val_02'],['key_02','val_03','val_04'], ['key_03','val_05','val_06']],
[['key_04','val_07','val_08'], ['key_05','val_09','val_10'], ['key_06','val_11','val_12']],
[['key_07','val_13','val_14'], ['key_08','val_15','val_16'], ['key_09','val_17','val_18']]]
</code></pre>
<p>I would like these lists to be combined into a dictionary of dictionaries, like this:</p>
<pre><code>master_dic = {
"Master_01": {'key_01':['val_01','val_02'],'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06']},
"Master_02": {'key_04': ['val_07','val_08'], 'key_05': ['val_09','val_10'], 'key_06': ['val_11','val_12']},
"Master_03": {'key_07': ['val_13','val_14'], 'key_08': ['val_15','val_16'], 'key_09': ['val_17','val_18']}
}
</code></pre>
<p>What I've got so far is the sub dict:</p>
<pre><code>import itertools
master_dic = {}
servant_dic = {}
keys = []
values = []
for line in data_long:
for item in line:
keys.extend(item[:1])
values.append(item[1:])
servant_dic = dict(itertools.izip(keys, values))
</code></pre>
<p>Which puts out a dictionary, as expected.</p>
<pre><code>servant_dic = {
'key_06': ['val_11','val_12'], 'key_04': ['val_08','val_07'], 'key_05': ['val_09','val_10'],
'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06'], 'key_01': ['val_01','val_02']
}
</code></pre>
<p>The problem is, that if I want to add the master_keys to this dictionary, so I get the wanted result, I'd have to do this in a certain order, which would be possible, if each line had a counter like this:</p>
<pre><code>enumerated_dic =
{
0: {'key_01':['val_01','val_02'],'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06']},
1: {'key_04': ['val_07','val_08'], 'key_05': ['val_09','val_10'], 'key_06': ['val_11','val_12']},
2: {'key_07': ['val_13','val_14'], 'key_08': ['val_15','val_16'], 'key_09': ['val_17','val_18']}
}
</code></pre>
<p>I'd love to do this with <code>enumerate()</code>, while each line of the <code>servant_dic</code> is build, but can't figure out how. Since afterwards, i could simply replace the counters 0, 1, 2 etc. with the <code>master_keys</code>. </p>
<p>Thanks for your help.</p>
| 4 | 2016-08-31T13:37:51Z | 39,251,502 | <p>You can also use <code>pop</code> and a dict comprehension (note that <code>el.pop(0)</code> mutates the inner lists of <code>data_long</code>, so this only works once per list):</p>
<pre><code>for key, elements in zip(master_keys, data_long):
print {key: {el.pop(0): el for el in elements}}
...:
{'Master_01': {'key_01': ['val_01', 'val_02'], 'key_02': ['val_03', 'val_04'], 'key_03': ['val_05', 'val_06']}}
{'Master_02': {'key_06': ['val_11', 'val_12'], 'key_04': ['val_07', 'val_08'], 'key_05': ['val_09', 'val_10']}}
{'Master_03': {'key_07': ['val_13', 'val_14'], 'key_08': ['val_15', 'val_16'], 'key_09': ['val_17', 'val_18']}}
</code></pre>
| 1 | 2016-08-31T13:48:47Z | [
"python",
"list",
"dictionary",
"enumerate"
] |
Nested lists to nested dicts | 39,251,223 | <p>I've a list with master keys and a list of list of lists, where the first value of each enclosed list (like <code>'key_01'</code>) shall be a sub key for the corresponding values (like <code>'val_01', 'val_02'</code>). The data is shown here:</p>
<pre><code>master_keys = ["Master_01", "Master_02", "Master_03"]
data_long = [[['key_01','val_01','val_02'],['key_02','val_03','val_04'], ['key_03','val_05','val_06']],
[['key_04','val_07','val_08'], ['key_05','val_09','val_10'], ['key_06','val_11','val_12']],
[['key_07','val_13','val_14'], ['key_08','val_15','val_16'], ['key_09','val_17','val_18']]]
</code></pre>
<p>I would like these lists to be combined into a dictionary of dictionaries, like this:</p>
<pre><code>master_dic = {
"Master_01": {'key_01':['val_01','val_02'],'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06']},
"Master_02": {'key_04': ['val_07','val_08'], 'key_05': ['val_09','val_10'], 'key_06': ['val_11','val_12']},
"Master_03": {'key_07': ['val_13','val_14'], ['key_08': ['val_15','val_16'], 'key_09': ['val_17','val_18']}
}
</code></pre>
<p>What I've got so far is the sub dict:</p>
<pre><code>import itertools
master_dic = {}
servant_dic = {}
keys = []
values = []
for line in data_long:
for item in line:
keys.extend(item[:1])
values.append(item[1:])
servant_dic = dict(itertools.izip(keys, values))
</code></pre>
<p>Which puts out a dictionary, as expected.</p>
<pre><code>servant_dic = {
'key_06': ['val_11','val_12'], 'key_04': ['val_08','val_07'], 'key_05': ['val_09','val_10'],
'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06'], 'key_01': ['val_01','val_02']
}
</code></pre>
<p>The problem is that if I want to add the master_keys to this dictionary to get the wanted result, I'd have to do this in a certain order, which would be possible if each line had a counter, like this:</p>
<pre><code>enumerated_dic =
{
0: {'key_01':['val_01','val_02'],'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06']},
1: {'key_04': ['val_07','val_08'], 'key_05': ['val_09','val_10'], 'key_06': ['val_11','val_12']},
2: {'key_07': ['val_13','val_14'], 'key_08': ['val_15','val_16'], 'key_09': ['val_17','val_18']}
}
</code></pre>
<p>I'd love to do this with <code>enumerate()</code>, while each line of the <code>servant_dic</code> is built, but I can't figure out how. Afterwards, I could simply replace the counters 0, 1, 2, etc. with the <code>master_keys</code>. </p>
<p>Thanks for your help.</p>
| 4 | 2016-08-31T13:37:51Z | 39,251,554 | <p>Hope this will help:</p>
<pre><code>{master_key: {i[0]: i[1:] for i in subkeys} for master_key, subkeys in zip(master_keys, data_long)}
</code></pre>
| 2 | 2016-08-31T13:51:22Z | [
"python",
"list",
"dictionary",
"enumerate"
] |
Nested lists to nested dicts | 39,251,223 | <p>I've a list with master keys and a list of list of lists, where the first value of each enclosed list (like <code>'key_01'</code>) shall be a sub key for the corresponding values (like <code>'val_01', 'val_02'</code>). The data is shown here:</p>
<pre><code>master_keys = ["Master_01", "Master_02", "Master_03"]
data_long = [[['key_01','val_01','val_02'],['key_02','val_03','val_04'], ['key_03','val_05','val_06']],
[['key_04','val_07','val_08'], ['key_05','val_09','val_10'], ['key_06','val_11','val_12']],
[['key_07','val_13','val_14'], ['key_08','val_15','val_16'], ['key_09','val_17','val_18']]]
</code></pre>
<p>I would like these lists to be combined into a dictionary of dictionaries, like this:</p>
<pre><code>master_dic = {
"Master_01": {'key_01':['val_01','val_02'],'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06']},
"Master_02": {'key_04': ['val_07','val_08'], 'key_05': ['val_09','val_10'], 'key_06': ['val_11','val_12']},
"Master_03": {'key_07': ['val_13','val_14'], ['key_08': ['val_15','val_16'], 'key_09': ['val_17','val_18']}
}
</code></pre>
<p>What I've got so far is the sub dict:</p>
<pre><code>import itertools
master_dic = {}
servant_dic = {}
keys = []
values = []
for line in data_long:
for item in line:
keys.extend(item[:1])
values.append(item[1:])
servant_dic = dict(itertools.izip(keys, values))
</code></pre>
<p>Which puts out a dictionary, as expected.</p>
<pre><code>servant_dic = {
'key_06': ['val_11','val_12'], 'key_04': ['val_08','val_07'], 'key_05': ['val_09','val_10'],
'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06'], 'key_01': ['val_01','val_02']
}
</code></pre>
<p>The problem is that if I want to add the master_keys to this dictionary to get the wanted result, I'd have to do this in a certain order, which would be possible if each line had a counter, like this:</p>
<pre><code>enumerated_dic =
{
0: {'key_01':['val_01','val_02'],'key_02': ['val_03','val_04'], 'key_03': ['val_05','val_06']},
1: {'key_04': ['val_07','val_08'], 'key_05': ['val_09','val_10'], 'key_06': ['val_11','val_12']},
2: {'key_07': ['val_13','val_14'], 'key_08': ['val_15','val_16'], 'key_09': ['val_17','val_18']}
}
</code></pre>
<p>I'd love to do this with <code>enumerate()</code>, while each line of the <code>servant_dic</code> is built, but I can't figure out how. Afterwards, I could simply replace the counters 0, 1, 2, etc. with the <code>master_keys</code>. </p>
<p>Thanks for your help.</p>
| 4 | 2016-08-31T13:37:51Z | 39,251,576 | <p>My functional approach:</p>
<pre><code>master_dic = dict(zip(master_keys, [{k[0]: k[1::] for k in emb_list} for emb_list in data_long]))
print(master_dic)
</code></pre>
| 1 | 2016-08-31T13:52:29Z | [
"python",
"list",
"dictionary",
"enumerate"
] |
Get path of the current repo in PyGit2 from anywhere within the repo | 39,251,365 | <p>I'm using Pygit2 to run certain operations within the repo I'm working on.</p>
<p>If my code file is not at the root of the repo, how can I get the path of the repo from anywhere within the repo?</p>
<p>I can do the below in case the function is called from the root, but how do I do it if I run it from anywhere within the repository code?</p>
<pre><code>$ cd /home/test/Desktop/code/Project
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
$ ipython3
In [1]: import os, pygit2
In [2]: repo = pygit2.Repository(os.getcwd())
</code></pre>
| 0 | 2016-08-31T13:43:23Z | 39,252,289 | <p>One option is simply to traverse parent directories until you find a <code>.git</code> directory:</p>
<pre><code>import os
import pathlib
import pygit2
def find_toplevel(path, last=None):
path = pathlib.Path(path).absolute()
if path == last:
return None
if (path / '.git').is_dir():
return path
return find_toplevel(path.parent, last=path)
toplevel = find_toplevel('.')
if toplevel is not None:
repo = pygit2.Repository(str(toplevel))
</code></pre>
<p>There are some caveats here, of course. You won't necessarily find a
<code>.git</code> directory if someone has set the <code>GIT_DIR</code> environment
variable. If you have a git worktree, then <code>.git</code> is a file, not a
directory, and <code>libgit2</code> doesn't seem to handle this (as of version
0.24).</p>
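<p>The directory walk itself can be sanity-checked without pygit2 at all, using a throwaway layout (a sketch; the paths are made up):</p>

```python
import pathlib
import tempfile

def find_toplevel(path, last=None):
    path = pathlib.Path(path).absolute()
    if path == last:
        return None
    if (path / '.git').is_dir():
        return path
    return find_toplevel(path.parent, last=path)

with tempfile.TemporaryDirectory() as tmp:
    top = pathlib.Path(tmp).absolute()
    (top / '.git').mkdir()          # fake repository root
    nested = top / 'src' / 'pkg'
    nested.mkdir(parents=True)
    found = find_toplevel(nested)   # walk up from a nested directory

print(found == top)  # True
```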
| 1 | 2016-08-31T14:24:46Z | [
"python",
"git",
"python-3.x",
"pygit2"
] |
Using list comprehension to keep items not in second list | 39,251,386 | <p>I am trying to use list comprehension to remove a number of items from a list by just keeping those not specified.</p>
<p>For example if I have 2 lists <code>a = [1,3,5,7,10]</code> and <code>b = [2,4]</code> I want to keep all items from <code>a</code> that are not at an index corresponding to a number in <code>b</code>.</p>
<p>Now, I tried to use <code>y = [a[x] for x not in b]</code> but this produces a SyntaxError.</p>
<p><code>y = [a[x] for x in b]</code> works fine and keeps exactly the elements that I want removed.</p>
<p>So how do I achieve this? And on a side note, is this a good way to do it or should I use <code>del</code>?</p>
| 0 | 2016-08-31T13:44:12Z | 39,251,445 | <p>You can use <a href="https://docs.python.org/2/library/functions.html#enumerate"><code>enumerate()</code></a> and look up indexes in <code>b</code>:</p>
<pre><code>>>> a = [1, 3, 5, 7, 10]
>>> b = [2, 4]
>>> [item for index, item in enumerate(a) if index not in b]
[1, 3, 7]
</code></pre>
<p>Note that to improve the lookup time, it is better to have <code>b</code> as a <em>set</em> instead of a list. <a href="https://wiki.python.org/moin/TimeComplexity">Lookups into sets are <code>O(1)</code> on average</a>, while in a list they are <code>O(n)</code>, where <code>n</code> is the length of the list.</p>
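<p>For instance, converting <code>b</code> once up front (a small sketch):</p>

```python
a = [1, 3, 5, 7, 10]
b = {2, 4}  # a set: membership tests are O(1) on average

result = [item for index, item in enumerate(a) if index not in b]
print(result)  # [1, 3, 7]
```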
| 6 | 2016-08-31T13:46:41Z | [
"python",
"list",
"list-comprehension",
"not-operator"
] |
Using list comprehension to keep items not in second list | 39,251,386 | <p>I am trying to use list comprehension to remove a number of items from a list by just keeping those not specified.</p>
<p>For example if I have 2 lists <code>a = [1,3,5,7,10]</code> and <code>b = [2,4]</code> I want to keep all items from <code>a</code> that are not at an index corresponding to a number in <code>b</code>.</p>
<p>Now, I tried to use <code>y = [a[x] for x not in b]</code> but this produces a SyntaxError.</p>
<p><code>y = [a[x] for x in b]</code> works fine and keeps exactly the elements that I want removed.</p>
<p>So how do I achieve this? And on a side note, is this a good way to do it or should I use <code>del</code>?</p>
| 0 | 2016-08-31T13:44:12Z | 39,251,534 | <p>after this:</p>
<pre><code>y = [a[x] for x in b]
</code></pre>
<p>just add:</p>
<pre><code>for x in y:
a.remove(x)
</code></pre>
<p>Then you end up with a stripped-down list in <code>a</code>.</p>
| -1 | 2016-08-31T13:50:08Z | [
"python",
"list",
"list-comprehension",
"not-operator"
] |
Using list comprehension to keep items not in second list | 39,251,386 | <p>I am trying to use list comprehension to remove a number of items from a list by just keeping those not specified.</p>
<p>For example if I have 2 lists <code>a = [1,3,5,7,10]</code> and <code>b = [2,4]</code> I want to keep all items from <code>a</code> that are not at an index corresponding to a number in <code>b</code>.</p>
<p>Now, I tried to use <code>y = [a[x] for x not in b]</code> but this produces a SyntaxError.</p>
<p><code>y = [a[x] for x in b]</code> works fine and keeps exactly the elements that I want removed.</p>
<p>So how do I achieve this? And on a side note, is this a good way to do it or should I use <code>del</code>?</p>
| 0 | 2016-08-31T13:44:12Z | 39,251,903 | <p>Guess you're looking for something like: </p>
<pre><code>[ x for x in a if a.index(x) not in b ]
</code></pre>
<p>Or, using filter:</p>
<pre><code>filter(lambda x : a.index(x) not in b , a)
</code></pre>
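<p>One caveat worth noting (my addition, not part of the answer): <code>a.index(x)</code> returns the position of the <em>first</em> match, so duplicate values get tested against the wrong index, while an <code>enumerate</code>-based version is unaffected:</p>

```python
a = [5, 3, 5]   # duplicate 5s
b = [2]         # intend to drop only index 2

# a.index(5) is always 0, so the 5 at index 2 is never dropped:
print([x for x in a if a.index(x) not in b])       # [5, 3, 5]

# enumerate pairs each element with its own index:
print([x for i, x in enumerate(a) if i not in b])  # [5, 3]
```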
| 1 | 2016-08-31T14:07:45Z | [
"python",
"list",
"list-comprehension",
"not-operator"
] |
Using list comprehension to keep items not in second list | 39,251,386 | <p>I am trying to use list comprehension to remove a number of items from a list by just keeping those not specified.</p>
<p>For example if I have 2 lists <code>a = [1,3,5,7,10]</code> and <code>b = [2,4]</code> I want to keep all items from <code>a</code> that are not at an index corresponding to a number in <code>b</code>.</p>
<p>Now, I tried to use <code>y = [a[x] for x not in b]</code> but this produces a SyntaxError.</p>
<p><code>y = [a[x] for x in b]</code> works fine and keeps exactly the elements that I want removed.</p>
<p>So how do I achieve this? And on a side note, is this a good way to do it or should I use <code>del</code>?</p>
| 0 | 2016-08-31T13:44:12Z | 39,251,982 | <p>Try this; it will work: </p>
<pre><code> [j for i,j in enumerate(a) if i not in b ]
</code></pre>
| 0 | 2016-08-31T14:10:30Z | [
"python",
"list",
"list-comprehension",
"not-operator"
] |
python invoking PV-Wave | 39,251,432 | <p>'/usr/local/bin/wave' only accepts a filename as input, so I need to invoke the process, then "send in" the commands, and wait for the output file to be written. Then my process can proceed to read the output file. Here is my code that does not write to the output file:</p>
<pre><code>hdfFile = "/archive/HDF/16023343.hdf"
pngFile = "/xrfc_calib/xrfc.130.png"
lpFile = os.environ['DOCUMENT_ROOT'] + pngFile
waveCmd = "hdfview, '" + hdfFile + "', outfile='" + lpFile + "', web, view='RASTER', /neg"
os.environ['WAVE_PATH'] = "/oudvmt/wave/pro:/dvmt/wave/pro"
wfile = subprocess.Popen ('/usr/local/bin/wave >&2', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
wfile.stdin = "\@hdf_startup\n\@hdf_common\n" + waveCmd + "\nquit\n"
</code></pre>
| 0 | 2016-08-31T13:46:01Z | 39,257,681 | <p>I found what I was missing. The change is to the last 2 lines. They are:</p>
<pre><code>wfile = subprocess.Popen ('/usr/local/bin/wave', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
wfile.communicate("\@hdf_startup\n\@hdf_common\n" + waveCmd + "\nquit\n")
</code></pre>
<p>I needed to set "stdout" to avoid extra output from PV-Wave.
I needed to use "communicate" to wait for the process to complete.</p>
| 0 | 2016-08-31T19:36:22Z | [
"python",
"linux"
] |
Opening a SSL socket using a pem certificate | 39,251,535 | <p>I'm trying to connect to a ssl server using Java. I've already managed to do that in Python, however I've got a <code>PEM</code> file which isn't supported by Java. Converting it to <code>PKCS12</code> didn't work </p>
<p>Error when trying to connect was:</p>
<pre><code>sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
</code></pre>
<p>My question is: Can you give me the Java equivalent? (Using another library is also ok)</p>
<pre><code>import ssl
import socket
mysock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mysslsock = ssl.wrap_socket(mysock, keyfile='mykey.pem', certfile='mycert.pem')
mysslsock.connect(("SOMEHOST", XXXXX))
</code></pre>
<p>Please note that the server requires client authentication.</p>
<p><strong>Edit</strong></p>
<p>That's what I did in Java:</p>
<p>I used openssl to convert my certificate into PKCS12 format:</p>
<pre><code>openssl pkcs12 -export -out mystore.p12 -inkey mykey.pem -in mycert.pem
</code></pre>
<p>Then I've used the keytool that comes with the JDK to convert it into JKS:</p>
<pre><code>keytool -importkeystore -destkeystore mystore.jks -srcstoretype PKCS12 -srckeystore mystore.p12
</code></pre>
<p>And that's my Java code:</p>
<pre><code>System.setProperty("javax.net.ssl.keyStore", "mystore.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "123456");
System.setProperty("javax.net.ssl.keyStoreType", "JKS");
SSLSocketFactory socketFactory = (SSLSocketFactory) SSLSocketFactory.getDefault();
SSLSocket socket = (SSLSocket) socketFactory.createSocket(HOST, PORT);
socket.startHandshake(); // That's the line I get the exception
socket.close();
</code></pre>
<p>I'm sure I'm making some really stupid mistake as I don't have any experience with SSL.</p>
<p><strong>Edit:</strong>
Probably I somehow have the wrong certificates, so here's what they look like:</p>
<pre><code><mykey.pem>
-----BEGIN RSA PRIVATE KEY-----
ljnoabndibnwzb12387uGJBEIUQWBIDAB
....... (Some more lines)
-----END RSA PRIVATE KEY-----
<mycert.pem>
Bag Attributes
localKeyId: XX XX XX XX
subject:...
issuer:...
-----BEGIN CERTIFICATE-----
LAinaw8921hnA.......
.....
-----END CERTIFICATE-----
</code></pre>
| 0 | 2016-08-31T13:50:12Z | 39,251,705 | <p>Don't you need to load the key into the Java keystore?</p>
<p>That is a separate program.</p>
<p><a href="http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html" rel="nofollow">http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/keytool.html</a></p>
<pre><code>-importcert {-alias alias} {-file cert_file} [-keypass keypass] {-noprompt}
    {-trustcacerts} {-storetype storetype} {-keystore keystore} [-storepass
    storepass] {-providerName provider_name} {-providerClass provider_class_name
    {-providerArg provider_arg}} {-v} {-protected} {-Jjavaoption}
</code></pre>
<p>Reads the certificate or certificate chain (where the latter is supplied
in a PKCS#7 formatted reply or a sequence of X.509 certificates) from the
file cert_file, and stores it in the keystore entry identified by alias. If
no file is given, the certificate or certificate chain is read from stdin.</p>
<p>keytool can import X.509 v1, v2, and v3 certificates, and PKCS#7
formatted certificate chains consisting of certificates of that type. The
data to be imported must be provided either in binary encoding format, or in
printable encoding format (also known as Base64 encoding) as defined by the
Internet RFC 1421 standard. In the latter case, the encoding must be bounded
at the beginning by a string that starts with "-----BEGIN", and bounded at
the end by a string that starts with "-----END".</p>
| 1 | 2016-08-31T13:58:48Z | [
"java",
"python",
"sockets",
"ssl",
"ssl-certificate"
] |
Two different ajax urls return the same data | 39,251,588 | <p>I am using jQuery ajax calls on my django site to populate the options in a select box which gets presented in a modal window that pops up (bootstrap) to the user. Specifically, it provides a list of available schemas in a particular database. The ajax url is dynamic which means it includes a db_host and a db_name in the URL. I have a site with clients and each client has to point to a different host/db. </p>
<p>The db_host and db_name passed via the URL are used in the django view to execute the necessary sql statement on the relevant host/db.</p>
<p>So, the URL should be unique for each db_host/db_name combination. </p>
<p>I am running into a problem where I am on client A's page, I click the button to display the modal. The ajax call is made. I get the data I expect. Everything is fine. Let's say the ajax URL is "/ajax/db_host/server_a/db_name/client_a_db/schema_dropdown".</p>
<p>Now, I got to client B's page and click the same button to display the modal. The ajax call is made. Let's say this time, the ajax URL is
"/ajax/db_host/server_b/db_name/client_b_db/schema_dropdown." </p>
<p>However, the data returned is the actually the data that was returned for the previous ajax call I made ("/ajax/db_host/server_a/db_name/client_a_db/schema_dropdown") and not for the URL (specifically host/db) that I just passed.</p>
<p>I have double/triple checked that my URL is, in fact different, each time. Any help would be appreciated. </p>
<p>Here is my javascript function that gets called just before displaying the modal that populates my select.</p>
<pre><code>function populate_schema_dropdown(db_host, db_name) {
var ajax_url = "/ajax/db_host/" + db_host + "/db_name/" + db_name + "/schema_dropdown"
$.ajax({
url: ajax_url,
success: function (data) {
$("#select-element").empty()
$.each(data, function (i, v) {
$("#schema-element").append("<option value='" + v[0] + "'>" + v[1] + "</option>")
})
}
});
};
</code></pre>
<p>Below is my django view.</p>
<pre><code>def ajax_schema_dropdown(request, db_host, db_name):
cursor = get_temporary_db_connection(db_host, db_name).cursor()
cursor.execute("""
SELECT
NAME
FROM [sys].[schemas]
""")
data = [(each[0], each[0]) for each in cursor.fetchall()]
cursor.close()
return HttpResponse(json.dumps(data), content_type="application/json")
</code></pre>
<p>Below is the <code>get_temporary_db_connection</code> function:</p>
<pre><code>def get_temporary_db_connection(db_host, db_name):
temporary_name = str(uuid.uuid4)
connections.databases[temporary_name] = {
'ENGINE': 'sqlserver_ado',
'NAME': db_name,
'HOST': db_host,
'USER': '',
'PASSWORD': '',
'OPTIONS': {
'provider': 'SQLNCLI11'
}
}
return connections[temporary_name]
</code></pre>
| 1 | 2016-08-31T13:52:56Z | 39,251,796 | <p>Likely the problem is in the way the <code>temporary_name</code> variable is set. It's not dynamic like it's supposed to be (I guess), so the same database as the first one is being used for all subsequent ajax calls. To fix it, you have to replace the line:</p>
<pre><code>temporary_name = str(uuid.uuid4)
</code></pre>
<p>to</p>
<pre><code>temporary_name = str(uuid.uuid4()) # note i've changed uuid.uuid4 -> uuid.uuid4()
_______________________________^^
</code></pre>
<p><strong>Note:</strong> It likely makes sense to close the connection to the database after working with it; otherwise you will end up with a lot of connections in the <code>connections</code> dict. As an option, you can keep the connection in the dict after it is created and reconnect (if some condition occurs), or simply return the existing connection without opening a new one.</p>
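<p>The difference is easy to see in isolation: without the parentheses you stringify the function object itself, so every "unique" name is actually identical:</p>

```python
import uuid

broken = str(uuid.uuid4)      # '<function uuid4 at 0x...>' - same every time
fixed_a = str(uuid.uuid4())   # a fresh UUID string
fixed_b = str(uuid.uuid4())

print(str(uuid.uuid4) == broken)  # True: one shared "name"
print(fixed_a == fixed_b)         # False: unique per call
```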
| 0 | 2016-08-31T14:02:11Z | [
"javascript",
"jquery",
"python",
"ajax",
"django"
] |
Formatting HTML style in PyCharm | 39,251,661 | <p>How to automatically generate tabs to get a proper tree view on a generated HTML file?</p>
<p>For example, I've got a code like this:</p>
<pre><code> def creating_html(self):
something ='''<table style="width:100%">
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
'''
file.write(something)
</code></pre>
<p>This code will generate code with proper tabs etc., but I feel my code is really messy because of that, and I also think that I'm wasting time checking for a good format. So I was thinking about something like this, to make my code a little bit cleaner.</p>
<pre><code> def creating_html(self):
something ='''
<table style="width:100%">
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
'''
file.write(something)
</code></pre>
<p>but now the format is of course invalid.</p>
<p>I was searching for a good solution, and one of them is to just open this HTML file in PyCharm after generating it and use the shortcut <kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>L</kbd>. I'm looking for a solution to do it automatically.</p>
| 0 | 2016-08-31T13:56:37Z | 39,251,797 | <pre><code>from lxml import etree, html
html_string ='''
<table style="width:100%">
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
'''
html_object = html.fromstring(html_string)
file.write(etree.tostring(html_object, pretty_print=True))
</code></pre>
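<p>As an aside (my addition, not part of the answer): for a snippet like this table, which happens to be well-formed XML, the standard library's <code>xml.dom.minidom</code> can pretty-print it too, avoiding the lxml dependency:</p>

```python
from xml.dom import minidom

snippet = '<table style="width:100%"><tr><th>Firstname</th><th>Lastname</th></tr></table>'

# parseString requires a single well-formed root element.
pretty = minidom.parseString(snippet).toprettyxml(indent='  ')
print(pretty)
```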
| 1 | 2016-08-31T14:02:17Z | [
"python",
"html",
"python-2.7",
"pycharm"
] |
Formatting HTML style in PyCharm | 39,251,661 | <p>How to automatically generate tabs to get a proper tree view on a generated HTML file?</p>
<p>For example, I've got a code like this:</p>
<pre><code> def creating_html(self):
something ='''<table style="width:100%">
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
'''
file.write(something)
</code></pre>
<p>This code will generate code with proper tabs etc., but I feel my code is really messy because of that, and I also think that I'm wasting time checking for a good format. So I was thinking about something like this, to make my code a little bit cleaner.</p>
<pre><code> def creating_html(self):
something ='''
<table style="width:100%">
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
'''
file.write(something)
</code></pre>
<p>but now the format is of course invalid.</p>
<p>I was searching for a good solution, and one of them is to just open this HTML file in PyCharm after generating it and use the shortcut <kbd>Ctrl</kbd>+<kbd>Alt</kbd>+<kbd>L</kbd>. I'm looking for a solution to do it automatically.</p>
| 0 | 2016-08-31T13:56:37Z | 39,252,093 | <p>You may use Beautiful Soup 4, installing it with pip, and use it as follows:</p>
<pre><code>from bs4 import BeautifulSoup
something ='''
<table style="width:100%">
<tr>
<th>Firstname</th>
<th>Lastname</th>
<th>Age</th>
</tr>
<tr>
<td>Jill</td>
<td>Smith</td>
<td>50</td>
</tr>
<tr>
<td>Eve</td>
<td>Jackson</td>
<td>94</td>
</tr>
</table>
'''
messed_code = BeautifulSoup(something, 'html.parser')
file.write(messed_code.prettify())
</code></pre>
<p>Personally, I think the string looks better in the first case, as it's more readable.</p>
| 2 | 2016-08-31T14:16:18Z | [
"python",
"html",
"python-2.7",
"pycharm"
] |
Tornado with_timeout correct usage | 39,251,682 | <p>I've got a web-server that runs some shell command. The command usually takes a couple of seconds, but in some occasions it takes more and in that case the client (it is not a web-browser or curl) disconnects.
I have no way to fix the client, so I thought about fixing the server. It is based on the Tornado framework. I rewrote it using the tornado.gen.with_timeout function, but for some reason it doesn't work as I expect. I set the timeout to 5 seconds, so when querying the server I expect to get either "finished in X seconds" (where X < 5)
or "already took more than 5 seconds... still running". And in both cases I expect to get a response in less than 5 seconds.
Here is the code:</p>
<pre><code>import os
import json
import datetime
import random
from tornado.ioloop import IOLoop
from tornado.web import RequestHandler, Application
from tornado.gen import coroutine, with_timeout, TimeoutError, Return
@coroutine
def run_slow_command(command):
res = os.system(command)
raise Return(res)
class MainHandler(RequestHandler):
@coroutine
def get(self):
TIMEOUT = 5
duration = random.randint(1, 15)
try:
yield with_timeout(datetime.timedelta(seconds=TIMEOUT), run_slow_command('sleep %d' % duration))
response = {'status' : 'finished in %d seconds' % duration}
except TimeoutError:
response = {'status' : 'already took more than %d seconds... still running' % TIMEOUT}
self.set_header("Content-type", "application/json")
self.write(json.dumps(response) + '\n')
def make_app():
return Application([
(r"/", MainHandler),
])
if __name__ == "__main__":
app = make_app()
app.listen(8080)
IOLoop.current().start()
</code></pre>
<p>And here is the output from curl:</p>
<pre><code>for i in `seq 1 5`; do curl http://127.0.0.1:8080/; done
{"status": "finished in 15 seconds"}
{"status": "finished in 12 seconds"}
{"status": "finished in 3 seconds"}
{"status": "finished in 11 seconds"}
{"status": "finished in 13 seconds"}
</code></pre>
<p>What am I doing wrong?</p>
| 1 | 2016-08-31T13:57:26Z | 39,252,642 | <p>Although <code>run_slow_command</code> is decorated with <code>coroutine</code> it's still blocking, so Tornado is blocked and cannot run any code, including a timer, until the <code>os.system</code> call completes. You should defer the call to a thread:</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor
thread_pool = ThreadPoolExecutor(4)
@coroutine
def run_slow_command(command):
res = yield thread_pool.submit(os.system, command)
raise Return(res)
</code></pre>
<p>Or, since you just want the Future that's returned by <code>submit</code>, don't use <code>coroutine</code> at all:</p>
<pre><code>def run_slow_command(command):
return thread_pool.submit(os.system, command)
</code></pre>
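<p>The thread-pool half of this can be exercised with just the standard library (a sketch; <code>slow_square</code> stands in for the blocking <code>os.system</code> call):</p>

```python
from concurrent.futures import ThreadPoolExecutor

thread_pool = ThreadPoolExecutor(4)

def slow_square(n):
    # stand-in for a blocking call such as os.system(...)
    return n * n

future = thread_pool.submit(slow_square, 7)  # returns a Future immediately
result = future.result()                     # blocks only here, on demand
print(result)  # 49
thread_pool.shutdown()
```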
<p>Instead of using <code>os.system</code>, however, you should use Tornado's <a href="http://www.tornadoweb.org/en/stable/process.html" rel="nofollow">own Subprocess support</a>. Putting it all together, this example waits 5 seconds for a subprocess, then times out:</p>
<pre><code>from datetime import timedelta
from functools import partial
from tornado import gen
from tornado.ioloop import IOLoop
from tornado.process import Subprocess
@gen.coroutine
def run_slow_command(command):
yield gen.with_timeout(
timedelta(seconds=5),
Subprocess(args=command.split()).wait_for_exit())
IOLoop.current().run_sync(partial(run_slow_command, 'sleep 10'))
</code></pre>
| 1 | 2016-08-31T14:41:21Z | [
"python",
"tornado"
] |
Multiple Dynamic OptionMenu using DataFrame in Python Tkinter GUI-PY_VAR21 Error | 39,251,721 | <p>I have been working on filtering the DataFrame rows so that I could arrive at a Single row with respect to different options chosen in OptionMenus in sequence.
I tried to store the selected variable from the OptionMenu, and when I printed it, I got </p>
<p>'PY_VAR21' </p>
<p>as output. </p>
<p>Can you kindly clarify this for me? I enclose my data file; I want multiple dynamic OptionMenus where the selected value is passed on to another class.</p>
<pre><code>from Tkinter import *
master = Tk()
class MyClass():
def __init__(self):
self.zero=data.Category.unique()
self.variable0=StringVar()
option_menu=OptionMenu(master,self.variable0,*self.zero)
option_menu.pack()
x=MyClass()
aa=x.variable0
print aa
mainloop()
</code></pre>
<p>I have got thousands of values and categories in each column and thus I want to automate the Dynamic OptionMenu using the available DataFrame.</p>
<p>I have been trying different approaches and have been failing for many days. Thank you. <a href="http://i.stack.imgur.com/re2Ff.png" rel="nofollow">Datafile</a></p>
| 0 | 2016-08-31T13:59:24Z | 39,253,153 | <p>When you see <code>PY_VAR21</code>, that is the instance of the variable, not its value. However, if you say:</p>
<pre><code>print self.variable0.get()
</code></pre>
<p>you should see the actual value of the variable.</p>
| 1 | 2016-08-31T15:05:55Z | [
"python",
"python-2.7",
"tkinter"
] |
Multiple Dynamic OptionMenu using DataFrame in Python Tkinter GUI-PY_VAR21 Error | 39,251,721 | <p>I have been working on filtering the DataFrame rows so that I could arrive at a Single row with respect to different options chosen in OptionMenus in sequence.
I tried to store the selected variable from the OptionMenu, and when I printed it, I got </p>
<p>'PY_VAR21' </p>
<p>as output. </p>
<p>Can you kindly clarify this for me? I enclose my data file; I want multiple dynamic OptionMenus where the selected value is passed on to another class.</p>
<pre><code>from Tkinter import *
master = Tk()
class MyClass():
def __init__(self):
self.zero=data.Category.unique()
self.variable0=StringVar()
option_menu=OptionMenu(master,self.variable0,*self.zero)
option_menu.pack()
x=MyClass()
aa=x.variable0
print aa
mainloop()
</code></pre>
<p>I have got thousands of values and categories in each column and thus I want to automate the Dynamic OptionMenu using the available DataFrame.</p>
<p>I have been trying different approaches and have been failing for many days. Thank you. <a href="http://i.stack.imgur.com/re2Ff.png" rel="nofollow">Datafile</a></p>
| 0 | 2016-08-31T13:59:24Z | 39,268,883 | <p>See below for printing the variable outside the class after the main loop has finished: </p>
<pre><code>from Tkinter import *
class GUI():
def __init__(self, master):
self.variable0 = StringVar()
self.variable0.set('Option1')
self.option = OptionMenu(master, self.variable0, 'Option1', 'Option2', 'Option3')
self.option.pack()
self.close = Button(master, text='Close', command=master.quit)
self.close.pack(side=BOTTOM)
root = Tk()
x = GUI(root)
root.mainloop()
var = x.variable0
print var.get()
</code></pre>
| 0 | 2016-09-01T10:18:18Z | [
"python",
"python-2.7",
"tkinter"
] |
Command error makemigrations in a Python/Django project | 39,251,806 | <p>Python: <code>2.7</code></p>
<p>Django: <code>1.6</code></p>
<p>I'm using <code>virtualenv</code> to manage my projects.</p>
<p>I have my app added to INSTALLED_APPS.</p>
<p>Tried to run the following command:</p>
<pre><code>(pythonenv-1.6)xxxxx@xxxx.com
python manage.py makemigrations
Unknown command: 'makemigrations'
Type 'manage.py help' for usage.
</code></pre>
<p>I've tried <code>python manage.py makemigrations my_app_name</code> and it didn't work either. :(</p>
| 0 | 2016-08-31T14:02:42Z | 39,251,930 | <p>The message says the command is unknown; it was added in a later Django version. If you don't want to use South, you can drop the table and run syncdb, or manually change the schema with an ALTER statement.</p>
| 0 | 2016-08-31T14:08:42Z | [
"python",
"django",
"virtualenv"
] |
Command error makemigrations in a Python/Django project | 39,251,806 | <p>Python: <code>2.7</code></p>
<p>Django: <code>1.6</code></p>
<p>I'm using <code>virtualenv</code> to manage my projects.</p>
<p>I have my app added to INSTALLED_APPS.</p>
<p>Tried to run the following command:</p>
<pre><code>(pythonenv-1.6)xxxxx@xxxx.com
python manage.py makemigrations
Unknown command: 'makemigrations'
Type 'manage.py help' for usage.
</code></pre>
<p>I've tried <code>python manage.py makemigrations my_app_name</code> and it didn't work either. :(</p>
| 0 | 2016-08-31T14:02:42Z | 39,252,562 | <p>Adding to Ninja Puppy's comment, here's a formal answer for future reference:</p>
<p>Django 1.6 relies on the 3rd-party add-on "South", and therefore we have to use this for initialising:</p>
<pre><code>./manage.py schemamigration my_app_name --initial
</code></pre>
<p>and then this for updating changes </p>
<pre><code>./manage.py schemamigration my_app_name --auto
</code></pre>
<p>(in both cases followed by <code>./manage.py migrate my_app_name</code> to apply the migration) instead of </p>
<pre><code>python manage.py makemigrations my_app_name
</code></pre>
| 0 | 2016-08-31T14:38:07Z | [
"python",
"django",
"virtualenv"
] |
UnsupportedAlgorithm: This backend does not support this key serialization. - Python cryptography load_pem_private_key | 39,251,810 | <p>I am trying to generate signed urls for AWS Cloudfront based on the example <a href="http://boto3.readthedocs.io/en/latest/reference/services/cloudfront.html#examples" rel="nofollow">here</a>. On the line </p>
<pre><code>private_key = serialization.load_pem_private_key(
key_file.read(),
password=None,
backend=default_backend()
)
</code></pre>
<p>I get the error <code>UnsupportedAlgorithm: This backend does not support this key serialization.</code> The full trace is as below:</p>
<pre><code>File "command_util.py", line 98, in rsa_signer
backend=default_backend()
File "runtime/cryptography/hazmat/primitives/serialization.py", line 20, in load_pem_private_key
return backend.load_pem_private_key(data, password)
File "runtime/cryptography/hazmat/backends/multibackend.py", line 286, in load_pem_private_key
_Reasons.UNSUPPORTED_SERIALIZATION
UnsupportedAlgorithm: This backend does not support this key serialization.
</code></pre>
<p>On reading the docs it says that the exception occurs because of the following:</p>
<pre><code>cryptography.exceptions.UnsupportedAlgorithm - the serialized key is of a type that is not supported by the backend or if
the key is encrypted with a symmetric cipher that is not supported by the backend.
</code></pre>
<p>The PEM file given starts with <code>-----BEGIN RSA PRIVATE KEY-----</code> and ends with <code>-----END RSA PRIVATE KEY-----</code> . </p>
<p>I am using google appengine sdk while developing this application.</p>
<p>I need help understanding this error message and how to make this work.</p>
| 4 | 2016-08-31T14:03:00Z | 39,268,177 | <p>Unfortunately the Python <code>cryptography</code> library cannot be used with Google App Engine (GAE), as this library needs C extensions and you cannot install C extensions in GAE. You can only use pure-Python packages.</p>
| 2 | 2016-09-01T09:47:52Z | [
"python",
"google-app-engine",
"boto3",
"pem",
"python-cryptography"
] |
Python not taking picture at highest resolution from Raspberry Pi Camera | 39,251,815 | <p>I have a Raspberry Pi Camera version v2.1, capable of taking a picture with 3280x2464 resolution.</p>
<p>I have taken a test with raspistill command, and this seems to work out fine:</p>
<pre><code>raspistill -o 8mp.png -w 3280 -h 2464
</code></pre>
<p>returns the info:</p>
<pre><code>8mp.png JPEG 3280x2464 3280x2464+0+0 8-bit sRGB 4.524MB 0.010u 0:00.010
</code></pre>
<p>However, when I use the Python code to take a picture, it will refuse it. Here's the code I'm working with:</p>
<pre><code>#!/usr/bin/python
import picamera
camera = picamera.PiCamera()
camera.resolution = (3280,2464)
camera.capture("test.png")
camera.close()
</code></pre>
<p>And this is the error:</p>
<pre><code>mmal: mmal_vc_port_enable: failed to enable port vc.ril.image_encode:out:0(PNG ): ENOSPC
mmal: mmal_port_enable: failed to enable port vc.ril.image_encode:out:0(PNG )(0x700090) (ENOSPC)
Traceback (most recent call last):
File "pic.py", line 6, in <module>
camera.capture("test.png")
File "/usr/local/lib/python2.7/dist-packages/picamera/camera.py", line 1383, in capture
encoder.start(output)
File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 1024, in start
super(PiCookedOneImageEncoder, self).start(output)
File "/usr/local/lib/python2.7/dist-packages/picamera/encoders.py", line 394, in start
self.output_port.enable(self._callback)
File "/usr/local/lib/python2.7/dist-packages/picamera/mmalobj.py", line 813, in enable
prefix="Unable to enable port %s" % self.name)
File "/usr/local/lib/python2.7/dist-packages/picamera/exc.py", line 157, in mmal_check
raise PiCameraMMALError(status, prefix)
picamera.exc.PiCameraMMALError: Unable to enable port vc.ril.image_encode:out:0(PNG ): Out of resources (other than memory)
</code></pre>
<p>I have noticed doing it with .jpg instead of .png will work. This seems a little odd to me, as the documentation says it should work, and the raspistill command also works with this resolution on .png.</p>
<p>Any ideas?</p>
| 0 | 2016-08-31T14:03:12Z | 40,000,497 | <p>I ran into the same problem. I was able to solve it by adjusting the Pi's "Memory Split" setting to 256MB. This changes how much memory is available to the GPU. </p>
<p>You can access this setting by running <code>sudo raspi-config</code>. "Memory Split" is under "Advanced Options".</p>
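<p>Equivalently (this is an assumption about how <code>raspi-config</code> persists the setting, not something stated above), the split can be set non-interactively by putting the following line in <code>/boot/config.txt</code> and rebooting:</p>
<pre><code># /boot/config.txt -- reserve 256MB of RAM for the GPU
gpu_mem=256
</code></pre>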
| 2 | 2016-10-12T13:54:46Z | [
"python",
"camera",
"raspberry-pi"
] |
Find all items in a list that match a specific format | 39,251,998 | <p>I am trying to find everything in a list that has a format like "######-##"</p>
<p>I thought I had the right idea in my following code, but it isn't printing anything. Some values in my list have that format, and I would think it should print it. Could you tell me what's wrong?</p>
<pre><code>for line in list_nums:
if (line[-1:].isdigit()):
if (line[-2:-1].isdigit()):
if (line[-6:-5].isdigit()):
if ("-" in line[-3:-2]):
print(list_nums)
</code></pre>
<p>The values in my list consist of formats like 123456-56 and 123456-98-98, which is why I wrote the checks above. The data is pulled from an Excel sheet.</p>
<p><strong>This is my updated code.</strong></p>
<pre><code>import xlrd
from re import compile, match
file_location = "R:/emily/emilylistnum.xlsx"
workbook = xlrd.open_workbook(file_location)
sheet = workbook.sheet_by_index(0)
regexp = compile(r'^\d{6}-\d{2}$')
list_nums = ""
for row in range(sheet.nrows):
cell = sheet.cell_value(row,0)
if regexp.match(cell):
list_nums += cell + "\n"
print(list_nums)
</code></pre>
<p><strong>my excel sheet consists of:</strong>
581094-001
581095-001
581096-001
581097-01
5586987-007
SMX53-5567-53BP
552392-01-01
552392-02
552392-03-01
552392-10-01
552392-10-01
580062
580063
580065
580065
580066
543921-01
556664-55</p>
<p>(in each cell down in one column)</p>
| 0 | 2016-08-31T14:11:19Z | 39,252,304 | <p>If you need to only match the pattern <code>######-##</code> (where <code>#</code> is a digit):</p>
<pre><code>>>> from re import compile, match
>>> regexp = compile(r'^\d{6}-\d{2}$')
>>> print([line for line in list_nums if regexp.match(line)])
['132456-78']
</code></pre>
<hr>
<h1>Explanations</h1>
<p>You <code>compile</code> the pattern into a regexp object to be more efficient when matching. The regexp is <code>^\d{6}-\d{2}$</code> where:</p>
<pre><code>^ # start of the line
\d{6}-\d{2} # 6 digits, one dash, then 2 digits
$ # end of the line
</code></pre>
<p>In the regexp, <code>\d</code> means digit (an integer from 0 to 9) and <code>{6}</code> means 6 times. So <code>\d{3}</code> means 3 digits. You should read the Python documentation about <a href="https://docs.python.org/2/library/re.html" rel="nofollow">regexp</a>s.</p>
<hr>
<h1>Full code</h1>
<p>An example based on your comment:</p>
<pre><code>import xlrd
from re import compile

file_location = 'file.xlsx'
workbook = xlrd.open_workbook(file_location)
sheet = workbook.sheet_by_index(0)
regexp = compile(r'^\d{6}-\d{2}$')
list_nums = ''
for row in range(sheet.nrows):
cell = sheet.cell_value(row, 0)
if regexp.match(cell):
list_nums += cell + "\n"
</code></pre>
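<p>For a quick sanity check without <code>xlrd</code>, the same pattern can be exercised against a plain Python list modeled on the sample values posted in the question (the list below is illustrative):</p>

```python
import re

# Illustrative values modeled on the question's spreadsheet column.
values = [
    "581094-001", "581097-01", "SMX53-5567-53BP",
    "552392-02", "580062", "543921-01", "556664-55",
]

pattern = re.compile(r'^\d{6}-\d{2}$')  # exactly 6 digits, a dash, 2 digits
matches = [v for v in values if pattern.match(v)]
print(matches)  # ['581097-01', '552392-02', '543921-01', '556664-55']
```

<p>Only the <code>######-##</code> entries survive; the longer <code>######-###</code> and <code>######-##-##</code> variants are rejected by the <code>$</code> anchor.</p>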
| 1 | 2016-08-31T14:25:12Z | [
"python",
"loops",
"for-loop"
] |
Find all items in a list that match a specific format | 39,251,998 | <p>I am trying to find everything in a list that has a format like "######-##"</p>
<p>I thought I had the right idea in my following code, but it isn't printing anything. Some values in my list have that format, and I would think it should print it. Could you tell me what's wrong?</p>
<pre><code>for line in list_nums:
if (line[-1:].isdigit()):
if (line[-2:-1].isdigit()):
if (line[-6:-5].isdigit()):
if ("-" in line[-3:-2]):
print(list_nums)
</code></pre>
<p>The values in my list consist of formats like 123456-56 and 123456-98-98, which is why I wrote the checks above. The data is pulled from an Excel sheet.</p>
<p><strong>This is my updated code.</strong></p>
<pre><code>import xlrd
from re import compile, match
file_location = "R:/emily/emilylistnum.xlsx"
workbook = xlrd.open_workbook(file_location)
sheet = workbook.sheet_by_index(0)
regexp = compile(r'^\d{6}-\d{2}$')
list_nums = ""
for row in range(sheet.nrows):
cell = sheet.cell_value(row,0)
if regexp.match(cell):
list_nums += cell + "\n"
print(list_nums)
</code></pre>
<p><strong>my excel sheet consists of:</strong>
581094-001
581095-001
581096-001
581097-01
5586987-007
SMX53-5567-53BP
552392-01-01
552392-02
552392-03-01
552392-10-01
552392-10-01
580062
580063
580065
580065
580066
543921-01
556664-55</p>
<p>(in each cell down in one column)</p>
| 0 | 2016-08-31T14:11:19Z | 39,252,643 | <p>Your code seems to be doing the right thing, with the exception that you want it to <strong>print</strong> the value of <em>line</em> instead of the value of <em>list_nums</em>. </p>
<p>Another approach to the task at hand would be to use regular expressions, which are ideal for pattern recognition.</p>
<p><strong>EDIT: CODE NOW ASSUMES list_nums TO BE A SINGLE STRING</strong> </p>
<pre><code>import re
rx = re.compile(r'\d{6}-\d{2}\Z')
for line in list_nums.split('\n'):
if rx.match(line):
print line
</code></pre>
| 0 | 2016-08-31T14:41:21Z | [
"python",
"loops",
"for-loop"
] |
Python - Checking if all and only the letters in a list match those in a string? | 39,252,019 | <p>I'm creating an <strong>Anagram Solver</strong> in <strong>Python 2.7</strong>.</p>
<p>The solver takes a user inputted anagram, converts each letter to a list item and then checks the list items against lines of a '.txt' file, appending any words that match the anagram's letters to a <code>possible_words</code> list, ready for printing.</p>
<p>It works... <em>almost!</em></p>
<hr>
<pre><code># Anagram_Solver.py
anagram = list(raw_input("Enter an Anagram: ").lower())
possible_words = []
with file('wordsEn.txt', 'r') as f:
for line in f:
if all(x in line + '\n' for x in anagram) and len(line) == len(anagram) + 1:
line = line.strip()
possible_words.append(line)
print "\n".join(possible_words)
</code></pre>
<hr>
<p>For anagrams with no duplicate letters it works fine, but for words such as 'hello', the output contains words such as 'helio, whole, holes', etc, as the solver doesn't seem to count the letter 'L' as being 2 separate entries?</p>
<p>What am I doing wrong? I feel like there is a simple solution that I'm missing?</p>
<p>Thanks!</p>
| 1 | 2016-08-31T14:12:25Z | 39,252,113 | <p>This is probably easiest to solve using a <code>collections.Counter</code></p>
<pre><code>>>> from collections import Counter
>>> Counter('Hello') == Counter('loleH')
True
>>> Counter('Hello') == Counter('loleHl')
False
</code></pre>
<p>The <code>Counter</code> will check that the letters and the number of times that each letter is present are the same.</p>
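<p>Applied to the word-list loop from the question (with an illustrative in-memory list standing in for <code>wordsEn.txt</code>), the comparison looks like this:</p>

```python
from collections import Counter

anagram = "hello"
target = Counter(anagram)  # letter counts of the anagram, computed once

# Illustrative stand-in for the lines of wordsEn.txt.
words = ["helio", "whole", "holes", "hello", "olleh"]

possible_words = [w for w in words if Counter(w) == target]
print(possible_words)  # ['hello', 'olleh']
```

<p><code>'helio'</code>, <code>'whole'</code> and <code>'holes'</code> are now correctly rejected, because their letter counts differ from the anagram's.</p>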
| 3 | 2016-08-31T14:16:57Z | [
"python",
"list",
"python-2.7",
"pattern-matching",
"anagram"
] |
Python - Checking if all and only the letters in a list match those in a string? | 39,252,019 | <p>I'm creating an <strong>Anagram Solver</strong> in <strong>Python 2.7</strong>.</p>
<p>The solver takes a user inputted anagram, converts each letter to a list item and then checks the list items against lines of a '.txt' file, appending any words that match the anagram's letters to a <code>possible_words</code> list, ready for printing.</p>
<p>It works... <em>almost!</em></p>
<hr>
<pre><code># Anagram_Solver.py
anagram = list(raw_input("Enter an Anagram: ").lower())
possible_words = []
with file('wordsEn.txt', 'r') as f:
for line in f:
if all(x in line + '\n' for x in anagram) and len(line) == len(anagram) + 1:
line = line.strip()
possible_words.append(line)
print "\n".join(possible_words)
</code></pre>
<hr>
<p>For anagrams with no duplicate letters it works fine, but for words such as 'hello', the output contains words such as 'helio, whole, holes', etc, as the solver doesn't seem to count the letter 'L' as being 2 separate entries?</p>
<p>What am I doing wrong? I feel like there is a simple solution that I'm missing?</p>
<p>Thanks!</p>
| 1 | 2016-08-31T14:12:25Z | 39,252,176 | <p>Your code does exactly what you told it to. You haven't actually made it check whether a letter appears twice (or 3+ times); it just checks <code>if 'l' in word</code> twice, which will always be True for all words with at least one <code>l</code>.</p>
<p>One method would be to count the letters of each word. If the letter counts are equal, then it is an anagram. This can be achieved easily with the <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow">collections.Counter</a> class:</p>
<pre><code>from collections import Counter
anagram = raw_input("Enter an Anagram: ").lower()
possible_words = []
with file('wordsEn.txt', 'r') as f:
for line in f:
line = line.strip()
if Counter(anagram) == Counter(line):
possible_words.append(line)
print "\n".join(possible_words)
</code></pre>
<p>Another method would be to use the <a href="https://docs.python.org/2/library/functions.html#sorted" rel="nofollow">sorted()</a> function, as suggested by Chris in the other answer's comments. This sorts the letters in both the anagram and line into alphabetical order, and then checks to see if they match. This process runs faster than the collections method.</p>
<pre><code>anagram = raw_input("Enter an Anagram: ").lower()
possible_words = []
with file('wordsEn.txt', 'r') as f:
for line in f:
line = line.strip()
if sorted(anagram) == sorted(line):
possible_words.append(line)
print "\n".join(possible_words)
</code></pre>
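<p>One optional tweak (my addition, not part of the original answer): since the anagram never changes inside the loop, its sorted form can be computed once up front instead of on every line:</p>

```python
anagram = "hello"          # stand-in for the raw_input() value
key = sorted(anagram)      # sort once, outside the loop

words = ["olleh", "holes", "hello", "world"]  # stand-in for the file lines
possible_words = [w for w in words if sorted(w) == key]
print(possible_words)  # ['olleh', 'hello']
```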
| 1 | 2016-08-31T14:19:53Z | [
"python",
"list",
"python-2.7",
"pattern-matching",
"anagram"
] |
pygame - Snap Mouse to Grid | 39,252,088 | <p>I'm making a little platformer game using pygame, and decided that making a level editor for each level would be easier than typing each blocks' coordinate and size.</p>
<hr>
<p>I'm using a set of lines, horizontally and vertically to make a grid to make plotting points easier.</p>
<p>Here's the code for my grid:</p>
<pre><code>def makeGrid(surface, width, height, spacing):
for x in range(0, width, spacing):
pygame.draw.line(surface, BLACK, (x,0), (x, height))
for y in range(0, height, spacing):
pygame.draw.line(surface, BLACK, (0,y), (width, y))
</code></pre>
<hr>
<p>I want the user's mouse to move at 10px intervals, to move to only the points of intersection. Here's what I tried to force the mouse to snap to the grid.</p>
<pre><code>def snapToGrid(mousePos):
if 0 < mousePos[0] < DISPLAYWIDTH and 0 < mousePos[1] < 700:
pygame.mouse.set_pos(roundCoords(mousePos[0],mousePos[1]))
</code></pre>
<p>(BTW, roundCoords() returns the coordinates rounded to the nearest ten unit.)</p>
<p>(Also BTW, snapToGrid() is called inside the main game loop (while not done))</p>
<p>...but <a href="https://youtu.be/Y_cfBKgNHgQ" rel="nofollow">this</a> happens, the mouse doesn't want to move anywhere else.</p>
<hr>
<p>Any suggestions on how to fix this? If I need to, I can change the grid code too.
Thanks a bunch.</p>
<p><strong>P.S. This is using the latest version of PyGame on 64 bit Python 2.7</strong></p>
| 1 | 2016-08-31T14:15:50Z | 39,266,755 | <p>First of all I think you're not far off.</p>
<p>I think the problem is that the code runs quite fast through each game loop, so your mouse doesn't have time to move far before being set to the position returned by your function. </p>
<p>What I would look into instead is: rather than calling <code>pygame.mouse.set_pos()</code>, just return the snapped coordinates to a variable and use this to blit a marker to the screen highlighting the intersection of interest (here I use a circle, but you could just blit the image of a mouse ;) ). Then hide your actual mouse using <code>pygame.mouse.set_visible(False)</code>:</p>
<pre><code>def snapToGrid(mousePos):
if 0 < mousePos[0] < DISPLAYWIDTH and 0 < mousePos[1] < 700:
return roundCoords(mousePos[0],mousePos[1])
snap_coord = snapToGrid(mousePos)  # save snapped coordinates to a variable
pygame.draw.circle(Surface, color, snap_coord, radius, 0)  # define the remaining arguments, Surface, color, radius as you need
pygame.mouse.set_visible(False)  # hide the actual mouse pointer
</code></pre>
<p>I hope that works for you !</p>
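<p>For completeness, a minimal <code>roundCoords()</code> along the lines the question describes (this is an assumed implementation — the question only says it rounds to the nearest ten) is pure arithmetic, so it can be tested without a pygame window:</p>

```python
def roundCoords(x, y, spacing=10):
    """Snap a point to the nearest grid intersection (assumed helper)."""
    snap = lambda v: int(round(float(v) / spacing)) * spacing
    return (snap(x), snap(y))

print(roundCoords(34, 77))   # (30, 80)
print(roundCoords(101, 96))  # (100, 100)
```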
| 1 | 2016-09-01T08:44:50Z | [
"python",
"python-2.7",
"pygame",
"grid-layout"
] |
Using strings and byte-like objects compatibly in code to run in both Python 2 & 3 | 39,252,140 | <p>I'm trying to modify the code shown far below, which works in Python 2.7.x, so it will also work unchanged in Python 3.x. However I'm encountering the following problem I can't solve in the first function, <code>bin_to_float()</code> as shown by the output below:</p>
<pre class="lang-none prettyprint-override"><code>float_to_bin(0.000000): '0'
Traceback (most recent call last):
File "binary-to-a-float-number.py", line 36, in <module>
float = bin_to_float(binary)
File "binary-to-a-float-number.py", line 9, in bin_to_float
return struct.unpack('>d', bf)[0]
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>I tried to fix that by adding a <code>bf = bytes(bf)</code> right before the call to <code>struct.unpack()</code>, but doing so produced its own <code>TypeError</code>:</p>
<pre><code>TypeError: string argument without an encoding
</code></pre>
<p>So my questions are is it possible to fix this issue and achieve my goal? And if so, how? Preferably in a way that would work in both versions of Python. </p>
<p>Here's the code that works in Python 2:</p>
<pre><code>import struct
def bin_to_float(b):
""" Convert binary string to a float. """
bf = int_to_bytes(int(b, 2), 8) # 8 bytes needed for IEEE 754 binary64
return struct.unpack('>d', bf)[0]
def int_to_bytes(n, minlen=0): # helper function
""" Int/long to byte string. """
nbits = n.bit_length() + (1 if n < 0 else 0) # plus one for any sign bit
nbytes = (nbits+7) // 8 # number of whole bytes
bytes = []
for _ in range(nbytes):
bytes.append(chr(n & 0xff))
n >>= 8
if minlen > 0 and len(bytes) < minlen: # zero pad?
bytes.extend((minlen-len(bytes)) * '0')
return ''.join(reversed(bytes)) # high bytes at beginning
# tests
def float_to_bin(f):
""" Convert a float into a binary string. """
ba = struct.pack('>d', f)
ba = bytearray(ba)
s = ''.join('{:08b}'.format(b) for b in ba)
s = s.lstrip('0') # strip leading zeros
return s if s else '0' # but leave at least one
for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
binary = float_to_bin(f)
print('float_to_bin(%f): %r' % (f, binary))
float = bin_to_float(binary)
print('bin_to_float(%r): %f' % (binary, float))
print('')
</code></pre>
| 5 | 2016-08-31T14:18:21Z | 39,252,964 | <p>To make portable code that works with bytes in both Python 2 and 3 using libraries that literally use the different data types between the two, you need to explicitly declare them using the appropriate literal mark for every string (or add <a href="http://python-future.org/unicode_literals.html" rel="nofollow"><code>from __future__ import unicode_literals</code></a> to top of every module doing this). This step is to ensure your data types are correct internally in your code.</p>
<p>Secondly, make the decision to support Python 3 going forward, with fallbacks specific for Python 2. This means overriding <code>str</code> with <code>unicode</code>, and figure out methods/functions that do not return the same types in both Python versions should be modified and replaced to return the correct type (being the Python 3 version). Do note that <code>bytes</code> is a reserved word, too, so don't use that.</p>
<p>Putting this together, your code will look something like this:</p>
<pre><code>import struct
import sys
if sys.version_info < (3, 0):
str = unicode
chr = unichr
def bin_to_float(b):
""" Convert binary string to a float. """
bf = int_to_bytes(int(b, 2), 8) # 8 bytes needed for IEEE 754 binary64
return struct.unpack(b'>d', bf)[0]
def int_to_bytes(n, minlen=0): # helper function
""" Int/long to byte string. """
nbits = n.bit_length() + (1 if n < 0 else 0) # plus one for any sign bit
nbytes = (nbits+7) // 8 # number of whole bytes
ba = bytearray(b'')
for _ in range(nbytes):
ba.append(n & 0xff)
n >>= 8
if minlen > 0 and len(ba) < minlen: # zero pad?
ba.extend((minlen-len(ba)) * b'0')
return u''.join(str(chr(b)) for b in reversed(ba)).encode('latin1') # high bytes at beginning
# tests
def float_to_bin(f):
""" Convert a float into a binary string. """
ba = struct.pack(b'>d', f)
ba = bytearray(ba)
s = u''.join(u'{:08b}'.format(b) for b in ba)
s = s.lstrip(u'0') # strip leading zeros
return (s if s else u'0').encode('latin1') # but leave at least one
for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
binary = float_to_bin(f)
print(u'float_to_bin(%f): %r' % (f, binary))
float = bin_to_float(binary)
print(u'bin_to_float(%r): %f' % (binary, float))
print(u'')
</code></pre>
<p>I used the <code>latin1</code> codec simply because that's how its byte mappings are defined (each of the first 256 code points maps to the byte of the same value), and it seems to work:</p>
<pre><code>$ python2 foo.py
float_to_bin(0.000000): '0'
bin_to_float('0'): 0.000000
float_to_bin(1.000000): '11111111110000000000000000000000000000000000000000000000000000'
bin_to_float('11111111110000000000000000000000000000000000000000000000000000'): 1.000000
float_to_bin(-14.000000): '1100000000101100000000000000000000000000000000000000000000000000'
bin_to_float('1100000000101100000000000000000000000000000000000000000000000000'): -14.000000
float_to_bin(12.546000): '100000000101001000101111000110101001111110111110011101101100100'
bin_to_float('100000000101001000101111000110101001111110111110011101101100100'): 12.546000
float_to_bin(3.141593): '100000000001001001000011111101110000010110000101011110101111111'
bin_to_float('100000000001001001000011111101110000010110000101011110101111111'): 3.141593
</code></pre>
<p>Again, but this time under Python 3.5:</p>
<pre><code>$ python3 foo.py
float_to_bin(0.000000): b'0'
bin_to_float(b'0'): 0.000000
float_to_bin(1.000000): b'11111111110000000000000000000000000000000000000000000000000000'
bin_to_float(b'11111111110000000000000000000000000000000000000000000000000000'): 1.000000
float_to_bin(-14.000000): b'1100000000101100000000000000000000000000000000000000000000000000'
bin_to_float(b'1100000000101100000000000000000000000000000000000000000000000000'): -14.000000
float_to_bin(12.546000): b'100000000101001000101111000110101001111110111110011101101100100'
bin_to_float(b'100000000101001000101111000110101001111110111110011101101100100'): 12.546000
float_to_bin(3.141593): b'100000000001001001000011111101110000010110000101011110101111111'
bin_to_float(b'100000000001001001000011111101110000010110000101011110101111111'): 3.141593
</code></pre>
<p>It's a lot more work, but in Python3 you can more clearly see that the types are done as proper bytes. I also changed your <code>bytes = []</code> to a bytearray to more clearly express what you were trying to do.</p>
| 3 | 2016-08-31T14:55:49Z | [
"python",
"string",
"python-2.7",
"python-3.x",
"bytestring"
] |
Using strings and byte-like objects compatibly in code to run in both Python 2 & 3 | 39,252,140 | <p>I'm trying to modify the code shown far below, which works in Python 2.7.x, so it will also work unchanged in Python 3.x. However I'm encountering the following problem I can't solve in the first function, <code>bin_to_float()</code> as shown by the output below:</p>
<pre class="lang-none prettyprint-override"><code>float_to_bin(0.000000): '0'
Traceback (most recent call last):
File "binary-to-a-float-number.py", line 36, in <module>
float = bin_to_float(binary)
File "binary-to-a-float-number.py", line 9, in bin_to_float
return struct.unpack('>d', bf)[0]
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>I tried to fix that by adding a <code>bf = bytes(bf)</code> right before the call to <code>struct.unpack()</code>, but doing so produced its own <code>TypeError</code>:</p>
<pre><code>TypeError: string argument without an encoding
</code></pre>
<p>So my questions are is it possible to fix this issue and achieve my goal? And if so, how? Preferably in a way that would work in both versions of Python. </p>
<p>Here's the code that works in Python 2:</p>
<pre><code>import struct
def bin_to_float(b):
""" Convert binary string to a float. """
bf = int_to_bytes(int(b, 2), 8) # 8 bytes needed for IEEE 754 binary64
return struct.unpack('>d', bf)[0]
def int_to_bytes(n, minlen=0): # helper function
""" Int/long to byte string. """
nbits = n.bit_length() + (1 if n < 0 else 0) # plus one for any sign bit
nbytes = (nbits+7) // 8 # number of whole bytes
bytes = []
for _ in range(nbytes):
bytes.append(chr(n & 0xff))
n >>= 8
if minlen > 0 and len(bytes) < minlen: # zero pad?
bytes.extend((minlen-len(bytes)) * '0')
return ''.join(reversed(bytes)) # high bytes at beginning
# tests
def float_to_bin(f):
""" Convert a float into a binary string. """
ba = struct.pack('>d', f)
ba = bytearray(ba)
s = ''.join('{:08b}'.format(b) for b in ba)
s = s.lstrip('0') # strip leading zeros
return s if s else '0' # but leave at least one
for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
binary = float_to_bin(f)
print('float_to_bin(%f): %r' % (f, binary))
float = bin_to_float(binary)
print('bin_to_float(%r): %f' % (binary, float))
print('')
</code></pre>
| 5 | 2016-08-31T14:18:21Z | 39,253,156 | <p>I had a different approach from @metatoaster's answer. I just modified <code>int_to_bytes</code> to use and return a <code>bytearray</code>:</p>
<pre><code>def int_to_bytes(n, minlen=0): # helper function
""" Int/long to byte string. """
nbits = n.bit_length() + (1 if n < 0 else 0) # plus one for any sign bit
nbytes = (nbits+7) // 8 # number of whole bytes
b = bytearray()
for _ in range(nbytes):
b.append(n & 0xff)
n >>= 8
if minlen > 0 and len(b) < minlen: # zero pad?
b.extend([0] * (minlen-len(b)))
return bytearray(reversed(b)) # high bytes at beginning
</code></pre>
<p>This seems to work without any other modifications under both Python 2.7.11 and Python 3.5.1.</p>
<p>Note that I zero padded with <code>0</code> instead of <code>'0'</code>. I didn't do much testing, but surely that's what you meant?</p>
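<p>A quick round-trip check (repeating the helper so the snippet is self-contained):</p>

```python
import struct

def int_to_bytes(n, minlen=0):  # helper function
    nbits = n.bit_length() + (1 if n < 0 else 0)
    nbytes = (nbits + 7) // 8
    b = bytearray()
    for _ in range(nbytes):
        b.append(n & 0xff)
        n >>= 8
    if minlen > 0 and len(b) < minlen:  # zero pad?
        b.extend([0] * (minlen - len(b)))
    return bytearray(reversed(b))  # high bytes at beginning

# float -> bit string -> int -> bytes -> float again
packed = struct.unpack('>Q', struct.pack('>d', 12.546))[0]
bits = format(packed, 'b')
restored = struct.unpack('>d', bytes(int_to_bytes(int(bits, 2), 8)))[0]
print(restored)  # 12.546
```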
| 1 | 2016-08-31T15:06:02Z | [
"python",
"string",
"python-2.7",
"python-3.x",
"bytestring"
] |
Using strings and byte-like objects compatibly in code to run in both Python 2 & 3 | 39,252,140 | <p>I'm trying to modify the code shown far below, which works in Python 2.7.x, so it will also work unchanged in Python 3.x. However I'm encountering the following problem I can't solve in the first function, <code>bin_to_float()</code> as shown by the output below:</p>
<pre class="lang-none prettyprint-override"><code>float_to_bin(0.000000): '0'
Traceback (most recent call last):
File "binary-to-a-float-number.py", line 36, in <module>
float = bin_to_float(binary)
File "binary-to-a-float-number.py", line 9, in bin_to_float
return struct.unpack('>d', bf)[0]
TypeError: a bytes-like object is required, not 'str'
</code></pre>
<p>I tried to fix that by adding a <code>bf = bytes(bf)</code> right before the call to <code>struct.unpack()</code>, but doing so produced its own <code>TypeError</code>:</p>
<pre><code>TypeError: string argument without an encoding
</code></pre>
<p>So my questions are is it possible to fix this issue and achieve my goal? And if so, how? Preferably in a way that would work in both versions of Python. </p>
<p>Here's the code that works in Python 2:</p>
<pre><code>import struct
def bin_to_float(b):
""" Convert binary string to a float. """
bf = int_to_bytes(int(b, 2), 8) # 8 bytes needed for IEEE 754 binary64
return struct.unpack('>d', bf)[0]
def int_to_bytes(n, minlen=0): # helper function
""" Int/long to byte string. """
nbits = n.bit_length() + (1 if n < 0 else 0) # plus one for any sign bit
nbytes = (nbits+7) // 8 # number of whole bytes
bytes = []
for _ in range(nbytes):
bytes.append(chr(n & 0xff))
n >>= 8
if minlen > 0 and len(bytes) < minlen: # zero pad?
bytes.extend((minlen-len(bytes)) * '0')
return ''.join(reversed(bytes)) # high bytes at beginning
# tests
def float_to_bin(f):
""" Convert a float into a binary string. """
ba = struct.pack('>d', f)
ba = bytearray(ba)
s = ''.join('{:08b}'.format(b) for b in ba)
s = s.lstrip('0') # strip leading zeros
return s if s else '0' # but leave at least one
for f in 0.0, 1.0, -14.0, 12.546, 3.141593:
binary = float_to_bin(f)
print('float_to_bin(%f): %r' % (f, binary))
float = bin_to_float(binary)
print('bin_to_float(%r): %f' % (binary, float))
print('')
</code></pre>
| 5 | 2016-08-31T14:18:21Z | 39,253,462 | <p>In Python 3, integers have a <code>to_bytes()</code> method that can perform the conversion in a single call. However, since you asked for a solution that works on Python 2 and 3 unmodified, here's an alternative approach.</p>
<p>If you take a detour via hexadecimal representation, the function <code>int_to_bytes()</code> becomes very simple:</p>
<pre><code>import codecs
def int_to_bytes(n, minlen=0):
hex_str = format(n, "0{}x".format(2 * minlen))
return codecs.decode(hex_str, "hex")
</code></pre>
<p>You might need some special case handling to deal with the case when the hex string gets an odd number of characters.</p>
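<p>One way to handle that edge case (a sketch of my own, padding to an even number of hex digits before decoding) might be:</p>

```python
import codecs

def int_to_bytes(n, minlen=0):
    hex_str = format(n, "0{}x".format(2 * minlen))
    if len(hex_str) % 2:        # odd digit count -> pad a leading zero
        hex_str = "0" + hex_str
    return codecs.decode(hex_str, "hex")

print(int_to_bytes(4095))    # b'\x0f\xff' (0xfff padded to 0x0fff)
print(int_to_bytes(255, 4))  # b'\x00\x00\x00\xff'
```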
<p>Note that I'm not sure this works with all versions of Python 3. I remember that pseudo-encodings weren't supported in some 3.x version, but I don't remember the details. I tested the code with Python 3.5.</p>
| 1 | 2016-08-31T15:21:59Z | [
"python",
"string",
"python-2.7",
"python-3.x",
"bytestring"
] |
How to handle non-numeric entries in an integer valued column | 39,252,170 | <p>I have a dataframe <code>df</code> and one of the columns, <code>count</code>, contains strings. These strings are mostly convertible to integers (e.g. <code>0006</code>), which is what I will do with them. However some of the entries in <code>count</code> are blank strings of spaces. How can I </p>
<ul>
<li>Drop all the rows where the <code>count</code> value is a blank string.</li>
<li>Substitute all the blank values in that column with some numeric value of my choice.</li>
</ul>
<p>The dataframe is very large, so particularly efficient ways of doing this would be appreciated.</p>
| 2 | 2016-08-31T14:19:39Z | 39,253,837 | <p>Use <code>dropna</code> or <code>fillna</code> after <code>pd.to_numeric(errors='coerce')</code></p>
<p>consider a pandas series <code>s</code></p>
<pre><code>s = pd.Series(np.random.choice(('0001', ''), 1000000), name='intish')
</code></pre>
<p><strong><em>drop method 1</em></strong> (less robust)</p>
<pre><code>s[s != ''].astype(int)
</code></pre>
<p><strong><em>drop method 2</em></strong> (more robust)</p>
<pre><code>pd.to_numeric(s, 'coerce').dropna().astype(int)
</code></pre>
<p><strong><em>drop timing</em></strong></p>
<p><a href="http://i.stack.imgur.com/Tqwgp.png" rel="nofollow"><img src="http://i.stack.imgur.com/Tqwgp.png" alt="enter image description here"></a></p>
<p><em>More robust method is faster</em></p>
<hr>
<p><strong><em>fill method 1</em></strong></p>
<pre><code>pd.to_numeric(s, 'coerce').fillna(0).astype(int)
</code></pre>
<p><strong><em>fill method 2</em></strong></p>
<pre><code>s.where(s.astype(bool), 0).astype(int)
</code></pre>
<p><strong><em>fill timing</em></strong></p>
<p><a href="http://i.stack.imgur.com/jvMGG.png" rel="nofollow"><img src="http://i.stack.imgur.com/jvMGG.png" alt="enter image description here"></a></p>
<p><em>Takes same amount of time as dropping</em></p>
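<p>A tiny self-contained check of both fill variants (using a short illustrative series instead of the million-row one):</p>

```python
import pandas as pd

s = pd.Series(['0001', '', '0001', ''], name='intish')

filled = pd.to_numeric(s, errors='coerce').fillna(0).astype(int)
print(filled.tolist())   # [1, 0, 1, 0]

dropped = pd.to_numeric(s, errors='coerce').dropna().astype(int)
print(dropped.tolist())  # [1, 1]
```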
| 3 | 2016-08-31T15:42:12Z | [
"python",
"pandas"
] |
How to handle non-numeric entries in an integer valued column | 39,252,170 | <p>I have a dataframe <code>df</code> and one of the columns, <code>count</code>, contains strings. These strings are mostly convertible to integers (e.g. <code>0006</code>), which is what I will do with them. However some of the entries in <code>count</code> are blank strings of spaces. How can I </p>
<ul>
<li>Drop all the rows where the <code>count</code> value is a blank string.</li>
<li>Substitute all the blank values in that column with some numeric value of my choice.</li>
</ul>
<p>The dataframe is very large, so I would appreciate particularly efficient ways of doing this.</p>
| 2 | 2016-08-31T14:19:39Z | 39,253,857 | <p>It seems that you want two different things. But first, convert column <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html#pandas.to_numeric" rel="nofollow">to numeric</a> and coerce errors:</p>
<pre><code>df['count'] = pd.to_numeric(df['count'], errors='coerce')
</code></pre>
<p>To drop rows (use <code>subset</code> to avoid dropping <code>NaN</code> from other columns):</p>
<pre><code>df.dropna(subset=['count'])
</code></pre>
<p>To replace with default value:</p>
<pre><code>df['count'] = df['count'].fillna(default_value)
</code></pre>
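<p>Putting the two options together, a self-contained sketch (the column values and the fill default are illustrative):</p>

```python
import pandas as pd

# toy frame whose 'count' column mixes zero-padded digits and blanks
df = pd.DataFrame({'count': ['0006', '   ', '0012', '']})

# coerce non-numeric entries (blank strings) to NaN
df['count'] = pd.to_numeric(df['count'], errors='coerce')

# option 1: drop the rows whose 'count' could not be parsed
dropped = df.dropna(subset=['count'])

# option 2: replace the unparsable entries with a default, then cast back to int
filled = df['count'].fillna(0).astype(int)
```

<p>The same pattern scales to large frames, since <code>to_numeric</code> is vectorised.</p>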
| 3 | 2016-08-31T15:42:42Z | [
"python",
"pandas"
] |
Gensim: how to retrain doc2vec model using previous word2vec model | 39,252,207 | <p>With Doc2Vec modelling, I have trained a model and saved the following files:</p>
<pre><code>1. model
2. model.docvecs.doctag_syn0.npy
3. model.syn0.npy
4. model.syn1.npy
5. model.syn1neg.npy
</code></pre>
<p>However, I have a new way to label the documents and want to train the model again. Since the word vectors were already obtained from the previous version, is there any way to reuse that model (e.g., taking the previous w2v results as initial vectors for training)? Does anyone know how to do it?</p>
| 0 | 2016-08-31T14:21:26Z | 39,364,018 | <p>I've figured out that we can just load the model and continue to train.</p>
<pre><code>model = Doc2Vec.load("old_model")
model.train(sentences)
</code></pre>
| 0 | 2016-09-07T07:42:51Z | [
"python",
"gensim",
"word2vec",
"doc2vec"
] |
How can I unit test a method using a datetime? | 39,252,328 | <p>I have the follow class and method:</p>
<pre><code>class DateTimeHelper(object):
@staticmethod
def get_utc_millisecond_timestamp():
(dt, micro) = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f').split('.')
return "%s.%03d" % (dt, int(micro) / 1000) # UTC time with millisecond
</code></pre>
<p>How can I unit test it? I am completely stumped although this is simple. It's my first unit test. </p>
| 1 | 2016-08-31T14:26:32Z | 39,252,463 | <p>Use the <a href="https://docs.python.org/3/library/unittest.mock.html" rel="nofollow"><code>unittest.mock</code> library</a> (Python 3.3 and newer, backported as <a href="https://pypi.python.org/pypi/mock" rel="nofollow"><code>mock</code></a>), to replace calls to any code external to your code-under-test.</p>
<p>Here, I'd mock out not only <code>utcnow()</code> but <code>strftime()</code> too, to just return a string object:</p>
<pre><code>with mock.patch('datetime.datetime') as dt_mock:
dt_mock.utcnow.return_value.strftime.return_value = '2016-08-04 12:22:44.123456'
result = DateTimeHelper.get_utc_millisecond_timestamp()
</code></pre>
<p>If you feel that testing the <code>strftime()</code> argument is important, give <code>dt_mock.utcnow.return_value</code> an explicit <code>datetime</code> object to return instead; you'd have to create that test object <em>before</em> you mock however, as you can't mock out just the <code>datetime.datetime.utcnow</code> class method:</p>
<pre><code>testdt = datetime.datetime(2016, 8, 4, 12, 22, 44, 123456)
with mock.patch('datetime.datetime') as dt_mock:
dt_mock.utcnow.return_value = testdt
result = DateTimeHelper.get_utc_millisecond_timestamp()
</code></pre>
<p>or, in your unittests, use <code>from datetime import datetime</code> to keep a reference to the class that isn't mocked.</p>
<p>Demo:</p>
<pre><code>>>> from unittest import mock
>>> import datetime
>>> class DateTimeHelper(object):
... @staticmethod
... def get_utc_millisecond_timestamp():
... (dt, micro) = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f').split('.')
... return "%s.%03d" % (dt, int(micro) / 1000) # UTC time with millisecond
...
>>> with mock.patch('datetime.datetime') as dt_mock:
... dt_mock.utcnow.return_value.strftime.return_value = '2016-08-04 12:22:44.123456'
... result = DateTimeHelper.get_utc_millisecond_timestamp()
...
>>> result
'2016-08-04 12:22:44.123'
>>> testdt = datetime.datetime(2016, 8, 4, 12, 22, 44, 123456)
>>> with mock.patch('datetime.datetime') as dt_mock:
... dt_mock.utcnow.return_value = testdt
... result = DateTimeHelper.get_utc_millisecond_timestamp()
...
>>> result
'2016-08-04 12:22:44.123'
</code></pre>
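<p>An alternative that avoids patching altogether is to inject the clock as a parameter; a sketch (the <code>now</code> argument is an addition for testability, not part of the original helper):</p>

```python
import datetime

def get_utc_millisecond_timestamp(now=datetime.datetime.utcnow):
    # 'now' defaults to the real clock but can be swapped for a stub in tests
    dt, micro = now().strftime('%Y-%m-%d %H:%M:%S.%f').split('.')
    return "%s.%03d" % (dt, int(micro) / 1000)  # UTC time with millisecond

# in a test, pass a callable returning a fixed datetime
fixed = lambda: datetime.datetime(2016, 8, 4, 12, 22, 44, 123456)
assert get_utc_millisecond_timestamp(fixed) == '2016-08-04 12:22:44.123'
```

<p>This keeps the production behaviour unchanged while making the helper deterministic under test.</p>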
| 1 | 2016-08-31T14:32:49Z | [
"python",
"unit-testing",
"datetime"
] |
Confusion with Fancy indexing (for non-fancy people) | 39,252,370 | <p>Let's assume a multi-dimensional array</p>
<pre><code>import numpy as np
foo = np.random.rand(102,43,35,51)
</code></pre>
<p>I know that those last dimensions represent a 2D space (35,51) in which I would like to index <strong>a range of rows</strong> of a <strong>column</strong>.
Let's say I want to have rows 8 to 30 of column 0.
From my understanding of indexing, I should call</p>
<pre><code>foo[0][0][8::30][0]
</code></pre>
<p>Knowing my data though (unlike the random data used here), this is not what I expected.</p>
<p>I could try this, which does work but looks ridiculous:</p>
<pre><code>foo[0][0][[8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],0]
</code></pre>
<p>Now from what I can find in <a href="https://docs.scipy.org/doc/numpy-dev/user/quickstart.html#head-864862d3f2bb4c32f04260fac61eb4ef34788c4c" rel="nofollow">this documentation </a> I can also use
something like:</p>
<pre><code>foo[0][0][[8,30],0]
</code></pre>
<p>which only gives me the values of rows 8 and 30
while this:</p>
<pre><code>foo[0][0][[8::30],0]
</code></pre>
<p>gives an error</p>
<pre><code>File "<ipython-input-568-cc49fe1424d1>", line 1
foo[0][0][[8::30],0]
^
SyntaxError: invalid syntax
</code></pre>
<p>I don't understand why the :: argument cannot be passed here. What is then a way to indicate a range in your indexing syntax?</p>
<p>So I guess my overall question is what would be the proper pythonic equivalent of this syntax:</p>
<pre><code>foo[0][0][[8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],0]
</code></pre>
| 1 | 2016-08-31T14:28:51Z | 39,255,017 | <p>Instead of </p>
<pre><code>foo[0][0][8::30][0]
</code></pre>
<p>try</p>
<pre><code>foo[0, 0, 8:30, 0]
</code></pre>
<p>The <code>foo[0][0]</code> part is the same as <code>foo[0, 0, :, :]</code>, selecting a 2d array (35 x 51). But <code>foo[0][0][8::30]</code> selects a subset of those rows</p>
<p>Consider what happens when I use <code>0::30</code> on a 2d array:</p>
<pre><code>In [490]: np.zeros((35,51))[0::30].shape
Out[490]: (2, 51)
In [491]: np.arange(35)[0::30]
Out[491]: array([ 0, 30])
</code></pre>
<p>The <code>30</code> is the <code>step</code>, not the <code>stop</code> value of the slice.</p>
<p>the last <code>[0]</code> then picks the first of those rows. The end result is the same as <code>foo[0,0,0,:]</code>.</p>
<p>It is better, in most cases, to index multiple dimensions with the comma syntax. And if you want the first 30 rows use <code>0:30</code>, not <code>0::30</code> (that's basic slicing notation, applicable to lists as well as arrays).</p>
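<p>One more detail: the stop index of a slice is exclusive, and slicing behaves identically on plain lists, which makes it easy to check:</p>

```python
rows = list(range(35))   # stand-in for the 35 rows of the 2D space

assert rows[8:31] == list(range(8, 31))   # rows 8 through 30, inclusive
assert rows[8:30][-1] == 29               # 8:30 stops *before* row 30
assert rows[8::30] == [8]                 # 8::30 means start at 8, step by 30
```

<p>So "rows 8 to 30 inclusive" is the slice <code>8:31</code>.</p>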
<p>As for: </p>
<pre><code>foo[0][0][[8::30],0]
</code></pre>
<p>simplify it to <code>x[[8::30], 0]</code>. The Python interpreter accepts <code>[1:2:3, 0]</code>, translating it to <code>tuple(slice(1,2,3), 0)</code> and passing it to a <code>__getitem__</code> method. But the colon syntax is accepted in a very specific context. The interpreter is treating that inner set of brackets as a list, and colons are not accepted there.</p>
<pre><code>foo[0,0,[1,2,3],0]
</code></pre>
<p>is ok, because the inner brackets are a list, and the numpy <code>getitem</code> can handle those.</p>
<p><code>numpy</code> has a tool for converting a slice notation into a list of numbers. Play with that if it is still confusing:</p>
<pre><code>In [495]: np.r_[8:30]
Out[495]:
array([ 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
25, 26, 27, 28, 29])
In [496]: np.r_[8::30]
Out[496]: array([0])
In [497]: np.r_[8:30:2]
Out[497]: array([ 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28])
</code></pre>
| 2 | 2016-08-31T16:48:04Z | [
"python",
"arrays",
"numpy",
"indexing",
"scipy"
] |
Make python for loop go only once | 39,252,474 | <p>Alright, so I'm trying to make a Facebook bot over here to do some stuff for me, but I don't really think that matters for you.</p>
<p>Anyway, in order to achieve what I want to do I need to do some stuff. So, using the Facebook API I am getting some posts id with the following code:</p>
<pre><code>for posts in parsed_json:
post_id = posts.get('id')
post_url = "http://facebook.com/" + str(post_id)
text_save(post_url)
</code></pre>
<p>But the problem is that this code gets me the last 25 post ID's and I only need the last one.
So bassicaly what I'm trying to do is: Get the last post ID and then execute the text_save() function with that.</p>
<p>But that loop gets me 25 ids, and I don't need them. I need only the first one.</p>
<p>So how do I limit the for loop to run only once? I tried the following thing:</p>
<pre><code>a = 0
while a < 1:
for posts in parsed_json:
post_id = posts.get('id')
post_url = "http://facebook.com/" + str(post_id)
text_save(post_url)
a = a + 1
</code></pre>
<p>But that didn't really work out, it still goes through it 25 times. Any ideas?</p>
| -3 | 2016-08-31T14:33:25Z | 39,252,506 | <p>To get the last value (or first), simply use <code>"http://facebook.com/" + str(parsed_json[-1].get('id'))</code> (or parsed_json[0])</p>
<p>If you want to use a loop instead,
to save just the last value, iterate and run the command afterwards:</p>
<pre><code>post_url = ''
for posts in parsed_json:
post_id = posts.get('id')
post_url = "http://facebook.com/" + str(post_id)
text_save(post_url)
</code></pre>
<p>To break from a loop after one interation use:</p>
<pre><code>a = 0
for posts in parsed_json:
if a >= 1: break
post_id = posts.get('id')
post_url = "http://facebook.com/" + str(post_id)
text_save(post_url)
a += 1
</code></pre>
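<p>Alternatively, <code>itertools.islice</code> caps the iteration without a manual counter; a sketch with a stand-in for the API response:</p>

```python
from itertools import islice

parsed_json = [{'id': 1}, {'id': 2}, {'id': 3}]  # stand-in for the API response

# islice(..., 1) yields at most one element, so the loop body runs once
urls = ["http://facebook.com/" + str(p.get('id')) for p in islice(parsed_json, 1)]
assert urls == ["http://facebook.com/1"]
```

<p>For just the first or last element, plain indexing (<code>parsed_json[0]</code> / <code>parsed_json[-1]</code>) remains the simplest option.</p>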
| 3 | 2016-08-31T14:34:59Z | [
"python",
"facebook"
] |
Searching number of times a string (from file a) occurs in file b. Each string from file a is on a separate line | 39,252,500 | <p>I am not new to Python, but I am not a formally trained programmer and I use it very often for my work. So sorry in advance for any inelegant coding solutions.</p>
<p>I have a file (.txt) with a unique string on each line. I want to know how many times each string occurs in another file. I built the following, which works as long as I feed the text I want to find "2-177-382":</p>
<pre><code>count = 0
with open(Input_file_B) as f:
for line in f:
count += line.count("2-177-382")
print count
</code></pre>
<p>This results in:</p>
<pre><code>Number of times: 19
</code></pre>
<p>So far so good. When I replace it with a variable it also works.</p>
<pre><code>y="2-177-382"
count = 0
with open(Input_file_B) as f:
for line in f:
count += line.count(y)
print count
</code></pre>
<p>This also works and results also in:</p>
<pre><code>Number of times: 19
</code></pre>
<p>But now I want to fill <em>y</em> using a loop with the data extracted from file A, this is where it goes wrong. File A looks like this:</p>
<pre><code>1-1-24
1-1-25
1-1-26
1-1-27
14-178-370
5-224-309
5-226-307
2-177-382
</code></pre>
<p>I show this as an example below (without how I extract it from the other file). </p>
<pre><code>total_A = open(Input_file_A,'r')
Data_input_file_A = total_A.readlines()
y = Data_input_file_A[7]
print y
</code></pre>
<p>Results in: </p>
<pre><code>2-177-382
</code></pre>
<p>Exactly the same as before and also a string, when I feed this into the earlier code:</p>
<pre><code>total_A = open(Input_file_A,'r')
Data_input_file_A = total_A.readlines()
y = Data_input_file_A[7]
count = 0
with open(Input_file_B) as f:
for line in f:
count += line.count(y)
print count
</code></pre>
<p>Result:
Number of times: 0</p>
<p>How is this possible, please help me out!?</p>
| -1 | 2016-08-31T14:34:36Z | 39,252,722 | <p><code>readlines()</code> does not remove the newline character. This should be clear if instead of <code>print y</code> you do <code>print repr(y)</code> to reveal all the characters. So in your code you need:</p>
<pre><code>y = Data_input_file_A[7].rstrip()
</code></pre>
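<p>A self-contained sketch of the whole fix, with the file contents inlined as lists for illustration:</p>

```python
file_a_lines = ["1-1-24\n", "2-177-382\n"]                 # stand-in for Input_file_A
file_b_lines = ["2-177-382 foo 2-177-382\n", "1-1-24\n"]   # stand-in for Input_file_B

y = file_a_lines[1].rstrip()   # strip the trailing newline before searching
count = 0
for line in file_b_lines:
    count += line.count(y)
assert count == 2
```

<p>Without <code>rstrip()</code>, <code>y</code> would be <code>'2-177-382\n'</code> and never match.</p>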
| 0 | 2016-08-31T14:44:32Z | [
"python",
"search"
] |
Recursion code not working as expected - Python 3 | 39,252,515 | <p>I am a beginner to programming and in this case cannot understand why this Python code doesn't work as expected.</p>
<p>I am trying to use recursion by calling a function within a function; calling the function n times, and reducing n by 1 to 0 with each loop at which point it will stop.</p>
<p>Instead my code prints 'freak' once, then I get a 'maximum recursion depth' error message.</p>
<pre><code>def print_m():
print ('freak')
def do_n(arg, n):
if n >= 0:
print (do_n(arg,n))
n = n - 1
</code></pre>
| 1 | 2016-08-31T14:35:27Z | 39,252,609 | <p>Recursion is not a loop!</p>
<p>You should be <em>passing</em> <code>n - 1</code> to the recursive call instead of merely <code>n</code>, not subtracting one from <code>n</code> <em>after</em> this call.</p>
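<p>A corrected sketch (which also calls the helper, something the original snippet never does):</p>

```python
def print_m():
    print('freak')

def do_n(func, n):
    # base case: stop once n reaches 0
    if n > 0:
        func()
        do_n(func, n - 1)   # recurse with the *decremented* value

do_n(print_m, 3)   # prints 'freak' three times
```

<p>Passing <code>n - 1</code> at the call site is what moves the recursion toward its base case.</p>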
| 6 | 2016-08-31T14:39:48Z | [
"python",
"recursion"
] |
Recursion code not working as expected - Python 3 | 39,252,515 | <p>I am a beginner to programming and in this case cannot understand why this Python code doesn't work as expected.</p>
<p>I am trying to use recursion by calling a function within a function; calling the function n times, and reducing n by 1 to 0 with each loop at which point it will stop.</p>
<p>Instead my code prints 'freak' once, then I get a 'maximum recursion depth' error message.</p>
<pre><code>def print_m():
print ('freak')
def do_n(arg, n):
if n >= 0:
print (do_n(arg,n))
n = n - 1
</code></pre>
| 1 | 2016-08-31T14:35:27Z | 39,252,689 | <p>Passing <code>n-1</code> as a parameter to the recursive call fixes the problem.</p>
| 1 | 2016-08-31T14:43:11Z | [
"python",
"recursion"
] |
Recursion code not working as expected - Python 3 | 39,252,515 | <p>I am a beginner to programming and in this case cannot understand why this Python code doesn't work as expected.</p>
<p>I am trying to use recursion by calling a function within a function; calling the function n times, and reducing n by 1 to 0 with each loop at which point it will stop.</p>
<p>Instead my code prints 'freak' once, then I get a 'maximum recursion depth' error message.</p>
<pre><code>def print_m():
print ('freak')
def do_n(arg, n):
if n >= 0:
print (do_n(arg,n))
n = n - 1
</code></pre>
| 1 | 2016-08-31T14:35:27Z | 39,291,844 | <p>In the recursive call on line 6 you should pass <code>print(do_n(arg, n-1))</code>.</p>
| 0 | 2016-09-02T11:55:43Z | [
"python",
"recursion"
] |
Using xpath to extract date | 39,252,521 | <p>I'm trying to extract the date from a webpage, '07/18/16' in the comment below. I'm not clear on the syntax for xpath, how would you grab just the date?</p>
<pre><code>#<p>Opened <a class="timeline" href="/trac3/timeline?from=2016-07-
#18T14%3A46%3A43-04%3A00&amp;precision=second" title="See timeline at
#07/18/16 14:46:43">6 weeks ago</a></p>
from lxml import html
import requests
page = requests.get(webpage)
tree = html.fromstring(page.content)
openDate = tree.xpath('//Opened/text()')
print 'Open Date: ', openDate
</code></pre>
| 0 | 2016-08-31T14:35:37Z | 39,252,887 | <p>You can't do this directly. XPath selects elements, not fields inside them.
So <code>"//p/a[text()]"</code> returns the whole element, e.g. <code><a class="timeline" href="/trac3/timeline?from=2016-07-18T14%3A46%3A43-04%3A00&amp;amp;precision=second" title="See timeline at 07/18/16 14:46:43">6 weeks ago</a></code>.
Or you can select by condition, like <code>"//p/a[text() = '6 weeks ago']"</code>.
So get this <code><a></a></code> tag and then parse it with Python.</p>
| -1 | 2016-08-31T14:52:33Z | [
"python",
"regex",
"xpath",
"python-requests",
"lxml"
] |
Using xpath to extract date | 39,252,521 | <p>I'm trying to extract the date from a webpage, '07/18/16' in the comment below. I'm not clear on the syntax for xpath, how would you grab just the date?</p>
<pre><code>#<p>Opened <a class="timeline" href="/trac3/timeline?from=2016-07-
#18T14%3A46%3A43-04%3A00&amp;precision=second" title="See timeline at
#07/18/16 14:46:43">6 weeks ago</a></p>
from lxml import html
import requests
page = requests.get(webpage)
tree = html.fromstring(page.content)
openDate = tree.xpath('//Opened/text()')
print 'Open Date: ', openDate
</code></pre>
| 0 | 2016-08-31T14:35:37Z | 39,252,900 | <p>Here's one way only using xpath 1.0:</p>
<pre><code>substring-before(substring-after(normalize-space(//a[contains(concat(' ',normalize-space(@class),' '),' timeline ')]/@title),'See timeline at '), ' ')
</code></pre>
<p>The <code>contains(concat(' ',normalize-space(@class),' '),' timeline ')</code> might seem like overkill, but will account for the possibility of classes other than "timeline" being in the class attribute.</p>
<p>XPath test: <a href="http://www.xpathtester.com/xpath/7805b0601b1468ea17209127e14fa470" rel="nofollow">http://www.xpathtester.com/xpath/7805b0601b1468ea17209127e14fa470</a></p>
<p><strong>lxml Example</strong></p>
<pre class="lang-py prettyprint-override"><code>from lxml import html
page = """<p>Opened <a class="timeline" href="/trac3/timeline?from=2016-07-18T14%3A46%3A43-04%3A00&amp;precision=second" title="See timeline at 07/18/16 14:46:43">6 weeks ago</a></p>"""
tree = html.fromstring(page)
try:
openDate = tree.xpath("substring-before(substring-after(normalize-space(//a[contains(concat(' ',normalize-space(@class),' '),' timeline ')]/@title),'See timeline at '), ' ')")
print 'Open Date: ', openDate
#Open Date: 07/18/16
except:
print("Something went wrong")
</code></pre>
| 1 | 2016-08-31T14:52:57Z | [
"python",
"regex",
"xpath",
"python-requests",
"lxml"
] |
Using xpath to extract date | 39,252,521 | <p>I'm trying to extract the date from a webpage, '07/18/16' in the comment below. I'm not clear on the syntax for xpath, how would you grab just the date?</p>
<pre><code>#<p>Opened <a class="timeline" href="/trac3/timeline?from=2016-07-
#18T14%3A46%3A43-04%3A00&amp;precision=second" title="See timeline at
#07/18/16 14:46:43">6 weeks ago</a></p>
from lxml import html
import requests
page = requests.get(webpage)
tree = html.fromstring(page.content)
openDate = tree.xpath('//Opened/text()')
print 'Open Date: ', openDate
</code></pre>
| 0 | 2016-08-31T14:35:37Z | 39,253,097 | <p>XPath works by matching elements in an XML structured document.</p>
<p>Your XPath will fail, because what you are saying is search the entire document ("//") for any elements called "Opened" (i.e. <code><Opened/></code>) and return their inner text ("text()").</p>
<p>Assuming your HTML is consistent, what you actually want to do is scrape the contents of the anchor titles for the date, something like this: </p>
<pre><code>//p[contains(text(),'Opened')]/a[@class='timeline']/@title
</code></pre>
<p>this will search the entire document for any anchors which are of the class 'timeline' and that are within paragraphs containing the word 'Opened' and return the contents of their 'title' attributes.</p>
<p>Notice I said "any anchors"; you're result will be a list of matched titles, so you'll need to decide what to do if you have multiple matches.</p>
<p>Once you have the title(s), you'll need to do some string slicing in python to retrieve the date part.</p>
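<p>For that last step, a plain-Python sketch of pulling the date out of a title string (the regex assumes the MM/DD/YY form shown in the question):</p>

```python
import re

title = "See timeline at 07/18/16 14:46:43"   # value pulled from the @title attribute
match = re.search(r'\d{2}/\d{2}/\d{2}', title)
open_date = match.group(0) if match else None
assert open_date == '07/18/16'
```

<p>Guarding on <code>match</code> avoids an <code>AttributeError</code> when a title does not contain a date.</p>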
<p>I'm assuming it's just XPath you're struggling with so I have left out any python examples. I recommend this site as a great starting point for XPath: <a href="http://dh.obdurodon.org/introduction-xpath.xhtml" rel="nofollow">http://dh.obdurodon.org/introduction-xpath.xhtml</a></p>
| 0 | 2016-08-31T15:02:58Z | [
"python",
"regex",
"xpath",
"python-requests",
"lxml"
] |
Using xpath to extract date | 39,252,521 | <p>I'm trying to extract the date from a webpage, '07/18/16' in the comment below. I'm not clear on the syntax for xpath, how would you grab just the date?</p>
<pre><code>#<p>Opened <a class="timeline" href="/trac3/timeline?from=2016-07-
#18T14%3A46%3A43-04%3A00&amp;precision=second" title="See timeline at
#07/18/16 14:46:43">6 weeks ago</a></p>
from lxml import html
import requests
page = requests.get(webpage)
tree = html.fromstring(page.content)
openDate = tree.xpath('//Opened/text()')
print 'Open Date: ', openDate
</code></pre>
| 0 | 2016-08-31T14:35:37Z | 39,253,234 | <p>Like this?</p>
<pre><code>import re
from lxml import html
data = """<p>Opened <a class="timeline" href="/trac3/timeline?from=2016-07-18T14%3A46%3A43-04%3A00&amp;precision=second" title="See timeline at 07/18/16 14:46:43">6 weeks ago</a></p>"""
tree = html.fromstring(data)
try:
href = tree.xpath("//a[@class='timeline']/@href")[0]
openDate = re.search(r'from=(\d+-\d+-\d+)', href).group(1)
print('Open Date: ', openDate)
# Open Date: 2016-07-18
except:
print("Something went wrong")
</code></pre>
<p>This grabs the <code>@href</code> attribute first, then analyzes it with a regular expression.
<hr>
Having read the question again, you might rather look for the <strong>title attribute</strong>:</p>
<pre><code>try:
href = tree.xpath("//a[@class='timeline']/@title")[0]
openDate = re.search(r'\d+/\d+/\d+', href).group(0)
print('Open Date: ', openDate)
# Open Date: 07/18/16
except:
print("Something went wrong")
</code></pre>
| 2 | 2016-08-31T15:09:57Z | [
"python",
"regex",
"xpath",
"python-requests",
"lxml"
] |
What is happening over here? Does Python override? | 39,252,542 | <p><strong>Case 1:</strong></p>
<pre><code>class A(object):
def __init__(self):
print "A"
class B(A):
pass
c = B()
#output:
#A
</code></pre>
<p><strong>Case 2:</strong></p>
<pre><code>class A(object):
def __init__(self):
print "A"
class B(A):
def __init__(self):
print "B"
c = B()
#output:
#B
</code></pre>
<p>In case1 it runs the constructor of <code>class A</code> and in case2 it runs the constructor of <code>class B</code>.</p>
<p>So if case 1 prints A, I understand that it is running the <code>class A</code> constructor because <code>class B</code> has inherited it.</p>
<p>Then in case 2 it only runs the <code>class B</code> constructor but not the <code>class A</code> constructor. Why is that?</p>
<p>Now, what is happening here? Is Python overriding the <code>class A</code> constructor, or what else is happening? I am new to programming.</p>
| 0 | 2016-08-31T14:37:14Z | 39,252,771 | <p>According to the documentation, when a class is constructed, its base classes are remembered; if some attribute is not found on the class itself, lookup continues through all base classes. In your case, class B does not define an <code>__init__</code> method in case 1, so its parent's is called. In the second example, if you want to also run the constructor of the base class, you can use the <code>super()</code> function.</p>
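<p>For instance, a sketch of chaining both constructors with <code>super()</code> (written with <code>print()</code> as a function so it runs on Python 2 and 3):</p>

```python
class A(object):
    def __init__(self):
        print('A')

class B(A):
    def __init__(self):
        super(B, self).__init__()   # run A's constructor first
        print('B')

c = B()   # prints 'A' then 'B'
```

<p>Without the <code>super()</code> call, defining <code>__init__</code> on B replaces A's constructor entirely, which is exactly what case 2 shows.</p>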
| 2 | 2016-08-31T14:47:06Z | [
"python",
"oop",
"constructor"
] |
Using Python's multiprocessing to calculate the sum of integers from one long input line | 39,252,595 | <p>I want to use Python's multiprocessing module for the following:
Map an input line to a list of integers and calculate the sum of this list.</p>
<p>The input line is initially a string where the items to be summed are separated by spaces.</p>
<p>What I have tried is this:</p>
<pre><code>from itertools import imap
my_input = '1000000000 ' * int(1e6)
print sum(imap(int, my_input.split()))
</code></pre>
<p>This takes about 600ms on my machine, but I would like to make it faster with multiprocessing.</p>
<p>It seems that the bottleneck is in the mapping-part since the sum-method is pretty fast when applied to a ready list of integers:</p>
<pre><code>>>> int_list = [int(1e9)] * int(1e6)
>>> %time sum(int_list)
CPU times: user 7.38 ms, sys: 5 µs, total: 7.38 ms
Wall time: 7.4 ms
>>> 1000000000000000
</code></pre>
<p>I tried to apply the instructions from <a href="http://stackoverflow.com/questions/29785427/how-to-parallel-sum-a-loop-using-multiprocessing-in-python">this question</a> but as I'm quite new to using multiprocessing, I couldn't fit the instructions to this problem.</p>
| 1 | 2016-08-31T14:39:26Z | 39,253,051 | <p>So, this seems to roughly boil down to three steps:</p>
<ol>
<li>Make a <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool" rel="nofollow">pool</a></li>
<li>Map int() across the list within that pool</li>
<li>Sum the results.</li>
</ol>
<p>So:</p>
<pre><code>if __name__ == '__main__':
import multiprocessing
my_input = '1000000000 ' * int(1e6)
string_list = my_input.split()
# Pool over all CPUs
int_list = multiprocessing.Pool().map(int, string_list)
print sum(int_list)
</code></pre>
<p>It may be more efficient for time to use generators where possible:</p>
<pre><code>if __name__ == '__main__':
import multiprocessing
import re
my_input = '1000000000 ' * int(1e6)
# use a regex iterator matching whitespace
string_list = (x.group(0) for x in re.finditer(r'[^\s]+\s', my_input))
# Pool over all CPUs
int_list = multiprocessing.Pool().imap(int, string_list)
print sum(int_list)
</code></pre>
<p>The regex will likely be slower than <code>split</code>, but using <code>re.finditer</code> should allow the <code>Pool</code> to start mapping as fast as individual splits are made, and using <code>imap</code> rather than <code>map</code> should do similarly for <code>sum</code> (allowing it to start adding numbers as they become available). Credit to <a href="http://stackoverflow.com/a/9770397/6051861">this answer</a> for the <code>re.finditer</code> idea.</p>
<p>It may or may not be more efficient to multiprocess than doing it in a single process. You might end up losing more time making new processes and passing the results back from them than you gain in doing things all at once. The same goes for if you were to try putting the adding into the pool as well.</p>
<p>On the system I'm testing this on, which has two CPUs, I get the one-process solution to run in about half a second, the non-generator multiprocess solution in about 1 second, and the generator solution in 12-13 seconds.</p>
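<p>The boundary arithmetic behind handing each worker its own slice of the raw string can be checked without spawning any processes; a process-free sketch (the function name is illustrative):</p>

```python
def chunk_bounds(s, n_chunks):
    # split s into n_chunks pieces, moving each cut right to just past a space
    bounds, step = [0], len(s) // n_chunks
    for i in range(1, n_chunks):
        cut = s.index(' ', i * step) + 1   # never cuts a number in half
        bounds.append(cut)
    bounds.append(len(s))
    return bounds

my_input = '1000000000 ' * 1000
b = chunk_bounds(my_input, 4)
partial = [sum(map(int, my_input[b[i]:b[i + 1]].split())) for i in range(4)]
assert sum(partial) == 1000 * 1000000000
```

<p>Each worker would then parse only its own slice, so no chunk ever splits a token.</p>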
| 0 | 2016-08-31T15:00:56Z | [
"python",
"python-2.7",
"python-multiprocessing"
] |
Using Python's multiprocessing to calculate the sum of integers from one long input line | 39,252,595 | <p>I want to use Python's multiprocessing module for the following:
Map an input line to a list of integers and calculate the sum of this list.</p>
<p>The input line is initially a string where the items to be summed are separated by spaces.</p>
<p>What I have tried is this:</p>
<pre><code>from itertools import imap
my_input = '1000000000 ' * int(1e6)
print sum(imap(int, my_input.split()))
</code></pre>
<p>This takes about 600ms on my machine, but I would like to make it faster with multiprocessing.</p>
<p>It seems that the bottleneck is in the mapping-part since the sum-method is pretty fast when applied to a ready list of integers:</p>
<pre><code>>>> int_list = [int(1e9)] * int(1e6)
>>> %time sum(int_list)
CPU times: user 7.38 ms, sys: 5 µs, total: 7.38 ms
Wall time: 7.4 ms
>>> 1000000000000000
</code></pre>
<p>I tried to apply the instructions from <a href="http://stackoverflow.com/questions/29785427/how-to-parallel-sum-a-loop-using-multiprocessing-in-python">this question</a> but as I'm quite new to using multiprocessing, I couldn't fit the instructions to this problem.</p>
| 1 | 2016-08-31T14:39:26Z | 39,258,010 | <p>Using a feature of Unix systems called <a href="https://en.wikipedia.org/wiki/Fork_(system_call)" rel="nofollow">forking</a>, you can read (not write) data from the parent process with zero overhead. Normally, you would have to copy the data over, but forking a process in Unix allows you to circumvent this.</p>
<p>Using this, the job in the pool can access the whole input string and extract the part that it will work on. It can then split and parse this section of the string on its own and return the sum of the integers in its section. </p>
<pre><code>from multiprocessing import Pool, cpu_count
from time import time
def serial(data):
return sum(map(int, data.split()))
def parallel(data):
processes = cpu_count()
with Pool(processes) as pool:
args = zip(
["input_"] * processes, # name of global to access
range(processes), # job number
[processes] * processes # total number of jobs
)
return sum(pool.map(job, args, chunksize=1))
def job(args):
global_name, job_number, total_jobs = args
data = globals()[global_name]
chunk = get_chunk(data, job_number, total_jobs)
return serial(chunk)
def get_chunk(string, job_number, total_jobs):
"""This function may mess up if the number of integers in each chunk is low (1-2).
It also assumes there is only 1 space separating integers."""
approx_chunk_size = len(string) // total_jobs
# initial estimates
start = approx_chunk_size * job_number
end = start + approx_chunk_size
if start and not string.startswith(" ", start - 1):
# if string[start] is not beginning of a number, advance to start of next number
start = string.index(" ", start) + 1
    if job_number == total_jobs - 1:
# last job
end = None
elif not string.startswith(" ", end - 1):
# if string[end] is part of a number, then advance to end of number
end = string.index(" ", end - 1)
return string[start:end]
def timeit(func, *args, **kwargs):
"Simple timing function"
start = time()
result = func(*args, **kwargs)
end = time()
print("{} took {} seconds".format(func.__name__, end - start))
return result
if __name__ == "__main__":
# from multiprocessing.dummy import Pool # uncomment this for testing
input_ = "1000000000 " * int(1e6)
actual = timeit(parallel, input_)
expected = timeit(serial, input_)
assert actual == expected
</code></pre>
| 0 | 2016-08-31T19:57:37Z | [
"python",
"python-2.7",
"python-multiprocessing"
] |
Caffe: Loading images and labels within python for fine-tuning | 39,252,655 | <p>I am having some problems when I try to load a batch of images + labels within python and try to use it to train a network.
I am working with pairs of images that I transform into one (for example, by averaging equivalent pixels from both images) and then feed to the network. As the number of pairs is too big (combinations of the individual images two by two) to store all in memory, I am creating each batch at each iteration and would like to give it to the network.</p>
<p>I am instantiating the network this way:</p>
<pre><code>solver = caffe.get_solver(path_to_solver_file)
</code></pre>
<p>where the input network layers are defined with:</p>
<pre><code>layer {
name: "data"
type: "ImageData"
top: "data"
top: "label"
include {
phase: TRAIN
}
image_data_param {
batch_size: 1
new_height: 227
new_width: 227
}
}
layer {
name: "data"
type: "ImageData"
top: "data"
top: "label"
include {
phase: TEST
}
image_data_param {
batch_size: 1
new_height: 227
new_width: 227
}
}
</code></pre>
<p>In my earlier experiments, I had a <strong>source</strong> parameter inside <strong>image_data_param</strong>, where I would pass a file with my images and labels for both training and test.
However, as I want to load them within python, I removed the source parameter, but got the following error:</p>
<pre><code>0830 17:01:49.014819 1967923200 layer_factory.hpp:77] Creating layer data
I0830 17:01:49.014858 1967923200 net.cpp:91] Creating Layer data
I0830 17:01:49.014868 1967923200 net.cpp:399] data -> data
I0830 17:01:49.014890 1967923200 net.cpp:399] data -> label
I0830 17:01:49.014910 1967923200 image_data_layer.cpp:38] Opening file
I0830 17:01:49.014935 1967923200 image_data_layer.cpp:53] A total of 0 images.
Segmentation fault: 11
</code></pre>
<p>I haven't got to this point yet, but after I'm able to instantiate the network, I was going to load the batch and perform one step of the SGD optimization using:</p>
<pre><code>net.blobs["data"].data[...] = images
net.blobs["label"].data[...] = labels
net.step(1)
</code></pre>
<p>I have searched for examples and tutorials (for example, <a href="http://nbviewer.jupyter.org/github/BVLC/caffe/blob/tutorial/examples/03-fine-tuning.ipynb" rel="nofollow">here</a> and <a href="http://christopher5106.github.io/deep/learning/2015/09/04/Deep-learning-tutorial-on-Caffe-Technology.html" rel="nofollow">here</a>) that perform fine-tuning and testing using python, however the great majority only discuss the forward passes during a test phase, whereas the ones that fine-tune a network define the training data (labels and images) using the source parameter, and not by loading directly from the python interface.</p>
| 1 | 2016-08-31T14:41:45Z | 39,278,014 | <p>The layer <code>ImageData</code> can only read images from a text file which holds the address of images on each line. You cannot use an <code>ImageData</code> layer and feed in images on the fly.</p>
<p>If you are interested in loading images on the fly, check out this <a href="http://stackoverflow.com/a/39097123/5465000" rel="nofollow">link</a>. I suggest using the deploy version (3rd option explained in that post), which is way handier and more flexible.</p>
<p>When using the deploy version, you can set the input data for your network like this:</p>
<pre><code>x = data
y = labels
solver.net.blobs['data'].data[...] = x
solver.net.blobs['label'].data[...] = y
</code></pre>
<p>Then you can either call <code>solver.net.forward()</code> to only see the outputs;
or you can call <code>solver.net.step(1)</code> to run a forward and a backward (train for 1 iteration).</p>
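<p>Alternatively, if the combined images can be written to disk, the <code>ImageData</code> route only needs a listing file with one <code>path label</code> pair per line; a sketch of generating such a file (paths here are placeholders, not real images):</p>

```python
import os, tempfile

pairs = [('pair_000.jpg', 0), ('pair_001.jpg', 1)]  # (image path, label) placeholders

listing = os.path.join(tempfile.mkdtemp(), 'train.txt')
with open(listing, 'w') as f:
    for path, label in pairs:
        f.write('%s %d\n' % (path, label))   # the "path label" format ImageData reads

with open(listing) as f:
    lines = f.read().splitlines()
assert lines == ['pair_000.jpg 0', 'pair_001.jpg 1']
```

<p>The resulting file path is what would go in the layer's <code>source</code> parameter.</p>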
| 0 | 2016-09-01T17:54:01Z | [
"python",
"input",
"batch-processing",
"caffe",
"pycaffe"
] |