| title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
print([[i+j for i in "abc"] for j in "def"])
| 39,373,220
|
<p>I'm new to Python.</p>
<p>I stumbled upon this comprehension:</p>
<pre><code>print([[i+j for i in "abc"] for j in "def"])
</code></pre>
<p>Could you please help me convert the comprehension in for loop?</p>
<p>I'm not getting the desired result by for loop:</p>
<pre><code>list = []
list2 = []
for j in 'def':
    for i in 'abc':
        list.append(i+j)
list2 = list
print(list)
</code></pre>
<p>The above is my attempt with a for loop; I'm missing something. Below is the desired result that I want from the for loop.</p>
<p><code>([['ad', 'bd', 'cd'], ['ae', 'be', 'ce'], ['af', 'bf', 'cf']])</code></p>
<p>which I believe is a matrix. </p>
<p>Thanks in advance.</p>
| -2
|
2016-09-07T14:54:52Z
| 39,373,404
|
<p>The easiest way to unravel a comprehension like this is to take it one comprehension at a time and write <em>that</em> as a loop. So:</p>
<pre><code>[[i+j for i in "abc"] for j in "def"]
</code></pre>
<p>becomes:</p>
<pre><code>outer_list = []
for j in "def":
    outer_list.append([i + j for i in "abc"])
</code></pre>
<p>Alright, cool. Now we've gotten rid of the outer comprehension so we can unravel the inner comprehension next:</p>
<pre><code>outer_list = []
for j in "def":
    inner_list = []
    for i in "abc":
        inner_list.append(i + j)
    outer_list.append(inner_list)
</code></pre>
| 3
|
2016-09-07T15:03:15Z
|
[
"python",
"list-comprehension"
] |
print([[i+j for i in "abc"] for j in "def"])
| 39,373,220
|
<p>I'm new to Python.</p>
<p>I stumbled upon this comprehension:</p>
<pre><code>print([[i+j for i in "abc"] for j in "def"])
</code></pre>
<p>Could you please help me convert the comprehension in for loop?</p>
<p>I'm not getting the desired result by for loop:</p>
<pre><code>list = []
list2 = []
for j in 'def':
    for i in 'abc':
        list.append(i+j)
list2 = list
print(list)
</code></pre>
<p>The above is my attempt with a for loop; I'm missing something. Below is the desired result that I want from the for loop.</p>
<p><code>([['ad', 'bd', 'cd'], ['ae', 'be', 'ce'], ['af', 'bf', 'cf']])</code></p>
<p>which I believe is a matrix. </p>
<p>Thanks in advance.</p>
| -2
|
2016-09-07T14:54:52Z
| 39,373,415
|
<pre><code>a = 'abc'
b = 'def'
>>> [[x+y for x in a] for y in b]
[['ad', 'bd', 'cd'], ['ae', 'be', 'ce'], ['af', 'bf', 'cf']]
</code></pre>
<h2>Loop</h2>
<pre><code>>>> for y in b:
...     for x in a:
...         print x+y,
...
ad bd cd ae be ce af bf cf
</code></pre>
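<p>If the goal is the nested list itself rather than the flat printout, a minimal sketch building it with plain loops (same <code>a</code> and <code>b</code> as above):</p>
<pre><code>result = []
for y in b:
    row = []
    for x in a:
        row.append(x + y)
    result.append(row)
print(result)  # [['ad', 'bd', 'cd'], ['ae', 'be', 'ce'], ['af', 'bf', 'cf']]
</code></pre>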
| 1
|
2016-09-07T15:03:46Z
|
[
"python",
"list-comprehension"
] |
What is the encoding of the body of Gmail message? How to decode it?
| 39,373,243
|
<p>I am using the Python API for Gmail. I am querying for some messages and retrieving them correctly, but the body of the messages looks like total nonsense, even when the MIME type is said to be <code>text/plain</code> or <code>text/html</code>.</p>
<p>I have been searching all over the API docs, but they keep saying it's a string, when it obviously must be in some encoding... I thought it could be <code>base64</code> encoding, but trying to decode it with Python's <code>base64</code> gives me <code>TypeError: Incorrect padding</code>, so either it's not <code>base64</code> or I'm decoding it badly.</p>
<p>I'd love to provide a good example, but since I'm handling sensitive information I'll have to obfuscate it a bit...</p>
<pre><code>{
  "payload": {
    "mimeType": "multipart/mixed",
    "filename": "",
    "headers": [
      ...
    ],
    "body": {
      "size": 0
    },
    "parts": [
      {
        "mimeType": "multipart/alternative",
        "filename": "",
        "headers": [
          {
            "name": "Content-Type",
            "value": "multipart/alternative; boundary=001a1140b160adc309053bd7ec57"
          }
        ],
        "body": {
          "size": 0
        },
        "parts": [
          {
            "partId": "0.0",
            "mimeType": "text/plain",
            "filename": "",
            "headers": [
              {
                "name": "Content-Type",
                "value": "text/plain; charset=UTF-8"
              },
              {
                "name": "Content-Transfer-Encoding",
                "value": "quoted-printable"
              }
            ],
            "body": {
              "size": 4067,
              "data": "LS0tLS0tLS0tLSBGb3J3YXJkZWQgbWVzc2FnZSAtLS0tLS0tLS0tDQpGcm9tOiBMaW5rZWRJbiA8am9iLWFwcHNAbGlua2VkaW4uY29tPg0KRGF0ZTogU2F0LCBTZXAgMywgMjAxNiBhdCA5OjMwIEFNDQpTdWJqZWN0OiBBcHBsaWNhdGlvbiBmb3IgU2VuaW9yIEJhY2tlbmQgRGV2ZWxvcG..."
            }
</code></pre>
<p>The field that I'm talking about is <code>payload.parts[0].parts[0].body.data</code>. I have truncated it at a random point, so I doubt it's decodable like that, but you get the point... What is that encoding?</p>
<p>Also, it wouldn't hurt to know where in the docs they explicitly say it's base64 (unless it's the standard encoding for MIME?).</p>
<p><strong>UPDATE:</strong> So in the end there was some bad luck involved. I have 5 mails like this, and it turns out that the first one is malformed, for some unknown reason. After moving on to the other ones, I was able to decode all of them with the approaches suggested in the answers. Thank you all!</p>
| 3
|
2016-09-07T14:56:20Z
| 39,373,321
|
<p>It's base64. You can use <code>base64.decodestring</code> to read it.
The part of the message that you attached is: '---------- Forwarded message ----------\r\nFrom: LinkedIn <job-apps@linkedin.com>\r\nDate: Sat, Sep 3, 2016 at 9:30 AM\r\nSubject: Application for Senior Backend Develo'</p>
<p>The incorrect padding error means that you're decoding an incorrect number of characters. You're probably trying to decode a truncated message.</p>
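<p>Note that <code>base64.decodestring</code> is Python 2 only (and deprecated). A minimal Python 3 sketch, assuming the data is the URL-safe base64 variant the Gmail API uses, with padding added to avoid the incorrect-padding error:</p>
<pre><code>import base64

data = message_part["body"]["data"]  # hypothetical: the body.data field from the API response
padded = data + "=" * (-len(data) % 4)  # re-pad to a multiple of 4
print(base64.urlsafe_b64decode(padded).decode("utf-8"))
</code></pre>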
| 2
|
2016-09-07T14:59:52Z
|
[
"python",
"encoding",
"gmail-api"
] |
What is the encoding of the body of Gmail message? How to decode it?
| 39,373,243
|
<p>I am using the Python API for Gmail. I am querying for some messages and retrieving them correctly, but the body of the messages looks like total nonsense, even when the MIME type is said to be <code>text/plain</code> or <code>text/html</code>.</p>
<p>I have been searching all over the API docs, but they keep saying it's a string, when it obviously must be in some encoding... I thought it could be <code>base64</code> encoding, but trying to decode it with Python's <code>base64</code> gives me <code>TypeError: Incorrect padding</code>, so either it's not <code>base64</code> or I'm decoding it badly.</p>
<p>I'd love to provide a good example, but since I'm handling sensitive information I'll have to obfuscate it a bit...</p>
<pre><code>{
  "payload": {
    "mimeType": "multipart/mixed",
    "filename": "",
    "headers": [
      ...
    ],
    "body": {
      "size": 0
    },
    "parts": [
      {
        "mimeType": "multipart/alternative",
        "filename": "",
        "headers": [
          {
            "name": "Content-Type",
            "value": "multipart/alternative; boundary=001a1140b160adc309053bd7ec57"
          }
        ],
        "body": {
          "size": 0
        },
        "parts": [
          {
            "partId": "0.0",
            "mimeType": "text/plain",
            "filename": "",
            "headers": [
              {
                "name": "Content-Type",
                "value": "text/plain; charset=UTF-8"
              },
              {
                "name": "Content-Transfer-Encoding",
                "value": "quoted-printable"
              }
            ],
            "body": {
              "size": 4067,
              "data": "LS0tLS0tLS0tLSBGb3J3YXJkZWQgbWVzc2FnZSAtLS0tLS0tLS0tDQpGcm9tOiBMaW5rZWRJbiA8am9iLWFwcHNAbGlua2VkaW4uY29tPg0KRGF0ZTogU2F0LCBTZXAgMywgMjAxNiBhdCA5OjMwIEFNDQpTdWJqZWN0OiBBcHBsaWNhdGlvbiBmb3IgU2VuaW9yIEJhY2tlbmQgRGV2ZWxvcG..."
            }
</code></pre>
<p>The field that I'm talking about is <code>payload.parts[0].parts[0].body.data</code>. I have truncated it at a random point, so I doubt it's decodable like that, but you get the point... What is that encoding?</p>
<p>Also, it wouldn't hurt to know where in the docs they explicitly say it's base64 (unless it's the standard encoding for MIME?).</p>
<p><strong>UPDATE:</strong> So in the end there was some bad luck involved. I have 5 mails like this, and it turns out that the first one is malformed, for some unknown reason. After moving on to the other ones, I was able to decode all of them with the approaches suggested in the answers. Thank you all!</p>
| 3
|
2016-09-07T14:56:20Z
| 39,373,410
|
<pre><code>>>> "LS0tLS0tLS0tLSBGb3J3YXJkZWQgbWVzc2FnZSAtLS0tLS0tLS0tDQpGcm9tOiBMaW5rZWRJbiA8am9iLWFwcHNAbGlua2VkaW4uY29tPg0KRGF0ZTogU2F0LCBTZXAgMywgMjAxNiBhdCA5OjMwIEFNDQpTdWJqZWN0OiBBcHBsaWNhdGlvbiBmb3IgU2VuaW9yIEJhY2tlbmQgRGV2ZWxvcG==".decode('base64')
'---------- Forwarded message ----------\r\nFrom: LinkedIn <job-apps@linkedin.com>\r\nDate: Sat, Sep 3, 2016 at 9:30 AM\r\nSubject: Application for Senior Backend Develop'
</code></pre>
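<p>Keep in mind that <code>.decode('base64')</code> only exists in Python 2; a rough Python 3 equivalent of the same decode would be:</p>
<pre><code>import base64

data = "aGVsbG8gd29ybGQ="  # any base64 string, e.g. the body.data field shown above
print(base64.b64decode(data).decode('utf-8'))  # hello world
</code></pre>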
| 0
|
2016-09-07T15:03:31Z
|
[
"python",
"encoding",
"gmail-api"
] |
What is the encoding of the body of Gmail message? How to decode it?
| 39,373,243
|
<p>I am using the Python API for Gmail. I am querying for some messages and retrieving them correctly, but the body of the messages looks like total nonsense, even when the MIME type is said to be <code>text/plain</code> or <code>text/html</code>.</p>
<p>I have been searching all over the API docs, but they keep saying it's a string, when it obviously must be in some encoding... I thought it could be <code>base64</code> encoding, but trying to decode it with Python's <code>base64</code> gives me <code>TypeError: Incorrect padding</code>, so either it's not <code>base64</code> or I'm decoding it badly.</p>
<p>I'd love to provide a good example, but since I'm handling sensitive information I'll have to obfuscate it a bit...</p>
<pre><code>{
  "payload": {
    "mimeType": "multipart/mixed",
    "filename": "",
    "headers": [
      ...
    ],
    "body": {
      "size": 0
    },
    "parts": [
      {
        "mimeType": "multipart/alternative",
        "filename": "",
        "headers": [
          {
            "name": "Content-Type",
            "value": "multipart/alternative; boundary=001a1140b160adc309053bd7ec57"
          }
        ],
        "body": {
          "size": 0
        },
        "parts": [
          {
            "partId": "0.0",
            "mimeType": "text/plain",
            "filename": "",
            "headers": [
              {
                "name": "Content-Type",
                "value": "text/plain; charset=UTF-8"
              },
              {
                "name": "Content-Transfer-Encoding",
                "value": "quoted-printable"
              }
            ],
            "body": {
              "size": 4067,
              "data": "LS0tLS0tLS0tLSBGb3J3YXJkZWQgbWVzc2FnZSAtLS0tLS0tLS0tDQpGcm9tOiBMaW5rZWRJbiA8am9iLWFwcHNAbGlua2VkaW4uY29tPg0KRGF0ZTogU2F0LCBTZXAgMywgMjAxNiBhdCA5OjMwIEFNDQpTdWJqZWN0OiBBcHBsaWNhdGlvbiBmb3IgU2VuaW9yIEJhY2tlbmQgRGV2ZWxvcG..."
            }
</code></pre>
<p>The field that I'm talking about is <code>payload.parts[0].parts[0].body.data</code>. I have truncated it at a random point, so I doubt it's decodable like that, but you get the point... What is that encoding?</p>
<p>Also, it wouldn't hurt to know where in the docs they explicitly say it's base64 (unless it's the standard encoding for MIME?).</p>
<p><strong>UPDATE:</strong> So in the end there was some bad luck involved. I have 5 mails like this, and it turns out that the first one is malformed, for some unknown reason. After moving on to the other ones, I was able to decode all of them with the approaches suggested in the answers. Thank you all!</p>
| 3
|
2016-09-07T14:56:20Z
| 39,373,553
|
<p>This is base64.</p>
<p>Your truncated message is:</p>
<pre><code>---------- Forwarded message ----------
From: LinkedIn <job-apps@linkedin.com>
Date: Sat, Sep 3, 2016 at 9:30 AM
Subject: Application for Senior Backend Develop
</code></pre>
<p>Here's some sample code:</p>
<p>I had to remove the last 3 characters from your truncated message because I was getting the same padding error as you. You probably have some garbage in the message you're trying to decode.</p>
<pre><code>import base64
body = "LS0tLS0tLS0tLSBGb3J3YXJkZWQgbWVzc2FnZSAtLS0tLS0tLS0tDQpGcm9tOiBMaW5rZWRJbiA8am9iLWFwcHNAbGlua2VkaW4uY29tPg0KRGF0ZTogU2F0LCBTZXAgMywgMjAxNiBhdCA5OjMwIEFNDQpTdWJqZWN0OiBBcHBsaWNhdGlvbiBmb3IgU2VuaW9yIEJhY2tlbmQgRGV2ZWxv"
result = base64.b64decode(body)
print(result)
</code></pre>
<h2>UPDATE</h2>
<p>Here's a snippet for getting and decoding the message body. The decoding part was taken from the Gmail API documentation:</p>
<pre><code>import base64
import email

message = service.users().messages().get(userId='me', id=msg_id, format='full').execute()
msg_str = base64.urlsafe_b64decode(message['payload']['body']['data'].encode('UTF8'))
mime_msg = email.message_from_string(msg_str)
print(msg_str)
</code></pre>
<p>Reference doc:
<a href="https://developers.google.com/gmail/api/v1/reference/users/messages/get#python" rel="nofollow">https://developers.google.com/gmail/api/v1/reference/users/messages/get#python</a></p>
| 2
|
2016-09-07T15:10:36Z
|
[
"python",
"encoding",
"gmail-api"
] |
What is the encoding of the body of Gmail message? How to decode it?
| 39,373,243
|
<p>I am using the Python API for Gmail. I am querying for some messages and retrieving them correctly, but the body of the messages looks like total nonsense, even when the MIME type is said to be <code>text/plain</code> or <code>text/html</code>.</p>
<p>I have been searching all over the API docs, but they keep saying it's a string, when it obviously must be in some encoding... I thought it could be <code>base64</code> encoding, but trying to decode it with Python's <code>base64</code> gives me <code>TypeError: Incorrect padding</code>, so either it's not <code>base64</code> or I'm decoding it badly.</p>
<p>I'd love to provide a good example, but since I'm handling sensitive information I'll have to obfuscate it a bit...</p>
<pre><code>{
  "payload": {
    "mimeType": "multipart/mixed",
    "filename": "",
    "headers": [
      ...
    ],
    "body": {
      "size": 0
    },
    "parts": [
      {
        "mimeType": "multipart/alternative",
        "filename": "",
        "headers": [
          {
            "name": "Content-Type",
            "value": "multipart/alternative; boundary=001a1140b160adc309053bd7ec57"
          }
        ],
        "body": {
          "size": 0
        },
        "parts": [
          {
            "partId": "0.0",
            "mimeType": "text/plain",
            "filename": "",
            "headers": [
              {
                "name": "Content-Type",
                "value": "text/plain; charset=UTF-8"
              },
              {
                "name": "Content-Transfer-Encoding",
                "value": "quoted-printable"
              }
            ],
            "body": {
              "size": 4067,
              "data": "LS0tLS0tLS0tLSBGb3J3YXJkZWQgbWVzc2FnZSAtLS0tLS0tLS0tDQpGcm9tOiBMaW5rZWRJbiA8am9iLWFwcHNAbGlua2VkaW4uY29tPg0KRGF0ZTogU2F0LCBTZXAgMywgMjAxNiBhdCA5OjMwIEFNDQpTdWJqZWN0OiBBcHBsaWNhdGlvbiBmb3IgU2VuaW9yIEJhY2tlbmQgRGV2ZWxvcG..."
            }
</code></pre>
<p>The field that I'm talking about is <code>payload.parts[0].parts[0].body.data</code>. I have truncated it at a random point, so I doubt it's decodable like that, but you get the point... What is that encoding?</p>
<p>Also, it wouldn't hurt to know where in the docs they explicitly say it's base64 (unless it's the standard encoding for MIME?).</p>
<p><strong>UPDATE:</strong> So in the end there was some bad luck involved. I have 5 mails like this, and it turns out that the first one is malformed, for some unknown reason. After moving on to the other ones, I was able to decode all of them with the approaches suggested in the answers. Thank you all!</p>
| 3
|
2016-09-07T14:56:20Z
| 39,374,729
|
<p>Important distinction: it is <strong>web-safe base64</strong> encoded (aka "base64url"). The docs are not very good on it; the MessagePartBody is best documented here:
<a href="https://developers.google.com/gmail/api/v1/reference/users/messages/attachments" rel="nofollow">https://developers.google.com/gmail/api/v1/reference/users/messages/attachments</a></p>
<p>And it says the type is "bytes" (which obviously isn't save to transmit over JSON as-is), but I agree with you, it doesn't clearly specify it's base64url encoded like other "bytes" fields are in the API.</p>
<p>As for the padding issues: is it because you're truncating? If not, check whether <code>len(data) % 4 == 0</code>; if it isn't, the API is returning unpadded data, which would be unexpected.</p>
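<p>A small sketch of that padding check, re-padding before decoding in case the data ever arrives unpadded:</p>
<pre><code>import base64

def decode_body(data):
    # base64url input must be a multiple of 4 characters; re-pad if needed
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))
</code></pre>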
| 1
|
2016-09-07T16:07:17Z
|
[
"python",
"encoding",
"gmail-api"
] |
Cannot create new ipython notebook or start jupyter
| 39,373,397
|
<p>Whenever I try to start a Jupyter/IPython notebook, I get the following error:<a href="http://i.stack.imgur.com/TbhAr.png" rel="nofollow"><img src="http://i.stack.imgur.com/TbhAr.png" alt="enter image description here"></a></p>
<p>Sometimes, after restarting the system, I am able to start the IPython notebook, but I cannot create a notebook; it gives me an error saying Forbidden.</p>
<p>My command prompt also says 0 active kernels.</p>
<p><a href="http://i.stack.imgur.com/Vnrg9.png" rel="nofollow"><img src="http://i.stack.imgur.com/Vnrg9.png" alt="enter image description here"></a>
Does anyone else have this issue or know the answer to this problem?</p>
| 0
|
2016-09-07T15:03:00Z
| 39,400,277
|
<p>I used to have this issue too. It seems to be a bug: I would restart my PC and it would work normally again. It's been weeks since I last had this issue, and I didn't do anything to intentionally solve it.</p>
| 0
|
2016-09-08T21:12:03Z
|
[
"python",
"ipython",
"jupyter",
"jupyter-notebook"
] |
Craft raw wifi packet
| 39,373,497
|
<p>I'd like to craft single wifi packets, getting the raw binary data just before they are converted to a waveform and transmitted. As I understand it, this should be at the data link layer, and include all the headers (sync bits, CRC, etc) and the data itself. Is there a way to do this (preferably with Python)? I've looked into scapy, Wireshark, etc but I can't tell if or how they can get me what I need.</p>
| 0
|
2016-09-07T15:08:13Z
| 39,393,537
|
<p>You can dump all packets via monitor mode.<br>
For example, this code sniffs all data packets on the mon0 interface:</p>
<pre><code>from scapy.all import *

def handler(pkt):
    if pkt.haslayer(Dot11):
        if pkt.type == 2:
            pkt.show()

sniff(iface="mon0", prn=handler)
</code></pre>
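<p>For actually crafting and sending a raw frame (rather than sniffing), a minimal scapy sketch; the MAC addresses and payload are placeholders, and the lowest-level parts (preamble, FCS/CRC) are normally added by the hardware/driver rather than by scapy:</p>
<pre><code>from scapy.all import RadioTap, Dot11, Raw, sendp

# type=2 marks a data frame; addr1/addr2/addr3 are receiver/transmitter/BSSID
frame = RadioTap() / Dot11(type=2,
                           addr1="ff:ff:ff:ff:ff:ff",
                           addr2="00:11:22:33:44:55",
                           addr3="00:11:22:33:44:55") / Raw(b"payload")
sendp(frame, iface="mon0")
</code></pre>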
| 1
|
2016-09-08T14:25:09Z
|
[
"python",
"wifi",
"packet"
] |
how to replace elements in xml using python
| 39,373,532
|
<p>Sorry for my poor English, but I need your help. </p>
<p>i have 2 xml files.</p>
<p>one is:</p>
<pre><code><root>
  <data name="aaaa">
    <value>"old value1"</value>
    <comment>"this is an old value1 of aaaa"</comment>
  </data>
  <data name="bbbb">
    <value>"old value2"</value>
    <comment>"this is an old value2 of bbbb"</comment>
  </data>
</root>
</code></pre>
<p>two is:</p>
<pre><code><root>
  <data name="aaaa">
    <value>"value1"</value>
    <comment>"this is a value 1 of aaaa"</comment>
  </data>
  <data name="bbbb">
    <value>"value2"</value>
    <comment>"this is a value2 of bbbb"</comment>
  </data>
  <data name="cccc">
    <value>"value3"</value>
    <comment>"this is a value3 of cccc"</comment>
  </data>
</root>
</code></pre>
<p>one.xml will be updated from two.xml.</p>
<p>So, one.xml should look like this.</p>
<p>one.xml(after) :</p>
<pre><code><root>
  <data name="aaaa">
    <value>"value1"</value>
    <comment>"this is a value1 of aaaa"</comment>
  </data>
  <data name="bbbb">
    <value>"value2"</value>
    <comment>"this is a value2 of bbbb"</comment>
  </data>
</root>
</code></pre>
<p>data name="cccc" is not exist in one.xml. therefore ignored.</p>
<p>actually what i want to do is </p>
<ol>
<li>download two.xml(whole list) from db</li>
<li>update my one.xml (it contains DATA-lists that only the app uses) by two.xml</li>
</ol>
<p>Can anyone help me, please?
Thanks!</p>
<p>==============================================================<br>
<strong>EDIT (xml.etree.ElementTree):</strong><br>
Your code works with the example, but I found a problem with the real XML file.</p>
<p>The real one.xml contains:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<root>
  <resheader name="resmimetype">
    <value>text/microsoft-resx</value>
  </resheader>
  <resheader name="version">
    <value>2.0</value>
  </resheader>
  <resheader name="reader">
    <value>System.Resources.ResXResourceReader, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</value>
  </resheader>
  <resheader name="writer">
    <value>System.Resources.ResXResourceWriter, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</value>
  </resheader>
  <data name="NotesLabel" xml:space="preserve">
    <value>Hinweise:</value>
    <comment>label for input field</comment>
  </data>
  <data name="NotesPlaceholder" xml:space="preserve">
    <value>z . Milch kaufen</value>
    <comment>example input for notes field</comment>
  </data>
  <data name="AddButton" xml:space="preserve">
    <value>Neues Element hinzufügen</value>
    <comment>this string appears on a button to add a new item to the list</comment>
  </data>
</root>
</code></pre>
<p>It seems resheader causes the trouble.
Do you have any idea how to fix it? </p>
| 1
|
2016-09-07T15:09:59Z
| 39,374,035
|
<p>You can use <a href="https://docs.python.org/3/library/xml.etree.elementtree.html" rel="nofollow">xml.etree.ElementTree</a>, and while there are probably more elegant ways, this should work on files that fit in memory, provided <code>name</code>s are unique in <code>two.xml</code>:</p>
<pre><code>import xml.etree.ElementTree as ET

tree_one = ET.parse('one.xml')
root_one = tree_one.getroot()
tree_two = ET.parse('two.xml')
root_two = tree_two.getroot()

data_two = dict((e.get("name"), e) for e in root_two.findall("data"))

for eo in root_one.findall("data"):
    name = eo.get("name")
    tail = eo.tail
    eo.clear()
    eo.tail = tail
    en = data_two[name]
    for k, v in en.items():
        eo.set(k, v)
    eo.extend(en.findall("*"))
    eo.text = en.text

tree_one.write("one.xml")
</code></pre>
<p>If your files do not fit in memory you can still use <a href="https://docs.python.org/3/library/xml.dom.pulldom.html#module-xml.dom.pulldom" rel="nofollow">xml.dom.pulldom</a> as long as single <code>data</code> entries do fit.</p>
| 0
|
2016-09-07T15:32:49Z
|
[
"python",
"xml"
] |
Python, how to insert value in Powerpoint template?
| 39,373,550
|
<p>I want to use an existing powerpoint presentation to generate a series of reports:</p>
<p>In my imagination the powerpoint slides will have content in such or similar form:</p>
<pre><code>Date of report: {{report_date}}
Number of Sales: {{no_sales}}
...
</code></pre>
<p>Then my Python app opens the PowerPoint file, fills in the values for this report, and saves the report under a new name.
I googled, but could not find a solution for this.</p>
<p>There is python-pptx out there, but it is all about creating a new presentation rather than inserting values into a template.</p>
<p>Can anybody advise?</p>
| 2
|
2016-09-07T15:10:34Z
| 39,377,894
|
<p>I tried this on a ".ppx" file I had hanging around.<br>
A microsoft office power point ".pptx" file is in ".zip" format.<br>
When I unzipped my file, I got an ".xml" file and three directories.<br>
My ".pptx" file has 116 slides comprised of 3,477 files and 22 directories/subdirectories.<br>
Normally, I would say it is not workable, but since you have only two short changes you probably could figure out what to change and zip the files to make a new ".ppx" file.<br>
A warning: there are some xml blobs of binary data in one or more of the ".xml" files.</p>
| 0
|
2016-09-07T19:43:06Z
|
[
"python",
"powerpoint"
] |
Python, how to insert value in Powerpoint template?
| 39,373,550
|
<p>I want to use an existing powerpoint presentation to generate a series of reports:</p>
<p>In my imagination the powerpoint slides will have content in such or similar form:</p>
<pre><code>Date of report: {{report_date}}
Number of Sales: {{no_sales}}
...
</code></pre>
<p>Then my Python app opens the PowerPoint file, fills in the values for this report, and saves the report under a new name.
I googled, but could not find a solution for this.</p>
<p>There is python-pptx out there, but it is all about creating a new presentation rather than inserting values into a template.</p>
<p>Can anybody advise?</p>
| 2
|
2016-09-07T15:10:34Z
| 39,381,549
|
<p>You can definitely do what you want with python-pptx, just perhaps not as straightforwardly as you imagine.</p>
<p>You can read the objects in a presentation, including the slides and the shapes on the slides. So if you wanted to change the text of the second shape on the second slide, you could do it like this:</p>
<pre><code>slide = prs.slides[1]
shape = slide.shapes[1]
shape.text = 'foobar'
</code></pre>
<p>The only real question is how you find the shape you're interested in. If you can make non-visual changes to the presentation (template), you can determine the shape id or shape name and use that. Or you could fetch the text for each shape and use regular expressions to find your keyword/replacement bits.</p>
<p>It's not without its challenges, and python-pptx doesn't have features specifically designed for this role, but based on the parameters of your question, this is definitely a doable thing.</p>
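<p>A minimal sketch of that search-and-replace idea with python-pptx (file names and values are placeholders; note that assigning to <code>text_frame.text</code> replaces the runs, so run-level formatting inside the shape is lost):</p>
<pre><code>from pptx import Presentation

replacements = {'{{report_date}}': '01/01/2016', '{{no_sales}}': '604'}

prs = Presentation('template.pptx')
for slide in prs.slides:
    for shape in slide.shapes:
        if not shape.has_text_frame:
            continue
        new_text = shape.text_frame.text
        for key, value in replacements.items():
            new_text = new_text.replace(key, value)
        if new_text != shape.text_frame.text:
            shape.text_frame.text = new_text
prs.save('report.pptx')
</code></pre>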
| 0
|
2016-09-08T02:10:05Z
|
[
"python",
"powerpoint"
] |
Python, how to insert value in Powerpoint template?
| 39,373,550
|
<p>I want to use an existing powerpoint presentation to generate a series of reports:</p>
<p>In my imagination the powerpoint slides will have content in such or similar form:</p>
<pre><code>Date of report: {{report_date}}
Number of Sales: {{no_sales}}
...
</code></pre>
<p>Then my Python app opens the PowerPoint file, fills in the values for this report, and saves the report under a new name.
I googled, but could not find a solution for this.</p>
<p>There is python-pptx out there, but it is all about creating a new presentation rather than inserting values into a template.</p>
<p>Can anybody advise?</p>
| 2
|
2016-09-07T15:10:34Z
| 39,382,136
|
<p>Ultimately, barring some other library which has additional functionality, you need some sort of brute force approach to iterate the Slides collection and each Slide's respective Shapes collection in order to identify the matching shape (unless there is some other library which has additional "Find" functionality in PPT). Here is brute force using only <code>win32com</code>:</p>
<pre><code>from win32com import client

find_date = r'{{report_date}}'
find_sales = r'{{no_sales}}'
report_date = '01/01/2016'  # Modify as needed
no_sales = '604'            # Modify as needed
path = 'c:/path/to/file.pptx'
outpath = 'c:/path/to/output.pptx'

ppt = client.Dispatch("PowerPoint.Application")
pres = ppt.Presentations.Open(path, WithWindow=False)
for sld in pres.Slides:
    for shp in sld.Shapes:
        tr = shp.TextFrame.TextRange
        if find_date in tr.Text:
            tr.Replace(find_date, report_date)
        elif find_sales in tr.Text:
            tr.Replace(find_sales, no_sales)
pres.SaveAs(outpath)
pres.Close()
ppt.Quit()
</code></pre>
<p>If these strings are inside other strings with <em>mixed</em> text formatting, it gets trickier to preserve existing formatting, but it should still be possible.</p>
<p>If the template file is still in design and subject to your control, I would consider giving the shape a unique identifier like a <code>CustomXMLPart</code> or you could assign something to the shapes' <code>AlternativeText</code> property. The latter is easier to work with because it doesn't require well-formed XML, and also because it's able to be seen & manipulated via the native UI, whereas the <code>CustomXMLPart</code> is only accessible programmatically, and even that is kind of counterintuitive. You'll still need to do shape-by-shape iteration, but you can avoid the string comparisons just by checking the relevant property value.</p>
| 0
|
2016-09-08T03:27:29Z
|
[
"python",
"powerpoint"
] |
TypeError: list indices must be integers, not tuple in Python SVD model
| 39,373,557
|
<p>I am testing a recommender based on an SVD model, but I got the error message below after running it. </p>
<p>Here is my testing code:</p>
<pre><code>import sys
from sys import argv
import csv

import recsys.algorithm
recsys.algorithm.VERBOSE = True
from recsys.algorithm.factorize import SVD
from recsys.datamodel.data import Data

likes = []
with open('/Users/xps13mynotebook/Desktop/w2v/likes.tsv', 'r') as f:
    for line in f.readlines():
        username, user_likes = line.strip().split('\t')
        likes.append((username, user_likes))

data = Data()
VALUE = 1.0
for username in likes:
    for user_likes in likes[username]:
        data.add_tuple((VALUE, username, user_likes))  # Tuple format is: <value, row, column>

svd = SVD()
svd.set_data(data)
k = 5
svd.compute(k=k, min_values=3, pre_normalize=None, mean_center=False, post_normalize=True)
svd.similar('sheila')
</code></pre>
<p>Error:</p>
<pre><code>TypeError                                 Traceback (most recent call last)
<ipython-input-30-913000ff4e0e> in <module>()
     10 VALUE = 1.0
     11 for username in likes:
---> 12     for user_likes in likes[username]:
     13         data.add_tuple((VALUE, username, user_likes)) # Tuple format is: <value, row, column>
     14
TypeError: list indices must be integers, not tuple
</code></pre>
| 0
|
2016-09-07T15:10:42Z
| 39,373,693
|
<p>The TypeError is saying that you can't index a list with a tuple; it needs an integer giving the position in the list.</p>
<p>Now why is this happening?</p>
<pre><code>likes.append((username,user_likes))
</code></pre>
<p>and</p>
<pre><code>for username in likes:
</code></pre>
<p><code>likes</code> is a list where tuples are stored, so your <code>username</code> in the loop is something like <code>("mike", 6)</code>.</p>
<p>Then you are passing that tuple to the list as an index. That's why you are getting an error. I don't know exactly what you want your code to do, but right now it's pretty much nonsense.</p>
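<p>A minimal fix, assuming each line of the file should become one <code>(value, row, column)</code> tuple, is to unpack the stored tuples directly:</p>
<pre><code>for username, user_likes in likes:
    data.add_tuple((VALUE, username, user_likes))
</code></pre>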
| 0
|
2016-09-07T15:16:42Z
|
[
"python",
"tuples",
"svd"
] |
TypeError: list indices must be integers, not tuple in Python SVD model
| 39,373,557
|
<p>I am testing a recommender based on an SVD model, but I got the error message below after running it. </p>
<p>Here is my testing code:</p>
<pre><code>import sys
from sys import argv
import csv

import recsys.algorithm
recsys.algorithm.VERBOSE = True
from recsys.algorithm.factorize import SVD
from recsys.datamodel.data import Data

likes = []
with open('/Users/xps13mynotebook/Desktop/w2v/likes.tsv', 'r') as f:
    for line in f.readlines():
        username, user_likes = line.strip().split('\t')
        likes.append((username, user_likes))

data = Data()
VALUE = 1.0
for username in likes:
    for user_likes in likes[username]:
        data.add_tuple((VALUE, username, user_likes))  # Tuple format is: <value, row, column>

svd = SVD()
svd.set_data(data)
k = 5
svd.compute(k=k, min_values=3, pre_normalize=None, mean_center=False, post_normalize=True)
svd.similar('sheila')
</code></pre>
<p>Error:</p>
<pre><code>TypeError                                 Traceback (most recent call last)
<ipython-input-30-913000ff4e0e> in <module>()
     10 VALUE = 1.0
     11 for username in likes:
---> 12     for user_likes in likes[username]:
     13         data.add_tuple((VALUE, username, user_likes)) # Tuple format is: <value, row, column>
     14
TypeError: list indices must be integers, not tuple
</code></pre>
| 0
|
2016-09-07T15:10:42Z
| 39,373,852
|
<p>When you're iterating over a list of tuples, each value is a tuple itself. Your code then uses that whole tuple as a list index, which is blatantly wrong.</p>
<pre><code>for username in likes:
    # username is now a tuple from the list
    for user_likes in likes[username]:  # list[tuple_stored_in_list] is invalid and causes TypeError
        pass  # do something
</code></pre>
<p>When iterating over a list of tuples, use <code>tuple unpacking</code> to get both elements stored in the tuple at once:</p>
<pre><code>for username, user_likes in likes:
    data.add_tuple((VALUE, username, user_likes))  # Tuple format is: <value, row, column>
</code></pre>
| 0
|
2016-09-07T15:24:15Z
|
[
"python",
"tuples",
"svd"
] |
How to manage a Apache Spark context in Django?
| 39,373,608
|
<p>I have a Django application that interacts with a Cassandra database and I want to try using Apache Spark to run operations on this database. I have some experience with Django and Cassandra but I'm new to Apache Spark.</p>
<p>I know that to interact with a Spark cluster first I need to create a SparkContext, something like this:</p>
<pre><code>from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName(appName).setMaster(master)
sc = SparkContext(conf=conf)
</code></pre>
<p>My question is the following: how should I treat this context? Should I instantiate it when my application starts and let it live during its execution, or should I start a SparkContext every time before running an operation on the cluster and then kill it when the operation finishes? </p>
<p>Thank you in advance.</p>
| 1
|
2016-09-07T15:13:13Z
| 39,583,879
|
<p>I've been working on this for the last few days; since no one answered, I will post what my approach was.</p>
<p>Apparently creating a SparkContext generates a bit of overhead, so stopping the context after every operation is not a good idea. </p>
<p>Also, there is apparently no downside to letting the context live while the application runs.</p>
<p>Therefore, my approach was to treat the SparkContext like a database connection: I created a singleton that instantiates the context when the application starts running, and used it where needed.</p>
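<p>A minimal sketch of that singleton (the app name and master are placeholders):</p>
<pre><code>from pyspark import SparkContext, SparkConf

_sc = None

def get_spark_context():
    # create the context lazily on first use, then reuse it,
    # much like a long-lived database connection
    global _sc
    if _sc is None:
        conf = SparkConf().setAppName("myDjangoApp").setMaster("local[*]")
        _sc = SparkContext(conf=conf)
    return _sc
</code></pre>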
<p>I hope this can be helpful to someone, and I am open to new suggestions on how to deal with this, I'm still new to Apache Spark.</p>
| 0
|
2016-09-20T00:15:36Z
|
[
"python",
"django",
"apache",
"apache-spark"
] |
How to get a max string length in nested lists
| 39,373,620
|
<p>This is a follow-up question to my earlier post (re printing a table from a list of lists).</p>
<p>I'm trying to get the max string length in the following nested list:</p>
<pre><code>tableData = [['apples', 'oranges', 'cherries', 'banana'],
             ['Alice', 'Bob', 'Carol', 'David'],
             ['dogs', 'cats', 'moose', 'goose']]

for i in tableData:
    print(len(max(i)))
</code></pre>
<p>which gives me 7, 5, 5. But "cherries" is 8. </p>
<p>What am I missing here?
Thanks.</p>
| 3
|
2016-09-07T15:13:49Z
| 39,373,664
|
<p>You've done the <em>length of the maximum word</em>. This gives the wrong answer because words are ordered <a href="https://en.wikipedia.org/wiki/Lexicographical_order" rel="nofollow">lexicographically</a>:</p>
<pre><code>>>> 'oranges' > 'cherries'
True
</code></pre>
<p>What you probably wanted is the <em>maximum of the lengths of words</em>:</p>
<pre><code>max(len(word) for word in i)
</code></pre>
<p>Or equivalently:</p>
<pre><code>len(max(i, key=len))
</code></pre>
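<p>Applied to the <code>tableData</code> from the question, this gives the expected lengths:</p>
<pre><code>for row in tableData:
    print(max(len(word) for word in row))
# 8
# 5
# 5
</code></pre>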
| 4
|
2016-09-07T15:15:38Z
|
[
"python"
] |
How to get a max string length in nested lists
| 39,373,620
|
<p>This is a follow-up question to my earlier post (re printing a table from a list of lists).</p>
<p>I'm trying to get the max string length in the following nested list:</p>
<pre><code>tableData = [['apples', 'oranges', 'cherries', 'banana'],
             ['Alice', 'Bob', 'Carol', 'David'],
             ['dogs', 'cats', 'moose', 'goose']]

for i in tableData:
    print(len(max(i)))
</code></pre>
<p>which gives me 7, 5, 5. But "cherries" is 8. </p>
<p>What am I missing here?
Thanks.</p>
| 3
|
2016-09-07T15:13:49Z
| 39,373,687
|
<p>You were printing the length of the max element in each row. Python computes the <code>max</code> string element as the one that comes last in dictionary order (lexicographic sorting). What you want instead is <code>max(len(s) for s in i)</code> instead of <code>len(max(i))</code>:</p>
<pre><code>for row in tableData:
    print(max(len(s) for s in row))
</code></pre>
| -1
|
2016-09-07T15:16:24Z
|
[
"python"
] |
How to get a max string length in nested lists
| 39,373,620
|
<p>This is a follow-up question to my earlier post (re printing a table from a list of lists).</p>
<p>I'm trying to get the max string length in the following nested list:</p>
<pre><code>tableData = [['apples', 'oranges', 'cherries', 'banana'],
             ['Alice', 'Bob', 'Carol', 'David'],
             ['dogs', 'cats', 'moose', 'goose']]

for i in tableData:
    print(len(max(i)))
</code></pre>
<p>which gives me 7, 5, 5. But "cherries" is 8. </p>
<p>What am I missing here?
Thanks.</p>
| 3
|
2016-09-07T15:13:49Z
| 39,373,727
|
<p>On strings, <code>max</code> returns the maximum element alphabetically, not the maximum by length. So you have to do something like this:</p>
<pre><code>len(max(sum(tableData,[]),key=len))
</code></pre>
<p><code>sum(tableData,[])</code> flattens the <code>list of lists</code> into a single <code>list</code>, which helps to iterate through all the strings at once. </p>
<p>Length in each row:</p>
<pre><code>In [1]: [len(max(i,key=len)) for i in tableData]
Out[1]: [8, 5, 5]
</code></pre>
<p>See the difference:</p>
<pre><code>In [2]: max(sum(tableData,[]))
Out[2]: 'oranges'
In [3]: max(sum(tableData,[]),key=len)
Out[3]: 'cherries'
</code></pre>
| 0
|
2016-09-07T15:17:59Z
|
[
"python"
] |
Modify tf-idf vectorizer for some keywords
| 39,373,683
|
<p>I am creating a tf-idf matrix for finding cosine similarity, but I want some frequent words from a set to have more weight (i.e., a higher tf-idf value). </p>
<pre><code>tfidf_vectorizer = TfidfVectorizer()
tfidf_matrix = tfidf_vectorizer.fit_transform(documents)
</code></pre>
<p>How can I modify the above tfidf_matrix for words in a particular set?</p>
| 1
|
2016-09-07T15:16:07Z
| 39,390,640
|
<p>I converted the csr-type tfidf matrix to a 2-D array using:</p>
<pre><code>my_matrix = tfidf_matrix.toarray()
</code></pre>
<p>Then I found the index of each keyword using:</p>
<pre><code>tfidf_vectorizer.vocabulary_.get(keyword)
</code></pre>
<p>After that, I iterated over the 2-D matrix and changed the tf-idf values according to the requirements. Here, keyword_list contains the indices of the keywords whose tf-idf value we want to modify.</p>
<pre><code>for i in range(0, len(my_matrix)):
    for key in keyword_list:
        if key is not None:
            key = int(key)
            if my_matrix[i][key] > 0.0:
                my_matrix[i][key] = new_value
</code></pre>
<p>Then I converted my_matrix back to csr type using:</p>
<pre><code>from scipy import sparse
tfidf_matrix = sparse.csr_matrix(my_matrix)
</code></pre>
<p>Hence, tfidf_matrix was modified for the list of keywords.</p>
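<p>Putting those steps together, a condensed sketch (<code>documents</code>, <code>keywords</code> and <code>new_value</code> are placeholders):</p>
<pre><code>from scipy import sparse
from sklearn.feature_extraction.text import TfidfVectorizer

documents = ["the cat sat", "the cat and the dog"]
keywords = ["cat", "dog"]
new_value = 1.0

tfidf_vectorizer = TfidfVectorizer()
my_matrix = tfidf_vectorizer.fit_transform(documents).toarray()

# map each keyword to its column index (None if not in the vocabulary)
keyword_list = [tfidf_vectorizer.vocabulary_.get(k) for k in keywords]
for i in range(len(my_matrix)):
    for key in keyword_list:
        if key is not None and my_matrix[i][key] > 0.0:
            my_matrix[i][key] = new_value

tfidf_matrix = sparse.csr_matrix(my_matrix)
</code></pre>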
| 0
|
2016-09-08T12:15:41Z
|
[
"python",
"machine-learning",
"scipy",
"nlp",
"nltk"
] |
Ansible: printing out JSON first level keys names
| 39,373,733
|
<p>Example :</p>
<pre><code>{
  "fw1": {
    "ipv4": {
      "rtr": {
        "ip": "1.2.3.4",
        "net": "1.2.3.4"
      }
    }
  },
  "fw2": {
    "ipv4": {
      "rtr": {
        "ip": "4.3.2.1",
        "net": "4.3.2.1"
      }
    }
  }
}
</code></pre>
<p>I need to list the first-level keys of a JSON file.<br> Using <code>from_json ... keys()</code> I get strange output -->
<strong>[u'fw1', u'fw2']</strong>.
<br>Where do the <strong>u</strong> characters come from, and how do I get rid of them? Is there a way to list the keys instead of getting them in an array?</p>
| 0
|
2016-09-07T15:18:14Z
| 39,374,839
|
<p>You don't need to use <code>from_json</code> here:</p>
<pre><code>---
- hosts: localhost
  gather_facts: True
  vars:
    my_json: "{{ lookup('file','test.json') }}"
  tasks:
    - debug:
        msg: "Keys list: {{ my_json.keys() }}"
</code></pre>
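<p>The <code>u''</code> prefixes are just Python 2's unicode repr of the keys, not part of the data. To render the keys as plain text instead of a Python list, one option is a Jinja2 filter (a sketch):</p>
<pre><code>- debug:
    msg: "Keys: {{ my_json.keys() | join(', ') }}"
</code></pre>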
| 0
|
2016-09-07T16:12:48Z
|
[
"python",
"json",
"ansible",
"ansible-playbook"
] |
GAE NDB Expando Model with dynamic Kind
| 39,373,752
|
<p>Is it possible to assign a dynamic Entity Kind to an Expando Model? For example, I want to use this model for many types of dynamic entities:</p>
<pre><code>class Dynamic(ndb.Expando):
    """
    Handles all "Post types", such as Pages, Posts, Users, Products, etc...
    """
    col = ndb.StringProperty()
    parent = ndb.IntegerProperty()
    name = ndb.StringProperty()
    slug = ndb.StringProperty()
</code></pre>
<p>Right now I use the "col" <code>StringProperty</code> to hold the Kind (like "Pages", "Posts", etc) and query for the "col" every time.</p>
<p>After reading the docs, I stumbled upon this @classmethod:</p>
<pre><code>class MyModel(ndb.Model):
    @classmethod
    def _get_kind(cls):
        return 'AnotherKind'
</code></pre>
<p>Does that mean I can do this?</p>
<pre><code>class Dynamic(ndb.Expando):
    """
    Handles all "Post types", such as Pages, Posts, Users, Products, etc...
    """
    col = ndb.StringProperty()
    parent = ndb.IntegerProperty()
    name = ndb.StringProperty()
    slug = ndb.StringProperty()

    @classmethod
    def _get_kind(cls):
        return 'AnotherKind'
</code></pre>
<p>But how do I dynamically replace 'AnotherKind'? Can I do something like <code>return col</code>?</p>
<p>Thanks!</p>
| 0
|
2016-09-07T15:18:48Z
| 39,379,038
|
<p>I don't know if you can do that, but it sounds dangerous, and GAE updates might break your code.</p>
<p>Using subclasses seems like a much safer alternative. Something like this:</p>
<pre><code>class Dynamic(ndb.Expando):
    parent = ndb.IntegerProperty()
    name = ndb.StringProperty()
    slug = ndb.StringProperty()

class Pages(Dynamic):
    pass

class Posts(Dynamic):
    pass

class Users(Dynamic):
    pass
</code></pre>
<p>You could also try using <code>PolyModel</code>. </p>
<p>We need to know more about your application and what you are trying to accomplish to give more specific advice.</p>
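<p>For reference, a minimal <code>PolyModel</code> sketch (the subclasses share one underlying kind, and a query on <code>Dynamic</code> returns instances of every subclass):</p>
<pre><code>from google.appengine.ext import ndb
from google.appengine.ext.ndb import polymodel

class Dynamic(polymodel.PolyModel):
    name = ndb.StringProperty()
    slug = ndb.StringProperty()

class Pages(Dynamic):
    pass

class Posts(Dynamic):
    pass
</code></pre>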
| 1
|
2016-09-07T21:08:23Z
|
[
"python",
"google-app-engine",
"google-cloud-datastore",
"app-engine-ndb"
] |
Is pandas.DataFrame.groupby Guaranteed To Be Stable?
| 39,373,820
|
<p>I've noticed that there are several uses of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html"><code>pd.DataFrame.groupby</code></a> followed by an <code>apply</code> implicitly assuming that <code>groupby</code> is <a href="http://stackoverflow.com/questions/1517793/stability-in-sorting-algorithms"><em>stable</em></a> - that is, if <em>a</em> and <em>b</em> are instances of the same group, and, pre-grouping, <em>a</em> appeared before <em>b</em>, then <em>a</em> will appear before <em>b</em> following the grouping as well. </p>
<p>I think there are several answers clearly implicitly using this, but, to be concrete, here is <a href="http://stackoverflow.com/questions/15755057/using-cumsum-in-pandas-on-group">one using <code>groupby</code>+<code>cumsum</code></a>.</p>
<p>Is there anything actually promising this behavior? The documentation only states:</p>
<blockquote>
<p>Group series using mapper (dict or key function, apply given function to group, return result as series) or by a series of columns.</p>
</blockquote>
<p>Also, since pandas has indices, the functionality could theoretically be achieved without this guarantee (albeit in a more cumbersome way).</p>
| 5
|
2016-09-07T15:22:47Z
| 39,374,529
|
<p>Although the docs don't state this, internally it uses a stable sort when generating the groups. </p>
<p>See: </p>
<ul>
<li><a href="https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L291" rel="nofollow">https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L291</a> </li>
<li><a href="https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L4356" rel="nofollow">https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L4356</a></li>
</ul>
<p>As I mentioned in the comments, this is important if you consider <code>transform</code>, which will return a Series with its index aligned to the original df. If the sorting didn't preserve the order, this would make alignment perform additional work, as it would need to sort the Series prior to assigning. In fact, this is mentioned <a href="https://github.com/Blosc/bcolz/issues/76" rel="nofollow">in the comments</a>:</p>
<blockquote>
<p><code>_algos.groupsort_indexer</code> implements <strong>counting sort</strong> and it is at least
<code>O(ngroups)</code>, where</p>
<p><code>ngroups = prod(shape)</code></p>
<p><code>shape = map(len, keys)</code></p>
<p>That is, linear in the number of combinations (cartesian product) of unique
values of groupby keys. This can be huge when doing multi-key groupby.
<code>np.argsort(kind='mergesort')</code> is <code>O(count x log(count))</code> where count is the
length of the data-frame;
Both algorithms are <strong>stable</strong> sort and that is necessary for correctness of
groupby operations. </p>
<p>e.g. consider:
<code>df.groupby(key)[col].transform('first')</code></p>
</blockquote>
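<p>A quick demonstration of the guarantee (within each group, rows keep their original order):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'key': ['b', 'a', 'b', 'a'],
                   'val': [1, 2, 3, 4]})
print(df.groupby('key')['val'].apply(list))
# key
# a    [2, 4]
# b    [1, 3]
</code></pre>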
| 4
|
2016-09-07T15:57:40Z
|
[
"python",
"pandas",
"group-by",
"language-lawyer"
] |
Python Json to array
| 39,373,836
|
<p>I want to put JSON into an array.
I have 6 JSON links (with the same size but different issues).</p>
<p>That was my try:</p>
<pre><code>data=('0','0')
response = urllib.urlopen(URL)
data[0] = json.loads(response.read())
response = urllib.urlopen(URL)
data[1] = json.loads(response.read())
</code></pre>
<p>Do I have to initialize a 3-D array?
Later, it would be fine if I could work on the result like this:</p>
<pre><code>result = data[0]['resu']['spc']
</code></pre>
<p>In the end, I want to build a for loop which uses the JSON links dynamically, like this:</p>
<pre><code>for w in range(0, len(URLs)):
    URLs[w]['resu']['spc']
</code></pre>
| 0
|
2016-09-07T15:23:28Z
| 39,374,937
|
<p>I would strongly suggest using <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> (the current documentation does so, too), but you <em>can</em> do:</p>
<pre><code>import json
import urllib2

urls = ["http://example.com/json", "https://example.com/json2"]  # your urls here
data = []
for u in urls:
    response = urllib2.urlopen(u)
    # this normally works with Python 2, but it is better to decode explicitly:
    # data.append(json.loads(response.read().decode("utf8")))
    data.append(json.loads(response.read()))
</code></pre>
<p>For this you have to find out/guess the encoding of your response.</p>
<p>With requests it would be much simpler:</p>
<pre><code>import requests

urls = ["http://example.com/json", "https://example.com/json2"]  # your urls here
data = [requests.get(u).json() for u in urls]
</code></pre>
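<p>With the list filled, the lookup from the question becomes (assuming every response actually carries those keys):</p>
<pre><code>results = [d['resu']['spc'] for d in data]
</code></pre>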
| 0
|
2016-09-07T16:17:26Z
|
[
"python",
"arrays",
"json"
] |
Python Json to array
| 39,373,836
|
<p>I want to put JSON into an array.
I have 6 JSON links (with the same size but different issues).</p>
<p>That was my try:</p>
<pre><code>data=('0','0')
response = urllib.urlopen(URL)
data[0] = json.loads(response.read())
response = urllib.urlopen(URL)
data[1] = json.loads(response.read())
</code></pre>
<p>Do I have to initialize a 3-D array?
Later, it would be fine if I could work on the result like this:</p>
<pre><code>result = data[0]['resu']['spc']
</code></pre>
<p>In the end, I want to build a for loop which uses the JSON links dynamically, like this:</p>
<pre><code>for w in range(0, len(URLs)):
    URLs[w]['resu']['spc']
</code></pre>
| 0
|
2016-09-07T15:23:28Z
| 39,375,029
|
<p>Tuples are immutable, which makes them great for sharing between threads but not so good for frequent changes. Try using a list instead, as lists are designed for this very use case. That being said, a dictionary can be nice as well:</p>
<pre><code>import json
import urllib.request

data = {}
for url in urls:
    with urllib.request.urlopen(url) as response:
        if response.getcode() != 200:
            continue  # Handle errors here; I chose to continue
        # Remember that JSON must be in text format
        data[url] = json.loads(response.read().decode())
</code></pre>
<p>Just for giggles, here is a one liner:</p>
<pre><code># The lack of error checking in this is staggering though!
data = {u: json.loads(urllib.request.urlopen(u).read().decode()) for u in urls}
</code></pre>
| 0
|
2016-09-07T16:23:24Z
|
[
"python",
"arrays",
"json"
] |
Parse JSON find value and append random number?
| 39,373,858
|
<p>I have a JSON file with logins and integer values (some results), like this:</p>
<pre><code>[
  {
    "name": "Tamara",
    "results": "434.545.234.664"
  },
  {
    "name": "Ted",
    "results": "434.545.234.664"
  }
]
</code></pre>
<p>I need to receive a user login (name) and find the entered name in the JSON. If it already exists, add some number to the "results". </p>
<p>For example: If I input "Ted", some number will be appended to Ted's results like this: <code>"results":"434.545.234.664+4343}"</code></p>
<p>If name doesn't exist, add new record with:</p>
<pre><code>{
  "name": "new_name",
  "results": "some_number"
}
</code></pre>
<p>in it.</p>
<p>My code, which didn't work: </p>
<pre><code>with open('/Users/users_db.json') as jsonfile:
    user_name = ''
    while user_name == '':
        data = json.load(jsonfile)
        user_name = input('Your name: ')
    for user in data:
        if user_name in user['name']:
            print('Old user')
            break
        else:
            print('New user')
</code></pre>
| 0
|
2016-09-07T15:24:31Z
| 39,374,135
|
<p>You can <a href="https://docs.python.org/3.5/library/json.html" rel="nofollow">parse your JSON</a> like this:</p>
<pre><code>import json

with open('yourJsonFile.json') as example_data:
    data = json.load(example_data)
</code></pre>
<p>And now it's a matter of working with <code>data</code> to find whatever you need. For example, to access the first name you could do</p>
<pre><code>data[0]['name']
</code></pre>
<p>The rest depends on where/how you store the names and check if they already exist.</p>
<p>Finally, to add a new value you can use <code>append</code>, as sketched below.</p>
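<p>For the "new user" case from the question, appending and writing the file back might look like this (a sketch):</p>
<pre><code>data.append({"name": "new_name", "results": "some_number"})

with open('yourJsonFile.json', 'w') as f:
    json.dump(data, f, indent=2)
</code></pre>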
| 0
|
2016-09-07T15:38:44Z
|
[
"python",
"json",
"input"
] |
Parse JSON find value and append random number?
| 39,373,858
|
<p>I have a JSON file with logins and integer values (some results), like this:</p>
<pre><code>[
  {
    "name": "Tamara",
    "results": "434.545.234.664"
  },
  {
    "name": "Ted",
    "results": "434.545.234.664"
  }
]
</code></pre>
<p>I need to receive a user login (name) and find the entered name in the JSON. If it already exists, add some number to the "results". </p>
<p>For example: If I input "Ted", some number will be appended to Ted's results like this: <code>"results":"434.545.234.664+4343}"</code></p>
<p>If name doesn't exist, add new record with:</p>
<pre><code>{
  "name": "new_name",
  "results": "some_number"
}
</code></pre>
<p>in it.</p>
<p>My code, which didn't work: </p>
<pre><code>with open('/Users/users_db.json') as jsonfile:
    user_name = ''
    while user_name == '':
        data = json.load(jsonfile)
        user_name = input('Your name: ')
    for user in data:
        if user_name in user['name']:
            print('Old user')
            break
        else:
            print('New user')
</code></pre>
| 0
|
2016-09-07T15:24:31Z
| 39,374,249
|
<p>Here's one of the zillion possible ways to code your problem:</p>
<pre><code>import json
import random

import names

random.seed(1)

data = [
    {
        "name": "Tamara",
        "results": "434.545.234.664"
    },
    {
        "name": "Ted",
        "results": "434.545.234.664"
    }
]

def foo(lst, name):
    some_number = random.randint(0, 4343)
    # wrap in list() so the emptiness check also works on Python 3,
    # where filter() returns a lazy iterator
    values = list(filter(lambda d: d["name"] == name, lst))
    if values:
        for v in values:
            v["results"] += "+{0}".format(some_number)
    else:
        lst.append({
            "name": name,
            "results": some_number
        })

for name in ["Tamara", "Ted"] + [names.get_first_name() for i in range(8)]:
    foo(data, name)

print(data)
</code></pre>
<p>This one will use <a href="https://pypi.python.org/pypi/names/" rel="nofollow">names</a> module to generate random test names.</p>
<p>One advice though, take some time to read the help page, especially the sections named <a href="http://stackoverflow.com/help/on-topic">"What topics can I ask about here?"</a> and <a href="http://stackoverflow.com/help/dont-ask">"What types of questions should I avoid asking?"</a>. And more importantly, please read <a href="http://meta.stackexchange.com/q/156810/204922">the Stack Overflow question checklist</a>. You might also want to learn about <a href="http://stackoverflow.com/help/mcve">Minimal, Complete, and Verifiable Examples</a></p>
| 1
|
2016-09-07T15:44:13Z
|
[
"python",
"json",
"input"
] |
max() arg is empty error when joining strings
| 39,373,988
|
<p>I'm new to coding and I was trying to make a script that joins the items inside 'sl', which are sequences of letters, into a new list called 's' with the items as strings, and then prints the longest string inside s.</p>
<p>This is the code I came up with (a short version). When I try to print the max item of 's' in this code, it returns a </p>
<blockquote>
<p>max() arg is empty</p>
</blockquote>
<p>error. </p>
<pre><code>sl = [['m','o','o','n'],['d','a','y'],['h','e','l','l','o']]
s = []
s = (''.join(i) for i in sl) # join the letters inside sl, put them into s
print(max(s, key=len)) # print longest string inside s
</code></pre>
<p>but I can still iterate through s with:</p>
<pre><code>for i in s:
    print(i)
</code></pre>
<p>and it will print the words inside s, joined.</p>
<p>I suppose that (''.join(i) for i in sl) isn't properly joining them as strings. Is there a way to join the words inside 's' as strings?</p>
| 1
|
2016-09-07T15:30:27Z
| 39,375,959
|
<p>It works, just replace <code>()</code> with <code>[]</code>. With <code>()</code> you build a generator expression rather than a list:</p>
<pre><code>sl = [['m','o','o','n'],['d','a','y'],['h','e','l','l','o']]
s = []
s = [''.join(i) for i in sl]
print(s)
print(max(s, key=len))
</code></pre>
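<p>The underlying reason: a generator can only be consumed once, so if it is iterated before <code>max()</code> is called, <code>max()</code> sees an empty iterator:</p>
<pre><code>s = (''.join(i) for i in sl)
for i in s:             # this exhausts the generator...
    print(i)
print(max(s, key=len))  # ...so this raises "max() arg is an empty sequence"
</code></pre>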
| 0
|
2016-09-07T17:25:17Z
|
[
"python",
"join"
] |
Increment attributes of two class without modules?
| 39,373,998
|
<p>How to make a class which could operate like this without importing other modules?</p>
<pre><code>>>> date(2014, 2, 2) + delta(month=3)
(2014, 5, 2)

>>> date(2014, 2, 2) + delta(day=3)
(2014, 2, 5)

>>> date(2014, 2, 2) + delta(year=1, month=2)
(2015, 4, 2)
</code></pre>
<p>This is my code:</p>
<pre><code># class delta(date):
#     year = date(year)
#     def __init__(self, y, m, d):
#         self.y = y + year
#         self.m = m
#         self.d = d
#     def __call__(self):
#         return self.y, self.m, self.d

class date(object):
    def __init__(self, year, month, day):
        self.year = year
        self.month = month
        self.day = day

    def __call__(self):
        return self.year, self.month, self.day
</code></pre>
| 0
|
2016-09-07T15:30:48Z
| 39,374,238
|
<p>Override the <code>__add__</code> method. In the delta class give the <code>__init__</code> default parameters so you can call it with only one or two arguments.</p>
<pre><code>class delta():
    def __init__(self, year=0, month=0, day=0):
        self.y = year
        self.m = month
        self.d = day

    def __call__(self):
        return self.y, self.m, self.d

class date():
    def __init__(self, year, month, day):
        self.year = year
        self.month = month
        self.day = day

    def __call__(self):
        return self.year, self.month, self.day

    def isLeapYear(self):
        if ((self.year % 4 == 0) and (self.year % 100 != 0)) or (self.year % 400 == 0):
            return True
        return False

    def __add__(self, delta):
        self.year = self.year + delta.y
        self.month = self.month + delta.m
        if self.month > 12:
            self.month = self.month % 12
            self.year += 1
        self.day = self.day + delta.d
        if self.isLeapYear() and self.day > 29:
            self.day = self.day % 29
            self.month += 1
        elif not self.isLeapYear() and self.day > 28:
            self.day = self.day % 28
            self.month += 1
        return self.year, self.month, self.day

print(date(2014, 2, 2) + delta(year=1, month=2))  # (2015, 4, 2)

birthdate = date(1990, 1, 1)
currentyear = birthdate + delta(year=20, month=2, day=5)
print(currentyear)  # (2010, 3, 6)
</code></pre>
| 1
|
2016-09-07T15:43:28Z
|
[
"python",
"class",
"attributes"
] |
Pandas: Can you access rolling window items
| 39,374,020
|
<p>Can you access the pandas rolling window object? </p>
<pre><code>rs = pd.Series(range(10))
rs.rolling(window=3)
# prints:
Rolling [window=3,center=False,axis=0]
</code></pre>
<p>Can I get the windows as groups, like this? </p>
<pre><code>[0,1,2]
[1,2,3]
[2,3,4]
</code></pre>
| 3
|
2016-09-07T15:31:51Z
| 39,374,903
|
<p>Here's a workaround, but I'm waiting to see if anyone has a pandas solution:</p>
<pre><code>import numpy as np

def rolling_window(a, step):
    shape = a.shape[:-1] + (a.shape[-1] - step + 1, step)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

rolling_window(rs, 3)
array([[0, 1, 2],
       [1, 2, 3],
       [2, 3, 4],
       [3, 4, 5],
       [4, 5, 6],
       [5, 6, 7],
       [6, 7, 8],
       [7, 8, 9]])
</code></pre>
| 1
|
2016-09-07T16:15:52Z
|
[
"python",
"pandas"
] |
Pandas: Can you access rolling window items
| 39,374,020
|
<p>Can you access the pandas rolling window object? </p>
<pre><code>rs = pd.Series(range(10))
rs.rolling(window=3)
# prints:
Rolling [window=3,center=False,axis=0]
</code></pre>
<p>Can I get the windows as groups, like this? </p>
<pre><code>[0,1,2]
[1,2,3]
[2,3,4]
</code></pre>
| 3
|
2016-09-07T15:31:51Z
| 39,380,478
|
<p>I will start off by saying this is reaching into the internal implementation. But if you really, really wanted to compute the indexers the same way as pandas:</p>
<p>You will need v0.19.0rc1 (just about released), you can <code>conda install -c pandas pandas=0.19.0rc1</code></p>
<pre><code>In [41]: rs = pd.Series(range(10))
In [42]: rs
Out[42]:
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
dtype: int64
# this reaches into an internal implementation;
# the first 3 is the window, the second is the minimum
# number of periods we need
In [43]: start, end, _, _, _, _ = pandas._window.get_window_indexer(rs.values,3,3,None,use_mock=False)
# starting index
In [44]: start
Out[44]: array([0, 0, 0, 1, 2, 3, 4, 5, 6, 7])
# ending index
In [45]: end
Out[45]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
# window size
In [3]: end-start
Out[3]: array([1, 2, 3, 3, 3, 3, 3, 3, 3, 3])
# the indexers
In [47]: [np.arange(s, e) for s, e in zip(start, end)]
Out[47]:
[array([0]),
array([0, 1]),
array([0, 1, 2]),
array([1, 2, 3]),
array([2, 3, 4]),
array([3, 4, 5]),
array([4, 5, 6]),
array([5, 6, 7]),
array([6, 7, 8]),
array([7, 8, 9])]
</code></pre>
<p>So this is sort of trivial in the fixed-window case; it becomes extremely useful in a variable-window scenario, e.g. in 0.19.0 you can specify things like <code>2S</code> to aggregate by time.</p>
<p>All of that said, getting these indexers is not particularly useful; you generally want to <em>do</em> something with the results. That is the point of the aggregation functions, or <code>.apply</code> if you want to aggregate generically.</p>
| 0
|
2016-09-07T23:33:50Z
|
[
"python",
"pandas"
] |
405 error when custom-defined PATCH method is called for Django REST APIView
| 39,374,046
|
<p>I am making an API call in the test client:</p>
<pre><code>response2 = self.client.patch('/object/update/%d/' % object_id,
                              {'object_attribute': 4})
</code></pre>
<p>The relevant serializer and view class for the object:</p>
<pre><code>class ObjectUpdateSerializer(serializers.ModelSerializer):
    class Meta:
        model = Object
        include = ('object_attribute', 'another_attribute',)

class ObjectView(APIView):
    def patch(self, request, pk, format=None):
        obj = Object.objects.get(id=pk)
        data = request.data.copy()
        """do some stuff with the data here..."""
        serializer = ObjectUpdateSerializer(instance=obj, data=data,
                                            partial=True)
        if serializer.is_valid(raise_exception=True):
            serializer.save()
            return Response(serializer.data, status.HTTP_200_OK)
</code></pre>
<p>I was able to work with the PUT method when I used that, but I wanted to have the API call methods be more in-line with what the methods actually mean (so PATCH would be a partial replacement). However, the response to the test client call above is this:</p>
<pre><code>{u'detail': u'Method "PATCH" not allowed.'}
</code></pre>
<p>Which is a 405 error (method not allowed). </p>
<p>I checked to see if there were any issues with Django 1.10, and I also got this output in the django shell:</p>
<pre><code>>>> from django.views.generic import View
>>> View.http_method_names
[u'get', u'post', u'put', u'patch', u'delete', u'head', u'options', u'trace']
</code></pre>
<p>It appears as if it isn't an issue with Django's settings, but something that I've set up. What could be the issue here?</p>
| 0
|
2016-09-07T15:33:27Z
| 39,519,115
|
<p>I faced the same problem. It turned out that you cannot split HTTP methods for the same route across different views; Django resolves the URL to the first matching pattern only:</p>
<pre><code>urlpatterns = [
# ...
url('^sessions/?$', views.TokenDelete.as_view()), # Viewer has delete()
url('^sessions/?$', views.TokenEdit.as_view()), # Viewer has patch()
]
</code></pre>
<p>The correct way is:</p>
<pre><code>urlpatterns = [
# ...
url('^sessions/?$', views.Token.as_view()), # Viewer has both patch() and delete()
]
</code></pre>
| 0
|
2016-09-15T19:41:27Z
|
[
"python",
"django",
"django-rest-framework",
"http-status-code-405"
] |
Wait until process completes using Python WMI
| 39,374,064
|
<p>My code launches a process on a remote host that creates a txt file when it finishes. I want to copy this txt file, so I need to wait until the process finishes. How can I do that?</p>
<pre><code>import wmi
SW_SHOWNORMAL = 1
con = wmi.WMI(ip, user=username, password=password)
process_startup = con.Win32_ProcessStartup.new()
process_startup.ShowWindow = SW_SHOWNORMAL
process_id, result = con.Win32_Process.Create(CommandLine=command_line, ProcessStartupInformation=process_startup)
</code></pre>
| 0
|
2016-09-07T15:34:29Z
| 39,378,982
|
<p>I'm actually working on a similar thing at the moment. If you haven't found your answer, check this link - <a href="http://timgolden.me.uk/python/wmi/cookbook.html#run-notepad-wait-until-it-s-closed-and-then-show-its-text" rel="nofollow">http://timgolden.me.uk/python/wmi/cookbook.html#run-notepad-wait-until-it-s-closed-and-then-show-its-text</a></p>
<p>My current Python solution is as follows:</p>
<pre><code>process_id, result = client.Win32_Process.Create(CommandLine=command, CurrentDirectory=directory)
# watch for the deletion of the Win32_Process instance with our PID;
# calling the watcher then blocks until the process exits
watcher = client.watch_for(
    notification_type="Deletion",
    wmi_class="Win32_Process",
    delay_secs=1,
    ProcessId=process_id
)
watcher()
</code></pre>
<p>This seems to work for me.</p>
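<p>If there is any risk of the remote process hanging, the watcher call also accepts a timeout (the 60-second value here is just an assumption; adjust it to your workload):</p>
<pre><code>try:
    watcher(timeout_ms=60000)
except wmi.x_wmi_timed_out:
    print("process is still running after one minute")
</code></pre>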
| 1
|
2016-09-07T21:04:10Z
|
[
"python",
"process",
"wmi"
] |
Fill NaN values
| 39,374,067
|
<p>I have a dataframe</p>
<pre><code>TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR
2016-01-01 00:00:00 116 HC 250
2016-01-01 00:10:00 121 HC 250
2016-01-01 00:20:00 121 NaN 250
</code></pre>
<p>To use this dataframe, I must fill the NaN values with HC or HP based on this condition:</p>
<pre><code>If the hour extracted from TIMESTAMP is in {0, 1, 2, 3, 4, 5, 22, 23}
</code></pre>
<p>then I replace NaN with HC,
else with HP.
I wrote this function:</p>
<pre><code>def prep_data(data):
data['PERIODE_TARIF']=np.where(data['PERIODE_TARIF']in (0, 1,2, 3, 4, 5, 22, 23),'HC','HP')
return data
</code></pre>
<p>But I get this error:</p>
<pre><code> ValueError Traceback (most recent call last)
<ipython-input-23-c1fb7e3d7b82> in <module>()
----> 1 prep_data(df_energy2)
<ipython-input-22-04bd325f91cd> in prep_data(data)
1 # Nettoyage des données
2 def prep_data(data):
----> 3 data['PERIODE_TARIF']=np.where(data['PERIODE_TARIF']in (0, 1),'HC','HP')
4 return data
C:\Users\Demonstrator\Anaconda3\lib\site-packages\pandas\core\generic.py
in __nonzero__(self)
890 raise ValueError("The truth value of a {0} is ambiguous. "
891 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
--> 892 .format(self.__class__.__name__))
893
894 __bool__ = __nonzero__
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>How can I fix this?</p>
| 2
|
2016-09-07T15:34:55Z
| 39,374,271
|
<p>use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> to test for membership:</p>
<pre><code>data['PERIODE_TARIF']=np.where(data['PERIODE_TARIF'].isin([0, 1,2, 3, 4, 5, 22, 23]),'HC','HP')
</code></pre>
<p><code>in</code> doesn't know how to evaluate a whole Series at once: the membership test yields an array of booleans, whose truth value is ambiguous whenever it has more than one element, hence the error.</p>
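<p>Note that the condition in the question is on the <em>hour</em> of <code>TIMESTAMP</code>, not on <code>PERIODE_TARIF</code> itself; a minimal sketch of that variant, assuming <code>TIMESTAMP</code> can be parsed as a datetime:</p>
<pre><code>data['TIMESTAMP'] = pd.to_datetime(data['TIMESTAMP'])
night = data['TIMESTAMP'].dt.hour.isin([0, 1, 2, 3, 4, 5, 22, 23])
# fill only the missing values: HC for night hours, HP otherwise
data['PERIODE_TARIF'] = data['PERIODE_TARIF'].fillna(night.map({True: 'HC', False: 'HP'}))
</code></pre>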
| 2
|
2016-09-07T15:45:42Z
|
[
"python",
"pandas",
"missing-data"
] |
Plotting multiple graphs does not work using pylab
| 39,374,075
|
<p>I want to visualize the <a href="https://en.wikipedia.org/wiki/Birthday_problem" rel="nofollow">Birthday Problem</a> with different <code>n</code>. My aim is to plot multiple graphs in the same figure but it does not work. It only plots the last graph and ignores the others. I am using the Jupyter Notebook.
This is my Code:</p>
<pre><code>from decimal import Decimal
import numpy
import pylab
def calc_p_distinct(n):
p_distinct = numpy.arange(0, n.size, dtype=Decimal)
for i in n:
p_distinct[i] = Decimal(1.0)
for i in n:
for person in range(i):
p_distinct[i] = Decimal(p_distinct[i]) * Decimal(((Decimal(365-person))/Decimal(365)))
return p_distinct
# n is the number of people
n = numpy.arange(0, 20)
n2 = numpy.arange(0, 50)
n3 = numpy.arange(0, 100)
# plot the probability distribution
p_distinct = calc_p_distinct(n)
pylab.plot(n, p_distinct, 'r')
p_distinct2 = calc_p_distinct(n2)
pylab.plot(n2, p_distinct2, 'g')
p_distinct3 = calc_p_distinct(n3)
pylab.plot(n3, p_distinct3, 'b')
# set the labels of the axis and title
pylab.xlabel("n", fontsize=18)
pylab.ylabel("probability", fontsize=18)
pylab.title("birthday problem", fontsize=20)
# show grid
pylab.grid(True)
# show the plot
pylab.show()
</code></pre>
<p>When I replace one of the calc_p_distinct() function calls with another built-in function (e.g. numpy.sin(n)), it will show me two graphs. So, I conclude that it must have something to do with my function. What am I doing wrong here?</p>
| 0
|
2016-09-07T15:35:17Z
| 39,374,631
|
<p>This isn't a problem with <code>matplotlib</code>; all the lines are there, just on top of each other (which makes perfect sense; for 100 people, the probability for only the first 20 is the same as for a group of just 20 people).</p>
<p>If I quickly plot them with a different line width:</p>
<p><a href="http://i.stack.imgur.com/HbTFe.png" rel="nofollow"><img src="http://i.stack.imgur.com/HbTFe.png" alt="enter image description here"></a></p>
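<p>A minimal sketch of that check (the widths are arbitrary): draw the longest range with the thickest line first, so the shorter curves stay visible on top of it.</p>
<pre><code>pylab.plot(n3, calc_p_distinct(n3), 'b', linewidth=6)
pylab.plot(n2, calc_p_distinct(n2), 'g', linewidth=3)
pylab.plot(n, calc_p_distinct(n), 'r', linewidth=1)
pylab.show()
</code></pre>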
| 0
|
2016-09-07T16:02:26Z
|
[
"python",
"matplotlib",
"jupyter"
] |
unable to install package using pip
| 39,374,141
|
<p>I am trying to install module using pip and I get this error:</p>
<pre class="lang-none prettyprint-override"><code>$ pip install virtualenv
Collecting virtualenv
Downloading virtualenv-15.0.3-py2.py3-none-any.whl (3.5MB)
    100% |████████████████████████████████| 3.5MB 312kB/s
Installing collected packages: virtualenv
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 742, in install
**kwargs
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 831, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 1032, in move_wheel_files
isolated=self.isolated,
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/wheel.py", line 346, in move_wheel_files
clobber(source, lib_dir, True)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/wheel.py", line 324, in clobber
shutil.copyfile(srcfile, destfile)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 83, in copyfile
with open(dst, 'wb') as fdst:
IOError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/virtualenv.py'
</code></pre>
<p>What is the problem and how can I resolve it?</p>
| 0
|
2016-09-07T15:38:57Z
| 39,374,426
|
<p>The problem is that pip is trying to write into the system-wide <code>site-packages</code> directory, which your user account is not allowed to modify. Run the install with superuser permissions:</p>
<pre><code>sudo pip install virtualenv
</code></pre>
<p>That should get you past the permission error.</p>
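<p>Alternatively, if you would rather not touch the system Python as root at all, a per-user install sidesteps the permission problem entirely (assuming the per-user scripts directory is on your <code>PATH</code>):</p>
<pre><code>pip install --user virtualenv
</code></pre>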
| 0
|
2016-09-07T15:53:06Z
|
[
"python",
"pip"
] |
unable to install package using pip
| 39,374,141
|
<p>I am trying to install module using pip and I get this error:</p>
<pre class="lang-none prettyprint-override"><code>$ pip install virtualenv
Collecting virtualenv
Downloading virtualenv-15.0.3-py2.py3-none-any.whl (3.5MB)
    100% |████████████████████████████████| 3.5MB 312kB/s
Installing collected packages: virtualenv
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_set.py", line 742, in install
**kwargs
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 831, in install
self.move_wheel_files(self.source_dir, root=root, prefix=prefix)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/req/req_install.py", line 1032, in move_wheel_files
isolated=self.isolated,
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/wheel.py", line 346, in move_wheel_files
clobber(source, lib_dir, True)
File "/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/pip/wheel.py", line 324, in clobber
shutil.copyfile(srcfile, destfile)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 83, in copyfile
with open(dst, 'wb') as fdst:
IOError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/virtualenv.py'
</code></pre>
<p>What is the problem and how can I resolve it?</p>
| 0
|
2016-09-07T15:38:57Z
| 39,374,548
|
<p>It's probably because the user you are logged in as can't write to that folder.</p>
<p><strong>First option</strong>: You can do: </p>
<pre><code>sudo pip install virtualenv
</code></pre>
<p>to install the package as the root user.</p>
<p><strong>Second Option</strong>: you could do these commands in sequence in terminal:</p>
<p>First:</p>
<pre><code>cd /Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg/
</code></pre>
<p>This command will go to the folder you have pip installed in.</p>
<p>Second:</p>
<pre><code>ls -l
</code></pre>
<p>This command shows the permissions for files/folders. One of the columns shows which user owns the <code>pip</code> folder (e.g. root).</p>
<p>Third: Change the owner to the user you are logged in as instead of root:</p>
<pre><code>sudo chown -R your_username:your_username path/to/pip/
</code></pre>
<p>This assumes that the folders higher up in the hierarchy (Library, Python, etc.) are not root-owned; otherwise you would need to change them too.</p>
| 0
|
2016-09-07T15:58:12Z
|
[
"python",
"pip"
] |
SQL / Pandas equivalent
| 39,374,195
|
<p>What would be the Pandas equivalent for this SQL query :</p>
<pre><code>select column1,
sum(column2) as A,
count(distinct column3) as B,
sum(column2) / count(distinct column3) as C
from table1
group by column1
</code></pre>
<p>Thanks for any help on that!!</p>
| -6
|
2016-09-07T15:41:52Z
| 39,374,819
|
<p>I'm not sure that the <code>sum(column2) / count(distinct column3) as C</code> part can be done in the same single step, but you can easily do it in two steps:</p>
<p>Demo:</p>
<pre><code>In [47]: df = pd.DataFrame(np.random.randint(0,5,size=(15, 3)), columns=['c1','c2','c3'])
In [48]: df
Out[48]:
c1 c2 c3
0 4 0 3
1 2 3 2
2 1 2 3
3 3 3 0
4 1 0 4
5 1 1 1
6 2 3 3
7 2 2 2
8 4 0 0
9 1 1 0
10 1 3 0
11 4 3 1
12 0 0 3
13 3 1 0
14 4 3 1
In [49]: x = df.groupby('c1').agg({'c2':'sum', 'c3': 'nunique'}).reset_index().rename(columns={'c2':'A', 'c3':'B'})
In [50]: x
Out[50]:
c1 A B
0 0 0 1
1 1 7 4
2 2 8 2
3 3 4 1
4 4 6 3
In [51]: x['C'] = x.A / x.B
In [52]: x
Out[52]:
c1 A B C
0 0 0 1 0.00
1 1 7 4 1.75
2 2 8 2 4.00
3 3 4 1 4.00
4 4 6 3 2.00
</code></pre>
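<p>For what it's worth, the two steps can also be chained into a single expression with <code>assign</code>; a sketch:</p>
<pre><code>x = (df.groupby('c1')
       .agg({'c2': 'sum', 'c3': 'nunique'})
       .rename(columns={'c2': 'A', 'c3': 'B'})
       .assign(C=lambda d: d.A / d.B)
       .reset_index())
</code></pre>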
| 0
|
2016-09-07T16:11:53Z
|
[
"python",
"sql",
"sql-server",
"pandas",
"dataframe"
] |
Algorithm for checking diagonal in N queens algortihm
| 39,374,235
|
<p>I am trying to implement the N-Queens problem in Python. I need a little help designing an algorithm that, given a queen's position, checks whether any other queen on the board is present on its diagonals.</p>
<p>I am trying to design a function diagonal_check(board, row, col) where board is an N*N matrix of arrays in which '1' represents the presence of a queen and '0' represents absence.
I will pass the array and the position of the queen (row, col) to the function. My function must return false if any other queen is present on its diagonals, and true otherwise.</p>
<p>I would appreciate any help with an algorithm for the diagonal_check function. I am not looking for code in any specific language.</p>
| -4
|
2016-09-07T15:43:25Z
| 39,374,469
|
<p>Let the top left corner be (0,0).</p>
<p>The down-right oriented diagonal through a square (row,col) is identified by <code>col-row+(N-1)</code>, which for a standard 8x8 chessboard is <code>col-row+7</code>.</p>
<p>The up-right oriented diagonal through (row,col) is identified by <code>row+col</code>.</p>
<p>Simply checking whether 2 queens have the same <code>col-row+(N-1)</code> or the same <code>row+col</code> tells you whether they are on the same diagonal. If you are still a bit confused, search Google for a chessboard image.</p>
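<p>A minimal Python sketch of <code>diagonal_check</code> built on those two invariants (returning <code>False</code> if any other queen shares a diagonal, as the question asks):</p>
<pre><code>def diagonal_check(board, row, col):
    n = len(board)
    for r in range(n):
        for c in range(n):
            if (r, c) == (row, col):
                continue  # skip the queen we are checking
            if board[r][c] == 1 and (r + c == row + col or r - c == row - col):
                return False  # another queen shares one of the diagonals
    return True
</code></pre>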
| 0
|
2016-09-07T15:54:57Z
|
[
"python",
"algorithm",
"language-agnostic",
"n-queens"
] |
Algorithm for checking diagonal in N queens algortihm
| 39,374,235
|
<p>I am trying to implement the N-Queens problem in Python. I need a little help designing an algorithm that, given a queen's position, checks whether any other queen on the board is present on its diagonals.</p>
<p>I am trying to design a function diagonal_check(board, row, col) where board is an N*N matrix of arrays in which '1' represents the presence of a queen and '0' represents absence.
I will pass the array and the position of the queen (row, col) to the function. My function must return false if any other queen is present on its diagonals, and true otherwise.</p>
<p>I would appreciate any help with an algorithm for the diagonal_check function. I am not looking for code in any specific language.</p>
| -4
|
2016-09-07T15:43:25Z
| 39,375,457
|
<pre><code>boolean diagonalCheck(board, row, col) {
    int tempRow;
    int tempCol;
    // walk the down-right ("\") diagonal, starting from its top-left end
    if (row >= col) {
        tempRow = row - col;
        tempCol = 0;
    } else {
        tempRow = 0;
        tempCol = col - row;
    }
    while (tempRow <= N-1 && tempCol <= N-1) {
        if (tempRow == row && tempCol == col) {
            // this is the queen being checked, skip it
        } else if (queen(tempRow, tempCol) == 1) {
            return true;
        }
        tempRow++;
        tempCol++;
    }
    // walk the down-left ("/") diagonal, starting from its top end
    if (row + col >= N-1) {
        tempCol = N-1;
        tempRow = (row + col) - (N-1);
    } else {
        tempRow = 0;
        tempCol = row + col;
    }
    while (tempRow <= N-1 && tempCol >= 0) {
        if (tempRow == row && tempCol == col) {
            // this is the queen being checked, skip it
        } else if (queen(tempRow, tempCol) == 1) {
            return true;
        }
        tempRow++;
        tempCol--;
    }
    return false;
}
</code></pre>
| 0
|
2016-09-07T16:51:13Z
|
[
"python",
"algorithm",
"language-agnostic",
"n-queens"
] |
Python, OpenCV, Raspberry Pi-3 - Attribute Error - 'NoneType' object
| 39,374,335
|
<p>I am trying to track a green ball with my v2 camera module using openCV and python (using virtualenv), however I keep encountering an <strong>AttributeError: 'NoneType' object has no attribute 'shape'</strong></p>
<h1>Any help would be much appreciated!</h1>
<p>Traceback (most recent call last):</p>
<pre><code> File "/home/pi/ball-track/ball_tracking.py", line 52, in <module>
frame = imutils.resize(frame, width=600)
File "/usr/local/lib/python2.7/dist-packages/imutils/convenience.py", line 45, in resize
(h, w) = image.shape[:2]
AttributeError: 'NoneType' object has no attribute 'shape'
</code></pre>
<p>I have <strong>tried</strong> to add this bit of code to work around the error, however it does not work:</p>
<pre><code>while True:
grabbed, frame = camera.read()
if not grabbed:
continue
# the rest of the program
</code></pre>
<p><strong>I have a video file that I am trying to use within the script; it contains the desired object (to be tracked).</strong></p>
<h1>Here is the code:</h1>
<pre><code># python ball_tracking.py --video ball_tracking_example.mp4
# python ball_tracking.py
# import the necessary packages
from collections import deque
import numpy as np
import argparse
import imutils
import cv2
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video",
help="path to the (optional) video file")
ap.add_argument("-b", "--buffer", type=int, default=64,
help="max buffer size")
args = vars(ap.parse_args())
# define the lower and upper boundaries of the "green"
# ball in the HSV color space, then initialize the
# list of tracked points
greenLower = (29, 86, 6)
greenUpper = (64, 255, 255)
pts = deque(maxlen=args["buffer"])
# if a video path was not supplied, grab the reference
# to the webcam
if not args.get("video", False):
camera = cv2.VideoCapture(0)
# otherwise, grab a reference to the video file
else:
camera = cv2.VideoCapture(args["video"])
# keep looping
while True:
# grab the current frame
(grabbed, frame) = camera.read()
# if we are viewing a video and we did not grab a frame,
# then we have reached the end of the video
if args.get("video") and not grabbed:
break
# resize the frame, blur it, and convert it to the HSV
# color space
frame = imutils.resize(frame, width=600)
# blurred = cv2.GaussianBlur(frame, (11, 11), 0)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# construct a mask for the color "green", then perform
# a series of dilations and erosions to remove any small
# blobs left in the mask
mask = cv2.inRange(hsv, greenLower, greenUpper)
mask = cv2.erode(mask, None, iterations=2)
mask = cv2.dilate(mask, None, iterations=2)
# find contours in the mask and initialize the current
# (x, y) center of the ball
cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)[-2]
center = None
# only proceed if at least one contour was found
if len(cnts) > 0:
# find the largest contour in the mask, then use
# it to compute the minimum enclosing circle and
# centroid
c = max(cnts, key=cv2.contourArea)
((x, y), radius) = cv2.minEnclosingCircle(c)
M = cv2.moments(c)
center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))
# only proceed if the radius meets a minimum size
if radius > 10:
# draw the circle and centroid on the frame,
# then update the list of tracked points
cv2.circle(frame, (int(x), int(y)), int(radius),
(0, 255, 255), 2)
cv2.circle(frame, center, 5, (0, 0, 255), -1)
# update the points queue
pts.appendleft(center)
# loop over the set of tracked points
for i in xrange(1, len(pts)):
# if either of the tracked points are None, ignore
# them
if pts[i - 1] is None or pts[i] is None:
continue
# otherwise, compute the thickness of the line and
# draw the connecting lines
thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)
# show the frame to our screen
cv2.imshow("Frame", frame)
key = cv2.waitKey(1) & 0xFF
# if the 'q' key is pressed, stop the loop
if key == ord("q"):
break
# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
</code></pre>
| 0
|
2016-09-07T15:48:43Z
| 39,389,126
|
<p>For anybody experiencing the same issue: I resolved the error by loading the Raspberry Pi camera's V4L2 kernel module, which exposes the camera as <code>/dev/video0</code> so that <code>cv2.VideoCapture(0)</code> can actually grab frames:</p>
<pre><code> sudo modprobe bcm2835-v4l2
</code></pre>
<p>Works like a charm!</p>
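<p>To make this survive a reboot, the module can be loaded automatically at boot time (a standard Raspbian tweak):</p>
<pre><code> echo "bcm2835-v4l2" | sudo tee -a /etc/modules
</code></pre>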
| 0
|
2016-09-08T10:56:20Z
|
[
"python",
"opencv",
"raspberry-pi",
"virtualenv"
] |
Python find and replace last appearance in list
| 39,374,355
|
<p>In Python, I have a list of lists:</p>
<pre><code>list3 = ['PA0', 'PA1']
list2 = ['PB0', 'PB1']
list1 = ['PC0', 'PC1', 'PC2']
[(list1[i], list2[j], list3[k]) for i in xrange(len(list1)) for j in xrange(len(list2)) for k in xrange(len(list3))]
#Result
[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
('PC0', 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
('PC1', 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', 'PB0', 'PA1'),
('PC2', 'PB1', 'PA0'),
('PC2', 'PB1', 'PA1')]
</code></pre>
<p>How can I find the <strong>last appearance</strong> of each value in each column and add <strong>E</strong> as a suffix?</p>
<pre>
[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
(<b>'PC0E'</b>, 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
(<b>'PC1E'</b>, 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', <b>'PB0E'</b>, 'PA1'),
('PC2', 'PB1', <b>'PA0E'</b>),
(<b>'PC2E'</b>, <b>'PB1E'</b>, <b>'PA1E'</b>)]
</pre>
| 3
|
2016-09-07T15:49:27Z
| 39,374,497
|
<p>Process your input list in reverse, then mark the <strong>first</strong> occurrence of any value. You can use a list of sets to track what values you've already seen. Reverse the output list you build when you are done:</p>
<pre><code>seensets = [set() for _ in inputlist[0]]
outputlist = []
for entry in reversed(inputlist):
newentry = []
for value, seen in zip(entry, seensets):
newentry.append(value + 'E' if value not in seen else value)
seen.add(value)
outputlist.append(tuple(newentry))
outputlist.reverse()
</code></pre>
<p>Demo:</p>
<pre><code>>>> seensets = [set() for _ in inputlist[0]]
>>> outputlist = []
>>> for entry in reversed(inputlist):
... newentry = []
... for value, seen in zip(entry, seensets):
... newentry.append(value + 'E' if value not in seen else value)
... seen.add(value)
... outputlist.append(tuple(newentry))
...
>>> outputlist.reverse()
>>> pprint(outputlist)
[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
('PC0E', 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
('PC1E', 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', 'PB0E', 'PA1'),
('PC2', 'PB1', 'PA0E'),
('PC2E', 'PB1E', 'PA1E')]
</code></pre>
| 3
|
2016-09-07T15:56:02Z
|
[
"python",
"list"
] |
Python find and replace last appearance in list
| 39,374,355
|
<p>In Python, I have a list of lists:</p>
<pre><code>list3 = ['PA0', 'PA1']
list2 = ['PB0', 'PB1']
list1 = ['PC0', 'PC1', 'PC2']
[(list1[i], list2[j], list3[k]) for i in xrange(len(list1)) for j in xrange(len(list2)) for k in xrange(len(list3))]
#Result
[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
('PC0', 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
('PC1', 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', 'PB0', 'PA1'),
('PC2', 'PB1', 'PA0'),
('PC2', 'PB1', 'PA1')]
</code></pre>
<p>How can I find the <strong>last appearance</strong> of each value in each column and add <strong>E</strong> as a suffix?</p>
<pre>
[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
(<b>'PC0E'</b>, 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
(<b>'PC1E'</b>, 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', <b>'PB0E'</b>, 'PA1'),
('PC2', 'PB1', <b>'PA0E'</b>),
(<b>'PC2E'</b>, <b>'PB1E'</b>, <b>'PA1E'</b>)]
</pre>
| 3
|
2016-09-07T15:49:27Z
| 39,374,713
|
<p>If you are not looking for lightning speed here, you could do the following:</p>
<ol>
<li>Flatten the list using <a href="http://stackoverflow.com/a/952952/2988730">http://stackoverflow.com/a/952952/2988730</a></li>
<li>Find the unique elements</li>
<li>Find the index of the last occurrence of each unique element (by reversing the list)</li>
<li>Update the element</li>
<li>Reshape the flattened list back using <a href="http://stackoverflow.com/a/10124783/2988730">http://stackoverflow.com/a/10124783/2988730</a></li>
</ol>
<p>Here is a sample implementation:</p>
<pre><code># 1
flat = list(reversed([x for group in mylist for x in group]))
# 2
uniq = set(flat)
# 3, 4
for x in uniq:
flat[flat.index(x)] += 'E'
# 5
mylist = list(zip(*[reversed(flat)]*3))
</code></pre>
<p>Result:</p>
<pre><code>[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
('PC0E', 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
('PC1E', 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', 'PB0E', 'PA1'),
('PC2', 'PB1', 'PA0E'),
('PC2E', 'PB1E', 'PA1E')]
</code></pre>
| 1
|
2016-09-07T16:06:20Z
|
[
"python",
"list"
] |
Python find and replace last appearance in list
| 39,374,355
|
<p>In Python, I have a list of lists:</p>
<pre><code>list3 = ['PA0', 'PA1']
list2 = ['PB0', 'PB1']
list1 = ['PC0', 'PC1', 'PC2']
[(list1[i], list2[j], list3[k]) for i in xrange(len(list1)) for j in xrange(len(list2)) for k in xrange(len(list3))]
#Result
[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
('PC0', 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
('PC1', 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', 'PB0', 'PA1'),
('PC2', 'PB1', 'PA0'),
('PC2', 'PB1', 'PA1')]
</code></pre>
<p>How can I find the <strong>last appearance</strong> of each value in each column and add <strong>E</strong> as a suffix?</p>
<pre>
[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
(<b>'PC0E'</b>, 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
(<b>'PC1E'</b>, 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', <b>'PB0E'</b>, 'PA1'),
('PC2', 'PB1', <b>'PA0E'</b>),
(<b>'PC2E'</b>, <b>'PB1E'</b>, <b>'PA1E'</b>)]
</pre>
| 3
|
2016-09-07T15:49:27Z
| 39,375,340
|
<p>Another approach keeps overwriting the indexes as it goes, so you end up with the indexes of the last occurrence; <em>itertools.product</em> will also create the initial list for you:</p>
<pre><code>from itertools import product
def last_inds(prod):
    # the key/value keeps being overwritten, so we always keep the last occurrence seen
    return {ele: (i1, i2) for i1, row in enumerate(prod) for i2, ele in enumerate(row)}
prod = list(product(*(list1, list2, list3)))
# use the indexes to change the last occurrences.
for r, c in last_inds(prod).values():
lst = list(prod[r])
lst[c] += "E"
prod[r] = tuple(lst)
</code></pre>
<p>Which gives you the expected output:</p>
<pre><code>[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
('PC0E', 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
('PC1E', 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', 'PB0E', 'PA1'),
('PC2', 'PB1', 'PA0E'),
('PC2E', 'PB1E', 'PA1E')]
</code></pre>
<p>On my timings it is the fastest approach using your data.</p>
<pre><code>In [37]: %%timeit
prod = list(product(*(list1, list2, list3)))
m(prod)
....:
10000 loops, best of 3: 20.7 µs per loop
In [38]: %%timeit
prod = list(product(*(list1, list2, list3)))
for r, c in last_inds(prod).values():
lst = list(prod[r])
lst[c] += "E"
prod[r] = tuple(lst)
....:
100000 loops, best of 3: 12.2 µs per loop
</code></pre>
<p>Where m is:</p>
<pre><code>def m(inputlist):
seensets = [set() for _ in inputlist[0]]
outputlist = []
for entry in reversed(inputlist):
newentry = []
for value, seen in zip(entry, seensets):
newentry.append(value + 'E' if value not in seen else value)
seen.add(value)
outputlist.append(tuple(newentry))
    outputlist.reverse()
    return outputlist
</code></pre>
| 2
|
2016-09-07T16:43:03Z
|
[
"python",
"list"
] |
Python ASCII Reading tables
| 39,374,384
|
<p>I'm really sorry for asking a probably silly question, but I have spent half a day and could not find a reasonable solution.
I have an ASCII file:</p>
<pre><code> "X" "Z" "Y"
285807.2 -1671.056 2405.91
285807.2 -1651.162 2394.932
285807.2 -1631.269 2383.962
285807.2 -1611.375 2372.988
285807.2 -1591.481 2362.01
285807.2 -1571.587 2351.01
</code></pre>
<p>.............................................
~1 000 000 rows</p>
<p>And I normally read it like this:</p>
<pre><code>from astropy.io import ascii
data =ascii.read('C:\\Users\\Protoss\\Desktop\\Ishodnik1.dat')
print (data)
</code></pre>
<p>But how could I deal with the columns? For instance, sum the rows or compute an average value, etc., only from columns Z and Y? As I understand it, I have to convert all my data to lists of float values (except for the header) and then write a new ASCII file, don't I?</p>
| -1
|
2016-09-07T15:51:01Z
| 39,395,150
|
<p>What I did: I split each line into columns and appended the values to separate lists. Now I can access each column independently:</p>
<pre><code>with open('C:\\Users\\Protoss\\Desktop\\Ishodnik.dat', 'r') as f:
    header1 = f.readline()  # skip the header line
    X_list = []
    Z_list = []
    V_list = []
    for line in f:
        columns = line.strip().split()
        X_list.append(float(columns[0]))
        Z_list.append(float(columns[1]))
        V_list.append(float(columns[2]))
</code></pre>
| 0
|
2016-09-08T15:40:15Z
|
[
"python",
"ascii"
] |
Python ASCII Reading tables
| 39,374,384
|
<p>I'm really sorry for asking a probably silly question, but I have spent half a day and could not find a reasonable solution.
I have an ASCII file:</p>
<pre><code> "X" "Z" "Y"
285807.2 -1671.056 2405.91
285807.2 -1651.162 2394.932
285807.2 -1631.269 2383.962
285807.2 -1611.375 2372.988
285807.2 -1591.481 2362.01
285807.2 -1571.587 2351.01
</code></pre>
<p>.............................................
~1 000 000 rows</p>
<p>And I normally read it like this:</p>
<pre><code>from astropy.io import ascii
data =ascii.read('C:\\Users\\Protoss\\Desktop\\Ishodnik1.dat')
print (data)
</code></pre>
<p>But how could I deal with the columns? For instance, sum the rows or compute an average value, etc., only from columns Z and Y? As I understand it, I have to convert all my data to lists of float values (except for the header) and then write a new ASCII file, don't I?</p>
| -1
|
2016-09-07T15:51:01Z
| 39,471,759
|
<p>It's possible to treat the file as CSV and use <code>Sniffer</code> to autodetect the format:</p>
<pre><code>import csv
with open('C:\\Users\\Protoss\\Desktop\\Ishodnik1.dat', 'r') as f:
# Sniff to autodetect the format
dialect = csv.Sniffer().sniff(f.read(1024))
f.seek(0)
reader = csv.reader(f, dialect)
# Read line by line and store as list of tuples
data = []
header = tuple(next(reader))
for row in reader:
data.append(tuple(row))
</code></pre>
| 0
|
2016-09-13T13:48:27Z
|
[
"python",
"ascii"
] |
Python bokeh apply hovertools only on model not on figure
| 39,374,400
|
<p>I want to have a scatter plot and a (base)line on the same figure, and I want to use <code>HoverTool</code> only on the circles of the scatter but not on the line. Is that possible?</p>
<p>With the code below I get tooltips with <code>index: 0</code> and <code>(x, y): (???, ???)</code> when I hover on the line (any part of the line). But the <code>index: 0</code> data in <code>source</code> is totally different (<code>(x, y): (1, 2)</code>)...</p>
<pre><code>df = pd.DataFrame({'a':[1, 3, 6, 9], 'b':[2, 3, 5, 8]})
from bokeh.models import HoverTool
import bokeh.plotting as bplt
TOOLS = ['box_zoom', 'box_select', 'wheel_zoom', 'reset', 'pan', 'resize', 'save']
source = bplt.ColumnDataSource(data=df)
hover = HoverTool(tooltips=[("index", "$index"), ("(x, y)", "(@a, @b)")])
p = bplt.figure(plot_width=600, plot_height=600, tools=TOOLS+[hover],
title="My sample bokeh plot", webgl=True)
p.circle('a', 'b', size=10, source=source)
p.line([0, 10], [0, 10], color='red')
bplt.save(p, 'c:/_teszt.html')
</code></pre>
<p>Thank you!!</p>
| 0
|
2016-09-07T15:52:06Z
| 39,374,741
|
<p>To limit which renderers the HoverTool is active on (by default it's active on all), you can either set a <code>name</code> attr on your glyphs and then specify which names the HoverTool should be active on:</p>
<pre><code>p.circle('a', 'b', size=10, name='circle', source=source)
hover = HoverTool(names=['circle'])
</code></pre>
<p>docs: <a href="http://bokeh.pydata.org/en/latest/docs/reference/models/tools.html#bokeh.models.tools.HoverTool.names" rel="nofollow">http://bokeh.pydata.org/en/latest/docs/reference/models/tools.html#bokeh.models.tools.HoverTool.names</a></p>
<p>or you can add the renderers to the HoverTool.</p>
<pre><code>circle = p.circle('a', 'b', size=10, source=source)
hover = HoverTool(renderers=[circle])
</code></pre>
<p>(Note that <code>renderers</code> takes the renderer objects themselves, not name strings.)</p>
<p>docs: <a href="http://bokeh.pydata.org/en/latest/docs/reference/models/tools.html#bokeh.models.tools.HoverTool.renderers" rel="nofollow">http://bokeh.pydata.org/en/latest/docs/reference/models/tools.html#bokeh.models.tools.HoverTool.renderers</a></p>
| 2
|
2016-09-07T16:07:56Z
|
[
"python",
"hover",
"tooltip",
"bokeh"
] |
How to install and cron python3 Scrapy on cloud linux
| 39,374,448
|
<p>I have a Scrapy spider written in Python 3 and I want to run it as a cron job on my cloud Linux server (I have root access).
First, I couldn't install it using <code>pip3 install scrapy</code>; I got:</p>
<pre><code>Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/tarfile.py", line 1642, in bz2open
import bz2
File "/usr/local/lib/python3.4/bz2.py", line 20, in <module>
from _bz2 import BZ2Compressor, BZ2Decompressor
ImportError: No module named '_bz2'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/usr/local/lib/python3.4/site-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/usr/local/lib/python3.4/site-packages/pip/req.py", line 1197, in prepare_files
do_download,
File "/usr/local/lib/python3.4/site-packages/pip/req.py", line 1375, in unpack_url
self.session,
File "/usr/local/lib/python3.4/site-packages/pip/download.py", line 582, in unpack_http_url
unpack_file(temp_location, location, content_type, link)
File "/usr/local/lib/python3.4/site-packages/pip/util.py", line 625, in unpack_file
untar_file(filename, location)
File "/usr/local/lib/python3.4/site-packages/pip/util.py", line 543, in untar_file
tar = tarfile.open(filename, mode)
File "/usr/local/lib/python3.4/tarfile.py", line 1567, in open
return func(name, filemode, fileobj, **kwargs)
File "/usr/local/lib/python3.4/tarfile.py", line 1644, in bz2open
raise CompressionError("bz2 module is not available")
tarfile.CompressionError: bz2 module is not available
</code></pre>
<p>And then, how can I run it as a cron job?</p>
| -1
|
2016-09-07T15:54:06Z
| 39,374,786
|
<p>It appears that the <code>bz2</code> module is not available in your Python build. That usually happens when the bzip2 development headers were missing at the time Python was compiled.</p>
<p>To install them, you can type:</p>
<pre><code>apt-get install bzip2
</code></pre>
<p>You may need to prefix that command with <code>sudo</code>. On yum-based systems such as CentOS, the package to install is <code>bzip2-devel</code>, and if your Python was built from source you will need to rebuild it afterwards so the <code>_bz2</code> module gets compiled in.</p>
<p>In all honesty, it should work "out of the box", since bz2 is part of Python's standard library. Hopefully this helps!</p>
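<p>As for the cron part (assuming the install succeeds), a minimal crontab entry could look like the following; the paths and the spider name are placeholders to adapt to your setup:</p>
<pre><code># run the spider every day at 03:00 and append its output to a log
0 3 * * * cd /path/to/project && /usr/local/bin/scrapy crawl myspider >> /var/log/myspider.log 2>&1
</code></pre>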
| -1
|
2016-09-07T16:10:05Z
|
[
"python",
"linux",
"centos",
"scrapy",
"pip"
] |
List comprehension of 2+ variables in R
| 39,374,587
|
<p>What's the best R equivalent of the Python 2-variable list comprehension</p>
<pre><code>[datetime(y,m,15) for y in xrange(2000,2020) for m in [3,6,9,12]]
</code></pre>
<p>The result</p>
<pre><code>[datetime.datetime(2000, 3, 15, 0, 0),
datetime.datetime(2000, 6, 15, 0, 0),
datetime.datetime(2000, 9, 15, 0, 0),
datetime.datetime(2000, 12, 15, 0, 0),
datetime.datetime(2001, 3, 15, 0, 0) ... ]
</code></pre>
| 0
|
2016-09-07T16:00:08Z
| 39,374,820
|
<p>This will produce equivalent results in R</p>
<pre><code>with(expand.grid(m=c(3,6,9,12), y=2000:2020), ISOdate(y,m,15))
</code></pre>
<p>We use <code>expand.grid</code> to get all combinations of year and month, and then we just use the vectorized <code>ISOdate</code> function to get the values.</p>
| 5
|
2016-09-07T16:11:58Z
|
[
"python"
] |
DynamoDB Parallel Scan not splitting results
| 39,374,617
|
<p>I'm using the <code>Segment</code> and <code>TotalSegments</code> parameters to split my DynamoDB scan over multiple workers (as shown in the <a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScan.html#QueryAndScanParallelScan" rel="nofollow">Parallel Scan section</a> of the developer guide).</p>
<p>However, all of the results get returned to one worker. What could be the issue here? Is there perhaps an issue with how I've implemented threading?</p>
<pre><code>import threading
import boto3
def scan_foo_table(bar, segment, total_segments):
print 'Looking at segment ' + str(segment)
session = boto3.session.Session()
dynamoDbClient = session.client('dynamodb')
response = dynamoDbClient.scan(
TableName='FooTable',
FilterExpression='bar=:bar',
ExpressionAttributeValues={
':bar': {'S': bar}
},
Segment=segment,
TotalSegments=total_segments,
)
print 'Segment ' + str(segment) + ' returned ' + str(len(response['Items'])) + ' items'
def create_threads(bar):
thread_list = []
total_threads = 3
for i in range(total_threads):
# Instantiate and store the thread
thread = threading.Thread(target=scan_foo_table, args=(bar, i, total_threads))
thread_list.append(thread)
# Start threads
for thread in thread_list:
thread.start()
# Block main thread until all threads are finished
for thread in thread_list:
thread.join()
def lambda_handler(event, context):
create_threads('123')
</code></pre>
<p>Output:</p>
<pre><code>Looking at segment 0
Looking at segment 1
Looking at segment 2
Segment 1 returned 0 items
Segment 2 returned 0 items
Segment 0 returned 10000 items
</code></pre>
| 2
|
2016-09-07T16:01:36Z
| 39,383,698
|
<p>One thing that jumps at me is the filter expression.</p>
<p>It is possible that the items that match your filter expression are located all in the first segment.</p>
<p>It is also worth noting that a parallel scan doesn't split the items, but it splits the key-space that is searched for items. Think of it as dividing a very large highway into multiple lanes. It is possible le that most cars are in the fast lane and you won't see any cars in other lanes.</p>
<p>Though in this case it seems more likely that the filter expression is what is causing only one segment to return items.</p>
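<p>One quick way to check the split itself is to drop the <code>FilterExpression</code> and just count the items per segment; a sketch, using a boto3 client created the same way as in the question:</p>
<pre><code>for segment in range(3):
    response = dynamoDbClient.scan(
        TableName='FooTable',
        Segment=segment,
        TotalSegments=3,
        Select='COUNT',
    )
    print 'Segment ' + str(segment) + ' holds ' + str(response['Count']) + ' items'
</code></pre>
<p>If the counts are spread over all three segments, the parallel scan is fine and only the filtered attribute is skewed.</p>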
| 0
|
2016-09-08T06:13:58Z
|
[
"python",
"amazon-dynamodb",
"boto",
"aws-lambda"
] |
pandas apply individual logic to group
| 39,374,620
|
<p>If i have a pandas data frame that looks like:</p>
<pre><code>day id val
1-Jan A -5
2-Jan A -4
3-Jan A 3
1-Jan B 2
2-Jan B 1
3-Jan B -5
</code></pre>
<p>how can i add a new column where, for all rows with the same id, if val was negative on 1-Jan, all rows are "Y" and "N" if not? something like this:</p>
<pre><code>day id val neg_on_jan_1
1-Jan A -5 y
2-Jan A -4 y
3-Jan A 3 y
1-Jan B 2 n
2-Jan B 1 n
3-Jan B -5 n
</code></pre>
<p>I've looked at group by and apply-lambda functions but still feel like i'm missing something. I'm just starting out with pandas, coming from a background in SQL, so please forgive me if my brain still thinks in rows and Oracle analytic functions :)</p>
| 2
|
2016-09-07T16:01:41Z
| 39,374,957
|
<p>Included <code>map</code> per @Ami Tavory's suggestion</p>
<pre><code>gb = df.set_index(['day', 'id']).groupby(level='id')
s = gb.val.transform(lambda s: s.loc['1-Jan'].lt(0)).map({1: 'y', 0:'n'})
s
day id
1-Jan A y
2-Jan A y
3-Jan A y
1-Jan B n
2-Jan B n
3-Jan B n
Name: val, dtype: object
</code></pre>
<hr>
<pre><code>df.merge(s.to_frame('neg_on_jan_1').reset_index())
</code></pre>
<p><a href="http://i.stack.imgur.com/wubU6.png" rel="nofollow"><img src="http://i.stack.imgur.com/wubU6.png" alt="enter image description here"></a></p>
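<p>For what it's worth, here is an equivalent sketch that avoids the MultiIndex detour: take the 1-Jan rows, turn them into an id-to-flag lookup, and map it back onto the frame:</p>
<pre><code>first_neg = df.loc[df['day'] == '1-Jan'].set_index('id')['val'].lt(0)
df['neg_on_jan_1'] = df['id'].map(first_neg).map({True: 'y', False: 'n'})
</code></pre>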
| 3
|
2016-09-07T16:18:22Z
|
[
"python",
"pandas"
] |
lxml non-recursive full tag
| 39,374,683
|
<p>Given the following xml:</p>
<pre><code><node a='1' b='1'>
<subnode x='25'/>
</node>
</code></pre>
<p>I would like to extract the tagname and all attributes for the first node, i.e., the verbatim code:</p>
<pre><code><node a='1' b='1'>
</code></pre>
<p>without the subnode.</p>
<p>For example in Python, <code>tostring</code> returns too much:</p>
<pre><code>from lxml import etree
root = etree.fromstring("<node a='1' b='1'><subnode x='25'>some text</subnode></node>")
print(etree.tostring(root))
</code></pre>
<p>returns</p>
<pre><code>b'<node a="1" b="1"><subnode x="25">some text</subnode></node>'
</code></pre>
<p>The following gives the desired result, but is much too verbose:</p>
<pre><code>tag = root.tag
for att, val in root.attrib.items():
tag += ' '+att+'="'+val+'"'
tag = '<'+tag+'>'
print(tag)
</code></pre>
<p>result:</p>
<pre><code><node a="1" b="1">
</code></pre>
<p>What is an easier (and guaranteed attribute order preserving) way of doing this?</p>
| 0
|
2016-09-07T16:04:21Z
| 39,375,212
|
<p>You can remove all of the subnodes.</p>
<pre><code>from lxml import etree
root = etree.fromstring("<node a='1' b='1'><subnode x='25'>some text</subnode></node>")
for subnode in root.xpath("//subnode"):
subnode.getparent().remove(subnode)
etree.tostring(root) # '<node a="1" b="1"/>'
</code></pre>
<p>Alternatively, you can use a simple regex. Order is guaranteed.</p>
<pre><code>import re
# tostring() returns bytes on Python 3, hence the decode()
res = re.search(r'<(.*?)>', etree.tostring(root).decode())
res.group(1) # 'node a="1" b="1"'
</code></pre>
| 1
|
2016-09-07T16:34:36Z
|
[
"python",
"recursion",
"lxml"
] |
Urllib problom: AttributeError: 'module' object has no attribute 'maketrans'
| 39,374,783
|
<p>The environment is Win10 64-bit, Python 2.7.12, Anaconda.
The code is a quite simple web scrape:</p>
<pre><code>import urllib
fhand = urllib.urlopen('http://www.reddit.com')
for line in fhand:
print line.strip()
</code></pre>
<p>And the result is weird:</p>
<pre><code>0.8475
Traceback (most recent call last):
File ".\catch-web.py", line 1, in <module>
import urllib
File "C:\Users\XxX\Anaconda2\lib\urllib.py", line 30, in <module>
import base64
File "C:\Users\XxX\Anaconda2\lib\base64.py", line 98, in <module>
_urlsafe_encode_translation = string.maketrans(b'+/', b'-_')
AttributeError: 'module' object has no attribute 'maketrans'
</code></pre>
<p>The code runs on another PC with IPython, but does not work on this one. I have re-installed Anaconda several times, without success.</p>
<p>I would appreciate any help solving this.</p>
| 0
|
2016-09-07T16:09:50Z
| 39,411,673
|
<p>Try Spyder; there everything works.</p>
<p>I still have no idea what causes the problem, though the usual culprit for this exact traceback is a local file named <code>string.py</code> shadowing the standard-library <code>string</code> module, which a different working directory (such as Spyder's) would hide.</p>
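<p>A quick way to confirm what is actually being imported:</p>
<pre><code>import string
print(string.__file__)
# if this does not point into ...\lib\string.py (or .pyc), a local
# file named string.py is shadowing the standard-library module
</code></pre>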
| 0
|
2016-09-09T12:30:58Z
|
[
"python",
"anaconda",
"urllib"
] |
Fill_in_blank_game. Unable to select level
| 39,374,809
|
<p>Whenever I type in one of the level options the console prints</p>
<p>"C-3PO: That entry will not compute sir." and then prmpts again.</p>
<p>So in other words I am unable to select the level of the game. I enter in Padawan for example, which is one of the selections and instead of showing the paragraph with blanks to the code runs the while loop of chosen_level not in choices.</p>
<pre><code>padawan_level = "Luke Skywalker is the son of \n Darth __1__! \n Boba Fett is the son of __2__ Fett. \nis type in:\n Stormtroopers were previously known as __3__ troopers\n Yoda was a Jedi __4__ \n"padawan_inputs = ['Darth Vader','Jango','Clone','Master']
jedi_level = "\n Han Solo owed money to Jabba the __1__ \n Princess Leia\'s last name is __2__.\n Han Solo's Ship is called the __3__ Falcon.\n Boba Fett's ship was called the __4__ 1.\n"jedi_inputs = ['Hutt','Organa','Millennium','Slave']
master_level = "Princess Leia's home planet was __1__.\n Darth Vader was born on the planet__2__.\n Senetor Palpatine was also known as Darth __3__.\n Luke Skywalker was raised by his unlce __4__.\n" master_inputs = ['Alderaan','Tatooine','Sidious','Owen']
def select_level():
"""Prompts user for level'"""
prompt = "Please select a game difficulty by typing it in!\n"
prompt += "Possible choices include Padawan, Jedi, and Master.\n"
choices = {x:"Padawan" for x in ("Padawan", '1',)}
choices.update({y:"Jedi" for y in ("Jedi", '2',)})
choices.update({z:"Master" for z in ("Master", '3')})
chosen_level = raw_input(prompt).lower()
while chosen_level not in choices:
print "C-3PO: That entry will not compute sir."
chosen_level = raw_input(prompt).lower()
print "C-3PO: You've selected " + str(choices[chosen_level]) + '!\n'
return choices[chosen_level]
def get_answers(level):
global padawan_level
global padawan_inputs
global jedi_level
global jedi_inputs
global master_level
global master_inputs
if level == 'Padawan':
return (padawan_level, padawan_inputs)
if level == 'Jedi':
return (jedi_level, jedi_inputs)
if level == 'Master':
return (master_level, master_inputs)
print "C-3PO: Error, try again."
raise ValueError
def ask_question(blank_game, blank_num, answer, max_try = 3):
trys_left = max_trys
to_replace = '__' + str(blank_num) + '__'
prompt = make_display(blank_game, to_replace, trys_left, max_trys)
user_guess = raw_input(prompt).lower()
while user_guess != answer.lower() and trys_left > 1:
trys_left -= 1
prompt = make_display(blank_game, to_replace, trys_left, max_trys)
user_guess = raw_input(prompt).lower()
if trys_left > 1:
print '\nCorrect!\n'
return (blank_game.replace(to_replace, answer), blank_num + 1)
else:
return (None, blank_num + 1)
def make_display(current_mode, to_replace, trys_left, max_trys):
"""Returns a string to user."""
prompt = "\nC-3PO: current data reads as such:\n{}\n\n"
prompt += "C-3PO: What should be in place of space {}?"
prompt = prompt.format(current_mode, to_replace)
if trys_left == max_trys:
return prompt
new_prompt = "Incorrect sir...Don't blame me. I'm an interpreter."
if trys_left > 1:
new_prompt += "Excuse me sir, but might I inquire as to what's going on? {} trys left!\n"
else:
new_prompt += "If I may say so, you only have {} try left!\n"
return new_prompt.format(trys_left) + prompt
def find_max_guess():
print "C-3PO: You have 4 guesses per question"
return 4
def play_game():
level = select_level()
blank_game, answers = get_answers(level)
max_guess = find_max_guess()
current_blank = 1
while current_blank <= len(answers):
blank_game, current_blank = ask_question(blank_game, current_blank, answers[current_blank - 1], max_guess)
if blank_game is None:
print "C-3PO: We're doomed."
return False
print blank_game + "\nOh, yes, that\'s very good, I like that.\n"
return True
play_game()
</code></pre>
<p><a href="http://i.stack.imgur.com/P7Ihk.jpg" rel="nofollow">Here is my code.</a></p>
| -2
|
2016-09-07T16:11:24Z
| 39,376,137
|
<p>You have a few problems with your code. First <code>padawan_inputs</code>, <code>jedi_inputs</code>, and <code>master_inputs</code> should each be on a new line. Next this segment:</p>
<pre><code>print "C-3PO: You've selected " + str(choices[chosen_level]) + '!\n'
return choices[chosen_level]
</code></pre>
<p>This should be indented so that it sits inside the <code>select_level</code> function. Also, in <code>ask_question</code> you get an error at <code>trys_left = max_trys</code> because <code>max_trys</code> isn't defined (the parameter is named <code>max_try</code>); make the two names match. On to your question: right now <code>choices</code> is <code>{'1': 'Padawan', 'Jedi': 'Jedi', '2': 'Jedi', 'Master': 'Master', 'Padawan': 'Padawan', '3': 'Master'}</code> and you check whether the user's lowercased input is a key of that dict:</p>
<pre><code>chosen_level = raw_input(prompt).lower()
while chosen_level not in choices:
print "C-3PO: That entry will not compute sir."
chosen_level = raw_input(prompt).lower()
</code></pre>
<p>None of the keys in <code>choices</code> are all-lowercase, so the lowercased input never matches. Either remove <code>lower()</code> or rebuild <code>choices</code> with lowercase keys. When I removed <code>lower()</code>, a level was selected:</p>
<pre><code>chosen_level = raw_input(prompt)
</code></pre>
| 0
|
2016-09-07T17:37:13Z
|
[
"python"
] |
How to return the Index when slicing Pandas Dataframe
| 39,374,871
|
<pre><code> df2= pd.DataFrame(df1.iloc[:, [n for n in random.sample(range(1, 7), 3)]])
</code></pre>
<blockquote>
<p>returns df1's rows and the selected columns, but with a generic index 0, 1, 2, 3, ... etc. instead
of the Datetime index of df1, which is what I want to keep. I tried:</p>
</blockquote>
<pre><code>df2=df1.copy(deep=True)
df2= pd.DataFrame(data=None, columns=df1.columns, index=df1.index)
df2= df1.iloc[:, [n for n in random.sample(range(1, 7), 3)]]
</code></pre>
<p>but it does not work... </p>
| 1
|
2016-09-07T16:14:18Z
| 39,374,913
|
<p>What about a slightly different approach?</p>
<pre><code>In [66]: df
Out[66]:
c1 c2 c3
2016-01-01 4 0 3
2016-01-02 2 3 2
2016-01-03 1 2 3
2016-01-04 3 3 0
2016-01-05 1 0 4
2016-01-06 1 1 1
2016-01-07 2 3 3
2016-01-08 2 2 2
2016-01-09 4 0 0
2016-01-10 1 1 0
2016-01-11 1 3 0
2016-01-12 4 3 1
2016-01-13 0 0 3
2016-01-14 3 1 0
2016-01-15 4 3 1
In [67]: rnd = df.sample(n=6)
In [68]: rnd.index
Out[68]: DatetimeIndex(['2016-01-04', '2016-01-03', '2016-01-12', '2016-01-02', '2016-01-01', '2016-01-13'], dtype='datetime64[ns]', freq=None)
In [69]: rnd
Out[69]:
c1 c2 c3
2016-01-04 3 3 0
2016-01-03 1 2 3
2016-01-12 4 3 1
2016-01-02 2 3 2
2016-01-01 4 0 3
2016-01-13 0 0 3
</code></pre>
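<p>And if you want to sample <em>columns</em> rather than rows, as the <code>iloc</code> expression in the question does, <code>sample</code> also takes an <code>axis</code> argument and keeps the DatetimeIndex intact:</p>
<pre><code>In [70]: df.sample(n=2, axis=1)  # two random columns, index untouched
</code></pre>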
| 0
|
2016-09-07T16:16:20Z
|
[
"python",
"pandas",
"indexing",
"dataframe",
"slice"
] |
How to return the Index when slicing Pandas Dataframe
| 39,374,871
|
<pre><code> df2= pd.DataFrame(df1.iloc[:, [n for n in random.sample(range(1, 7), 3)]])
</code></pre>
<blockquote>
<p>returns df1's rows and the selected columns, but with a generic index 0, 1, 2, 3, ... etc. instead
of the Datetime index of df1, which is what I want to keep. I tried:</p>
</blockquote>
<pre><code>df2=df1.copy(deep=True)
df2= pd.DataFrame(data=None, columns=df1.columns, index=df1.index)
df2= df1.iloc[:, [n for n in random.sample(range(1, 7), 3)]]
</code></pre>
<p>but it does not work... </p>
| 1
|
2016-09-07T16:14:18Z
| 39,375,114
|
<p>Try this:</p>
<pre><code>df2 = pd.DataFrame(df1.ix[:,random.sample(range(1,7),3)])
</code></pre>
<p>This will give the result you wanted. </p>
<pre><code>df1
Out[130]:
one two
d NaN 4.0
b 2.0 2.0
c 3.0 3.0
a 1.0 1.0
df1.ix[:,random.sample(range(0,2),2)]
Out[131]:
two one
d 4.0 NaN
b 2.0 2.0
c 3.0 3.0
a 1.0 1.0
</code></pre>
<p>This will randomly sample your columns and returns them in df2. This will return all the rows of the randomly sampled columns with index as was in df1.</p>
<p>Edit:
As MaxU suggested, you can simply use:</p>
<pre><code>df2 = df1.ix[:, random.sample(df1.columns.tolist(), 3)].copy()
</code></pre>
<p>instead of calling the pd.DataFrame() constructor.</p>
| 2
|
2016-09-07T16:28:18Z
|
[
"python",
"pandas",
"indexing",
"dataframe",
"slice"
] |
Python create list combination
| 39,374,928
|
<p>I have a dictionary containing elements and a sequence count for each element. I want to create a list of tuples combined from these elements.</p>
<p>Example</p>
<pre><code>Input: dictElement = {"PA":2,"PB":2}
Expected Output:
[('PB0', 'PA0'),
('PB0', 'PA1'),
('PB1', 'PA0'),
('PB1', 'PA1')]
</code></pre>
<hr>
<pre><code>Input: dictElement = {"PA":2,"PB":2,"PC":3}
Expected Output:
[('PC0', 'PB0', 'PA0'),
('PC0', 'PB0', 'PA1'),
('PC0', 'PB1', 'PA0'),
('PC0', 'PB1', 'PA1'),
('PC1', 'PB0', 'PA0'),
('PC1', 'PB0', 'PA1'),
('PC1', 'PB1', 'PA0'),
('PC1', 'PB1', 'PA1'),
('PC2', 'PB0', 'PA0'),
('PC2', 'PB0', 'PA1'),
('PC2', 'PB1', 'PA0'),
('PC2', 'PB1', 'PA1')]
</code></pre>
<p>Note: Number elements of dictionary can be change</p>
| -5
|
2016-09-07T16:16:45Z
| 39,375,044
|
<p>You haven't specified in what order the keys of the dictionary should be processed in the output. If one assumes reverse sorting order, you can do this trivially with <code>itertools.product()</code>:</p>
<pre><code>from itertools import product
combinations = product(*(['{0}{1}'.format(v, i) for i in range(dictElement[v])]
for v in sorted(dictElement, reverse=True))
</code></pre>
<p>Demo:</p>
<pre><code>>>> from itertools import product
>>> dictElement = {"PA":2,"PB":2,"PC":3}
>>> combinations = product(*(['{0}{1}'.format(v, i) for i in range(dictElement[v])]
... for v in sorted(dictElement, reverse=True)))
>>> for combo in combinations:
... print(combo)
...
('PC0', 'PB0', 'PA0')
('PC0', 'PB0', 'PA1')
('PC0', 'PB1', 'PA0')
('PC0', 'PB1', 'PA1')
('PC1', 'PB0', 'PA0')
('PC1', 'PB0', 'PA1')
('PC1', 'PB1', 'PA0')
('PC1', 'PB1', 'PA1')
('PC2', 'PB0', 'PA0')
('PC2', 'PB0', 'PA1')
('PC2', 'PB1', 'PA0')
('PC2', 'PB1', 'PA1')
</code></pre>
| 1
|
2016-09-07T16:24:01Z
|
[
"python",
"combinations"
] |
Create dataframe from specific column
| 39,374,995
|
<p>I am trying to create a dataframe in Pandas from the <code>AB</code> column in my csv file. (AB is the 27th column).</p>
<p>I am using this line:</p>
<pre><code>df = pd.read_csv(filename, error_bad_lines = False, usecols = [27])
</code></pre>
<p>... which is resulting in this error:</p>
<pre><code>ValueError: Usecols do not match names.
</code></pre>
<p>I'm very new to Pandas, could someone point out what i'm doing wrong to me?</p>
| 1
|
2016-09-07T16:20:39Z
| 39,375,132
|
<p><code>usecols</code> is matching the column names in your csv file rather than the column numbers here.
In your case it should be <code>usecols=['AB']</code> rather than <code>usecols=[27]</code>; that is the reason for the error stating that usecols do not match names.</p>
| -1
|
2016-09-07T16:29:13Z
|
[
"python",
"pandas"
] |
Create dataframe from specific column
| 39,374,995
|
<p>I am trying to create a dataframe in Pandas from the <code>AB</code> column in my csv file. (AB is the 27th column).</p>
<p>I am using this line:</p>
<pre><code>df = pd.read_csv(filename, error_bad_lines = False, usecols = [27])
</code></pre>
<p>... which is resulting in this error:</p>
<pre><code>ValueError: Usecols do not match names.
</code></pre>
<p>I'm very new to Pandas, could someone point out what i'm doing wrong to me?</p>
| 1
|
2016-09-07T16:20:39Z
| 39,376,419
|
<p>Here is a small demo:</p>
<p>CSV file (without a header, i.e. there are NO column names):</p>
<pre><code>1,2,3,4,5,6,7,8,9,10
11,12,13,14,15,16,17,18,19,20
</code></pre>
<p>We are going to read only 8-<code>th</code> column:</p>
<pre><code>In [1]: fn = r'D:\temp\.data\1.csv'
In [2]: df = pd.read_csv(fn, header=None, usecols=[7], names=['col8'])
In [3]: df
Out[3]:
col8
0 8
1 18
</code></pre>
<p>PS pay attention at <code>header=None, usecols=[7], names=['col8']</code></p>
<p>If you don't use <code>header=None</code> and <code>names</code> parameters, the first row will be used as a header:</p>
<pre><code>In [6]: df = pd.read_csv(fn, usecols=[7])
In [7]: df
Out[7]:
8
0 18
In [8]: df.columns
Out[8]: Index(['8'], dtype='object')
</code></pre>
<p>and if we want to read only the last 10-<code>th</code> column:</p>
<pre><code>In [9]: df = pd.read_csv(fn, usecols=[10])
... skipped ...
ValueError: Usecols do not match names.
</code></pre>
<p>because pandas counts columns starting from <code>0</code>, so we have to do it this way:</p>
<pre><code>In [12]: df = pd.read_csv(fn, usecols=[9], names=['col10'])
In [13]: df
Out[13]:
col10
0 10
1 20
</code></pre>
| 2
|
2016-09-07T17:55:07Z
|
[
"python",
"pandas"
] |
using python reserved keyword as variable name
| 39,375,023
|
<p>I'm trying to send an SMS using a web service; this is what the web service documentation suggests:</p>
<pre><code>response = client.service.SendSMS( fromNum = '09999999' ,
toNum = '0666666666666',
messageContent = 'test',
messageType = 'normal',
user = 'myusername',
pass = '123456' ,
)
</code></pre>
<p>To be fair, they don't have documentation for Python, only PHP/ASP, so I've converted this from their PHP sample. But as some may know, <code>pass</code> is a reserved keyword in Python,</p>
<p>so I can't have a variable named <code>pass</code>; I get a syntax error!</p>
<p>Is there a way around this, or should I switch to another web service? I wish we could put parameter names in quotation marks or something.</p>
| 1
|
2016-09-07T16:22:29Z
| 39,375,095
|
<p>Maybe try it like this:</p>
<pre><code>sms_kwargs = {
    'fromNum': '09999999',
    'toNum': '0666666666666',
    'messageContent': 'test',
    'messageType': 'normal',
    'user': 'myusername',
    'pass': '123456'
}
response = client.service.SendSMS(**sms_kwargs)
</code></pre>
| 5
|
2016-09-07T16:27:18Z
|
[
"python",
"django",
"web-services",
"python-3.x"
] |
using python reserved keyword as variable name
| 39,375,023
|
<p>I'm trying to send an SMS using a web service; this is what the web service documentation suggests:</p>
<pre><code>response = client.service.SendSMS( fromNum = '09999999' ,
toNum = '0666666666666',
messageContent = 'test',
messageType = 'normal',
user = 'myusername',
pass = '123456' ,
)
</code></pre>
<p>To be fair, they don't have documentation for Python, only PHP/ASP, so I've converted this from their PHP sample. But as some may know, <code>pass</code> is a reserved keyword in Python,</p>
<p>so I can't have a variable named <code>pass</code>; I get a syntax error!</p>
<p>Is there a way around this, or should I switch to another web service? I wish we could put parameter names in quotation marks or something.</p>
| 1
|
2016-09-07T16:22:29Z
| 39,375,100
|
<p>You can pass in arbitrary strings as keyword arguments using the <code>**dictionary</code> call syntax:</p>
<pre><code>response = client.service.SendSMS( fromNum = '09999999' ,
toNum = '0666666666666',
messageContent = 'test',
messageType = 'normal',
user = 'myusername',
**{'pass': '123456'}
)
</code></pre>
<p>You can move all the keyword arguments into a dictionary if you want to, and assign that dictionary to a variable before applying.</p>
| 7
|
2016-09-07T16:27:34Z
|
[
"python",
"django",
"web-services",
"python-3.x"
] |
How to multiply pandas dataframes and preserve row keys
| 39,375,069
|
<p>I'm struggling with multiplying dataframes together and preserving the row keys.</p>
<p>I have two files, call them say F1 and F2. F1 has a multi-part group key (g1,g2,g3), a two-part Type key (k1,k2) and some weights (r1,r2). F2 has a series of values for each Type key.</p>
<p>I'd like to join them on k1 and k2, and multiply r1 and r2 for each n.</p>
<p>I'm thinking that groupby and dataframe multiply should work but I can't see how to do it. The only thing I've got to work is merge and then multiply column by column, but it's super-slow.</p>
<pre><code>F1
g1 g2 g3 k1 k2 r1 r2
A A A A A 1 2
A A A A B 3 4
A A B A B 2 3
F2
k1 k2 n r1 r2
A A 1 0 1
A A 2 1 1
A A 3 1 0
A B 1 3 4
A B 2 4 4
A B 3 4 3
A C 1 1 1
A C 3 4 5
A C 2 3 4
Result
g1 g2 g3 k1 k2 n r1 r2
A A A A A 1 0 2
A A A A A 2 1 2
A A A A A 3 1 0
A A A A B 1 9 16
A A A A B 2 12 16
A A A A B 3 12 12
A A B A B 1 6 12
A A B A B 2 8 12
A A B A B 3 8 9
</code></pre>
<p>Thanks</p>
| 3
|
2016-09-07T16:25:39Z
| 39,375,425
|
<pre><code># Join on the two-part Type key; pandas suffixes the overlapping
# value columns as r1_x/r1_y and r2_x/r2_y
mrg = F1.merge(F2, on=['k1', 'k2'])
# Multiply each pair of suffixed columns row-wise
mrg['r1'] = mrg.filter(like='r1').prod(1)
mrg['r2'] = mrg.filter(like='r2').prod(1)
# Drop the intermediate suffixed columns
drops = ['r1_x', 'r1_y', 'r2_x', 'r2_y']
mrg.drop(drops, axis=1)
</code></pre>
<p><a href="http://i.stack.imgur.com/NR963.png" rel="nofollow"><img src="http://i.stack.imgur.com/NR963.png" alt="enter image description here"></a></p>
| 2
|
2016-09-07T16:48:52Z
|
[
"python",
"pandas",
"dataframe"
] |
Writing to a specific dictionary inside a json file Python
| 39,375,090
|
<p>I have a .json file with some information </p>
<pre><code>{"items": [{"phone": "testp"}, {"phone": "Test2"}]}
</code></pre>
<p>I want to add another dictionary into the next available slot in the array with some more information. I have tried many ways, however none of them work. Has anyone got any ideas?</p>
| -1
|
2016-09-07T16:27:04Z
| 39,375,202
|
<pre><code>dictionary["items"].append(new_dictionary)
</code></pre>
<p>You access the key <code>"items"</code> in your dictionary, whose value is a list of dictionaries; to that list you simply append your new dictionary.</p>
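<p>For example, a quick sketch with the data from the question and a hypothetical new record:</p>
<pre><code>dictionary = {"items": [{"phone": "testp"}, {"phone": "Test2"}]}
dictionary["items"].append({"phone": "Test3"})
# {'items': [{'phone': 'testp'}, {'phone': 'Test2'}, {'phone': 'Test3'}]}
</code></pre>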
| 0
|
2016-09-07T16:33:45Z
|
[
"python",
"json",
"dictionary"
] |
Writing to a specific dictionary inside a json file Python
| 39,375,090
|
<p>I have a .json file with some information </p>
<pre><code>{"items": [{"phone": "testp"}, {"phone": "Test2"}]}
</code></pre>
<p>I want to add another dictionary into the next available slot in the array with some more information. I have tried many ways, however none of them work. Has anyone got any ideas?</p>
| -1
|
2016-09-07T16:27:04Z
| 39,375,206
|
<p>You can read the JSON into a variable using the <code>json.load</code> function,</p>
<p>and then append to the array using:</p>
<pre><code>myVar["items"].append(dictVar)
</code></pre>
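<p>A minimal round-trip sketch (the filename <code>data.json</code> and the new record are assumptions here):</p>
<pre><code>import json

with open('data.json') as f:
    myVar = json.load(f)

# Append the new record to the list stored under "items"
myVar['items'].append({'phone': 'Test3'})

with open('data.json', 'w') as f:
    json.dump(myVar, f)
</code></pre>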
| 0
|
2016-09-07T16:34:08Z
|
[
"python",
"json",
"dictionary"
] |
can't workout why getting a type error on __init__
| 39,375,092
|
<p>Hope someone can help me...</p>
<p>Have the following:</p>
<p>smartsheet_test.py</p>
<pre><code>from pfcms.content import Content
def main():
ss = Content()
ss.smartsheet()
if __name__ == "__main__":
main()
</code></pre>
<p>content.py</p>
<pre><code>import smartsheet as ss
class Content:
""" PFCMS SmartSheet utilities """
def __init__(self):
self.token = 'xxxxxxxxxxxxxxxxxxx'
def smartsheet(self):
smartsheet = ss.Smartsheet(self.token)
</code></pre>
<p>however when I execute the code I get:</p>
<pre><code>python -d smartsheet_test.py
Traceback (most recent call last):
File "smartsheet_test.py", line 8, in <module>
main()
File "smartsheet_test.py", line 5, in main
ss.smartsheet()
File "/xxxxx/pfcms/pfcms/content.py", line 10, in smartsheet
smartsheet = ss.Smartsheet(self.token)
TypeError: __init__() takes exactly 1 argument (2 given)
</code></pre>
<p>Is <code>self</code> being passed into <code>ss.Smartsheet(self.token)</code> somehow? All I can see is that I'm passing the argument <code>self.token</code>. My knowledge of Python isn't too deep at this point. Any help greatly appreciated.</p>
<p>Thanks
Alex </p>
| 1
|
2016-09-07T16:27:07Z
| 39,375,141
|
<p>You have (or had) a file called <code>smartsheet.py</code> in your current working directory. Delete or rename that file, and delete any related <code>.pyc</code> file.</p>
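<p>A quick way to confirm what Python is actually importing (a generic check, not specific to the Smartsheet SDK):</p>
<pre><code>import smartsheet
# A path inside your project rather than site-packages means your
# local smartsheet.py is shadowing the installed package
print(smartsheet.__file__)
</code></pre>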
| 1
|
2016-09-07T16:29:34Z
|
[
"python",
"class",
"smartsheet-api"
] |
Repeat list if index range is out of bounds
| 39,375,122
|
<p>I have a Python list</p>
<pre><code>a = [1, 2, 3, 4]
</code></pre>
<p>and I'd like to get a range of indices such that if I select the indices <code>0</code> through <code>N</code>, I'm getting (for <code>N=10</code>) the repeated</p>
<pre><code>[1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>
<p>I could of course repeat the list via <code>(int(float(N) / len(a) - 0.5) + 1) * a</code> first and select the range <code>[0:10]</code> out of that, but that feels rather clumsy.</p>
<p>Any hints?</p>
| 2
|
2016-09-07T16:28:44Z
| 39,375,166
|
<p>You can simply use the modulo operator when accessing the list, i.e.</p>
<pre><code>a[i % len(a)]
</code></pre>
<p>This will give you the same result, but doesn't require actually storing the redundant elements.</p>
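<p>If you do want the first <code>N</code> elements as a concrete list, the same idea works in a comprehension (a small sketch for <code>N = 10</code>):</p>
<pre><code>a = [1, 2, 3, 4]
N = 10
repeated = [a[i % len(a)] for i in range(N)]
# [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>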
| 6
|
2016-09-07T16:31:20Z
|
[
"python"
] |
Repeat list if index range is out of bounds
| 39,375,122
|
<p>I have a Python list</p>
<pre><code>a = [1, 2, 3, 4]
</code></pre>
<p>and I'd like to get a range of indices such that if I select the indices <code>0</code> through <code>N</code>, I'm getting (for <code>N=10</code>) the repeated</p>
<pre><code>[1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>
<p>I could of course repeat the list via <code>(int(float(N) / len(a) - 0.5) + 1) * a</code> first and select the range <code>[0:10]</code> out of that, but that feels rather clumsy.</p>
<p>Any hints?</p>
| 2
|
2016-09-07T16:28:44Z
| 39,375,175
|
<p>You can use <a href="https://docs.python.org/3.6/library/itertools.html#itertools.cycle" rel="nofollow"><code>itertools.cycle</code></a> and <a href="https://docs.python.org/3.6/library/itertools.html#itertools.islice" rel="nofollow"><code>itertools.islice</code></a>:</p>
<pre><code>from itertools import cycle, islice
my_list = list(islice(cycle(my_list), 10))
</code></pre>
<p>Note that if you just want to iterate over this once, you should avoid calling <code>list</code> and just iterate over the iterable, since this avoids allocating repeated elements.</p>
| 4
|
2016-09-07T16:31:45Z
|
[
"python"
] |
Repeat list if index range is out of bounds
| 39,375,122
|
<p>I have a Python list</p>
<pre><code>a = [1, 2, 3, 4]
</code></pre>
<p>and I'd like to get a range of indices such that if I select the indices <code>0</code> through <code>N</code>, I'm getting (for <code>N=10</code>) the repeated</p>
<pre><code>[1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>
<p>I could of course repeat the list via <code>(int(float(N) / len(a) - 0.5) + 1) * a</code> first and select the range <code>[0:10]</code> out of that, but that feels rather clumsy.</p>
<p>Any hints?</p>
| 2
|
2016-09-07T16:28:44Z
| 39,375,447
|
<pre><code>>>> a = [1, 2, 3, 4]
>>> (a*3)[:-2]
[1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>
| 0
|
2016-09-07T16:50:02Z
|
[
"python"
] |
Repeat list if index range is out of bounds
| 39,375,122
|
<p>I have a Python list</p>
<pre><code>a = [1, 2, 3, 4]
</code></pre>
<p>and I'd like to get a range of indices such that if I select the indices <code>0</code> through <code>N</code>, I'm getting (for <code>N=10</code>) the repeated</p>
<pre><code>[1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>
<p>I could of course repeat the list via <code>(int(float(N) / len(a) - 0.5) + 1) * a</code> first and select the range <code>[0:10]</code> out of that, but that feels rather clumsy.</p>
<p>Any hints?</p>
| 2
|
2016-09-07T16:28:44Z
| 39,375,520
|
<p>Thought I would offer a solution using the <code>*</code> operator for lists.</p>
<pre><code>def repeat_iterable(a, N):
    # Floor division keeps factor an int (works in Python 2 and 3)
    factor = N // len(a) + 1
    repeated_list = a * factor
    return repeated_list[:N]
</code></pre>
<p><strong>Sample Output:</strong></p>
<pre><code>>>> print repeat_iterable([1, 2, 3, 4], 10)
[1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
>>> print repeat_iterable([1, 2, 3, 4], 3)
[1, 2, 3]
>>> print repeat_iterable([1, 2, 3, 4], 0)
[]
>>> print repeat_iterable([1, 2, 3, 4], 14)
[1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>
| 1
|
2016-09-07T16:54:52Z
|
[
"python"
] |
Repeat list if index range is out of bounds
| 39,375,122
|
<p>I have a Python list</p>
<pre><code>a = [1, 2, 3, 4]
</code></pre>
<p>and I'd like to get a range of indices such that if I select the indices <code>0</code> through <code>N</code>, I'm getting (for <code>N=10</code>) the repeated</p>
<pre><code>[1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>
<p>I could of course repeat the list via <code>(int(float(N) / len(a) - 0.5) + 1) * a</code> first and select the range <code>[0:10]</code> out of that, but that feels rather clumsy.</p>
<p>Any hints?</p>
| 2
|
2016-09-07T16:28:44Z
| 39,376,351
|
<p>How about faking it? Python is good at faking.</p>
<pre><code>class InfiniteList(object):
def __init__(self, data):
self.data = data
def __getitem__(self, i):
return self.data[i % len(self.data)]
x = InfiniteList([10, 20, 30])
x[0] # 10
x[34] # 20
</code></pre>
<p>Of course, you could add <code>__iter__</code>, support for slices etc. You could also add a limit (N), but this is the general idea.</p>
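<p>For instance, a minimal sketch of adding <code>__iter__</code> so the object also works with <code>itertools.islice</code>:</p>
<pre><code>from itertools import cycle, islice

class InfiniteList(object):
    def __init__(self, data):
        self.data = data
    def __getitem__(self, i):
        return self.data[i % len(self.data)]
    def __iter__(self):
        # Endless iteration over the underlying data
        return cycle(self.data)

x = InfiniteList([10, 20, 30])
list(islice(x, 7))  # [10, 20, 30, 10, 20, 30, 10]
</code></pre>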
| 0
|
2016-09-07T17:51:41Z
|
[
"python"
] |
Repeat list if index range is out of bounds
| 39,375,122
|
<p>I have a Python list</p>
<pre><code>a = [1, 2, 3, 4]
</code></pre>
<p>and I'd like to get a range of indices such that if I select the indices <code>0</code> through <code>N</code>, I'm getting (for <code>N=10</code>) the repeated</p>
<pre><code>[1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
</code></pre>
<p>I could of course repeat the list via <code>(int(float(N) / len(a) - 0.5) + 1) * a</code> first and select the range <code>[0:10]</code> out of that, but that feels rather clumsy.</p>
<p>Any hints?</p>
| 2
|
2016-09-07T16:28:44Z
| 39,384,792
|
<p>One easy way is to use modulo with list comprehensions à la </p>
<pre><code>a = [1, 2, 3, 4]
[a[k % len(a)] for k in range(10)]
</code></pre>
| 1
|
2016-09-08T07:19:33Z
|
[
"python"
] |
Is there a way to reduce the amount of code for RMSProp
| 39,375,173
|
<p>I have some code for a simple recurrent neural network and would like to know if there is a way for me to reduce the amount of code necessary for my update stage. The code I have so far:</p>
<pre><code>class RNN(object):
def __init__(self, data, hidden_size, eps=0.0001):
self.data = data
self.hidden_size = hidden_size
self.weights_hidden = np.random.rand(hidden_size, hidden_size) * 0.1 # W
self.weights_input = np.random.rand(hidden_size, len(data[0])) * 0.1 # U
self.weights_output = np.random.rand(len(data[0]), hidden_size) * 0.1 # V
self.bias_hidden = np.array([np.random.rand(hidden_size)]).T # b
self.bias_output = np.array([np.random.rand(len(data[0]))]).T # c
self.cache_w_hid, self.cache_w_in, self.cache_w_out = 0, 0, 0
self.cache_b_hid, self.cache_b_out = 0, 0
self.eps = eps
def train(self, seq_length, epochs, eta, decay_rate=0.9, learning_decay=0.0):
# Other stuff
self.update(seq, epoch, eta, decay_rate, learning_decay)
# Other Stuff
def update(self, seq, epoch, eta, decay_rate, learning_decay):
"""Updates the network's weights and biases by applying gradient
descent using backpropagation through time and RMSPROP.
"""
delta_nabla_c, delta_nabla_b,\
delta_nabla_V, delta_nabla_W, delta_nabla_U = self.backward_pass(seq)
eta = eta*np.exp(-epoch*learning_decay)
# RMSProp
self.cache_w_hid = decay_rate * self.cache_w_hid \
+ (1 - decay_rate) * delta_nabla_W**2
self.weights_hidden -= eta * delta_nabla_W / (np.sqrt(self.cache_w_hid) + self.eps)
self.cache_w_in = decay_rate * self.cache_w_in \
+ (1 - decay_rate) * delta_nabla_U**2
self.weights_input -= eta * delta_nabla_U / (np.sqrt(self.cache_w_in) + self.eps)
self.cache_w_out = decay_rate * self.cache_w_out \
+ (1 - decay_rate) * delta_nabla_V**2
self.weights_output -= eta * delta_nabla_V / (np.sqrt(self.cache_w_out) + self.eps)
self.cache_b_hid = decay_rate * self.cache_b_hid \
+ (1 - decay_rate) * delta_nabla_b**2
self.bias_hidden -= eta * delta_nabla_b / (np.sqrt(self.cache_b_hid) + self.eps)
self.cache_b_out = decay_rate * self.cache_b_out \
+ (1 - decay_rate) * delta_nabla_c**2
self.bias_output -= eta * delta_nabla_c / (np.sqrt(self.cache_b_out) + self.eps)
</code></pre>
<p>For every variable under <code>#RMSProp</code> follows the update rule, namely:</p>
<pre><code>cache = decay_rate * cache + (1 - decay_rate) * dx**2
x += - learning_rate * dx / (np.sqrt(cache) + eps)
</code></pre>
<p>I have all the <code>cache_</code> variables declared, followed by <code>self.weight_</code> or <code>self.bias_</code>, and would like to have this written more compactly. I was looking at using <code>zip()</code>, but I'm not sure how to go about that.</p>
| 1
|
2016-09-07T16:31:39Z
| 39,376,556
|
<p>Judging from your question, I am guessing that you are trying to improve readability/elegance over any other kind of optimization here.</p>
<p>You can introduce a function to implement the update rule, then call it once for each variable. The trick here is that Python lets you access attributes by name, so you can pass in the name of your cache and weights attribute instead of the value. This will let you update the value for future passes:</p>
<pre><code>def update_rule(self, cache_attr, x_attr, decay_rate, learning_rate, dx):
cache = getattr(self, cache_attr)
cache = decay_rate * cache + (1 - decay_rate) * dx**2
setattr(self, cache_attr, cache)
x = getattr(self, x_attr)
x += - learning_rate * dx / (np.sqrt(cache) + self.eps)
setattr(self, x_attr, x)
def update(self, seq, epoch, eta, decay_rate, learning_decay):
"""Updates the network's weights and biases by applying gradient
descent using backpropagation through time and RMSPROP.
"""
delta_nabla_c, delta_nabla_b,\
delta_nabla_V, delta_nabla_W, delta_nabla_U = self.backward_pass(seq)
eta = eta*np.exp(-epoch*learning_decay)
self.update_rule('cache_w_hid', 'weights_hidden', decay_rate, eta, delta_nabla_W)
self.update_rule('cache_w_in', 'weights_input', decay_rate, eta, delta_nabla_U)
self.update_rule('cache_w_out', 'weights_output', decay_rate, eta, delta_nabla_V)
self.update_rule('cache_b_hid', 'bias_hidden', decay_rate, eta, delta_nabla_b)
self.update_rule('cache_b_out', 'bias_output', decay_rate, eta, delta_nabla_c)
</code></pre>
<p>In fact, you can save additional parameters and avoid exposing what is basically a private method by putting <code>update_rule</code> into <code>update</code>. This will expose the namespace of <code>update</code> to <code>update_rule</code> when it is called, so you do not have to pass in <code>decay_rate</code> and <code>learning_rate</code>:</p>
<pre><code>def update(self, seq, epoch, eta, decay_rate, learning_decay):
"""Updates the network's weights and biases by applying gradient
descent using backpropagation through time and RMSPROP.
"""
def update_rule(cache_attr, x_attr, dx):
cache = getattr(self, cache_attr)
cache = decay_rate * cache + (1 - decay_rate) * dx**2
setattr(self, cache_attr, cache)
x = getattr(self, x_attr)
x += - eta * dx / (np.sqrt(cache) + self.eps)
setattr(self, x_attr, x)
delta_nabla_c, delta_nabla_b,\
delta_nabla_V, delta_nabla_W, delta_nabla_U = self.backward_pass(seq)
eta = eta*np.exp(-epoch*learning_decay)
update_rule('cache_w_hid', 'weights_hidden', delta_nabla_W)
update_rule('cache_w_in', 'weights_input', delta_nabla_U)
update_rule('cache_w_out', 'weights_output', delta_nabla_V)
update_rule('cache_b_hid', 'bias_hidden', delta_nabla_b)
update_rule('cache_b_out', 'bias_output', delta_nabla_c)
</code></pre>
<p>Finally, if you really wanted, you could use <code>zip</code> to put the calls to <code>update_rule</code> into a loop. Notice that for this version, the order of the calls has been changed to match the order of the values returned by <code>self.backward_pass</code>. Personally I would not use this last version unless you really had a lot of updates to do because it is starting to look obfuscated in addition to the fact that it is very sensitive to the result of <code>backward_pass</code>.</p>
<pre><code>def update(self, seq, epoch, eta, decay_rate, learning_decay):
"""Updates the network's weights and biases by applying gradient
descent using backpropagation through time and RMSPROP.
"""
def update_rule(cache_attr, x_attr, dx):
cache = getattr(self, cache_attr)
cache = decay_rate * cache + (1 - decay_rate) * dx**2
setattr(self, cache_attr, cache)
x = getattr(self, x_attr)
x += - eta * dx / (np.sqrt(cache) + self.eps)
setattr(self, x_attr, x)
dx = self.backward_pass(seq)
eta = eta*np.exp(-epoch*learning_decay)
cache_attrs = ('cache_b_out', 'cache_b_hid', 'cache_w_out', 'cache_w_hid', 'cache_w_in')
x_attrs = ('bias_output', 'bias_hidden', 'weights_output', 'weights_hidden', 'weights_input')
for args in zip(cache_attrs, x_attrs, dx):
update_rule(*args)
</code></pre>
| 1
|
2016-09-07T18:05:16Z
|
[
"python"
] |
Python netcdf plot data onto grid
| 39,375,227
|
<p>I am using Lat, Lon data and would like to average all the sample_data within a grid cell (say 1km x 1km) uniformly across the whole area, and then plot it similarly to this post, but with a basemap. I'm a bit stuck on where to start:
<a href="http://stackoverflow.com/questions/25071968/heatmap-with-text-in-each-cell-with-matplotlibs-pyplot/25074150#25074150">Heatmap with text in each cell with matplotlib's pyplot</a></p>
<p>The code below plots values through each time point, and I'd like to average the data within defined grid squares across the whole data area at each time point, and plot the average value onto the basemap with a grid at set time intervals (ie. make a set of images for a timelapse/movie). </p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from netCDF4 import Dataset
import matplotlib.cm as cm
data = Dataset(netcdf_data,'r')
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
time = data.variables['time'][:]
fig = plt.figure(figsize=(1800,1200))
m = Basemap(projection='ortho',lon_0=5,lat_0=35,resolution='l')
m.drawcoastlines()
for value in range(0,len(sample_data)):
m.plot(lat[:,value], lon[:,value], alpha=1, latlon=True)
plt.show()
</code></pre>
| 0
|
2016-09-07T16:35:34Z
| 39,376,054
|
<p>Well, you can get the maximum and minimum of latitude and longitude. </p>
<p>And then get the age of <code>sample_data</code> at a particular time <code>t</code> using:</p>
<pre><code>import numpy

# Slice out the grid box (index bounds derived from the lat/lon extremes)
data_within_box = sample_data[minLat:maxLat, minLon:maxLon, :, t]
avg_age = numpy.average(data_within_box)
</code></pre>
| 0
|
2016-09-07T17:31:26Z
|
[
"python",
"matplotlib",
"grid",
"netcdf"
] |
Python netcdf plot data onto grid
| 39,375,227
|
<p>I am using Lat, Lon data and would like to average all the sample_data within a grid cell (say 1km x 1km) uniformly across the whole area, and then plot it similarly to this post, but with a basemap. I'm a bit stuck on where to start:
<a href="http://stackoverflow.com/questions/25071968/heatmap-with-text-in-each-cell-with-matplotlibs-pyplot/25074150#25074150">Heatmap with text in each cell with matplotlib's pyplot</a></p>
<p>The code below plots values through each time point, and I'd like to average the data within defined grid squares across the whole data area at each time point, and plot the average value onto the basemap with a grid at set time intervals (ie. make a set of images for a timelapse/movie). </p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from netCDF4 import Dataset
import matplotlib.cm as cm
data = Dataset(netcdf_data,'r')
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
time = data.variables['time'][:]
fig = plt.figure(figsize=(1800,1200))
m = Basemap(projection='ortho',lon_0=5,lat_0=35,resolution='l')
m.drawcoastlines()
for value in range(0,len(sample_data)):
m.plot(lat[:,value], lon[:,value], alpha=1, latlon=True)
plt.show()
</code></pre>
| 0
|
2016-09-07T16:35:34Z
| 40,142,235
|
<p>This is a crude way of doing it:</p>
<pre><code>lat_1 = 58
lat_2 = 60
lon_1 = 2
lon_2 = 5
size = 0
age2 = 0
for parts in range(10):
    for length in range(len(lon)):
        if lon[length, parts] < lon_2:
            if lon[length, parts] > lon_1:
                if lat[length, parts] > lat_1:
                    if lat[length, parts] < lat_2:
                        # Use a separate name so the age array is not overwritten
                        this_age = age[length, parts]
                        size = size + 1
                        age2 = this_age + age2
mean = age2/size
</code></pre>
| 0
|
2016-10-19T22:01:05Z
|
[
"python",
"matplotlib",
"grid",
"netcdf"
] |
Run a .bat file from a Python CGI file with Apache Webserver
| 39,375,244
|
<p>This might be really easy. But its just not working for me. </p>
<p>I have a .bat file I would like to run, which performs stuff on the Server, and should send an email with an Attachement.</p>
<p>The .bat file works fine, it sends the email with the log and everything. </p>
<p>Now I would like to run that file from a Webserver. So that I can click on an HTML form Button, and it executes. </p>
<p>I have installed Apache, Python 2.7 for it. </p>
<p>I have configured Apache to allow cgi files, and It works when I put a file as index.py with following code.</p>
<p>But when I press the Submit button it goes through, but the .bat files is not being executed. Help! :)</p>
<p>Is there another way I can run a .bat file to do stuff on my server from a Webserver maybe? thank you in beforehand. </p>
<p>I tried the action in the form to direct to a .py and .cgi file... don't get it to work</p>
<p>Below the code I a have been using. </p>
<pre><code>#!/Python27/python
#!/usr/bin/env python
import cgi
import cgitb; cgitb.enable()
print "Content-type: text/html"
print
print "<html><head>"
print "<form action='../cgi-bin/send_email.py'>"
print "<input type='submit' value='Submit'>"
print "</form>"
</code></pre>
<p>send_email.py looks like this. </p>
<pre><code>#!/Python27/python
#!/usr/bin/env python
import cgi
import cgitb; cgitb.enable()
from subprocess import Popen
p = Popen("batch.bat", cwd=r"C:\Path\to\batchfolder")
stdout, stderr = p.communicate()
</code></pre>
| 2
|
2016-09-07T16:36:18Z
| 39,375,547
|
<p>You can invoke the batch file with <code>cmd.exe</code>:</p>
<pre><code>...
cmd = r'c:\Windows\System32\cmd.exe'
batDir = r'C:\Path\to\batchfolder'
batName = r'batch.bat'
p = Popen(r"{0} /C {1}\{2}".format(cmd,batDir,batName), cwd=batDir)
...
</code></pre>
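<p>Alternatively, passing the command as a list avoids building the quoted string by hand (a sketch using the same hypothetical paths):</p>
<pre><code>import os
from subprocess import Popen

batDir = r'C:\Path\to\batchfolder'
# cmd.exe /C runs the batch file and then exits
p = Popen(['cmd.exe', '/C', os.path.join(batDir, 'batch.bat')], cwd=batDir)
p.wait()
</code></pre>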
| 0
|
2016-09-07T16:56:15Z
|
[
"python",
"windows",
"apache",
"batch-file"
] |
In python, append dictionary value with each element in array
| 39,375,250
|
<p>Hi and thank you so much for your help!</p>
<p>I am sure this is a dumb question but I am trying to append a dictionary value with the elements within an array. Right now I can only get it to load the entire array of elements as one entry instead of separate values. Sorry if I am not explaining this well. Here is an example:</p>
<pre><code>array = [4,5,6]
dictionary['index'] = [1,2,3]
</code></pre>
<p>Here is what I am doing and it's wrong</p>
<pre><code>dictionary['index'].append(array)
</code></pre>
<p>It's wrong because if I inquire how many elements are in <code>dictionary['index'][1]</code>, it returns 1 instead of 3. Look here:</p>
<pre><code>print range(len(dictionary['index'][0]))
</code></pre>
<p>The answer is 3, that's 1, 2 and 3. However!</p>
<pre><code>print range(len(dictionary['index'][1]))
</code></pre>
<p>The answer is 1, [4,5,6]. I must be loading the array in incorrectly. Does anyone know what I am doing wrong? Thanks!</p>
| -2
|
2016-09-07T16:36:49Z
| 39,375,322
|
<p>If you want to append an entire list to an existing list, you can just use <code>extend()</code>:</p>
<pre><code>a = [4,5,6]
d = {'index': [1,2,3]}
d['index'].extend(a)
</code></pre>
<p>output:</p>
<pre><code>{'index': [1, 2, 3, 4, 5, 6]}
</code></pre>
<p>If you want to end up with a list containing two lists, [1,2,3] and [4,5,6], then you should consider how the dictionary value is initially created. You can do:</p>
<pre><code>d['index'] = [1,2,3]
d['index'] = [d['index']]
d['index'].append([4,5,6])
</code></pre>
<p>output:</p>
<pre><code>{'index': [[1, 2, 3], [4, 5, 6]]}
</code></pre>
| 3
|
2016-09-07T16:42:11Z
|
[
"python",
"arrays",
"dictionary",
"append"
] |
Training TensorFlow to predict a sum
| 39,375,283
|
<p>The TensorFlow provided examples are a little complicated for getting started, so I am trying to teach TensorFlow train a neural network to predict the sum of three binary digits. The network gets two of them as inputs; the third one is unknown. So an "optimal" network would guess that the sum will be the sum of the two known bits, plus 1/2 for the unknown bit. Let's say that the "loss" function is the square of the difference between the value predicted by the network and the actual value.</p>
<p>I have written code to generate the trials:</p>
<pre><code>import tensorflow as tf
import numpy as np
from random import randint
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_integer('batch_size', 5, 'Batch size. ')
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
flags.DEFINE_integer('dim1', 3, 'layer size')
flags.DEFINE_integer('training_epochs', 10, 'Number of passes through the main training loop')
def ezString(list):
#debugging code so I can see what is going on
listLength = len(list)
r = ''
for i in range(listLength):
value = list[i]
valueString = str(value)
r = r + ' '
r = r + valueString
return r
def generateTrial():
inputs = np.zeros(2, dtype=np.int)
for i in range(2):
inputs[i] = randint(0,1)
unknownInput = randint(0,1)
sum = 0
for j in range(2):
sum = sum + inputs[j]
sum = sum + unknownInput
inputTensor = tf.pack(inputs)
print 'inputs' + ezString(inputs)
print 'unknown ' + str(unknownInput)
print 'sum ' + str(sum)
print ''
return inputTensor, sum
def printTensor(tensor):
sh = tensor.get_shape()
print(sh)
def placeholder_inputs(size):
output_placeholder = tf.placeholder(tf.int32, shape=(size))
input_placeholder = tf.placeholder(tf.int32, shape=(size,
2))
return input_placeholder, output_placeholder
def fill_feed_dict(inputs_pl, output_pl):
print ('Filling feed dict')
inputs_placeholder, output_placeholder = placeholder_inputs(FLAGS.batch_size)
inputs = []
outputs = []
for i in range(FLAGS.batch_size):
input, output = generateTrial()
inputTensor = tf.pack(input)
inputs.append(input)
outputs.append(output)
inputs_placeholder = tf.pack(inputs)
outputs_placeholder = tf.pack(outputs)
def run_training():
input_placeholder, output_placeholder = placeholder_inputs(FLAGS.batch_size)
fill_feed_dict(input_placeholder, output_placeholder)
printTensor(input_placeholder)
printTensor(output_placeholder)
run_training()
</code></pre>
<p>The output suggests that this much is working:</p>
<pre><code>Filling feed dict
inputs 1 0
unknown 0
sum 1
inputs 1 0
unknown 1
sum 2
inputs 0 1
unknown 1
sum 2
inputs 0 1
unknown 0
sum 1
inputs 0 0
unknown 0
sum 0
(5, 2)
(5,)
</code></pre>
<p>But I'm unclear on how I would finish it up. In particular, I need to define a loss function, and I also need to hook things up so that the outputs from my network get used to generate guesses for further training steps. Can anyone help?</p>
| 2
|
2016-09-07T16:39:27Z
| 39,376,780
|
<p>I'm not sure whether this code is what you wanted, but I hope you find it useful anyway. The mean squared error actually decreases over the iterations, though I haven't tested it for making predictions, so that part is up to you!</p>
<pre><code>import tensorflow as tf
import numpy as np
from random import randint
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_integer('batch_size', 50, 'Batch size.')
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
flags.DEFINE_integer('dim1', 3, 'layer size')
flags.DEFINE_integer('training_epochs', 10, 'Number of passes through the main training loop')
flags.DEFINE_integer('num_iters', 100, 'Number of iterations')
def ezString(list):
#debugging code so I can see what is going on
listLength = len(list)
r = ''
for i in range(listLength):
value = list[i]
valueString = str(value)
r = r + ' '
r = r + valueString
return r
def generateTrial():
inputs = np.zeros(2, dtype = np.float)
for i in range(2):
inputs[i] = randint(0, 1)
unknownInput = randint(0, 1)
sum = 0
for j in range(2):
sum = sum + inputs[j]
sum = sum + unknownInput
inputTensor = np.asarray(inputs)
return inputTensor, sum
def printTensor(tensor):
sh = tensor.get_shape()
print(sh)
def placeholder_inputs(size):
output_placeholder = tf.placeholder(tf.float32, shape=(size))
input_placeholder = tf.placeholder(tf.float32, shape=(size, 2))
return input_placeholder, output_placeholder
def fill_feed_dict(inputs_pl, output_pl):
inputs = []
outputs = []
for i in range(FLAGS.batch_size):
input, output = generateTrial()
inputs.append(input)
outputs.append(output)
return {inputs_pl: inputs, output_pl: outputs}
def loss(y, pred):
return tf.reduce_mean(tf.pow(y - pred, 2))
def NN(x, y, W1, b1, W2, b2):
layer1 = tf.add(tf.matmul(x, W1), b1)
layer1 = tf.nn.relu(layer1)
output = tf.add(tf.matmul(layer1, W2), b2)
return output, loss(y, output)
def get_params(dim_hidden):
with tf.variable_scope('nn_params'):
return tf.Variable(tf.truncated_normal([2, dim_hidden], stddev = 0.05)), tf.Variable(0.0, (dim_hidden)),\
tf.Variable(tf.truncated_normal([dim_hidden, 1], stddev = 0.05)), tf.Variable(0.0, 1)
def run_training():
input_placeholder, output_placeholder = placeholder_inputs(FLAGS.batch_size)
W1, b1, W2, b2 = get_params(FLAGS.dim1)
pred, loss = NN(input_placeholder, output_placeholder, W1, b1, W2, b2)
optm = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(loss)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for iters in range(FLAGS.num_iters):
l, _ = sess.run([loss, optm], feed_dict = fill_feed_dict(input_placeholder, output_placeholder))
print l, iters + 1
</code></pre>
| 1
|
2016-09-07T18:22:28Z
|
[
"python",
"tensorflow"
] |
Pandas optimization for multiple records
| 39,375,285
|
<p>I have a file with around 500K records.
Each record needs to be validated.
Records are de-duplicated and stored in a list:</p>
<pre><code>with open(filename) as f:
records = f.readlines()
</code></pre>
<p>The validation file I used is stored in a Pandas Dataframe
This DataFrame contains around 80K records and 9 columns (myfile.csv).</p>
<pre><code>filename = 'myfile.csv'
df = pd.read_csv(filename)
def check(df, destination):
try:
area_code = destination[:3]
office_code = destination[3:6]
subscriber_number = destination[6:]
if any(df['AREA_CODE'].astype(int) == area_code):
area_code_numbers = df[df['AREA_CODE'] == area_code]
if any(area_code_numbers['OFFICE_CODE'].astype(int) == office_code):
matching_records = area_code_numbers[area_code_numbers['OFFICE_CODE'].astype(int) == office_code]
start = subscriber_number >= matching_records['SUBSCRIBER_START']
end = subscriber_number <= matching_records['SUBSCRIBER_END']
# Perform intersection
record_found = matching_records[start & end]['LABEL'].to_string(index=False)
# We should return only 1 value
if len(record_found) > 0:
return record_found
else:
return 'INVALID_SUBSCRIBER'
else:
return 'INVALID_OFFICE_CODE'
else:
return 'INVALID_AREA_CODE'
except KeyError:
pass
except Exception:
pass
</code></pre>
<p>I'm looking for a way to improve the comparisons, as when I run it, it just hangs. If I run it with a small subset (10K) it works fine.
Not sure if there is a more efficient notation/recommendation.</p>
<pre><code>for record in records:
check(df, record)
</code></pre>
<p>Using MacOS 8GB/2.3 GHz Intel Core i7.</p>
<p>Running <code>cProfile.run</code> on the <strong>check</strong> function alone shows: </p>
<pre><code>4253 function calls (4199 primitive calls) in 0.017 seconds.
</code></pre>
<p>Hence I assume 500K will take around 2 1/2 hours</p>
| 0
|
2016-09-07T16:39:34Z
| 39,382,590
|
<p>While no data is available, consider this untested approach with a couple of left join merges of both data pieces and then run the validation steps. This would avoid any looping and run conditional logic across columns:</p>
<pre><code>import pandas as pd
import numpy as np
with open('RecordsValidate.txt') as f:
records = f.readlines()
print(records)
rdf = pd.DataFrame({'rcd_id': list(range(1,len(records)+1)),
'rcd_area_code': [int(rcd[:3]) for rcd in records],
'rcd_office_code': [int(rcd[3:6]) for rcd in records],
'rcd_subscriber_number': [rcd[6:] for rcd in records]})
filename = 'myfile.csv'
df = pd.read_csv(filename)
# VALIDATE AREA CODE
mrgdf = pd.merge(df, rdf, how='left', left_on=['AREA_CODE'], right_on=['rcd_area_code'])
mrgdf['RETURN'] = np.where(pd.isnull(mrgdf['rcd_id']), 'INVALID_AREA_CODE', np.nan)
mrgdf.drop([c for c in rdf.columns], inplace=True,axis=1)
# VALIDATE OFFICE CODE
mrgdf = pd.merge(mrgdf, rdf, how='left', left_on=['AREA_CODE', 'OFFICE_CODE'],
right_on=['rcd_area_code', 'rcd_office_code'])
mrgdf['RETURN'] = np.where(pd.isnull(mrgdf['rcd_id']), 'INVALID_OFFICE_CODE', mrgdf['RETURN'])
# VALIDATE SUBSCRIBER
mrgdf['RETURN'] = np.where((mrgdf['rcd_subscriber_number'] < mrgdf['SUBSCRIBER_START']) |
(mrgdf['rcd_subscriber_number'] > mrgdf['SUBSCRIBER_END']) |
(mrgdf['LABEL'].str.len() == 0),
'INVALID_SUBSCRIBER', mrgdf['RETURN'])
mrgdf.drop([c for c in rdf.columns], inplace=True,axis=1)
</code></pre>
| 2
|
2016-09-08T04:25:22Z
|
[
"python",
"pandas"
] |
Attendees in Google Calendar not always in the same order
| 39,375,401
|
<p>So I've just started using the google calendar api and I've had good results so far. I add attendees with their name and email in the <code>events</code> dictionary, like so</p>
<pre><code>events = {
# other stuff here and then this
'items': [
# lots of stuff here, followed by
'attendees': [
{
'email': email1,
'displayName': name1
},
{
'email': email2,
'displayName': name2
},
],
###
]
}
</code></pre>
<p>Adding them goes fine, but when I access them, I'm never guaranteed of their order. I thought I could just access the emails like this</p>
<pre><code>for event in events['items']:
print "email1 = " + str(event['attendees'][0]['email'])
print "email2 = " + str(event['attendees'][1]['email'])
</code></pre>
<p>and I can. And I've learned that lists in python <a href="http://stackoverflow.com/questions/13694034/is-a-python-list-guaranteed-to-have-its-elements-stay-in-the-order-they-are-inse">always have their order preserved</a>, which is convenient because I wanted to access the dictionaries inside the list with the index of the list. But what I've learned is that sometimes the <code>0</code> index refers to <code>email1</code> and sometimes it refers to <code>email2</code>. Why the inconsistency? Is it inherent to the google calendar api or is there something about having dictionary objects within a python list that relaxes the order preservation assumption? Or is it something else I'm missing?</p>
| -1
|
2016-09-07T16:47:17Z
| 39,380,726
|
<p>So, as @Colonel Thirty Two pointed out, while lists preserve order, the data Google returns in a list may not be in the same order as it was submitted. This order inconsistency with attendees is inconvenient if you want to count on that order for retrieving attendees with something like</p>
<pre><code>for event in events['items']:
print "email1 = " + str(event['attendees'][0]['email'])
print "email2 = " + str(event['attendees'][1]['email'])
</code></pre>
<p>What's more is that very few fields are writable with the google calendar api. What is writable, however, is <code>comments</code>. So, I added a value to that field to make the attendees identifiable. Like so</p>
<pre><code>'attendees': [
{
'email': agent_email,
'displayName': agent_name,
'comment': 'agent'
},
{
'email': event_attendee_email,
'displayName': event_attendee_name,
'comment': 'client'
},
</code></pre>
<p>Using <code>comment</code> as an identifier helped me in retrieving the <code>email</code> and <code>displayName</code> of each attendee with a simple if-statement.</p>
<pre><code>for i in range(len(event['attendees'])):
if event['attendees'][i]['comment'] == 'client':
event['attendees'][i]['displayName'] = event_attendee_name
event['attendees'][i]['email'] = event_attendee_email
</code></pre>
<p>Now it doesn't matter that the google calendar api submits my attendees back to me in a different order than the one in which I added them. I can now retrieve the attendees so I can change them. Problem solved.</p>
| 0
|
2016-09-08T00:09:35Z
|
[
"python",
"python-2.7",
"google-calendar"
] |
Python: Create list from function that returns single item or another list
| 39,375,420
|
<p>(Python 3.5). </p>
<p><strong>Problem Statement:</strong> Given a function that returns either an item or a list of items, is there a single line statement that would initialize a new list from the results of calling the aforementioned function?</p>
<p><strong>Details:</strong> I've looked at the documents on python lists, and tried some things out on the repl, but I can't seem to figure this one out. </p>
<p>I'm calling a third party function that reads an xml document. The function sometimes returns a list and sometimes returns a single item (depending on how many xml entries exist). </p>
<p>For my purposes, I always need a list that I can iterate over - even if it is a length of one. The code below correctly accomplishes what I desire. Given Python's elegance, however, it seems clunky. I suspect there is a single-line way of doing it. </p>
<pre><code>def force_list(item_or_list):
"""
Returns a list from either an item or a list.
:param item_or_list: Either a single object, or a list of objects
:return: A list of objects, potentially with a length of 1.
"""
if item_or_list is None: return None
_new_list = []
if isinstance(item_or_list, list):
_new_list.extend(item_or_list)
else:
_new_list.append(item_or_list)
return _new_list
</code></pre>
<p>Thanks in advance,
SteveJ</p>
| 2
|
2016-09-07T16:48:37Z
| 39,375,493
|
<p>If you're looking for a one-liner about listifying the result of a function call:</p>
<p>Let's say there's a function called <code>func</code> that returns either an item or a list of items:</p>
<pre><code>elem = func()
answer = elem if isinstance(elem, list) else [elem]
</code></pre>
<p>That being said, you should really refactor <code>func</code> to return one type of thing - make it return a list of many elements, or in the case that it returns only one element, make it return a list with that element. Thus you can avoid such type-checking</p>
| 4
|
2016-09-07T16:53:19Z
|
[
"python",
"list"
] |
Python: Create list from function that returns single item or another list
| 39,375,420
|
<p>(Python 3.5). </p>
<p><strong>Problem Statement:</strong> Given a function that returns either an item or a list of items, is there a single line statement that would initialize a new list from the results of calling the aforementioned function?</p>
<p><strong>Details:</strong> I've looked at the documents on python lists, and tried some things out on the repl, but I can't seem to figure this one out. </p>
<p>I'm calling a third party function that reads an xml document. The function sometimes returns a list and sometimes returns a single item (depending on how many xml entries exist). </p>
<p>For my purposes, I always need a list that I can iterate over - even if it is a length of one. The code below correctly accomplishes what I desire. Given Python's elegance, however, it seems clunky. I suspect there is a single-line way of doing it. </p>
<pre><code>def force_list(item_or_list):
"""
Returns a list from either an item or a list.
:param item_or_list: Either a single object, or a list of objects
:return: A list of objects, potentially with a length of 1.
"""
if item_or_list is None: return None
_new_list = []
if isinstance(item_or_list, list):
_new_list.extend(item_or_list)
else:
_new_list.append(item_or_list)
return _new_list
</code></pre>
<p>Thanks in advance,
SteveJ</p>
| 2
|
2016-09-07T16:48:37Z
| 39,375,494
|
<p>You can check it in one line like this:</p>
<pre><code> if item: # Check whether it is not None or empty list
# Check if it is list. If not, append it to existing list after converting it to list
     _new_list.extend(item if isinstance(item, list) else [item])
</code></pre>
| 1
|
2016-09-07T16:53:22Z
|
[
"python",
"list"
] |
Python: Create list from function that returns single item or another list
| 39,375,420
|
<p>(Python 3.5). </p>
<p><strong>Problem Statement:</strong> Given a function that returns either an item or a list of items, is there a single line statement that would initialize a new list from the results of calling the aforementioned function?</p>
<p><strong>Details:</strong> I've looked at the documents on python lists, and tried some things out on the repl, but I can't seem to figure this one out. </p>
<p>I'm calling a third party function that reads an xml document. The function sometimes returns a list and sometimes returns a single item (depending on how many xml entries exist). </p>
<p>For my purposes, I always need a list that I can iterate over - even if it is a length of one. The code below correctly accomplishes what I desire. Given Python's elegance, however, it seems clunky. I suspect there is a single-line way of doing it. </p>
<pre><code>def force_list(item_or_list):
"""
Returns a list from either an item or a list.
:param item_or_list: Either a single object, or a list of objects
:return: A list of objects, potentially with a length of 1.
"""
if item_or_list is None: return None
_new_list = []
if isinstance(item_or_list, list):
_new_list.extend(item_or_list)
else:
_new_list.append(item_or_list)
return _new_list
</code></pre>
<p>Thanks in advance,
SteveJ</p>
| 2
|
2016-09-07T16:48:37Z
| 39,375,873
|
<p>Another way of doing this in a single line is</p>
<pre><code>def force_list(item_or_list=[]):
return item_or_list if type(item_or_list) is list else [item_or_list]
</code></pre>
<p>print force_list("test")</p>
<blockquote>
<p>['test']</p>
</blockquote>
<p>print force_list(["test","test2"])</p>
<blockquote>
<p>["test","test2"]</p>
</blockquote>
| 0
|
2016-09-07T17:20:33Z
|
[
"python",
"list"
] |
Using a function to calculate new column values from old column values in pandas row by row
| 39,375,460
|
<p>I know there are many questions on this topic, but none of the suggested answers seem to work in this case, which I thought was trivial, but has been killing me for 2 days now.</p>
<p>This is my first effort to use pandas to process an export file from an eye-tracker. The export file contains 50 or so columns and 2 of them contain pupil dilation measures, PupilLeft and PupilRight. I want to create a new column, PupilAvg, which averages the two. When the eye tracker can't read one or both pupils, it records a -1. Since the required logic is simple but seemed a little long for a lambda, I wrote a function to return values for my new column:</p>
<pre><code>def getEyeAvg(left, right):
# calcs avg for Left and Right where one or both may be missing (= -1)
if left == -1 and right == -1: return np.nan
if left == -1: return right
if right == -1: return left
return (left + right)/2.0
</code></pre>
<p>Here's an example version of the dataframe:</p>
<pre><code>In[25]: dfd = pd.DataFrame.from_items([('PupilLeft', [3., -1., 4., -1]), ('PupilRight', [4., 4., -1., -1])])
In[26]: dfd
Out[26]:
PupilLeft PupilRight
0 3.0 4.0
1 -1.0 4.0
2 4.0 -1.0
3 -1.0 -1.0
</code></pre>
<p>I want to insert my new column after PupilRight, so I try the command:</p>
<pre><code>In[27]: dfd.insert(2, 'PupilAvg', getEyeAvg(dfd.PupilLeft, dfd.PupilRight))
</code></pre>
<p>What I expect for PupilAvg is:</p>
<pre><code> PupilLeft PupilRight PupilAvg
0 3.0 4.0 3.5
1 -1.0 4.0 4.0
2 4.0 -1.0 4.0
3 -1.0 -1.0 NaN
</code></pre>
<p>Of course this doesn't work and I get </p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>I've seen variations of this question asked over and over again, and it seems each answer uses some different "trick" that seems incomprehensible to me, given my relative beginner status. For example, I want neither 'any' nor 'all' rows where left == -1, I just want the current row, but this seems to be a request that pandas finds very difficult to handle.</p>
<p>It would be incredibly helpful if someone could provide a clear general solution to this problem, which basically boils down to </p>
<blockquote>
<p>"I want to use a function to calculate values for a new column using values from other columns on a row-by-row basis, not all at once. You know, just like in Excel. Is there a simple, general way to do that?"</p>
</blockquote>
<p>This is particularly hard for folks like me who are trying to transition from Excel solutions to python/pandas, because Excel is naturally row-by-row. You just enter a formula in the first row cell and copy it all the way down the column. Clearly that mindset has ill-prepared me for pandas.</p>
| 0
|
2016-09-07T16:51:23Z
| 39,375,711
|
<p>There's an easy way to achieve your goal while operating on whole columns.</p>
<pre><code>dfd.replace({-1:np.nan}, inplace=True)
dfd['PupilAvg'] = dfd.mean(axis=1)
</code></pre>
<p>If you need to retain the original -1 values for some reason, just copy them first and then proceed. Everything in pandas is easier with explicit nan values.</p>
<p>Your original code is failing because you're passing the entire column of data into getEyeAvg. In your example, it's trying to evaluate <code>dfd.PupilLeft == -1</code> (a comparison against the whole Series), not <code>3. == -1</code>. Operating on the entire column at once is the default mode in pandas, so it does require a new way of thinking. There isn't any one best way to do this, because the approaches that make the most sense coming from Excel (loop across rows directly by index or use df.apply(lambda, axis=1)) are much slower than using whole columns.</p>
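<p>A non-destructive variant, in case the raw -1 codes need to stay in the frame (a small sketch of the same whole-column idea):</p>
<pre><code>dfd['PupilAvg'] = dfd[['PupilLeft', 'PupilRight']].replace(-1, np.nan).mean(axis=1)
</code></pre>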
| 0
|
2016-09-07T17:08:48Z
|
[
"python",
"pandas",
"dataframe",
"calculated-columns"
] |
Using XML ElementTree to create list of objects with atrributes
| 39,375,482
|
<p>I use the python requests module to get XML from the TeamCity rest api that looks like this:</p>
<pre><code><triggers count="10">
<trigger id="TRIGGER_1240" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt191"/>
</properties>
</trigger>
<trigger id="TRIGGER_1241" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt171"/>
</properties>
</trigger>
<trigger id="TRIGGER_1242" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt167"/>
</properties>
</trigger>
<trigger id="TRIGGER_1243" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt164"/>
</properties>
</trigger>
<trigger id="TRIGGER_1244" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt364"/>
</properties>
</trigger>
<trigger id="TRIGGER_736" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="Components_Ratchetdb"/>
</properties>
</trigger>
<trigger id="TRIGGER_149" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="Components_Filedb"/>
</properties>
</trigger>
<trigger id="TRIGGER_150" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt168"/>
</properties>
</trigger>
<trigger id="TRIGGER_1130" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="Components_Tbldb"/>
</properties>
</trigger>
<trigger id="vcsTrigger" type="vcsTrigger" inherited="true">
<properties count="3">
<property name="quietPeriod" value="60"/>
<property name="quietPeriodMode" value="USE_DEFAULT"/>
<property name="triggerRules" value="-:version.properties&#xA;-:comment=^Incremented:**&#xA;-:**/*-schema.sql"/>
</properties>
</trigger>
</code></pre>
<p></p>
<p>I am trying to create a list of "trigger" objects using a class. Ideally the object would have id, type, and a list of properties attributes as dictionaries of {name : value}. My code so far is:</p>
<pre><code>class triggerList:
def __init__(self, triggerId, triggerType):
self.id = triggerId
self.type = triggerType
self.properties = []
def add_property(self, buildProperty):
self.properties.append(buildProperty)
def getAllTriggers(buildId):
url = path + 'buildTypes/id:' + buildId + '/triggers'
r = requests.get(url, auth=auth)
tree = ElementTree.fromstring(r.content)
listOfTriggers = []
for trigger in tree.iter('trigger'):
triggerType = trigger.get('type')
triggerId = trigger.get('id')
triggerName = str(triggerId)
triggerName = triggerList(triggerId, triggerType)
listOfTriggers.append(triggerName)
for triggerProperty in tree.iter('property'):
propertyName = triggerProperty.get('name')
propertyValue = triggerProperty.get('value')
propDict = {propertyName : propertyValue}
triggerName.add_property(propDict)
</code></pre>
<p>This gives me a list of objects but every object has a list of every property dictionary. This is the output:</p>
<pre><code>a = listOfTriggers[1]
print a.id, a.type, a.properties
>>> TRIGGER_1241 buildDependencyTrigger [{'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt191'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt171'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt167'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt164'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt364'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'Components_Ratchetdb'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'Components_Filedb'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt168'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'Components_Tbldb'}, {'quietPeriod': '60'}, {'quietPeriodMode': 'USE_DEFAULT'}, {'triggerRules': '-:version.properties\n-:comment=^Incremented:**\n-:**/*-schema.sql'}]
</code></pre>
<p>I don't know how to stop the loop for just the properties for a specific trigger. Is there a way to use ElementTree to only get the properties for a specific trigger? Is there a more efficient way to create this object? </p>
| 0
|
2016-09-07T16:52:38Z
| 39,375,528
|
<p>Not directly answering the question, but you may be reinventing the wheel here, check <a href="http://lxml.de/objectify.html" rel="nofollow"><code>lxml.objectify</code> package</a>:</p>
<blockquote>
<p>The main idea is to hide the usage of XML behind normal Python
objects, sometimes referred to as data-binding. It allows you to use
XML as if you were dealing with a normal Python object hierarchy.
Accessing the children of an XML element deploys object attribute
access. If there are multiple children with the same name, slicing and
indexing can be used. Python data types are extracted from XML content
automatically and made available to the normal Python operators.</p>
</blockquote>
| 0
|
2016-09-07T16:55:26Z
|
[
"python",
"xml",
"class",
"object",
"elementtree"
] |
Using XML ElementTree to create list of objects with atrributes
| 39,375,482
|
<p>I use the python requests module to get XML from the TeamCity rest api that looks like this:</p>
<pre><code><triggers count="10">
<trigger id="TRIGGER_1240" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt191"/>
</properties>
</trigger>
<trigger id="TRIGGER_1241" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt171"/>
</properties>
</trigger>
<trigger id="TRIGGER_1242" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt167"/>
</properties>
</trigger>
<trigger id="TRIGGER_1243" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt164"/>
</properties>
</trigger>
<trigger id="TRIGGER_1244" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt364"/>
</properties>
</trigger>
<trigger id="TRIGGER_736" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="Components_Ratchetdb"/>
</properties>
</trigger>
<trigger id="TRIGGER_149" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="Components_Filedb"/>
</properties>
</trigger>
<trigger id="TRIGGER_150" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="bt168"/>
</properties>
</trigger>
<trigger id="TRIGGER_1130" type="buildDependencyTrigger">
<properties count="2">
<property name="afterSuccessfulBuildOnly" value="true"/>
<property name="dependsOn" value="Components_Tbldb"/>
</properties>
</trigger>
<trigger id="vcsTrigger" type="vcsTrigger" inherited="true">
<properties count="3">
<property name="quietPeriod" value="60"/>
<property name="quietPeriodMode" value="USE_DEFAULT"/>
<property name="triggerRules" value="-:version.properties&#xA;-:comment=^Incremented:**&#xA;-:**/*-schema.sql"/>
</properties>
</trigger>
</code></pre>
<p></p>
<p>I am trying to create a list of "trigger" objects using a class. Ideally the object would have id, type, and a list of properties attributes as dictionaries of {name : value}. My code so far is:</p>
<pre><code>class triggerList:
def __init__(self, triggerId, triggerType):
self.id = triggerId
self.type = triggerType
self.properties = []
def add_property(self, buildProperty):
self.properties.append(buildProperty)
def getAllTriggers(buildId):
url = path + 'buildTypes/id:' + buildId + '/triggers'
r = requests.get(url, auth=auth)
tree = ElementTree.fromstring(r.content)
listOfTriggers = []
for trigger in tree.iter('trigger'):
triggerType = trigger.get('type')
triggerId = trigger.get('id')
triggerName = str(triggerId)
triggerName = triggerList(triggerId, triggerType)
listOfTriggers.append(triggerName)
for triggerProperty in tree.iter('property'):
propertyName = triggerProperty.get('name')
propertyValue = triggerProperty.get('value')
propDict = {propertyName : propertyValue}
triggerName.add_property(propDict)
</code></pre>
<p>This gives me a list of objects but every object has a list of every property dictionary. This is the output:</p>
<pre><code>a = listOfTriggers[1]
print a.id, a.type, a.properties
>>> TRIGGER_1241 buildDependencyTrigger [{'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt191'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt171'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt167'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt164'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt364'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'Components_Ratchetdb'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'Components_Filedb'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'bt168'}, {'afterSuccessfulBuildOnly': 'true'}, {'dependsOn': 'Components_Tbldb'}, {'quietPeriod': '60'}, {'quietPeriodMode': 'USE_DEFAULT'}, {'triggerRules': '-:version.properties\n-:comment=^Incremented:**\n-:**/*-schema.sql'}]
</code></pre>
<p>I don't know how to stop the loop for just the properties for a specific trigger. Is there a way to use ElementTree to only get the properties for a specific trigger? Is there a more efficient way to create this object? </p>
| 0
|
2016-09-07T16:52:38Z
| 39,376,275
|
<p>Simple mistake:</p>
<pre><code>for triggerProperty in trigger.iter('property'):
    propertyName = triggerProperty.get('name')
    propertyValue = triggerProperty.get('value')
    propDict = {propertyName : propertyValue}
    triggerName.add_property(propDict)
</code></pre>
<p>I was iterating over the whole tree, rather than the triggers. Should be:</p>
<p>for triggerProperty in <strong>trigger</strong>.iter('property'):</p>
| 0
|
2016-09-07T17:45:45Z
|
[
"python",
"xml",
"class",
"object",
"elementtree"
] |
Renaming files in Python: No such file or directory
| 39,375,483
|
<p>When I try to rename files in a directory, I get an error.
I think the problem may be that I have not specified the directory in the proper format?</p>
<p>Additional info:
python 2 &
linux machine</p>
<blockquote>
<p>OSError: [Errno 2] No such file or directory</p>
</blockquote>
<p>Though it prints the directory's contents just fine. What am I doing wrong?</p>
<pre><code>import os

for i in os.listdir("/home/fanna/Videos/strange"):
    #print str(i)
    os.rename(i, i[:-17])
</code></pre>
| 0
|
2016-09-07T16:52:42Z
| 39,375,596
|
<p><code>os.rename()</code> is expecting the full path to the file you want to rename. <code>os.listdir</code> only returns the filenames in the directory. Try this</p>
<pre><code>import os

baseDir = "/home/fanna/Videos/strange/"
for i in os.listdir( baseDir ):
    os.rename( baseDir + i, baseDir + i[:-17] )
</code></pre>
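<p>As a side note, here is a sketch of the same loop using <code>os.path.join</code>, which handles the path separator for you and avoids depending on the trailing slash in <code>baseDir</code>:</p>
<pre><code>import os

baseDir = "/home/fanna/Videos/strange"
for i in os.listdir(baseDir):
    # join the directory back onto the bare filename before renaming
    os.rename(os.path.join(baseDir, i), os.path.join(baseDir, i[:-17]))
</code></pre>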
| 3
|
2016-09-07T16:59:20Z
|
[
"python"
] |
Renaming files in Python: No such file or directory
| 39,375,483
|
<p>When I try to rename files in a directory, I get an error.
I think the problem may be that I have not specified the directory in the proper format?</p>
<p>Additional info:
python 2 &
linux machine</p>
<blockquote>
<p>OSError: [Errno 2] No such file or directory</p>
</blockquote>
<p>Though it prints the directory's contents just fine. What am I doing wrong?</p>
<pre><code>import os

for i in os.listdir("/home/fanna/Videos/strange"):
    #print str(i)
    os.rename(i, i[:-17])
</code></pre>
| 0
|
2016-09-07T16:52:42Z
| 39,375,598
|
<p>Suppose there is a file <code>/home/fanna/Videos/strange/name_of_some_video_file.avi</code>, and you're running the script from <code>/home/fanna</code>.</p>
<p><code>i</code> is <code>name_of_some_video_file.avi</code> (the name of the file, not including the full path to it). So when you run</p>
<pre><code>os.rename(i, i[:-17])
</code></pre>
<p>you're saying</p>
<pre><code>os.rename("name_of_some_video_file.avi", "name_of_some_video_file.avi"[:-17])
</code></pre>
<p>Python has no idea that these files came from <code>/home/fanna/Videos/strange</code>. It resolves them against the current working directory, so it's looking for <code>/home/fanna/name_of_some_video_file.avi</code>.</p>
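<p>One way to make the bare names resolve correctly, assuming it is acceptable to change the working directory for the duration of the loop, is a sketch like this:</p>
<pre><code>import os

directory = "/home/fanna/Videos/strange"
os.chdir(directory)  # now the bare filenames from listdir resolve against this directory
for i in os.listdir(directory):
    os.rename(i, i[:-17])
</code></pre>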
| 2
|
2016-09-07T16:59:23Z
|
[
"python"
] |
Improve the quality of the letters in a image
| 39,375,498
|
<p>I'm working with images that have text. The problem is that these images are receipts, and after a lot of transformations, the text lost quality.
I'm using python and opencv.
I was trying with a lot of combinations of morphological transformations from the doc <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html" rel="nofollow">Morphological Transformations</a>, but I don't get satisfactory results. </p>
<p>I'm doing this right now (I've commented out what I've tried and left uncommented what I'm currently using):</p>
<pre><code>kernel = np.ones((2, 2), np.uint8)
# opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
# closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
# dilation = cv2.dilate(opening, kernel, iterations=1)
# kernel = np.ones((3, 3), np.uint8)
erosion = cv2.erode(img, kernel, iterations=1)
# gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)
#
img = erosion.copy()
</code></pre>
<p>With this, from this original image:</p>
<p><a href="http://i.stack.imgur.com/fKJkH.png" rel="nofollow"><img src="http://i.stack.imgur.com/fKJkH.png" alt="enter image description here"></a></p>
<p>I get this:</p>
<p><a href="http://i.stack.imgur.com/Hvmvk.png" rel="nofollow"><img src="http://i.stack.imgur.com/Hvmvk.png" alt="enter image description here"></a></p>
<p>It's a little bit better, as you can see. But it's still too bad. The OCR (tesseract) doesn't recognize the characters here very well. I've trained it, but as you can see, every "e" is different, and so on. </p>
<p>I get good results, but I think, if I resolve this problem, they would be even better. </p>
<p>Maybe I can do another thing, or use a better combination of the morphological transformations. If there is another tool (PIL, imagemagick, etc..) that I could use, I can use it. </p>
<p>Here's the whole image, so you can see how it looks:</p>
<p><a href="http://i.stack.imgur.com/AorH6.png" rel="nofollow"><img src="http://i.stack.imgur.com/AorH6.png" alt="enter image description here"></a></p>
<p>As I said, it's not so bad, but a little more "optimization" of the letters would be perfect. </p>
| 1
|
2016-09-07T16:53:38Z
| 39,376,668
|
<p>Did you consider taking the neighboring pixels and adding the sum of them?</p>
<p>For example:</p>
<pre><code>import cv2
import numpy

# structuring elements for cv2.erode should be uint8
n = numpy.zeros((3, 3), numpy.uint8)
s = numpy.zeros((3, 3), numpy.uint8)
w = numpy.zeros((3, 3), numpy.uint8)
e = numpy.zeros((3, 3), numpy.uint8)
n[0][1] = 1
s[2][1] = 1
w[1][0] = 1
e[1][2] = 1

img_n = cv2.erode(img, n, iterations=1)
img_s = cv2.erode(img, s, iterations=1)
img_w = cv2.erode(img, w, iterations=1)
img_e = cv2.erode(img, e, iterations=1)

result = img_n + img_s + img_w + img_e + img
</code></pre>
<p>Also, you can use either numpy or cv2 to add the arrays. </p>
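<p>If you go the <code>cv2</code> route, note that <code>cv2.add</code> saturates at 255 for <code>uint8</code> images instead of wrapping around, which is usually what you want here. A sketch, continuing from the arrays above:</p>
<pre><code># cv2.add clips at 255 instead of overflowing like plain numpy addition
result = cv2.add(img_n, img_s)
result = cv2.add(result, img_w)
result = cv2.add(result, img_e)
result = cv2.add(result, img)
</code></pre>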
| 0
|
2016-09-07T18:13:44Z
|
[
"python",
"image",
"opencv",
"letters"
] |
Improve the quality of the letters in a image
| 39,375,498
|
<p>I'm working with images that have text. The problem is that these images are receipts, and after a lot of transformations, the text lost quality.
I'm using python and opencv.
I was trying with a lot of combinations of morphological transformations from the doc <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html" rel="nofollow">Morphological Transformations</a>, but I don't get satisfactory results. </p>
<p>I'm doing this right now (I've commented out what I've tried and left uncommented what I'm currently using):</p>
<pre><code>kernel = np.ones((2, 2), np.uint8)
# opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
# closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
# dilation = cv2.dilate(opening, kernel, iterations=1)
# kernel = np.ones((3, 3), np.uint8)
erosion = cv2.erode(img, kernel, iterations=1)
# gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)
#
img = erosion.copy()
</code></pre>
<p>With this, from this original image:</p>
<p><a href="http://i.stack.imgur.com/fKJkH.png" rel="nofollow"><img src="http://i.stack.imgur.com/fKJkH.png" alt="enter image description here"></a></p>
<p>I get this:</p>
<p><a href="http://i.stack.imgur.com/Hvmvk.png" rel="nofollow"><img src="http://i.stack.imgur.com/Hvmvk.png" alt="enter image description here"></a></p>
<p>It's a little bit better, as you can see. But it's still too bad. The OCR (tesseract) doesn't recognize the characters here very well. I've trained it, but as you can see, every "e" is different, and so on. </p>
<p>I get good results, but I think, if I resolve this problem, they would be even better. </p>
<p>Maybe I can do another thing, or use a better combination of the morphological transformations. If there is another tool (PIL, imagemagick, etc..) that I could use, I can use it. </p>
<p>Here's the whole image, so you can see how it looks:</p>
<p><a href="http://i.stack.imgur.com/AorH6.png" rel="nofollow"><img src="http://i.stack.imgur.com/AorH6.png" alt="enter image description here"></a></p>
<p>As I said, it's not so bad, but a little more "optimization" of the letters would be perfect. </p>
| 1
|
2016-09-07T16:53:38Z
| 39,386,810
|
<p>In my experience, erode impairs OCR quality. If you have a grayscale image (not binary) you can use a better binarization algorithm; I use the Sauvola algorithm for binarization. If you only have a binary image, the best thing you can do is remove the noise (remove all small dots). </p>
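<p>A minimal sketch of Sauvola binarization, assuming a scikit-image version that provides <code>threshold_sauvola</code> (the filenames and <code>window_size</code> here are placeholders to tune):</p>
<pre><code>import cv2
from skimage.filters import threshold_sauvola

gray = cv2.imread("receipt.png", cv2.IMREAD_GRAYSCALE)
# window_size controls how local the threshold is; 25 is only a starting point
thresh = threshold_sauvola(gray, window_size=25)
binary = (gray > thresh).astype("uint8") * 255
cv2.imwrite("receipt_binary.png", binary)
</code></pre>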
| 0
|
2016-09-08T09:04:27Z
|
[
"python",
"image",
"opencv",
"letters"
] |
Is this approach "vectorized" - used against medium dataset it is relatively slow
| 39,375,546
|
<p>I have this data frame:</p>
<pre><code>df = pd.DataFrame({'a' : np.random.randn(9),
                   'b' : ['foo', 'bar', 'blah'] * 3,
                   'c' : np.random.randn(9)})
</code></pre>
<p>This function:</p>
<pre><code>def my_test2(row, x):
    if x == 'foo':
        blah = 10
    if x == 'bar':
        blah = 20
    if x == 'blah':
        blah = 30
    return (row['a'] % row['c']) + blah
</code></pre>
<p>I am then creating 3 new columns like this:</p>
<pre><code>df['Value_foo'] = df.apply(my_test2, axis=1, x='foo')
df['Value_bar'] = df.apply(my_test2, axis=1, x='bar')
df['Value_blah'] = df.apply(my_test2, axis=1, x='blah')
</code></pre>
<p>It runs ok but when I make my_test2 more complex and expand df to several thousand rows it is slow - is the above what I hear described as "vectorized"? Can I easily speed things up?</p>
| 2
|
2016-09-07T16:56:10Z
| 39,376,939
|
<p>As Andrew, Ami Tavory and Sohier Dane have already mentioned in comments there are two "slow" things in your solution:</p>
<ol>
<li><code>.apply()</code> is generally slow as it loops under the hood.</li>
<li><code>.apply(..., axis=1)</code> is <strong>extremely</strong> slow (even compared to <code>.apply(..., axis=0)</code>) as it loops row by row</li>
</ol>
<p>Here is a vectorized approach:</p>
<pre><code>In [74]: d = {
   ....:     'foo': 10,
   ....:     'bar': 20,
   ....:     'blah': 30
   ....: }

In [75]: d
Out[75]: {'bar': 20, 'blah': 30, 'foo': 10}

In [76]: for k,v in d.items():
   ....:     df['Value_{}'.format(k)] = df.a % df.c + v
   ....:

In [77]: df
Out[77]:
          a     b         c  Value_bar  Value_blah  Value_foo
0 -0.747164   foo  0.438713  20.130262  30.130262  10.130262
1 -0.185182   bar  0.047253  20.003828  30.003828  10.003828
2  1.622818  blah -0.730215  19.432174  29.432174   9.432174
3  0.117658   foo  1.530249  20.117658  30.117658  10.117658
4  2.536363   bar -0.100726  19.917499  29.917499   9.917499
5  1.128002  blah  0.350663  20.076014  30.076014  10.076014
6  0.059516   foo  0.638910  20.059516  30.059516  10.059516
7 -1.184688   bar  0.073781  20.069590  30.069590  10.069590
8  1.440576  blah -2.231575  19.209001  29.209001   9.209001
</code></pre>
<p>Timing against 90K rows DF:</p>
<pre><code>In [80]: big = pd.concat([df] * 10**4, ignore_index=True)

In [81]: big.shape
Out[81]: (90000, 3)

In [82]: %%timeit
   ....: big['Value_foo'] = big.apply(my_test2, axis=1, x='foo')
   ....: big['Value_bar'] = big.apply(my_test2, axis=1, x='bar')
   ....: big['Value_blah'] = big.apply(my_test2, axis=1, x='blah')
   ....:
1 loop, best of 3: 10.5 s per loop

In [83]: big = pd.concat([df] * 10**4, ignore_index=True)

In [84]: big.shape
Out[84]: (90000, 3)

In [85]: %%timeit
   ....: for k,v in d.items():
   ....:     big['Value_{}'.format(k)] = big.a % big.c + v
   ....:
100 loops, best of 3: 7.24 ms per loop
</code></pre>
<p><strong>Conclusion:</strong> vectorized approach is 1450 times faster...</p>
| 2
|
2016-09-07T18:34:59Z
|
[
"python",
"pandas",
"dataframe"
] |
How to return (a varying number of) objects with their original name in a function in Python
| 39,375,580
|
<p>I'm very new to Python. I'm looking to return a varying number of objects (will eventually be <code>lists</code> or <code>pandas</code>), ideally with their original name.</p>
<p>So far I'm looking at something like this:</p>
<pre><code>def function(*Args):
    Args = list(Args)
    for i in range(len(Args)):
        Args[i] += 1
        print Args[i]
    return *all variables with original name, in this case a, b, c, d *

a=1
b=2
c=3
d=4
a, b, c, d = function(a, b, c, d)
</code></pre>
<p>Any help, ideas or comments would be very appreciated. Thanks.</p>
| 0
|
2016-09-07T16:58:17Z
| 39,375,656
|
<p>Are you just looking to return the list of values?</p>
<pre><code>def function(*Args):
    Args = list(Args)
    for i in range(len(Args)):
        Args[i] += 1
        print Args[i]
    return Args

a, b, c, d = function(a, b, c, d)
</code></pre>
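<p>A function cannot see the names its caller used for the arguments, so if you really want the results keyed by name, one option is to pass keyword arguments and return a dict. A sketch of that idea (the dict keys stand in for the variable names):</p>
<pre><code>def function(**kwargs):
    # increment every value, keeping the caller-supplied names as keys
    return {name: value + 1 for name, value in kwargs.items()}

result = function(a=1, b=2, c=3, d=4)
print result['a']  # 2
</code></pre>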
| 0
|
2016-09-07T17:04:14Z
|
[
"python",
"args"
] |
How to return (a varying number of) objects with their original name in a function in Python
| 39,375,580
|
<p>I'm very new to Python. I'm looking to return a varying number of objects (will eventually be <code>lists</code> or <code>pandas</code>), ideally with their original name.</p>
<p>So far I'm looking at something like this:</p>
<pre><code>def function(*Args):
    Args = list(Args)
    for i in range(len(Args)):
        Args[i] += 1
        print Args[i]
    return *all variables with original name, in this case a, b, c, d *

a=1
b=2
c=3
d=4
a, b, c, d = function(a, b, c, d)
</code></pre>
<p>Any help, ideas or comments would be very appreciated. Thanks.</p>
| 0
|
2016-09-07T16:58:17Z
| 39,387,916
|
<p>In the meantime I tried it as @Xavier C. suggested and it seems to work. The lists/input arguments I would name after the industry and would therefore like to preserve. Is there any way to achieve that?</p>
<pre><code>def BBGLiveRequest(*args):
    data = []
    for i in range(len(args)):
        print args[i]
        result = BLPTS(args[i], ['PX_Last'])
        result.get()
        print result.output
        data.append(result.output)
    return data
</code></pre>
| 0
|
2016-09-08T09:57:11Z
|
[
"python",
"args"
] |
Is it possible to stack line graphs with df.plot() and if not how can it be done?
| 39,375,618
|
<p><a href="http://i.stack.imgur.com/2M0iT.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/2M0iT.jpg" alt="enter image description here"></a></p>
<p>I want to create something like this figure with a dataframe that contains 9 columns.</p>
| -1
|
2016-09-07T17:01:13Z
| 39,375,976
|
<p>Use the area type:</p>
<pre><code>df.plot(kind='area')
</code></pre>
<p>Starting with pandas version 0.17:</p>
<pre><code>df.plot.area()
</code></pre>
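<p>For example, a minimal sketch with a made-up 9-column frame (the column names are placeholders):</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.rand(10, 9),
                  columns=['col%d' % i for i in range(9)])
df.plot.area()  # each column is stacked on top of the previous one
plt.show()
</code></pre>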
| 1
|
2016-09-07T17:26:54Z
|
[
"python",
"pandas"
] |
How to see the plot made in python using pandas and matplotlib
| 39,375,660
|
<p>I am following the tutorial <a href="http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html" rel="nofollow">http://ahmedbesbes.com/how-to-score-08134-in-titanic-kaggle-challenge.html</a> and following is my code</p>
<pre><code>from IPython.core.display import HTML
HTML("""
<style>
.output_png {
display: table-cell;
text-align: center;
vertical-align: middle;
}
</style>
""")
# remove warnings
import warnings
warnings.filterwarnings('ignore')
# ---
import pandas as pd
pd.options.display.max_columns = 100
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
import numpy as np
pd.options.display.max_rows = 100
data = pd.read_csv('./data/train.csv')
data.head()
data['Age'].fillna(data['Age'].median(), inplace=True)
survived_sex = data[data['Survived']==1]['Sex'].value_counts()
dead_sex = data[data['Survived']==0]['Sex'].value_counts()
df = pd.DataFrame([survived_sex,dead_sex])
df.index = ['Survived','Dead']
df.plot(kind='bar',stacked=True, figsize=(15,8))
</code></pre>
<p>I don't actually see the plot. How do I see it?</p>
<p><a href="http://i.stack.imgur.com/g24E6.png" rel="nofollow"><img src="http://i.stack.imgur.com/g24E6.png" alt="enter image description here"></a></p>
| 2
|
2016-09-07T17:04:24Z
| 39,375,729
|
<p>Try using <code>plt.show()</code> at the end.</p>
<p>EDIT:</p>
<p>Also, you may need to add <code>%matplotlib inline</code> as explained here: <a href="http://stackoverflow.com/questions/19410042/how-to-make-ipython-notebook-matplotlib-plot-inline">How to make IPython notebook matplotlib plot inline</a></p>
| 1
|
2016-09-07T17:09:53Z
|
[
"python",
"matplotlib"
] |
translate() takes exactly one argument (2 given) in python error
| 39,375,712
|
<pre><code>import os
import re

def rename_files():
    # get the files from dir
    file_list=os.listdir(r"C:\OOP\prank")
    print(file_list)
    saved_path=os.getcwd()
    print("current working directory"+saved_path)
    os.chdir(r"C:\OOP\prank")
    #rename the files
    for file_name in file_list:
        print("old name-"+file_name)
        #print("new name-"+file_name.strip("0123456789"))
        os.rename(file_name,file_name.translate(None,"0123456789"))
    os.chdir(saved_path)

rename_files()
</code></pre>
<p>The error is raised by the translate line. I am using translate to remove the digits from the filename. What should I do to fix it?</p>
<pre><code>Traceback (most recent call last):
  File "C:\Users\vikash\AppData\Local\Programs\Python\Python35-32\pythonprogram\secretName.py", line 17, in <module>
    rename_files()
  File "C:\Users\vikash\AppData\Local\Programs\Python\Python35-32\pythonprogram\secretName.py", line 15, in rename_files
    os.rename(file_name,file_name.translate(None,"0123456789"))
TypeError: translate() takes exactly one argument (2 given)
</code></pre>
| -2
|
2016-09-07T17:08:56Z
| 39,375,924
|
<p>Instead of translate, why not just do this:</p>
<pre><code>os.rename(file_name,''.join([i for i in file_name if not i.isdigit()]))
</code></pre>
| 0
|
2016-09-07T17:23:25Z
|
[
"python",
"string",
"file"
] |
translate() takes exactly one argument (2 given) in python error
| 39,375,712
|
<pre><code>import os
import re

def rename_files():
    # get the files from dir
    file_list=os.listdir(r"C:\OOP\prank")
    print(file_list)
    saved_path=os.getcwd()
    print("current working directory"+saved_path)
    os.chdir(r"C:\OOP\prank")
    #rename the files
    for file_name in file_list:
        print("old name-"+file_name)
        #print("new name-"+file_name.strip("0123456789"))
        os.rename(file_name,file_name.translate(None,"0123456789"))
    os.chdir(saved_path)

rename_files()
</code></pre>
<p>The error is raised by the translate line. I am using translate to remove the digits from the filename. What should I do to fix it?</p>
<pre><code>Traceback (most recent call last):
  File "C:\Users\vikash\AppData\Local\Programs\Python\Python35-32\pythonprogram\secretName.py", line 17, in <module>
    rename_files()
  File "C:\Users\vikash\AppData\Local\Programs\Python\Python35-32\pythonprogram\secretName.py", line 15, in rename_files
    os.rename(file_name,file_name.translate(None,"0123456789"))
TypeError: translate() takes exactly one argument (2 given)
</code></pre>
| -2
|
2016-09-07T17:08:56Z
| 39,375,994
|
<p><code>str.translate</code> requires a <code>dict</code> that maps unicode ordinals to other unicode ordinals (or <code>None</code> if you want to remove the character). You can create it like so:</p>
<pre><code>old_string = "file52.txt"
to_remove = "0123456789"
table = {ord(char): None for char in to_remove}
new_string = old_string.translate(table)
assert new_string == "file.txt"
</code></pre>
<p>However, there is a simpler way of making the table, by using the <code>str.maketrans</code> function. It can take a variety of arguments, but you want the three-arg form. We ignore the first two args, as they are for mapping characters to other characters. The third arg is the characters you wish to remove.</p>
<pre><code>old_string = "file52.txt"
to_remove = "0123456789"
table = str.maketrans("", "", to_remove)
new_string = old_string.translate(table)
assert new_string == "file.txt"
</code></pre>
| 1
|
2016-09-07T17:27:53Z
|
[
"python",
"string",
"file"
] |
django-tinymce widget not outputting correct error message
| 39,375,745
|
<p>I just set up django-tinymce and made some changes to my form to do it. However, now my form is no longer outputting the correct error message.</p>
<p>My form:</p>
<pre><code>TITLE_LENGTH_ERROR = "This title is too long, please make it 200 characters or less."
TITLE_EMPTY_ERROR = "You'll have to add a title."
TEXT_EMPTY_ERROR = "Please enter some text."
NO_CATEGORY_ERROR = "Please select a category."
NO_CITY_ERROR = "Please select a city."


class ArticleForm(ModelForm):
    text = forms.CharField(widget=TinyMCE(attrs={'cols': 80, 'rows': 30}))

    class Meta:
        model = Article
        fields = ['title', 'text', 'categories', 'city']
        widgets = {'title': forms.TextInput(attrs={
                       'placeholder': 'Enter a descriptive title'}),
                   'categories': forms.CheckboxSelectMultiple(choices=Category.CATEGORY_CHOICES),
                   'city': forms.RadioSelect(choices=City.CITY_CHOICES),
                   }
        error_messages = {
            'title': {
                'max_length': TITLE_LENGTH_ERROR,
                'required': TITLE_EMPTY_ERROR,
            },
            'text': {
                'required': TEXT_EMPTY_ERROR,
            },
            'categories': {
                'required': NO_CATEGORY_ERROR,
            },
            'city': {
                'required': NO_CITY_ERROR,
            }
        }
</code></pre>
<p>The test:</p>
<pre><code>from articles.models import Article, Category, City
from articles.forms import (
    ArticleForm,
    TITLE_LENGTH_ERROR,
    TITLE_EMPTY_ERROR,
    TEXT_EMPTY_ERROR,
    NO_CATEGORY_ERROR,
    NO_CITY_ERROR,
)


class ArticleFormTest(TestCase):

    def setUp(self):
        self.user = User.objects.create(username='testuser')
        self.user.set_password('12345')
        self.user.save()
        self.client.login(username='testuser', password='12345')

    def test_form_validation_for_blank_inputs(self):
        form = ArticleForm(data={'title': '', 'text': '', 'categories': '', 'city': '', 'author': self.user})
        self.assertFalse(form.is_valid())
        self.assertEqual(
            form.errors['text'],
            [TEXT_EMPTY_ERROR]
        )
</code></pre>
<p>The traceback:</p>
<pre><code>(venv) Robins-MacBook-Pro:togethere robin$ python manage.py test articles/
Creating test database for alias 'default'...
.F....................
======================================================================
FAIL: test_form_validation_for_blank_inputs (articles.tests.test_forms.ArticleFormTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/robin/work/2016-06-04_togethere/togethere/articles/tests/test_forms.py", line 36, in test_form_validation_for_blank_inputs
    [TEXT_EMPTY_ERROR]
AssertionError: ['This field is required.'] != ['Please enter some text.']

----------------------------------------------------------------------
Ran 22 tests in 4.171s

FAILED (failures=1)
Destroying test database for alias 'default'...
</code></pre>
<p>How do I make the form output the correct error message? Also, is it possible to declare the tinymce widget in the same way as the other widgets?</p>
| 0
|
2016-09-07T17:11:16Z
| 39,392,900
|
<p>Ok so a little more explanation and a couple links that might help shed some light on this. In the official Django documentation (<a href="https://docs.djangoproject.com/en/1.10/ref/forms/fields/" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/forms/fields/</a>) there is an example of using form field validation with custom error messages. The <code>error_messages</code> must be defined within the field you want the error message associated with:</p>
<pre><code>text = forms.CharField(widget=TinyMCE(attrs={'cols': 80, 'rows': 30}), error_messages = { 'required': TEXT_EMPTY_ERROR})
</code></pre>
<p>That being said, the Django ModelForm documentation (<a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/</a>) also shows creating custom error_messages in the Meta class of a model form. Since the first line works, that's great, but if you want to declare error messages in the manner you originally posted, try following that documentation. Under the "Overriding the default fields" section they show an example very similar to what you are doing, which should give you an idea of what went wrong: because you declare <code>text</code> explicitly on the form, Django ignores the Meta options for that field, so the Meta <code>error_messages</code> never apply to it. Hopefully this gives a bit of insight into Django and testing!</p>
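<p>For illustration, a sketch of the Meta-only variant; note that it relies on <em>not</em> re-declaring <code>text</code> at class level, since Meta options only apply to auto-generated fields:</p>
<pre><code>class ArticleForm(ModelForm):
    class Meta:
        model = Article
        fields = ['title', 'text', 'categories', 'city']
        # widgets and error_messages only affect fields Django generates itself
        widgets = {'text': TinyMCE(attrs={'cols': 80, 'rows': 30})}
        error_messages = {
            'text': {'required': TEXT_EMPTY_ERROR},
        }
</code></pre>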
| 1
|
2016-09-08T13:56:49Z
|
[
"python",
"django",
"django-forms",
"tinymce",
"django-tinymce"
] |
Is that Insertionsort?
| 39,375,805
|
<p>I really need your help. I am learning sorting algorithms and I tried to write an insertion sort. Could you please tell me whether this is an insertion sort algorithm or not?</p>
<pre><code>def insertsort(l):
    for k in range(len(l)):
        for i in range(1,len(l)):
            if l[i]<l[i-1]:
                while l[i]<l[i-1]:
                    l.insert(i-1,l.pop(i))
    return l
</code></pre>
| -2
|
2016-09-07T17:16:29Z
| 39,375,992
|
<p>Yes, it is insertion sort. The pseudocode is as follows:</p>
<pre><code>1. for j = 2 to n
2.     key ← A[j]
3.     // Insert A[j] into the sorted sequence A[1..j-1]
4.     i ← j - 1
5.     while i > 0 and A[i] > key
6.         A[i+1] ← A[i]
7.         i ← i - 1
8.     A[i+1] ← key
</code></pre>
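<p>For comparison, a direct Python translation of that pseudocode (a sketch using zero-based indexing):</p>
<pre><code>def insertion_sort(a):
    for j in range(1, len(a)):
        key = a[j]
        # shift the sorted prefix right until the slot for key opens up
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a
</code></pre>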
| 0
|
2016-09-07T17:27:51Z
|
[
"python",
"algorithm",
"sorting"
] |
Is that Insertionsort?
| 39,375,805
|
<p>I really need your help. I am learning sorting algorithms and I tried to write an insertion sort. Could you please tell me whether this is an insertion sort algorithm or not?</p>
<pre><code>def insertsort(l):
    for k in range(len(l)):
        for i in range(1,len(l)):
            if l[i]<l[i-1]:
                while l[i]<l[i-1]:
                    l.insert(i-1,l.pop(i))
    return l
</code></pre>
| -2
|
2016-09-07T17:16:29Z
| 39,376,568
|
<p>I do not think so. You used two nested <strong>for</strong> loops and a <strong>while</strong>. In the pseudocode provided by @dreadedHarvester and the implementation provided in <a href="https://rosettacode.org/wiki/Sorting_algorithms/Insertion_sort#Python" rel="nofollow">RosettaCode</a>, just one <strong>for</strong> loop and one <strong>while</strong> are used. </p>
<pre><code>def insertion_sort(l):
    for i in xrange(1, len(l)):
        j = i-1
        key = l[i]
        # check j >= 0 first so l[j] never wraps around to the end of the list
        while j >= 0 and l[j] > key:
            l[j+1] = l[j]
            j -= 1
        l[j+1] = key
</code></pre>
| 0
|
2016-09-07T18:06:39Z
|
[
"python",
"algorithm",
"sorting"
] |
django rest framework manually display 404 page
| 39,375,826
|
<p>So I have typical generic view:</p>
<pre><code>class FooListAPIView(generics.ListAPIView):
    serializer_class = FooSerializer
    lookup_fields = ('area_id', 'category_id', )

    def get_queryset(self):
        area = Area.objects.get(pk=self.kwargs.get('area_id'))
        area_tree = area.get_tree(parent=area)  # returns queryset
        category = Category.objects.get(pk=self.kwargs.get('category_id'))
        queryset = Foo.objects.filter(area__in=area_tree, category=category)
        return queryset

    def get_object(self):
        queryset = self.get_queryset()
        queryset = self.filter_queryset(queryset)
        filter = {}
        for field in self.lookup_fields:
            filter[field] = self.kwargs[field]
        return get_object_or_404(queryset, **filter)
</code></pre>
<p>My problem is that if I try to get an area or category object which doesn't exist, the browser throws me this error: </p>
<blockquote>
<p>Area matching query does not exist.</p>
</blockquote>
<p>How can I make it so, that when Area matching query does not exist, I get standard rest framework 404 response?</p>
| 2
|
2016-09-07T17:17:40Z
| 39,376,960
|
<p>The problem here is that <code>get_queryset</code> doesn't really expect any failures. In your case, although you are returning a queryset, you seem to be hitting the database with the <code>Area.objects.get(pk=self.kwargs.get('area_id'))</code> call. When this fails, it violates the I/O defined by <code>get_queryset</code> which isn't expecting the <code>Area.DoesNotExist</code> exception. So it fails and you end up with a Django 500 error.</p>
<p>What you need to ensure is that the <code>get_queryset</code> method returns a queryset, preferably without making any calls to the DB (I say preferably, since there is no such rule that says it shouldn't hit the DB, but it's generally understood that <code>get_queryset</code> won't be the one to actually perform the DB query). Then, if you must, you can freely perform any get operations on your DB inside <code>get_object</code> with the <code>get_object_or_404</code> shortcut. Since <code>get_object_or_404</code> raises an <code>Http404</code> exception and <code>get_object</code> knows how to handle this exception, it will gracefully return the 404 page that you are expecting.</p>
<p>If you can ensure your <code>area.get_tree</code> implementation can work with a parent queryset, instead of a parent object, then you could do something like this:</p>
<pre><code>class FooListAPIView(generics.ListAPIView):
    serializer_class = FooSerializer
    lookup_fields = ('area_id', 'category_id', )

    def get_queryset(self):
        area = Area.objects.filter(pk=self.kwargs.get('area_id'))
        area_tree = area.get_tree(parent=area)  # returns queryset
        category = Category.objects.filter(pk=self.kwargs.get('category_id'))
        queryset = Foo.objects.filter(area__in=area_tree, category__in=category)
        return queryset

    def get_object(self):
        queryset = self.get_queryset()
        queryset = self.filter_queryset(queryset)
        filter = {}
        for field in self.lookup_fields:
            filter[field] = self.kwargs[field]
        return get_object_or_404(queryset, **filter)
</code></pre>
<p>If you are unable to get <code>area.get_tree</code> to work with a queryset, then you can delay some of your <code>get_queryset</code> logic to <code>get_object</code>. Like so:</p>
<pre><code>from django.http import Http404


class FooListAPIView(generics.ListAPIView):
    serializer_class = FooSerializer
    lookup_fields = ('area_id', 'category_id', )
    queryset = Foo.objects.all()

    def get_object(self):
        queryset = self.get_queryset()
        queryset = self.filter_queryset(queryset)
        area = get_object_or_404(Area, **{'pk': self.kwargs.get('area_id')})
        area_tree = area.get_tree(parent=area)
        category = get_object_or_404(Category, **{'pk': self.kwargs.get('category_id')})
        queryset = queryset.filter(area__in=area_tree, category=category)
        filter = {}
        for field in self.lookup_fields:
            filter[field] = self.kwargs[field]
        return get_object_or_404(queryset, **filter)
</code></pre>
| 0
|
2016-09-07T18:36:10Z
|
[
"python",
"django",
"django-rest-framework"
] |
Create pandas dataframe column from another column that has dictionary keys
| 39,375,878
|
<p>I have dataframe and a dict. These look like,</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({'first':['john','oliver','sarah']})
df1_map = {'john': 'anderson', 'oliver': 'smith', 'sarah' : 'shively'}
print (df1)
print (df1_map)

    first
0    john
1  oliver
2   sarah
{'oliver': 'smith', 'sarah': 'shively', 'john': 'anderson'}
</code></pre>
<p>The values of df1['first'] represent the key values of the dict.</p>
<p>I would like to add a second column to the data frame called df1['second'] so the dict relationship is maintained to get the following dataframe,</p>
<pre><code>    first      last
0    john  anderson
1  oliver     smith
2   sarah   shively
</code></pre>
<p>Now, I could just iterate over the dataframe values, like so,</p>
<pre><code>df1['last'] = [ df1_map[i] for i in list(df1['first'])]
</code></pre>
<p>I was wondering if pandas supports a vectorized implementation / function that can do this without iterating over the rows of the df.</p>
| 2
|
2016-09-07T17:20:52Z
| 39,375,928
|
<p>You can just map the dictionary's values directly onto the keys with:</p>
<pre><code>df1['last'] = df1['first'].map(df1_map)
</code></pre>
<p>result is:</p>
<pre><code>Out[6]:
    first      last
0    john  anderson
1  oliver     smith
2   sarah   shively
</code></pre>
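<p>One thing to keep in mind: keys missing from the dict come back as <code>NaN</code>, so you may want to chain <code>fillna</code> if that matters, e.g. (the fill value here is just a placeholder):</p>
<pre><code>df1['last'] = df1['first'].map(df1_map).fillna('unknown')
</code></pre>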
| 5
|
2016-09-07T17:23:43Z
|
[
"python",
"pandas"
] |