title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Run Python function with input arguments from command line | 38,993,606 | <p>New to Python, used to use MATLAB.</p>
<p>My function convert.py is:</p>
<pre><code>def convert(a,b)
    factor = 2194.2
    return (a-b)*factor
</code></pre>
<p>How do I run it from the command line with input arguments 'a' and 'b' ?
I tried:</p>
<pre><code>python convert.py 32 46
</code></pre>
<p>But got an error.</p>
<p>I did try to find the answer online, found related things but not the answer:</p>
<ol>
<li><a href="http://stackoverflow.com/questions/3987041/python-run-function-from-the-command-line">Python: Run function from the command line</a> (Stack Overflow)</li>
<li><a href="http://stackoverflow.com/questions/1009860/command-line-arguments-in-python">Command Line Arguments In Python</a> (Stack Overflow)</li>
<li><a href="http://www.cyberciti.biz/faq/python-command-line-arguments-argv-example/" rel="nofollow">http://www.cyberciti.biz/faq/python-command-line-arguments-argv-example/</a></li>
<li><a href="http://www.saltycrane.com/blog/2007/12/how-to-pass-command-line-arguments-to/" rel="nofollow">http://www.saltycrane.com/blog/2007/12/how-to-pass-command-line-arguments-to/</a></li>
</ol>
<p>Also, where can I find the answer myself so that I can save this forum for more non-trivial questions?</p>
| -2 | 2016-08-17T10:05:56Z | 38,993,808 | <p>You could do:</p>
<pre><code>import sys
def convert(a,b):
    factor = 2194.2
    return (a-b)*factor

print(convert(int(sys.argv[1]), int(sys.argv[2])))
</code></pre>
<p>If that is all the script should do, you don't have to define a function:</p>
<pre><code>import sys
factor = 2194.2
print((int(sys.argv[1]) - int(sys.argv[2])) * factor)
</code></pre>
<p>If you don't want to change your file (though you still have to add the colon after the function definition), you could follow your first linked approach:</p>
<pre><code>python -c 'import convert, sys; print convert.convert(int(sys.argv[1]), int(sys.argv[2])'
</code></pre>
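<p>For anything beyond a quick script, the standard-library <code>argparse</code> module gives you type conversion, usage messages, and argument validation for free. A sketch, assuming the same <code>convert</code> function as above (the explicit list passed to <code>parse_args</code> stands in for the command line <code>python convert.py 32 46</code>):</p>

```python
import argparse

def convert(a, b):
    factor = 2194.2
    return (a - b) * factor

parser = argparse.ArgumentParser(description='Convert the difference of two numbers.')
parser.add_argument('a', type=float)  # argparse handles the str -> float conversion
parser.add_argument('b', type=float)

# Equivalent to running: python convert.py 32 46
args = parser.parse_args(['32', '46'])
print(convert(args.a, args.b))
```

<p>Passing a list to <code>parse_args</code> is only for demonstration; in the real script you would call <code>parser.parse_args()</code> with no arguments so it reads <code>sys.argv</code>.</p>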
| 0 | 2016-08-17T10:15:04Z | [
"python",
"function",
"input",
"command-line",
"arguments"
] |
accessing client's x509 certificate from within twisted web WSGI app | 38,993,831 | <p>I have set up a twisted + flask https server that also does certificate-based client authentication by following the documentation at the Twisted site <a href="http://twistedmatrix.com/documents/current/core/howto/ssl.html#tls-server-with-client-authentication-via-client-certificate-verification" rel="nofollow">here</a>. So far, so good.</p>
<p>In addition to authenticating the client using a certificate, the application code within the flask app needs the user name (which is present in the client x509 certificate) in order to do its job. I couldn't find an easy way to access this information. The information (based on the documentation) seems to be in the pyopenssl X509Name object at the time it does authentication, and I need the identity at the flask layer every time I process a request from that client.</p>
<p>The request object flask is getting did not seem to have this information (unless I read it wrong), so I assume I need to modify some options at the Twisted level to send them through to flask. I also need to somehow get them out of the OpenSSL layer.</p>
<p>How would you do this?</p>
| 0 | 2016-08-17T10:16:21Z | 38,999,544 | <p><strong>Updated</strong>: using <code>HTTPChannel.allHeadersReceived</code> instead of <code>Protocol.dataReceived</code> for support of chunked requests.</p>
<p>You can use HTTP headers to store connection information: set them up in <code>HTTPChannel.allHeadersReceived</code> method and retrieve from <code>flask.request.headers</code>, e.g.:</p>
<pre><code>from twisted.application import internet, service
from twisted.internet import reactor
from twisted.web.http import HTTPChannel
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource
from flask import Flask, request
app = Flask('app')

@app.route('/')
def index():
    return 'User ID: %s' % request.headers['X-User-Id']

class MyHTTPChannel(HTTPChannel):
    def allHeadersReceived(self):
        user_id = 'my_user_id'
        req = self.requests[-1]
        req.requestHeaders.addRawHeader('X-User-Id', user_id)
        HTTPChannel.allHeadersReceived(self)

class MySite(Site):
    protocol = MyHTTPChannel

application = service.Application('myapplication')
service = service.IServiceCollection(application)
http_resource = WSGIResource(reactor, reactor.getThreadPool(), app)
http_site = MySite(http_resource)
internet.TCPServer(8008, http_site).setServiceParent(service)
</code></pre>
<p>I'm not familiar with using client certificates in twisted. I assume you can retrieve its information in <code>Protocol.transport</code>.</p>
| 0 | 2016-08-17T14:32:38Z | [
"python",
"flask",
"openssl",
"twisted"
] |
How to join a list while preserving previous structure? | 38,993,845 | <p>I am having trouble joining a pre-split string after modification while preserving the previous structure.</p>
<p>say I have a string like this:</p>
<pre><code>string = """
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
"""
</code></pre>
<p>I have to do some tests of that string.. finding specific words and characters within those words etc...and then replace them accordingly. so to accomplish that I had to break it up using </p>
<pre><code>string.split()
</code></pre>
<p>The problem with this is that split also gets rid of the \n and extra spaces, immediately ruining the integrity of the previous structure.</p>
<p>Are there some extra methods in split that will allow me to accomplish this or should I seek an alternative route?</p>
<p>Thank you.</p>
| 0 | 2016-08-17T10:16:50Z | 38,993,890 | <p>The <code>split</code> method takes an optional argument to specify the delimiter. If you only want to split words using space (<code>' '</code>) characters, you can pass that as an argument:</p>
<pre><code>>>> string = """
...
... This is a nice piece of string isn't it?
... I assume it is so. I have to keep typing
... to use up the space. La-di-da-di-da.
...
... Bonjour.
... """
>>>
>>> string.split()
['This', 'is', 'a', 'nice', 'piece', 'of', 'string', "isn't", 'it?', 'I', 'assume', 'it', 'is', 'so.', 'I', 'have', 'to', 'keep', 'typing', 'to', 'use', 'up', 'the', 'space.', 'La-di-da-di-da.', 'Bonjour.']
>>> string.split(' ')
['\n\nThis', 'is', 'a', 'nice', 'piece', 'of', 'string', "isn't", 'it?\nI', 'assume', 'it', 'is', 'so.', 'I', 'have', 'to', 'keep', 'typing\nto', 'use', 'up', 'the', 'space.', 'La-di-da-di-da.\n\nBonjour.\n']
>>>
</code></pre>
| 1 | 2016-08-17T10:19:19Z | [
"python",
"list",
"formatting"
] |
How to join a list while preserving previous structure? | 38,993,845 | <p>I am having trouble joining a pre-split string after modification while preserving the previous structure.</p>
<p>say I have a string like this:</p>
<pre><code>string = """
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
"""
</code></pre>
<p>I have to do some tests of that string.. finding specific words and characters within those words etc...and then replace them accordingly. so to accomplish that I had to break it up using </p>
<pre><code>string.split()
</code></pre>
<p>The problem with this is that split also gets rid of the \n and extra spaces, immediately ruining the integrity of the previous structure.</p>
<p>Are there some extra methods in split that will allow me to accomplish this or should I seek an alternative route?</p>
<p>Thank you.</p>
| 0 | 2016-08-17T10:16:50Z | 38,993,892 | <p>The split method will split your string on all whitespace by default. If you want to split the lines separately, you can first split your string on newlines, then split each line on whitespace:</p>
<pre><code>>>> [line.split() for line in string.strip().split('\n')]
[['This', 'is', 'a', 'nice', 'piece', 'of', 'string', "isn't", 'it?'], ['I', 'assume', 'it', 'is', 'so.', 'I', 'have', 'to', 'keep', 'typing'], ['to', 'use', 'up', 'the', 'space.', 'La-di-da-di-da.'], [], ['Bonjour.']]
</code></pre>
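<p>A related option (not shown in the answer above) is <code>str.splitlines(keepends=True)</code>, which keeps the newline characters attached to each line, so the original text can be rebuilt exactly after per-line processing. The sample text here is just an illustration:</p>

```python
text = "This is line one\nSecond line\n\nBonjour.\n"

lines = text.splitlines(keepends=True)    # each element still ends with its '\n'
words = [line.split() for line in lines]  # per-line word lists for inspection

# nothing was lost: joining the kept lines restores the original text
assert ''.join(lines) == text
print(words)
```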
| 1 | 2016-08-17T10:19:25Z | [
"python",
"list",
"formatting"
] |
How to join a list while preserving previous structure? | 38,993,845 | <p>I am having trouble joining a pre-split string after modification while preserving the previous structure.</p>
<p>say I have a string like this:</p>
<pre><code>string = """
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
"""
</code></pre>
<p>I have to do some tests of that string.. finding specific words and characters within those words etc...and then replace them accordingly. so to accomplish that I had to break it up using </p>
<pre><code>string.split()
</code></pre>
<p>The problem with this is that split also gets rid of the \n and extra spaces, immediately ruining the integrity of the previous structure.</p>
<p>Are there some extra methods in split that will allow me to accomplish this or should I seek an alternative route?</p>
<p>Thank you.</p>
| 0 | 2016-08-17T10:16:50Z | 38,993,901 | <p>Just do <code>string.split(' ')</code> (note the space argument to the <code>split</code> method).</p>
<p>This will keep your precious newlines within the strings that go into the resulting array...</p>
| 0 | 2016-08-17T10:19:59Z | [
"python",
"list",
"formatting"
] |
How to join a list while preserving previous structure? | 38,993,845 | <p>I am having trouble joining a pre-split string after modification while preserving the previous structure.</p>
<p>say I have a string like this:</p>
<pre><code>string = """
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
"""
</code></pre>
<p>I have to do some tests of that string.. finding specific words and characters within those words etc...and then replace them accordingly. so to accomplish that I had to break it up using </p>
<pre><code>string.split()
</code></pre>
<p>The problem with this is that split also gets rid of the \n and extra spaces, immediately ruining the integrity of the previous structure.</p>
<p>Are there some extra methods in split that will allow me to accomplish this or should I seek an alternative route?</p>
<p>Thank you.</p>
| 0 | 2016-08-17T10:16:50Z | 38,994,000 | <p>Just split with a delimiter:</p>
<pre><code>>>> a = string.split(' ')
>>> a
['\n\nThis', 'is', 'a', 'nice', 'piece', 'of', 'string', "isn't", 'it?\nI', 'assume', 'it', 'is', 'so.', 'I', 'have', 'to', 'keep', 'typing\nto', 'use', 'up', 'the', 'space.', 'La-di-da-di-da.\n\nThis', '', '', 'is', '', '', '', 'a', '', '', '', 'spaced', '', '', 'out', '', '', 'sentence\n\nBonjour.\n']
</code></pre>
<p>And to get it back:</p>
<pre><code>>>> print(' '.join(a))
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
</code></pre>
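<p>If you need to modify individual words and then restore the text exactly (including runs of spaces), an alternative not shown in the answers above is <code>re.split</code> with a capturing group: the whitespace runs are kept as list items, so the final join is lossless. The sample text and the uppercasing step are just an illustration:</p>

```python
import re

text = "This  is   a\nspaced out sentence\n\nBonjour."

# the capturing group makes re.split keep the whitespace runs in the result
parts = re.split(r'(\s+)', text)

# words sit at even indices, whitespace runs at odd indices;
# modify only the words, leaving the whitespace untouched
parts = [p.upper() if i % 2 == 0 else p for i, p in enumerate(parts)]

restored = ''.join(parts)
print(restored)
```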
| 1 | 2016-08-17T10:24:23Z | [
"python",
"list",
"formatting"
] |
How to join a list while preserving previous structure? | 38,993,845 | <p>I am having trouble joining a pre-split string after modification while preserving the previous structure.</p>
<p>say I have a string like this:</p>
<pre><code>string = """
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
"""
</code></pre>
<p>I have to do some tests of that string.. finding specific words and characters within those words etc...and then replace them accordingly. so to accomplish that I had to break it up using </p>
<pre><code>string.split()
</code></pre>
<p>The problem with this is that split also gets rid of the \n and extra spaces, immediately ruining the integrity of the previous structure.</p>
<p>Are there some extra methods in split that will allow me to accomplish this or should I seek an alternative route?</p>
<p>Thank you.</p>
| 0 | 2016-08-17T10:16:50Z | 38,994,548 | <p>You can save the spaces in another list; then, after modifying the words list, you join them back together.</p>
<pre><code>In [1]: from nltk.tokenize import RegexpTokenizer
In [2]: spacestokenizer = RegexpTokenizer(r'\s+', gaps=False)
In [3]: wordtokenizer = RegexpTokenizer(r'\s+', gaps=True)
In [4]: string = """
...:
...: This is a nice piece of string isn't it?
...: I assume it is so. I have to keep typing
...: to use up the space. La-di-da-di-da.
...:
...: This is a spaced out sentence
...:
...: Bonjour.
...: """
In [5]: spaces = spacestokenizer.tokenize(string)
In [6]: words = wordtokenizer.tokenize(string)
In [7]: print ''.join([s+w for s, w in zip(spaces, words)])
This is a nice piece of string isn't it?
I assume it is so. I have to keep typing
to use up the space. La-di-da-di-da.
This is a spaced out sentence
Bonjour.
</code></pre>
| 0 | 2016-08-17T10:49:16Z | [
"python",
"list",
"formatting"
] |
Fetching collection inside a subquery | 38,994,074 | <p>Supposing we have two tables, linked by a many-to-many relationship.</p>
<pre><code>class Student(db.Model):
    id = db.Column(UUIDType, primary_key=True)
    name = db.Column(db.String(255))
    courses = db.relationship('Course',
                              secondary=student_courses,
                              backref=db.backref('students'))

class Course(db.Model):
    id = db.Column(UUIDType, primary_key=True)
    name = db.Column(db.String(255))
</code></pre>
<p>I am trying to query the name of the students with the names of the courses s/he is subscribed to using a subquery, but it only shows the name of the first matching course (not all of them). In other words, I would like to retrieve <code>(student_id, student_name, [list of course_names])</code>.</p>
<pre><code>sq = db.session.query(Student.id.label('student_id'),
                      Course.id.label('course_id'),
                      Course.name.label('course_name')) \
    .join(Student.courses) \
    .group_by(Student.id, Course.id).subquery('pattern_links_sq')

db.session.query(Student.id, Student.name, sq.c.course_name) \
    .join(Student.courses) \
    .filter(Student.id == sq.c.student_id).all()
</code></pre>
| 1 | 2016-08-17T10:28:05Z | 38,998,629 | <p>You can use <a href="https://www.postgresql.org/docs/9.5/static/functions-aggregate.html" rel="nofollow"><code>array_agg</code></a> function in PostgreSQL</p>
<pre><code>from sqlalchemy import func

db.session.query(Student.id, Student.name, func.array_agg(Course.name))\
    .join(Student.courses)\
    .group_by(Student.id)\
    .all()
</code></pre>
| 0 | 2016-08-17T13:52:01Z | [
"python",
"sqlalchemy",
"flask-sqlalchemy"
] |
django change list display column name with foreign field value | 38,994,195 | <p>I am using <code>django 1.9</code> , i have following model which have two foreing fields,</p>
<p><strong>models.py</strong></p>
<pre><code>class Mobile(models.Model):
    name = models.CharField(max_length=100)
    product = models.ForeignKey(Product)
    owner = models.ForeignKey(Customer)
</code></pre>
<p>now i want to build a admin interface for <code>Mobile</code> model where i want to provide list display for both <code>product</code> and <code>owner</code> foreign fields,that is, the list display will show one of the field of that foreign fields and also want to change the column name,i have tried so but its not working at all</p>
<p><strong>admin.py</strong></p>
<pre><code>class MobileModelAdmin(admin.ModelAdmin):
    list_display = ('name', 'product', 'owner')

    def related_product(self, obj):
        return obj.product.name
    related_product.short_description = 'product name'

    def related_owner(self, obj):
        return obj.owner.name
    related_owner.short_description = 'owner name'

admin.site.register(Mobile, MobileModelAdmin)
</code></pre>
<p>but neither its changing the column name nor showing the related value.</p>
| 0 | 2016-08-17T10:33:39Z | 38,994,470 | <p>I think you have to do</p>
<pre><code>list_display = ('name','related_product', 'related_owner') #means method name what you are given.
</code></pre>
| 0 | 2016-08-17T10:45:42Z | [
"python",
"django"
] |
Storing numpy array in sqlite3 database with python issue | 38,994,218 | <p>I have a problem with storing a numpy array in sqlite database. I have 1 table with Name and Data.</p>
<pre><code>import sqlite3 as sql
from DIP import dip # function to caclculate numpy array
name = input('Enter your full name\t')
data = dip()
con = sql.connect('Database.db')
c = con.cursor()
c.execute("CREATE TABLE IF NOT EXISTS database(Name text, Vein real )")
con.commit()
c.execute("INSERT INTO database VALUES(?,?)", (name, data))
con.commit()
c.execute("SELECT * FROM database")
df = c.fetchall()
print(data)
print(df)
con.close()
</code></pre>
<p>Everything is fine but when Data is being stored instead of this:</p>
<pre><code>[('Name', 0.03908678 0.04326234 0.18298542 ..., 0.15228545 0.09972548 0.03992807)]
</code></pre>
<p>I have this:</p>
<pre><code>[('Name', b'\xccX+\xa8.\x03\xa4?\xf7\xda[\x1f ..., x10l\xc7?\xbf\x14\x12\)]
</code></pre>
<p>What is problem with this? Thank you.</p>
<p>P.S. I tried the solution from here <a href="http://stackoverflow.com/questions/18621513/python-insert-numpy-array-into-sqlite3-database">Python insert numpy array into sqlite3 database</a> but it didn't work. And my numpy array is being calculated from skimage (scikit-image) library with HOG (histogram of oriented gradients). Maybe that's a problem...
Also tried to calculate and store it from opencv3 but have the same issue.</p>
| 0 | 2016-08-17T10:34:36Z | 39,004,777 | <p>On the assumption that it is saving <code>data.tostring()</code> to the database, I tried decoding it with <code>fromstring</code>.</p>
<p>Using your displayed string, and trimming off a few bytes I got:</p>
<pre><code>In [79]: np.fromstring(b'\xccX+\xa8.\x03\xa4?\xf7\xda[\x1f\x10l\xc7?', float)
Out[79]: array([ 0.03908678, 0.18298532])
</code></pre>
<p>There's at least one matching number, so this looks promising.</p>
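<p>For storing the array so that it comes back as an array (rather than raw bytes), the adapter/converter recipe from the linked question can be wired up like this. This is a sketch, assuming the <code>Name</code>/<code>Vein</code> schema from the question; the table and sample values are illustrative:</p>

```python
import io
import sqlite3
import numpy as np

def adapt_array(arr):
    """Serialize a numpy array to bytes (npy format) for sqlite storage."""
    out = io.BytesIO()
    np.save(out, arr)
    return sqlite3.Binary(out.getvalue())

def convert_array(blob):
    """Deserialize bytes from sqlite back into a numpy array."""
    return np.load(io.BytesIO(blob))

sqlite3.register_adapter(np.ndarray, adapt_array)
sqlite3.register_converter("array", convert_array)

# detect_types makes sqlite3 apply the converter for columns declared as "array"
con = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE IF NOT EXISTS samples(Name text, Vein array)")

data = np.array([0.03908678, 0.04326234, 0.18298542])
con.execute("INSERT INTO samples VALUES (?, ?)", ('Name', data))
name, vein = con.execute("SELECT * FROM samples").fetchone()
print(name, vein)
con.close()
```

<p>The key point is declaring the column type as <code>array</code> and opening the connection with <code>detect_types=sqlite3.PARSE_DECLTYPES</code>, so the converter runs on the way out.</p>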
| 0 | 2016-08-17T19:27:41Z | [
"python",
"arrays",
"sqlite",
"numpy",
"sqlite3"
] |
Split a list of dictionary if the value is empty? | 38,994,257 | <p>I want to split a list of dictionaries <code>if the value is empty</code> and create a new list of lists.</p>
<p>Input : </p>
<pre><code>[{'k':'a'},{'k':'b'},{'k':''},{'k':'d'},{'k':''},{'k':'f'},{'k':'g'}]
</code></pre>
<p>Output : </p>
<pre><code>[[{'k': 'a'}, {'k': 'b'}, {'k': ''}], [{'k': 'd'}, {'k': ''}], [{'k': 'f'}, {'k': 'g'}]]
</code></pre>
<p>I have tried using loop, if and its working fine.</p>
<pre><code>sub_header_list = [{'k':'a'},{'k':'b'},{'k':''},{'k':'d'},{'k':''},{'k':'f'},{'k':'g'}]
index_data = [] ; data_list = []
for i in sub_header_list:
    index_data.append(i)
    if i['k'] == '':
        data_list.append(index_data)
        index_data = []
print(data_list+[index_data])
[[{'k': 'a'}, {'k': 'b'}, {'k': ''}], [{'k': 'd'}, {'k': ''}], [{'k': 'f'}, {'k': 'g'}]]
</code></pre>
<p>Is there any pythonic way to achieve the same, i mean by using in-built functions or something else ?</p>
| 1 | 2016-08-17T10:36:01Z | 38,994,332 | <p>You can use a <a href="https://docs.python.org/dev/library/itertools.html#itertools.groupby" rel="nofollow"><em>groupby</em></a>:</p>
<pre><code>from itertools import groupby, chain
l = [{'k':'a'},{'k':'b'},{'k':''},{'k':'d'},{'k':''},{'k':'f'},{'k':'g'}]
grps = groupby(l, lambda d: d["k"] == "")
print([list(chain(*(v, next(grps, [[], []])[1]))) for k, v in grps if k])
</code></pre>
<p>Output:</p>
<pre><code>[[{'k': 'a'}, {'k': 'b'}, {'k': ''}], [{'k': 'd'}, {'k': ''}], [{'k': 'f'}, {'k': 'g'}]]
</code></pre>
<p>Or use a generator function:</p>
<pre><code>def grp(lst):
    temp = []
    for dct in lst:
        # `not dct["k"]` would also catch None and 0; for just empty strings use dct["k"] == ""
        if not dct["k"]:
            temp.append(dct)
            yield temp
            temp = []
        else:
            temp.append(dct)
    yield temp
</code></pre>
<p>Which gives you the same output:</p>
<pre><code>In [9]: list(grp(l))
Out[9]:
[[{'k': 'a'}, {'k': 'b'}, {'k': ''}],
[{'k': 'd'}, {'k': ''}],
[{'k': 'f'}, {'k': 'g'}]]
</code></pre>
<p>The generator function is by far the most efficient approach.</p>
<pre><code>In [8]: l = [{'k':'a'}, {'k':'b'}, {'k':''}, {'k':'d'}, {'k':''}, {'k':'f'}, {'k':'g'}]
In [9]: from random import choice; l = [dict(choice(l)) for _ in range(100000)]
In [10]: timeit list(grp(l))
10 loops, best of 3: 19.5 ms per loop
In [11]: %%timeit
index_list = [i + 1 for i, x in enumerate(l) if x == {'k': ''}]
[l[i:j] for i, j in zip([0] + index_list, index_list + [len(l)])]
....:
10 loops, best of 3: 31.6 ms per loop
In [12]: %%timeit grps = groupby(l, lambda d: d["k"] == "")
[list(chain(*(v, next(grps, [[], []])[1]))) for k, v in grps if k]
....:
10 loops, best of 3: 40 ms per loop
</code></pre>
| 2 | 2016-08-17T10:39:26Z | [
"python",
"list",
"loops",
"dictionary"
] |
Split a list of dictionary if the value is empty? | 38,994,257 | <p>I want to split a list of dictionaries <code>if the value is empty</code> and create a new list of lists.</p>
<p>Input : </p>
<pre><code>[{'k':'a'},{'k':'b'},{'k':''},{'k':'d'},{'k':''},{'k':'f'},{'k':'g'}]
</code></pre>
<p>Output : </p>
<pre><code>[[{'k': 'a'}, {'k': 'b'}, {'k': ''}], [{'k': 'd'}, {'k': ''}], [{'k': 'f'}, {'k': 'g'}]]
</code></pre>
<p>I have tried using loop, if and its working fine.</p>
<pre><code>sub_header_list = [{'k':'a'},{'k':'b'},{'k':''},{'k':'d'},{'k':''},{'k':'f'},{'k':'g'}]
index_data = [] ; data_list = []
for i in sub_header_list:
    index_data.append(i)
    if i['k'] == '':
        data_list.append(index_data)
        index_data = []
print(data_list+[index_data])
[[{'k': 'a'}, {'k': 'b'}, {'k': ''}], [{'k': 'd'}, {'k': ''}], [{'k': 'f'}, {'k': 'g'}]]
</code></pre>
<p>Is there any pythonic way to achieve the same, i mean by using in-built functions or something else ?</p>
| 1 | 2016-08-17T10:36:01Z | 38,994,453 | <p>This is another Pythonic way of doing it:</p>
<pre><code>>>> d = [{'k':'a'}, {'k':'b'}, {'k':''}, {'k':'d'}, {'k':''}, {'k':'f'}, {'k':'g'}]
>>> index_list = [i + 1 for i, x in enumerate(d) if x == {'k': ''}]
>>> [d[i:j] for i, j in zip([0] + index_list, index_list + [len(d)])]
[[{'k': 'a'}, {'k': 'b'}, {'k': ''}], [{'k': 'd'}, {'k': ''}], [{'k': 'f'}, {'k': 'g'}]]
</code></pre>
| 2 | 2016-08-17T10:44:57Z | [
"python",
"list",
"loops",
"dictionary"
] |
How to create this sequence in ruby? | 38,994,265 | <p>When we
input : 10</p>
<p>output :01 02 03 04 05 06 07 08 09 10</p>
<p>When we
input :103</p>
<p>output :001 002 003...010 011 012 013.....100 101 102 103</p>
<p>How to create this sequence in ruby or python ?</p>
| -2 | 2016-08-17T10:36:29Z | 38,994,494 | <p>A very basic Python implementation. Note that it's a generator so it returns one value at a time.</p>
<pre><code>def get_range(n):
    len_n = len(str(n))
    for num in range(1, n + 1):
        output = str(num)
        while len(output) < len_n:
            output = '0' + output
        yield output

for i in get_range(100):
    print(i)
>> 001
002
...
...
009
010
011
..
..
099
100
</code></pre>
| 1 | 2016-08-17T10:47:00Z | [
"python",
"ruby"
] |
How to create this sequence in ruby? | 38,994,265 | <p>When we
input : 10</p>
<p>output :01 02 03 04 05 06 07 08 09 10</p>
<p>When we
input :103</p>
<p>output :001 002 003...010 011 012 013.....100 101 102 103</p>
<p>How to create this sequence in ruby or python ?</p>
| -2 | 2016-08-17T10:36:29Z | 38,994,510 | <p>Using <code>zfill</code> you can add leading zeroes. </p>
<pre><code>num = input()
for i in range(1, int(num) + 1):
    print(str(i).zfill(len(num)))
</code></pre>
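<p>The same zero-padding is also available through <code>str.format</code> (or f-strings in Python 3.6+); the width only needs to be computed once from the upper bound:</p>

```python
n = 103
width = len(str(n))

# '{:0{w}d}' zero-pads each number to the computed width
sequence = ['{:0{w}d}'.format(i, w=width) for i in range(1, n + 1)]
print(' '.join(sequence))
```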
| 1 | 2016-08-17T10:47:42Z | [
"python",
"ruby"
] |
How to create this sequence in ruby? | 38,994,265 | <p>When we
input : 10</p>
<p>output :01 02 03 04 05 06 07 08 09 10</p>
<p>When we
input :103</p>
<p>output :001 002 003...010 011 012 013.....100 101 102 103</p>
<p>How to create this sequence in ruby or python ?</p>
| -2 | 2016-08-17T10:36:29Z | 38,995,017 | <p>Ruby implementation:</p>
<pre><code>n = gets
p (1..n.to_i).map{ |i| i.to_s.rjust(n.to_s.length, "0") }.join(" ")
</code></pre>
<p>Here <code>rjust</code> will add leading zeros.</p>
| 3 | 2016-08-17T11:11:42Z | [
"python",
"ruby"
] |
How to create this sequence in ruby? | 38,994,265 | <p>When we
input : 10</p>
<p>output :01 02 03 04 05 06 07 08 09 10</p>
<p>When we
input :103</p>
<p>output :001 002 003...010 011 012 013.....100 101 102 103</p>
<p>How to create this sequence in ruby or python ?</p>
| -2 | 2016-08-17T10:36:29Z | 38,996,553 | <p>Another one in Ruby:</p>
<pre><code>n = gets.chomp
'1'.rjust(n.size, '0').upto(n) { |s| puts s }
</code></pre>
<p><a href="http://ruby-doc.org/core-2.3.1/String.html#method-i-upto" rel="nofollow"><code>String#upto</code></a> handles numeric strings in a special way:</p>
<pre><code>'01'.upto('10').to_a
#=> ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10"]
</code></pre>
| 2 | 2016-08-17T12:22:48Z | [
"python",
"ruby"
] |
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available (error: cuda unavailable) | 38,994,291 | <p>in Ubuntu MATE 16.04 I'm trying to run the deep-learning python examples here using the GPU:</p>
<p><a href="http://deeplearning.net/software/theano/tutorial/using_gpu.html" rel="nofollow">testing Theano with GPU</a></p>
<p>I did run the example code, </p>
<pre><code>THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check1.py
</code></pre>
<p>but it seems that it is used the CPU and not the GPU. Here is the last part of terminal output:</p>
<pre><code>WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu0 is not available (error: cuda unavailable)
...
Used the cpu
</code></pre>
<p>I tried to run this code too:</p>
<pre><code>THEANO_FLAGS=device=cuda0 python check1.py
</code></pre>
<p>but the output is:</p>
<pre><code>ERROR (theano.sandbox.gpuarray): pygpu was configured but could not be imported
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/sandbox/gpuarray/__init__.py", line 20, in <module>
import pygpu
ImportError: No module named pygpu
...
used cpu
</code></pre>
<p>I installed the cuda toolkit from apt.
Here is some (hopefully) useful data:</p>
<pre><code>python --version
Python 2.7.12
g++ -v
gcc version 5.4.0
nvcc --version
Cuda compilation tools, release 7.5, V7.5.17
lspci
NVIDIA Corporation GM107 [GeForce GTX 750 Ti] (rev a2)
nvidia-smi
+------------------------------------------------------+
| NVIDIA-SMI 361.42 Driver Version: 361.42 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 750 Ti Off | 0000:01:00.0 On | N/A |
| 29% 35C P8 1W / 38W | 100MiB / 2044MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 2861 G /usr/lib/xorg/Xorg 90MiB |
+-----------------------------------------------------------------------------+
</code></pre>
| 2 | 2016-08-17T10:37:50Z | 39,017,165 | <p>Finally I solved it!
This post
<a href="https://github.com/Theano/Theano/issues/4430" rel="nofollow">Ubuntu 16.04, Theano and Cuda</a></p>
<p>suggests adding the flag</p>
<pre><code>nvcc.flags=-D_FORCE_INLINES
</code></pre>
<p>to command line, so the command line becomes:</p>
<pre><code>THEANO_FLAGS=floatX=float32,device=gpu,nvcc.flags=-D_FORCE_INLINES python check1.py
</code></pre>
<p>It seems to fix a bug in using glibc 2.23</p>
<p><a href="https://github.com/Theano/Theano/pull/4369" rel="nofollow">fix for glibc 2.23</a></p>
<p>Now the program uses correctly the GPU, this is the correct output:</p>
<pre><code>THEANO_FLAGS=floatX=float32,device=gpu,nvcc.flags=-D_FORCE_INLINES python check1.py
Using gpu device 0: GeForce GTX 750 Ti (CNMeM is disabled, cuDNN not available)
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.317012 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
1.62323296]
Used the gpu
</code></pre>
<p>Note that before trying this solution, I removed nvidia-cuda-toolkit and installed CUDA from Nvidia website, following part of instructions found here: </p>
<p><a href="https://www.pugetsystems.com/labs/hpc/NVIDIA-CUDA-with-Ubuntu-16-04-beta-on-a-laptop-if-you-just-cannot-wait-775/" rel="nofollow">CUDA with Ubuntu 16.04</a></p>
<p>This is what exactly I did:</p>
<p>1) I downloaded CUDA from here
<a href="https://developer.nvidia.com/cuda-downloads" rel="nofollow">CUDA 7.5 download</a>
selecting LINUX, x86_64, Ubuntu 15.04, deb local</p>
<p>2) I installed the deb file</p>
<pre><code>dpkg -i cuda_repo-ubuntu1504-7-5-local_7.5-18_amd64.deb
</code></pre>
<p>3) Then run </p>
<pre><code>apt-get update
</code></pre>
<p>This gives some errors! I fixed it by overwriting the file Release in /var/cuda-repo-7.5-local with the following lines:</p>
<pre><code>Origin: NVIDIA
Label: NVIDIA CUDA
Architecture: repogenstagetemp
MD5Sum:
51483bc34577facd49f0fbc8c396aea0 75379 Packages
4ef963dfa4276be01db8e7bf7d8a4f12 21448 Packages.gz
SHA256:
532b1bb3b392b9083de4445dab2639b36865d7df1f610aeef8961a3c6f304d8a 75379 Packages
2e48cc13b6cc5856c9c6f628c6fe8088ef62ed664e9e0046fc72819269f7432c 21448 Packages.gz
</code></pre>
<p>(sorry, I do not remember where I read this solution).</p>
<p>4) I successfully ran</p>
<pre><code>apt-get update
apt-get install cuda
</code></pre>
<p>5) Everything was installed in /usr/local/cuda-7.5</p>
<p>6) I commented out line 115 in /usr/local/cuda-7.5/include/host_config.h</p>
<pre><code> #if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9)
//#error -- unsupported GNU version! gcc versions later than 4.9 are not supported!
#endif /* __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 9) */
</code></pre>
<p>which otherwise prevents CUDA from being used with gcc 5.4.
After all these operations, I updated the .theanorc file, adding the cuda root</p>
<pre><code>[cuda]
root = /usr/local/cuda-7.5
</code></pre>
<p>That's all :)</p>
<p>PS: I do not know if it would work even with nvidia-cuda-toolkit!</p>
| 1 | 2016-08-18T11:34:02Z | [
"python",
"theano",
"theano-cuda"
] |
Receive push data in R Shiny application | 38,994,377 | <p>I have a simple python program, which should push its data into an R Shiny application. These lines in Shiny parse the "GET" input:</p>
<pre><code> # Parse the GET query string
output$queryText <- renderText({
    query <- parseQueryString(session$clientData$url_search)
    eventList[query$eventid] <<- query$event
})
</code></pre>
<p>This works fine with a browser calling "<a href="http://127.0.0.1:5923/?eventid=1223&event=somestring" rel="nofollow">http://127.0.0.1:5923/?eventid=1223&event=somestring</a>". If I try to call the URL in python I get a "Connection reset by peer" in R and nothing is added to the list. My Python code so far:</p>
<pre><code>import urllib2

request = urllib2.Request("http://127.0.0.1:5923/?eventid=1223&event=somestring")
test = urllib2.urlopen(request)
</code></pre>
<p>Does anyone know how to get this working or has a better solution to push data from outside into an R Shiny application?</p>
<p>Thanks for help!</p>
| 0 | 2016-08-17T10:41:39Z | 39,000,116 | <p>My complete solution using websockets with httpuv: </p>
<pre><code>library(httpuv)
startWSServer <- function(){
    if(exists('server')){
        stopDaemonizedServer(server)
    }
    app <- list(
        onWSOpen = function(ws) {
            ws$onMessage(function(binary, message) {
                # handle your message, for example save it somewhere
                # accessible by the Shiny application; here it is just printed
                print(message)
                ws$send("message received")
            })
        }
    )
    server <<- startDaemonizedServer("0.0.0.0", 9454, app)
}

stopWSServer <- function(){
    stopDaemonizedServer(server)
    server <<- NULL
}
</code></pre>
<p>Hope this helps ;)</p>
| 1 | 2016-08-17T14:58:55Z | [
"python",
"get",
"shiny",
"shiny-server"
] |
I can't encode my input | 38,994,379 | <pre><code>if wb == 'Encode a sentence' or wb == 'encode a sentence':
print("Please enter the Sentence...")
str = input()
str = str.encode('base64','strict')
print(str)
</code></pre>
<p>It tells me that it can't be bytes, They should be string...Any help is appreciated!</p>
| 0 | 2016-08-17T10:41:43Z | 38,994,571 | <p>try this:</p>
<pre><code>from base64 import b64encode
b64encode(str.encode())  # "str" here is the sentence read with input()
</code></pre>
<p>also for your if line use this </p>
<pre><code>if wb.lower() == 'encode a sentence':
</code></pre>
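<p>For completeness, a self-contained round-trip sketch (the sentence is just an example):</p>

```python
from base64 import b64encode, b64decode

sentence = "Encode me please"
encoded = b64encode(sentence.encode("utf-8"))   # b'RW5jb2RlIG1lIHBsZWFzZQ=='
decoded = b64decode(encoded).decode("utf-8")    # back to the original string
print(encoded.decode("ascii"))
print(decoded)
```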
| 1 | 2016-08-17T10:50:04Z | [
"python",
"byte",
"encode"
] |
I can't encode my input | 38,994,379 | <pre><code>if wb == 'Encode a sentence' or wb == 'encode a sentence':
print("Please enter the Sentence...")
str = input()
str = str.encode('base64','strict')
print(str)
</code></pre>
<p>It tells me that it can't be bytes, They should be string...Any help is appreciated!</p>
| 0 | 2016-08-17T10:41:43Z | 38,994,616 | <p>You can try this:</p>
<pre><code>import base64
if wb == 'Encode a sentence' or wb == 'encode a sentence':
    print("Please enter the Sentence...")
    sentence = input()
    encoded = base64.b64encode(sentence.encode('utf-8'))
    print(encoded)
</code></pre>
| 1 | 2016-08-17T10:52:15Z | [
"python",
"byte",
"encode"
] |
Optimize Query in django | 38,994,381 | <p>I want to optimize the looping code written with the django ORM, as it is taking 41 queries because of the loop (as shown in django-debug-toolbar). I have stated my models below, and there is a loop that I need to optimize with an ORM approach so that I can avoid it. </p>
<pre><code>class Bills(models.Model):
library=models.ForeignKey(Library,null=True)
customer=models.ForeignKey(Customer, null=True)
total_price = models.FloatField()
History = '1'
Physics = '2'
Maths = '3'
Book_Category=(
(History,'History'),
(Physics,'Physics'),
(Maths,'Maths')
)
book_category_type=models.CharField(max_length=2,choices=Book_Category)
</code></pre>
<p>This is the Bill model storing the all the bills of specific customer.</p>
<pre><code>class LibraryOrder(models.Model):
hotel=models.ForeignKey(Hotel,null=True)
library = models.ForeignKey(Library,null=True)
customer=models.ForeignKey(Customer,null=True)
library_item = models.ForeignKey(LibraryItem,null=True)
quantity = models.FloatField()
total_price = models.FloatField()
comments=models.TextField(null=True)
bill=models.ForeignKey(Bills,null=True)
ORDER_STATUS = (
(NOT_PROCESSED, 'NotProcessed'),
(PROCESSING, 'Processing'),
(PROCESSED,'processed'),
)
order_status = models.CharField(max_length=3, choices=ORDER_STATUS, default=NOT_PROCESSED)
</code></pre>
<p>This is the Order model when some customer orders for some book in the library item.</p>
<p>Right now I am using this:</p>
<pre><code>customers = Customer.objects.filter(library="1").exclude(customer_status='2')
bills = Bills.objects.filter(library="1", customer=customers, book_category_type="1")
for bill in bills:
# if bill.order_type == "1":
not_processed_order = LibraryOrder.objects.filter(bill=bill, order_status="1")
notprocessed_lists.append(not_processed_order)
processing_order = LibraryOrder.objects.filter(bill=bill, order_status="2")
processing_lists.append(processing_order)
processed_order = LibraryOrder.objects.filter(bill=bill, order_status="3")
processed_lists.append(processed_order)
</code></pre>
<p>The loop iterates over the array of bills returned by that ORM query; since LibraryOrder has bill as a foreign key, I use it to get the order details and push them into the arrays for display in the html.</p>
<p>I want to optimize the django ORM approach that I have specified, ideally into a single query. The lists are separate because they are displayed in separate tabs in the html. </p>
| 0 | 2016-08-17T10:41:51Z | 38,995,037 | <p>In this specific scenario you can use <code>prefetch_related</code> with custom <code>Prefetch</code> object/s to prefetch the necessary data and not make any queries inside the loop.</p>
<pre><code>from django.db.models import Prefetch

customers = Customer.objects.filter(library="1").exclude(customer_status='2')
bills = Bills.objects.filter(library="1", customer=customers, book_category_type="1").prefetch_related(
Prefetch('libraryorder_set', queryset=LibraryOrder.objects.filter(order_status="1"), to_attr='not_processed_orders'),
Prefetch('libraryorder_set', queryset=LibraryOrder.objects.filter(order_status="2"), to_attr='processing_orders'),
Prefetch('libraryorder_set', queryset=LibraryOrder.objects.filter(order_status="3"), to_attr='processed_orders'),
)
for bill in bills:
notprocessed_lists.append(bill.not_processed_orders)
processing_lists.append(bill.processing_orders)
processed_lists.append(bill.processed_orders)
</code></pre>
<p>This way you will have 4 queries (the base query plus 1 per <code>Prefetch</code> object) instead of 40+. </p>
<p>You can optimize it further to 1 query, but you will have to do some more work in the python code:</p>
<pre><code>customers = Customer.objects.filter(library="1").exclude(customer_status='2')
bills = Bills.objects.filter(library="1", customer=customers, book_category_type="1").prefetch_related('libraryorder_set')
for bill in bills:
not_processed_orders = []
processing_orders = []
processed_orders = []
for order in bill.libraryorder_set.all():
        if order.order_status == '1':
            not_processed_orders.append(order)
        elif order.order_status == '2':
            processing_orders.append(order)
        elif order.order_status == '3':
            processed_orders.append(order)
    notprocessed_lists.append(not_processed_orders)
    processing_lists.append(processing_orders)
    processed_lists.append(processed_orders)
</code></pre>
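<p>The per-status bucketing in that loop is plain Python, so the idea can be sanity-checked without Django at all (the tuples below are stand-ins for <code>LibraryOrder</code> rows):</p>

```python
from collections import namedtuple

# Stand-in for a LibraryOrder row; only order_status matters here.
Order = namedtuple("Order", ["pk", "order_status"])

orders = [Order(1, "1"), Order(2, "3"), Order(3, "1"), Order(4, "2")]

# One bucket per status code, filled in a single pass over the orders.
buckets = {"1": [], "2": [], "3": []}
for order in orders:
    buckets[order.order_status].append(order)

not_processed_orders = buckets["1"]   # orders 1 and 3
processing_orders = buckets["2"]      # order 4
processed_orders = buckets["3"]       # order 2
```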
<p>However, in order to learn how to fish, my advice is to take a look at the following articles from the docs:</p>
<ul>
<li><a href="https://docs.djangoproject.com/en/1.10/topics/db/optimization/" rel="nofollow">Database access optimization</a></li>
<li><a href="https://docs.djangoproject.com/en/1.10/topics/performance/" rel="nofollow">Performance and optimization</a></li>
<li><a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#select-related" rel="nofollow">select_related()</a></li>
<li><a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#prefetch-related" rel="nofollow">prefetch_related()</a></li>
</ul>
| 0 | 2016-08-17T11:12:50Z | [
"python",
"mysql",
"django",
"django-models",
"orm"
] |
Optimize Query in django | 38,994,381 | <p>I want to optimize the looping code written with the django ORM, as it is taking 41 queries because of the loop (as shown in django-debug-toolbar). I have stated my models below, and there is a loop that I need to optimize with an ORM approach so that I can avoid it. </p>
<pre><code>class Bills(models.Model):
library=models.ForeignKey(Library,null=True)
customer=models.ForeignKey(Customer, null=True)
total_price = models.FloatField()
History = '1'
Physics = '2'
Maths = '3'
Book_Category=(
(History,'History'),
(Physics,'Physics'),
(Maths,'Maths')
)
book_category_type=models.CharField(max_length=2,choices=Book_Category)
</code></pre>
<p>This is the Bill model storing the all the bills of specific customer.</p>
<pre><code>class LibraryOrder(models.Model):
hotel=models.ForeignKey(Hotel,null=True)
library = models.ForeignKey(Library,null=True)
customer=models.ForeignKey(Customer,null=True)
library_item = models.ForeignKey(LibraryItem,null=True)
quantity = models.FloatField()
total_price = models.FloatField()
comments=models.TextField(null=True)
bill=models.ForeignKey(Bills,null=True)
ORDER_STATUS = (
(NOT_PROCESSED, 'NotProcessed'),
(PROCESSING, 'Processing'),
(PROCESSED,'processed'),
)
order_status = models.CharField(max_length=3, choices=ORDER_STATUS, default=NOT_PROCESSED)
</code></pre>
<p>This is the Order model when some customer orders for some book in the library item.</p>
<p>Right now I am using this:</p>
<pre><code>customers = Customer.objects.filter(library="1").exclude(customer_status='2')
bills = Bills.objects.filter(library="1", customer=customers, book_category_type="1")
for bill in bills:
# if bill.order_type == "1":
not_processed_order = LibraryOrder.objects.filter(bill=bill, order_status="1")
notprocessed_lists.append(not_processed_order)
processing_order = LibraryOrder.objects.filter(bill=bill, order_status="2")
processing_lists.append(processing_order)
processed_order = LibraryOrder.objects.filter(bill=bill, order_status="3")
processed_lists.append(processed_order)
</code></pre>
<p>The loop iterates over the array of bills returned by that ORM query; since LibraryOrder has bill as a foreign key, I use it to get the order details and push them into the arrays for display in the html.</p>
<p>I want to optimize the django ORM approach that I have specified, ideally into a single query. The lists are separate because they are displayed in separate tabs in the html. </p>
| 0 | 2016-08-17T10:41:51Z | 38,995,061 | <pre><code>not_processed_order = LibraryOrder.objects.filter(bill__library="1", bill__book_category_type="1", order_status="1", bill__customer__library='1').exclude(bill__customer__customer_status='2')
processing_order = LibraryOrder.objects.filter(bill__library="1", bill__book_category_type="1", order_status="2", bill__customer__library='1').exclude(bill__customer__customer_status='2')
processed_order = LibraryOrder.objects.filter(bill__library="1", bill__book_category_type="1", order_status="3", bill__customer__library='1').exclude(bill__customer__customer_status='2')
</code></pre>
| 3 | 2016-08-17T11:13:59Z | [
"python",
"mysql",
"django",
"django-models",
"orm"
] |
Python pandas with lambda apply difficulty | 38,994,408 | <p>I am running the following function, but am struggling to have it take the length condition into account (the if part). It simply runs the first part of the function only:</p>
<p><code>stringDataFrame.apply(lambda x: x.str.replace(r'[^0-9]', '') if (len(x) >= 7) else x)</code></p>
<p>It somehow only runs the <code>x.str.replace(r'[^0-9]', '')</code> part for some reason. What am I doing wrong here? I have been stuck.</p>
| 1 | 2016-08-17T10:43:11Z | 38,994,686 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html" rel="nofollow"><code>applymap</code></a> if you need to work on each value separately, because <code>apply</code> works on a whole column (<code>Series</code>).</p>
<p>Then, instead of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.replace.html" rel="nofollow"><code>str.replace</code></a>, which works nicely with regex on a whole Series, use <a href="http://stackoverflow.com/a/5658439/2901002"><code>re.sub</code></a>:</p>
<pre><code>print (stringDataFrame.applymap(lambda x: re.sub(r'[^0-9]', '', x) if (len(x) >= 7) else x))
</code></pre>
<p>Sample:</p>
<pre><code>import pandas as pd
import re
stringDataFrame = pd.DataFrame({'A':['gdgdg454dgd','147ooo2', '123ss45678'],
'B':['gdgdg454dgd','x142', '12345678a'],
'C':['gdgdg454dgd','xx142', '12567dd8']})
print (stringDataFrame)
A B C
0 gdgdg454dgd gdgdg454dgd gdgdg454dgd
1 147ooo2 x142 xx142
2 123ss45678 12345678a 12567dd8
print (stringDataFrame.applymap(lambda x: re.sub(r'[^0-9]', '', x) if (len(x) >= 7) else x))
A B C
0 454 454 454
1 1472 x142 xx142
2 12345678 12345678 125678
</code></pre>
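<p>Because the lambda is an ordinary element-wise function, it can also be unit-tested without pandas:</p>

```python
import re

def clean(x):
    # Strip non-digits only from strings of length 7 or more.
    return re.sub(r'[^0-9]', '', x) if len(x) >= 7 else x

print(clean('gdgdg454dgd'))  # '454'
print(clean('x142'))         # unchanged, shorter than 7 characters
```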
| 1 | 2016-08-17T10:55:47Z | [
"python",
"pandas",
"dataframe",
"lambda",
"apply"
] |
Pandas DataFrame.to_csv last column missing when field is empty | 38,994,458 | <p>I am extracting data from a database into a csv file.</p>
<p>I realize that each time the last field of a row is Null, DataFrame.to_csv omits it.
This only happens when the empty field is at the last position.</p>
<p>Here is an example:</p>
<pre><code>dframe_iterator = pandas.read_sql_query(request, engine, chunksize=1000)
for i, dataframe in enumerate(dframe_iterator):
dataframe.to_csv('file.csv', index=False, header=True, sep='|', mode='a', encoding='utf-8', date_format='%d/%m/%Y')
</code></pre>
<p>Let's say one tuple returned by the sql query contains 2 Null values:</p>
<pre><code>'blabla','blabla',Null, 'blabla', Null
</code></pre>
<p>Then, in the csv file, I get:</p>
<pre><code>blabla|blabla||blabla
</code></pre>
<p>You can see that the first Null field is there (||) but the second Null field is omitted.</p>
<p>I would expect this :</p>
<pre><code>blabla|blabla||blabla|
</code></pre>
<p>Do you have any idea how to achieve this?
Another application is expecting as many fields as returned by the sql query.</p>
<p>Thanks!</p>
| 0 | 2016-08-17T10:45:20Z | 38,996,898 | <p>Have you tried parameter <code>na_rep</code>? <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow">Doc</a></p>
<pre><code> na_rep : string, default ''
Missing data representation
</code></pre>
| 0 | 2016-08-17T12:38:30Z | [
"python",
"pandas",
"dataframe"
] |
Pandas DataFrame.to_csv last column missing when field is empty | 38,994,458 | <p>I am extracting data from a database into a csv file.</p>
<p>I realize that each time the last field of a row is Null, DataFrame.to_csv omits it.
This only happens when the empty field is at the last position.</p>
<p>Here is an example:</p>
<pre><code>dframe_iterator = pandas.read_sql_query(request, engine, chunksize=1000)
for i, dataframe in enumerate(dframe_iterator):
dataframe.to_csv('file.csv', index=False, header=True, sep='|', mode='a', encoding='utf-8', date_format='%d/%m/%Y')
</code></pre>
<p>Let's say one tuple returned by the sql query contains 2 Null values:</p>
<pre><code>'blabla','blabla',Null, 'blabla', Null
</code></pre>
<p>Then, in the csv file, I get:</p>
<pre><code>blabla|blabla||blabla
</code></pre>
<p>You can see that the first Null field is there (||) but the second Null field is omitted.</p>
<p>I would expect this :</p>
<pre><code>blabla|blabla||blabla|
</code></pre>
<p>Do you have any idea how to achieve this?
Another application is expecting as many fields as returned by the sql query.</p>
<p>Thanks!</p>
| 0 | 2016-08-17T10:45:20Z | 38,997,453 | <p>Hemm, well, I apologize, but my question was wrong.</p>
<p>Actually, the pandas behaviour is perfectly good :</p>
<pre><code>'blabla','blabla',Null, 'blabla', Null
</code></pre>
<p>would be :</p>
<pre><code>blabla|blabla||blabla|
</code></pre>
<p>I had been misled by a dataset having lots of Null fields in the last positions.
Working on a different dataset made me realize this,
as well as wrong client specs which expected <code>blabla|blabla||blabla||</code>.</p>
<p>I really do apologize for posting too fast.</p>
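<p>For reference, the same behaviour can be reproduced with the standard <code>csv</code> module — a trailing empty field does produce a trailing delimiter:</p>

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, delimiter='|')
# Nulls represented as empty strings, one of them in the last position.
writer.writerow(['blabla', 'blabla', '', 'blabla', ''])
line = buf.getvalue().strip()
print(line)  # blabla|blabla||blabla|
```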
| 2 | 2016-08-17T13:03:11Z | [
"python",
"pandas",
"dataframe"
] |
Autostart Python script and run in background with Ubuntu | 38,994,530 | <p>I'm running Ubuntu server 16.04 and still getting to grips with it. I have a python script that runs in an endless loop, performing a task related to fetching data from an external source.</p>
<p>What I'm trying to do, is make this python script start after (or during) boot and then run in the background.</p>
<p>I've tried editing rc.local but the boot sequence just hangs since the script keeps running.</p>
<p>Any advice would be greatly appreciated.</p>
| 0 | 2016-08-17T10:48:38Z | 38,994,864 | <p>tmux is a great utility for background desktops. You can use it for this:</p>
<pre><code>sudo apt-get install tmux
</code></pre>
<p>Then add it to your rc.local:</p>
<pre><code>/usr/bin/tmux new-session -d 'python /path/to/your/script'
</code></pre>
<p>After boot you can use it as follows:</p>
<pre><code>tmux attach
</code></pre>
<p>And your console will be attached to the tmux session running in the background. </p>
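<p>On Ubuntu 16.04 (which uses systemd), a service unit is another option; this is only a sketch, and the unit name and paths are placeholders:</p>

```ini
# /etc/systemd/system/myscript.service  (hypothetical name)
[Unit]
Description=Endless-loop Python script
After=network.target

[Service]
ExecStart=/usr/bin/python3 /path/to/your/script.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

<p>Then <code>sudo systemctl enable myscript &amp;&amp; sudo systemctl start myscript</code>; the boot sequence will not block, because the script runs as its own service.</p>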
| 0 | 2016-08-17T11:05:12Z | [
"python",
"ubuntu",
"background",
"boot",
"autostart"
] |
Autostart Python script and run in background with Ubuntu | 38,994,530 | <p>I'm running Ubuntu server 16.04 and still getting to grips with it. I have a python script that runs in an endless loop, performing a task related to fetching data from an external source.</p>
<p>What I'm trying to do, is make this python script start after (or during) boot and then run in the background.</p>
<p>I've tried editing rc.local but the boot sequence just hangs since the script keeps running.</p>
<p>Any advice would be greatly appreciated.</p>
| 0 | 2016-08-17T10:48:38Z | 38,994,915 | <p>As one of the comments mentions, you can use cron jobs to start scripts at certain times, such as at startup (as you would like to do). It also would not halt execution like you mentioned with rc.local.</p>
<p>The line that you need to add to your crontab is:</p>
<p><code>@reboot python /home/MyPythonScript.py</code></p>
<p>Here are a couple of useful tutorials that show you how to do this: <a href="http://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/" rel="nofollow">http://kvz.io/blog/2007/07/29/schedule-tasks-on-linux-using-crontab/</a>
<a href="https://help.ubuntu.com/community/CronHowto" rel="nofollow">https://help.ubuntu.com/community/CronHowto</a> </p>
<p>If you would like to do it with python itself there is this handy python library - <a href="https://pypi.python.org/pypi/python-crontab/" rel="nofollow">https://pypi.python.org/pypi/python-crontab/</a></p>
| 0 | 2016-08-17T11:07:14Z | [
"python",
"ubuntu",
"background",
"boot",
"autostart"
] |
SSIS Execute Process Task Hanging when Standard Error Variable Provided | 38,994,725 | <p>I am using SSIS's Execute Process Task to execute a compiled python script.</p>
<p>The script executes as expected and completes as expected with either success or failure.</p>
<p>However, when I configure a variable to catch Standard Error or Standard Output, the application hangs. The command prompt flashes up and down indicating that the execution has completed but then the SSIS task itself never completes.</p>
<p>To reiterate, when I don't configure the variable, there is no issue and the task finishes as expected. I have also debugged the execution of the script independently and I can verify that:</p>
<ul>
<li><p>Status code is 0 when success.</p></li>
<li><p>Standard error contains text.</p></li>
</ul>
<p>Any ideas what is causing the task to hang?</p>
| 0 | 2016-08-17T10:58:01Z | 38,997,349 | <p>This is actually now solved - or rather, it was never actually broken. I was writing to a parent package variable (i.e. by creating the variable in the child package, configuring the task, setting delay validation to true and then deleting the variable) - it appears that when I do this, it takes SSIS a long time to write to it! If I use a child package variable, it completes straight away, but it takes 1-2 minutes for the parent package variable to be written to. </p>
<p>At least it's completing.</p>
| 0 | 2016-08-17T12:58:56Z | [
"python",
"ssis"
] |
How to find values that don't belong to sample in Python? | 38,994,770 | <p>I need to take a sample from a dataframe, but I also need the values that don't belong to that sample. For example:</p>
<pre><code>data = [[1,2,3,55], [1,2,34,5], [13,2,3,5], [1,2,32,5], [1,2,22,5]]
df = DataFrame(data=data, index=[0, 0, 1, 1, 1], columns=['A', 'B', 'C', 'D'])
</code></pre>
<p>Output:</p>
<pre><code>In[97]: df.sample(3)
Out[97]:
A B C D
1 1 2 32 5
0 1 2 3 55
1 13 2 3 5
</code></pre>
<p>How can I get the remaining 2 rows? Is there any basic way to do that?</p>
| 1 | 2016-08-17T11:00:48Z | 38,994,954 | <p>With a duplicated index it is problematic, so you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a> first, then use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eq.html" rel="nofollow"><code>eq</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow"><code>isin</code></a>:</p>
<pre><code>df = df.reset_index()
sam = df.sample(3)
print (sam)
index A B C D
0 0 1 2 3 55
1 0 1 2 34 5
3 1 1 2 32 5
print ((df.eq(sam, 1)).all(1))
0 True
1 True
2 False
3 True
4 False
dtype: bool
print ((df.isin(sam)).all(1))
0 True
1 True
2 False
3 True
4 False
dtype: bool
print (df[~(df.isin(sam)).all(1)])
index A B C D
2 1 13 2 3 5
4 1 1 2 22 5
</code></pre>
<p>Last, reassign the index back:</p>
<pre><code>print (sam.set_index('index').rename_axis(None))
A B C D
0 1 2 3 55
0 1 2 34 5
1 1 2 32 5
print (df[~(df.isin(sam)).all(1)].set_index('index').rename_axis(None))
A B C D
1 13 2 3 5
1 1 2 22 5
</code></pre>
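<p>The same "sample plus complement" idea can also be expressed positionally (this is plain Python; with a DataFrame you would feed the two position lists to <code>df.iloc</code>):</p>

```python
import random

n_rows = 5
random.seed(0)                                    # only to make the example repeatable
picked = sorted(random.sample(range(n_rows), 3))  # positions in the sample
rest = [i for i in range(n_rows) if i not in set(picked)]

print(picked, rest)  # together they cover every row exactly once
```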
| 1 | 2016-08-17T11:08:50Z | [
"python",
"pandas",
"indexing",
"dataframe",
"sample"
] |
parsing PDB header information with python (protein structure files)? | 38,994,845 | <p>Is there a parser for PDB-files (Protein Data Bank) that can extract (most) information from the header/REMARK-section, like refinement statistics, etc.?</p>
<p>It might be worthwhile to note that I am mainly interested in accessing data from files right after they have been produced, not from structures that have already been deposited in the Protein Data Bank. This means that there is quite a variety of different "proprietary" formats to deal with, depending on the refinement software used.</p>
<p>I've had a look at Biopython, but they explicitly state in the FAQ that "If you are interested in data mining the PDB header, you might want to look elsewhere because there is only limited support for this."</p>
<p>I am well aware that it would be a lot easier to extract this information from mmCIF-files, but unfortunately these are still not output routinely from many macromolecular crystallography programs. </p>
| -1 | 2016-08-17T11:04:12Z | 38,995,570 | <p>Maybe you should try this library:
<a href="https://pypi.python.org/pypi/bioservices" rel="nofollow">https://pypi.python.org/pypi/bioservices</a></p>
| -1 | 2016-08-17T11:37:38Z | [
"python",
"bioinformatics",
"biopython"
] |
Blender Script not working | 38,994,871 | <p>I want to make a Side Scroller in Blender with python. Could someone please explain to me why this script is not working?</p>
<pre><code>import bge
def main():
cont = bge.logic.getCurrentController()
player = cont.owner
scene = bge.logic.getCurrentScene()
keyboard = bge.logic.keyboard
if bge.logic.KX_INPUT_ACTIVE == keyboard.events[bge.events.DKEY]:
player.localPosition.x += 0.1
elif bge.logic.KX_INPUT_ACTIVE == keyboard.events[bge.events.AKEY]:
player.localPosition.x += -0.1
elif bge.logic.KX_INPUT_ACTIVE == keyboard.events[bge.events.WKEY]:
player.localPosition.z += 0.5
main()
</code></pre>
| 0 | 2016-08-17T11:05:27Z | 39,014,612 | <p>Going by the script working for me and the error you say you get, it would appear that you are using the script the wrong way. The bge module is only available when the game engine is running and it sounds like you are trying to run it as a normal blender script by clicking the run script button in blender's text editor.</p>
<p>To use a python script within blender's game engine you need to add logic bricks to the player object with your script assigned to a python controller that has a sensor attached to it, as your script reads straight from the keyboard, you can add an always sensor with pulse enabled.</p>
<p><a href="http://i.stack.imgur.com/6mO5R.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/6mO5R.jpg" alt="sample logic bricks"></a></p>
| 0 | 2016-08-18T09:29:40Z | [
"python",
"blender"
] |
How do I store a string in ArrayField? (Django and PostgreSQL) | 38,994,883 | <p>I am unable to store a string in ArrayField. There are no exceptions thrown when I try to save something in it, but the array remains empty.
Here is some code from models.py :</p>
<pre><code># models.py
from django.db import models
import uuid
from django.contrib.auth.models import User
from django.contrib.postgres.fields import JSONField, ArrayField
# Create your models here.
class UserDetail(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
key = models.CharField(max_length=50, default=False, primary_key=True)
api_secret = models.CharField(max_length=50)
user_categories = ArrayField(models.CharField(max_length = 1000), default = list)
def __str__(self):
return self.key
class PreParentProduct(models.Model):
product_user = models.ForeignKey(UserDetail, default=False, on_delete=models.CASCADE)
product_url = models.URLField(max_length = 1000)
pre_product_title = models.CharField(max_length=600)
pre_product_description = models.CharField(max_length=2000)
pre_product_variants_data = JSONField(blank=True, null=True)
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
def __str__(self):
return self.pre_product_title
</code></pre>
<p>I try to save it this way:</p>
<pre><code> catlist = ast.literal_eval(res.text)
for jsonitem in catlist:
key = jsonitem.get('name')
id = jsonitem.get("id")
dictionary = {}
dictionary['name'] = key
dictionary['id'] = id
tba = json.dumps(dictionary)
print("It works till here.")
print(type(tba))
usersearch[0].user_categories.append(tba)
print(usersearch[0].user_categories)
usersearch[0].save()
print(usersearch[0].user_categories)
</code></pre>
<p>The output I get is:</p>
<pre><code>It works till here.
<class 'str'>
[]
It works till here.
<class 'str'>
[]
[]
</code></pre>
<p>Is this the correct way to store a string inside ArrayField?
I cannot store JSONField inside an ArrayField, so I had to convert it to a string.</p>
<p>How do I fix this?</p>
| 1 | 2016-08-17T11:06:14Z | 38,995,288 | <h2>Solution to the append problem.</h2>
<p>You haven't shown how your <code>usersearch[0]</code> is created, but I suspect it's something like this:</p>
<pre><code>usersearch = UserDetail.objects.all()
</code></pre>
<p>If that is so, you are making changes to a resultset; each indexing re-evaluates the query, so the change is lost. Try this and you will see that the id is unchanged too:</p>
<pre><code>usersearch[0].id = 1000
print(usersearch[0].id)
</code></pre>
<p>But this works</p>
<pre><code>usersearch = list(UserDetail.objects.all())
</code></pre>
<p>and so does</p>
<pre><code>u = usersearch[0]
</code></pre>
<h2>Solution to the real problem</h2>
<pre><code>user_categories = ArrayField(models.CharField(max_length = 1000), default = list)
</code></pre>
<p>This is wrong. ArrayFields shouldn't be used in this manner. You will soon find that you need to search through them and </p>
<blockquote>
<p>Arrays are not sets; searching for specific array elements can be a
sign of database misdesign. Consider using a separate table with a row
for each item that would be an array element. This will be easier to
search, and is likely to scale better for a large number of elements</p>
</blockquote>
<p>ref: <a href="https://www.postgresql.org/docs/9.5/static/arrays.html" rel="nofollow">https://www.postgresql.org/docs/9.5/static/arrays.html</a></p>
<p>You need to normalize your data. You need to have a category model and your UserDetail should be related to it through a foreign key.</p>
| 0 | 2016-08-17T11:23:34Z | [
"python",
"django",
"postgresql"
] |
Python: replace string with regex | 38,994,930 | <p>My problem here is that I have a huge number of files. Each xml file contains an ID, and I have a set of source and target files. A source file has name A and ID = B. A target file has name B and ID = B.
What I need to do is match the source ID B with the target name B, and then replace the target ID = B with the source name A. Hope it's clear.</p>
<p>Here is my code</p>
<pre><code>import os
import re
sourcepath = input('Path to source folder:\n')
targetpath = input('Path to target folder:\n')
for root,dir,source in os.walk(sourcepath):
for each_file in source:
os.chdir(root)
correctID = each_file[:16]
each_xml = open(each_file, 'r', encoding='utf8').read()
findsourceID = re.findall('id="\w{3}\d{13}"', each_xml)
StringID = str(findsourceID)
correctFilename = StringID[6:22]
IDtoreplace = 'id="' + correctID + '"'
print(IDtoreplace)
for main,folder,target in os.walk(targetpath):
for each_target in target:
os.chdir(main)
targetname = each_target[:16]
if targetname == correctFilename:
with open(each_target, 'r+', encoding='utf8') as each_targ:
each_targ.read()
findtargetID = re.sub('id="\w{3}\d{13}"',str(IDtoreplace), each_targ)
each_targ.close()
</code></pre>
<p>And here is the error</p>
<pre><code>File "C:/Users/ms/Desktop/Project/test.py", line 23, in <module>
findtargetID = re.sub('id="\w{3}\d{13}"',str(IDtoreplace), each_targ)
File "C:\Users\ms\AppData\Local\Programs\Python\Python35\lib\re.py", line 182, in sub
return _compile(pattern, flags).sub(repl, string, count)
TypeError: expected string or bytes-like object
</code></pre>
| 0 | 2016-08-17T11:08:00Z | 38,995,207 | <p>You <code>read()</code> from <code>each_targ</code> but you don't store the string anywhere. </p>
<p>Instead you pass the file handle <code>each_targ</code> to <code>.sub</code> and that causes the type mismatch here. You could just say:</p>
<pre><code>findtargetID = re.sub('id="\w{3}\d{13}"',str(IDtoreplace), each_targ.read())
</code></pre>
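<p>A minimal check of that pattern on a plain string (the IDs are made up to match the <code>\w{3}\d{13}</code> shape):</p>

```python
import re

text = 'some xml ... id="abc1234567890123" ... more xml'
IDtoreplace = 'id="XYZ0000000000000"'

# re.sub needs the text itself, not the open file handle.
result = re.sub(r'id="\w{3}\d{13}"', IDtoreplace, text)
print(result)  # some xml ... id="XYZ0000000000000" ... more xml
```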
| 1 | 2016-08-17T11:19:23Z | [
"python",
"replace"
] |
Clustat command not found error in python script | 38,995,008 | <p>Hi I am trying to create a script using python to log on to a server and to check the status of the cluster by running a clustat command. When I do this I get the following error:
<code>/bin/sh: clustat: command not found</code>
As I understand it, it's not able to run the command, as this is a non-standard command. I was hoping someone would have some ideas to get around this and get it to work.</p>
<p>Below is the method used to run the command (I have another method to ssh onto the system; it works fine):</p>
<pre><code>def run_cmd(command):
"""Function for running command on the system."""
proc = subprocess.Popen([command], stdout=subprocess.PIPE, shell=True)
(out, err) = proc.communicate()
return out
</code></pre>
<p>This is where it seems to go wrong. I know the run_cmd method works as I am able to use it with other commands:</p>
<pre><code>run_cmd("clustat >> out.txt")
return ""
</code></pre>
| 0 | 2016-08-17T11:11:13Z | 38,995,255 | <p><code>subprocess</code> runs the commands <strong>locally</strong>. </p>
<p>You will have to use <code>paramiko.SSHClient</code> to run commands on the remote machine.</p>
<pre><code>import paramiko

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys
ssh_client.connect(hostname='some_host', username='username', password='password')
ssh_client.exec_command('clustat >> out.txt')
</code></pre>
| 1 | 2016-08-17T11:21:48Z | [
"python",
"bash"
] |
Python 3 - Data mining from PDF | 38,995,050 | <p>I'm working on a project that requires obtaining data from some PDF documents.</p>
<p>Currently I'm using the <code>Foxit toolkit</code> (calling it from the script) to convert the document to txt, and then I iterate through it. I'm pretty happy with it, but <code>$100</code> is just something I can't afford for such a small project.</p>
<ul>
<li><p>I've tested all the free converters that I could find (like <code>xpdf</code>, <code>pdftotext</code>) but they just don't cut it; they mess up the format in a way that I can't use the words to locate the data.</p></li>
<li><p>I've tried some <code>Python</code> modules like <code>pdfminer</code> but they don't seem to work well in <code>Python 3</code>.</p></li>
<li><p>I can't get the data before it's converted to PDF because I get them from a phone carrier.</p></li>
</ul>
<p>I'm looking for a way of <strong>getting the data from the PDF</strong> or <strong>a converter</strong> that at least follow the newlines properly.</p>
<p>Update:
<a href="https://pythonhosted.org/PyPDF2/index.html" rel="nofollow">PyPDF2</a> is not grabbing any text whatsoever from the pdf document.</p>
| 2 | 2016-08-17T11:13:25Z | 38,995,341 | <p>I don't believe that there is a good free python pdf converter, sadly. However, pdf2html, although it is not a python module, works extremely well and provides you with much more structured data (html) compared to a simple text file. From there you can use python tools such as Beautiful Soup to scrape the html file. </p>
<p>link - <a href="http://coolwanglu.github.io/pdf2htmlEX/" rel="nofollow">http://coolwanglu.github.io/pdf2htmlEX/</a></p>
<p>Hope this helps.</p>
| 1 | 2016-08-17T11:26:15Z | [
"python",
"python-3.x",
"pdf",
"toolkit",
"foxit"
] |
Python 3 - Data mining from PDF | 38,995,050 | <p>I'm working on a project that requires obtaining data from some PDF documents.</p>
<p>Currently I'm using the <code>Foxit toolkit</code> (calling it from the script) to convert the document to txt and then iterate through it. I'm pretty happy with it, but <code>$100</code> is just something I can't afford for such a small project.</p>
<ul>
<li><p>I've tested all the free converters that I could find (like <code>xpdf</code>, <code>pdftotext</code>) but they just don't cut it; they mess up the format in a way that I can't use the words to locate the data.</p></li>
<li><p>I've tried some <code>Python</code> modules like <code>pdfminer</code> but they don't seem to work well in <code>Python 3</code>.</p></li>
<li><p>I can't get the data before it's converted to PDF because I get them from a phone carrier.</p></li>
</ul>
<p>I'm looking for a way of <strong>getting the data from the PDF</strong> or <strong>a converter</strong> that at least follow the newlines properly.</p>
<p>Update:
<a href="https://pythonhosted.org/PyPDF2/index.html" rel="nofollow">PyPDF2</a> is not grabbing any text whatsoever from the pdf document.</p>
 | 2 | 2016-08-17T11:13:25Z | 38,995,348 | <p>Here is an example of PyPDF2 code:</p>
<pre><code>from PyPDF2 import PdfFileReader
pdfFileObj = open("FileName", "rb")
pdfReader = PdfFileReader(pdfFileObj, strict=False)
data = [page.extractText() for page in pdfReader.pages]
</code></pre>
<p>more information on pyPDF2 <a href="https://pythonhosted.org/PyPDF2/" rel="nofollow"><strong>here</strong></a>.</p>
| 0 | 2016-08-17T11:26:38Z | [
"python",
"python-3.x",
"pdf",
"toolkit",
"foxit"
] |
Python 3 - Data mining from PDF | 38,995,050 | <p>I'm working on a project that requires obtaining data from some PDF documents.</p>
<p>Currently I'm using the <code>Foxit toolkit</code> (calling it from the script) to convert the document to txt and then iterate through it. I'm pretty happy with it, but <code>$100</code> is just something I can't afford for such a small project.</p>
<ul>
<li><p>I've tested all the free converters that I could find (like <code>xpdf</code>, <code>pdftotext</code>) but they just don't cut it; they mess up the format in a way that I can't use the words to locate the data.</p></li>
<li><p>I've tried some <code>Python</code> modules like <code>pdfminer</code> but they don't seem to work well in <code>Python 3</code>.</p></li>
<li><p>I can't get the data before it's converted to PDF because I get them from a phone carrier.</p></li>
</ul>
<p>I'm looking for a way of <strong>getting the data from the PDF</strong> or <strong>a converter</strong> that at least follow the newlines properly.</p>
<p>Update:
<a href="https://pythonhosted.org/PyPDF2/index.html" rel="nofollow">PyPDF2</a> is not grabbing any text whatsoever from the pdf document.</p>
| 2 | 2016-08-17T11:13:25Z | 38,995,633 | <p>The <a href="https://pythonhosted.org/PyPDF2/index.html" rel="nofollow">PyPDF2</a> seems to be the best one available for Python3
It's well documented and the API is simple to use.</p>
<p>It also can work with encrypted files, retrieve metadata, merge documents, etc</p>
<p>A simple use case for extracting the text:</p>
<pre><code>from PyPDF2 import PdfFileReader

# open() either succeeds or raises, so no extra check on f is needed
with open("test.pdf", 'rb') as f:
    ipdf = PdfFileReader(f)
    text = [p.extractText() for p in ipdf.pages]
</code></pre>
| 0 | 2016-08-17T11:40:14Z | [
"python",
"python-3.x",
"pdf",
"toolkit",
"foxit"
] |
Python 3 - Data mining from PDF | 38,995,050 | <p>I'm working on a project that requires obtaining data from some PDF documents.</p>
<p>Currently I'm using the <code>Foxit toolkit</code> (calling it from the script) to convert the document to txt and then iterate through it. I'm pretty happy with it, but <code>$100</code> is just something I can't afford for such a small project.</p>
<ul>
<li><p>I've tested all the free converters that I could find (like <code>xpdf</code>, <code>pdftotext</code>) but they just don't cut it; they mess up the format in a way that I can't use the words to locate the data.</p></li>
<li><p>I've tried some <code>Python</code> modules like <code>pdfminer</code> but they don't seem to work well in <code>Python 3</code>.</p></li>
<li><p>I can't get the data before it's converted to PDF because I get them from a phone carrier.</p></li>
</ul>
<p>I'm looking for a way of <strong>getting the data from the PDF</strong> or <strong>a converter</strong> that at least follow the newlines properly.</p>
<p>Update:
<a href="https://pythonhosted.org/PyPDF2/index.html" rel="nofollow">PyPDF2</a> is not grabbing any text whatsoever from the pdf document.</p>
| 2 | 2016-08-17T11:13:25Z | 39,056,694 | <p>I had the same problem when I wanted to do some deep inspection of PDFs for security analysis - I had to write my own utility that parses the low-level objects and literals, unpacks streams, etc so I could get at the "raw data":</p>
<p><a href="https://github.com/opticaliqlusion/pypdf" rel="nofollow">https://github.com/opticaliqlusion/pypdf</a></p>
<p>It's not a feature complete solution, but it is meant to be used in a pure python context where you can define your own visitors to iterate over all the streams, text, id nodes, etc in the PDF tree:</p>
<pre class="lang-python prettyprint-override"><code>class StreamIterator(PdfTreeVisitor):
    '''For deflating (not crossing) the streams'''
    def visit_stream(self, node):
        print(node.value)

...

StreamIterator().visit(tree)
</code></pre>
<p>Anyhow, I dont know if this is the kind of thing you were looking for, but I used it to do some security analysis when looking at suspicious email attachments.</p>
<p>Cheers!</p>
| 0 | 2016-08-20T17:03:13Z | [
"python",
"python-3.x",
"pdf",
"toolkit",
"foxit"
] |
Python Generic Exception Vs Specific Exceptions | 38,995,091 | <p>I'm writing a small production level Flask application that runs on IIS. I've wrapped all of my functions inside <code>try catch</code> blocks and it looks like this.</p>
<pre><code>try:
    # Do something
except Exception, e:
    logger.error('Exception in Function X of type : %s, for Image %s : %s' % (str(type(e)), path, str(e.args)))
</code></pre>
<p>I just need to log the problem in most of the cases and use python's builtin <code>logging</code> module to achieve this. I even log the type of the exception sometimes.</p>
<p>Now the thing I'm really concerned about is that although in my specific case, I don't have to handle or recover from any exception and even If I handle specific exceptions with a stack of different <code>except</code> cases, I'll just be logging the error in each block. So,</p>
<p>Is it still necessary for me to catch specific exceptions instead of
the generic <code>Exception</code>?</p>
| 2 | 2016-08-17T11:15:18Z | 38,995,223 | <p>If the goal is to log <em>all</em> exceptions, then no, you don't have to catch specific ones.</p>
<p>As you noted, there'd be no point as you'd only repeat the same piece of logging.</p>
| 1 | 2016-08-17T11:19:59Z | [
"python",
"exception",
"exception-handling"
] |
Is there a specific use of pdist function of scipy for some particular indexes? | 38,995,263 | <p>My question is about the use of the pdist function from scipy.spatial.distance. I have to calculate the Hamming distances between one 1x64 vector and each of millions of other 1x64 vectors stored in a 2D array, but I cannot do it with pdist, because pdist returns the Hamming distances between every pair of vectors inside the same 2D array. I wonder if there is any way to make it calculate the Hamming distances between the vector at a specific index and all the others.</p>
<p>Here is my current code, I use 1000x64 for now because memory error shows up with big arrays.</p>
<pre><code>import numpy as np
from scipy.spatial.distance import pdist
ph = np.load('little.npy')
print pdist(ph, 'hamming').shape
</code></pre>
<p>and the output is </p>
<pre><code>(499500,)
</code></pre>
<p>little.npy holds a 1000x64 array. For example, what if I only want to see the Hamming distances between the 31st vector and all the others? What should I do?</p>
| 0 | 2016-08-17T11:22:05Z | 38,997,940 | <p>You can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow"><code>cdist</code></a>. For example,</p>
<pre><code>In [101]: from scipy.spatial.distance import cdist
In [102]: x
Out[102]:
array([[0, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 1, 0],
[0, 0, 0, 1, 1, 1, 0, 0],
[1, 0, 1, 1, 0, 1, 1, 0],
[1, 0, 1, 1, 0, 1, 1, 1],
[0, 1, 0, 1, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 1, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1, 1, 0],
[1, 0, 0, 1, 1, 0, 0, 1]])
In [103]: index = 3
In [104]: cdist(x[index:index+1], x, 'hamming')
Out[104]:
array([[ 0.625, 0.375, 0.5 , 0. , 0.125, 0.75 , 0.375, 0.375,
0.5 , 0.625]])
</code></pre>
<p>That gives the Hamming distance between the row at index 3 and all the other rows (including the row at index 3).
The result is a 2D array, with a single row. You might want to immediately pull out that row so the result is 1D:</p>
<pre><code>In [105]: cdist(x[index:index+1], x, 'hamming')[0]
Out[105]:
array([ 0.625, 0.375, 0.5 , 0. , 0.125, 0.75 , 0.375, 0.375,
0.5 , 0.625])
</code></pre>
<p>I used <code>x[index:index+1]</code> instead of just <code>x[index]</code> so that input is a 2D array (with just a single row):</p>
<pre><code>In [106]: x[index:index+1]
Out[106]: array([[1, 0, 1, 1, 0, 1, 1, 0]])
</code></pre>
<p>You'll get an error if you use <code>x[index]</code>.</p>
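<p>As an aside (an addition, not part of the answer above): if pulling in <code>cdist</code> is inconvenient, the same one-row-versus-all Hamming distances can be computed with plain NumPy broadcasting:</p>

```python
import numpy as np

def hamming_to_row(x, index):
    # Fraction of positions in which each row differs from row `index`.
    return np.count_nonzero(x != x[index], axis=1) / x.shape[1]

x = np.array([[0, 1, 1, 1],
              [0, 0, 1, 0],
              [0, 1, 1, 1]])
d = hamming_to_row(x, 0)  # -> [0.0, 0.5, 0.0]
```

<p>This avoids building the full pairwise matrix, so memory stays proportional to the number of rows.</p>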
| 0 | 2016-08-17T13:23:11Z | [
"python",
"scipy",
"pdist"
] |
Cant import module in site packages directory? | 38,995,296 | <p>I have been trying to import a module named requests; it is located in the site-packages folder. It was installed via pip, but every time I try to import it I get the error "<em>ImportError: No module named 'request'</em>".</p>
<p>I am using "import requests" to import.</p>
<pre><code>import requests
from bs4 import BeautifulSoup

def spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://www.ebay.ie/sch/Laptops-Netbooks/175672/i.html?_catref=1&_pgn=" + str(page) + "&_skc=50&rt=nc"
        source_code = requests.get(url)
        plain = source_code.text
        soup = BeautifulSoup(plain)
        for link in soup.findAll('a', {'class': 'img'}):
            href = "http://www.ebay.ie" + link.get('href')
            title = link.string
            print(href)
            print(title)
        page += 1

spider(1)
</code></pre>
<p>I'm wondering if it's to do with my environment variables or if I have installed them wrong.</p>
| -1 | 2016-08-17T11:24:01Z | 38,995,368 | <p>You have a typo.</p>
<p><code>import request</code> should be <code>import requests</code></p>
| 0 | 2016-08-17T11:27:31Z | [
"python",
"python-3.x"
] |
Cant import module in site packages directory? | 38,995,296 | <p>I have been trying to import a module named requests; it is located in the site-packages folder. It was installed via pip, but every time I try to import it I get the error "<em>ImportError: No module named 'request'</em>".</p>
<p>I am using "import requests" to import.</p>
<pre><code>import requests
from bs4 import BeautifulSoup

def spider(max_pages):
    page = 1
    while page <= max_pages:
        url = "http://www.ebay.ie/sch/Laptops-Netbooks/175672/i.html?_catref=1&_pgn=" + str(page) + "&_skc=50&rt=nc"
        source_code = requests.get(url)
        plain = source_code.text
        soup = BeautifulSoup(plain)
        for link in soup.findAll('a', {'class': 'img'}):
            href = "http://www.ebay.ie" + link.get('href')
            title = link.string
            print(href)
            print(title)
        page += 1

spider(1)
</code></pre>
<p>I'm wondering if it's to do with my environment variables or if I have installed them wrong.</p>
| -1 | 2016-08-17T11:24:01Z | 38,997,238 | <p>I guess you have installed <a href="http://docs.python-requests.org/en/master/user/install/" rel="nofollow"><code>requests</code></a> module for <code>Python 2</code> but you are working on <code>Python 3</code>. Use the below command to install it for Python 3:</p>
<pre><code>python3 -m pip install requests
</code></pre>
<p>Source: <a href="https://docs.python.org/3.6/installing/index.html#work-with-multiple-versions-of-python-installed-in-parallel" rel="nofollow">Python docs</a>.</p>
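<p>To diagnose this kind of mismatch, each interpreter can be asked directly whether it can see a module at all via <code>importlib.util.find_spec</code> (a general sketch, not specific to <code>requests</code>):</p>

```python
import importlib.util

def module_available(name):
    # True if *this* interpreter can import `name`.
    return importlib.util.find_spec(name) is not None

ok = module_available("csv")                            # a stdlib module, always present
missing = module_available("surely_not_installed_xyz")  # not installed anywhere
```

<p>Running the same check under each installed interpreter quickly shows which one actually has the package (on Python 3.4+, where <code>find_spec</code> exists).</p>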
| 0 | 2016-08-17T12:54:25Z | [
"python",
"python-3.x"
] |
Generate unique barcodes without race conditions | 38,995,304 | <p>For our customers, I need to generate unique barcodes. No customer shall have two identical barcodes. The barcodes are built up as follows:</p>
<ul>
<li>Customer prefix</li>
<li>Index number</li>
<li>Check digit</li>
</ul>
<p>I want to keep track per customer which was the latest index number used, and when generating a barcode, retrieve the latest index number and increment that index number by one.</p>
<p>The problem now occurs when two processes try to generate barcodes at the same time. Process A and B both ask the latest index number, both receive the same latest index number and both create the same barcode code.</p>
<p>Is there a way to ensure that even when offering the barcode generation asynchronously, no duplicate barcodes are generated? The system to build it in is Django 1.9, Python 3.5 with a PostgreSQL database.</p>
 | 0 | 2016-08-17T11:24:21Z | 38,995,443 | <p>That's one of the tools baked into database engines, so how about using it for that purpose? It is a <a href="https://www.postgresql.org/docs/9.1/static/sql-createsequence.html" rel="nofollow">sequence</a>. It is the very same tool that is used for generating primary key values (which I assume are not an option for some reason in your case, otherwise just use it).</p>
<p>Unfortunately, it is not handled by Django ORM, but you may create one directly like this:</p>
<pre><code>CREATE SEQUENCE barcodes START WITH 100;
</code></pre>
<p>You may then use it at any time by performing a direct SQL query from your Django application:</p>
<pre><code>from django.db import connection

with connection.cursor() as cursor:
    cursor.execute("select nextval('barcodes')")
    barcode = cursor.fetchone()[0]  # fetchone() returns a one-element tuple
</code></pre>
<p>The sequence is guaranteed to be unique. Note that there may be gaps in the generated numbers as rolling back the transaction will not "revert" advancing the sequence.</p>
<p>Now you have a guaranteed unique number, you can insert it into your barcode, guaranteeing its uniqueness as well.</p>
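<p>To illustrate how the pieces could fit together (this sketch is an addition, not from the original answer; the numeric prefix format and the Luhn check digit are assumptions, and real barcode symbologies such as EAN use a similar but different weighting):</p>

```python
def luhn_check_digit(payload):
    # Standard Luhn: double every second digit from the right,
    # subtract 9 from two-digit results, then pad the sum to a multiple of 10.
    total = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def make_barcode(prefix, index):
    # prefix is assumed numeric here; index comes from the database sequence.
    payload = "%s%06d" % (prefix, index)
    return payload + luhn_check_digit(payload)

barcode = make_barcode("429", 100)  # -> "4290001009"
```

<p>Because the index part comes from the sequence, the whole barcode inherits its uniqueness per prefix.</p>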
<p>For convenience, you may want to create / drop the sequence in a <a href="https://docs.djangoproject.com/en/1.10/howto/writing-migrations/" rel="nofollow">custom migration</a>.</p>
| 2 | 2016-08-17T11:31:35Z | [
"python",
"django",
"postgresql",
"barcode",
"race-condition"
] |
Unable to import child from parent package python | 38,995,363 | <p>I'm currently writing a web application in python that needs unit tests, however whenever I try to import a child module that's in another parent directory I get the following error:</p>
<pre><code>$ python my_package/tests/test.py
Traceback (most recent call last):
  File "my_package/tests/test.py", line 1, in <module>
    from my_package.core.main import hello
ImportError: No module named 'my_package.core.main'
</code></pre>
<h2>File: my_package/core/main.py</h2>
<pre><code>hello = "Hello"
</code></pre>
<h2>File: my_package/tests/test.py</h2>
<pre><code>from my_package.core.main import hello
print(hello, "world!")
</code></pre>
<h2>My directory structure:</h2>
<pre><code>$ tree
.
└── my_package
    ├── __init__.py
    ├── core
    │   ├── __init__.py
    │   └── main.py
    └── tests
        ├── __init__.py
        └── test.py
</code></pre>
<p>Could someone please explain what I'm doing wrong? Thank you for your time.</p>
| 1 | 2016-08-17T11:27:19Z | 38,995,409 | <p>Your my_package is not in PYTHONPATH. At the top of your test.py add the below. Note that any change in location of test.py would affect package_path</p>
<pre><code>from os.path import dirname, abspath
import sys

# test.py sits in my_package/tests/, so the directory that contains
# my_package is three dirname() calls up from this file
package_path = dirname(dirname(dirname(abspath(__file__))))
sys.path.append(package_path)
</code></pre>
| 0 | 2016-08-17T11:29:57Z | [
"python",
"packages"
] |
Unable to import child from parent package python | 38,995,363 | <p>I'm currently writing a web application in python that needs unit tests, however whenever I try to import a child module that's in another parent directory I get the following error:</p>
<pre><code>$ python my_package/tests/test.py
Traceback (most recent call last):
  File "my_package/tests/test.py", line 1, in <module>
    from my_package.core.main import hello
ImportError: No module named 'my_package.core.main'
</code></pre>
<h2>File: my_package/core/main.py</h2>
<pre><code>hello = "Hello"
</code></pre>
<h2>File: my_package/tests/test.py</h2>
<pre><code>from my_package.core.main import hello
print(hello, "world!")
</code></pre>
<h2>My directory structure:</h2>
<pre><code>$ tree
.
└── my_package
    ├── __init__.py
    ├── core
    │   ├── __init__.py
    │   └── main.py
    └── tests
        ├── __init__.py
        └── test.py
</code></pre>
<p>Could someone please explain what I'm doing wrong? Thank you for your time.</p>
| 1 | 2016-08-17T11:27:19Z | 38,996,210 | <p>It is <strong><a href="https://github.com/amontalenti/elements-of-python-style#avoid-syspath-hacks" rel="nofollow">considered an anti-pattern</a></strong> to modify <code>sys.path</code>. If you want your package to be available to all subpackages, it's better to use <code>setup.py</code> development mode.
Create <code>setup.py</code> in the root of your project:</p>
<pre><code>from setuptools import setup

setup(
    name="your_project",
    version="0.0.0",
    packages=['my_package', ],
    install_requires=['requirement1', 'requirement2'],
)
</code></pre>
<p>Then run:</p>
<p><code>$ python setup.py develop</code></p>
<p>After this you will be able to import <code>my_package</code> from anywhere within your Python environment.</p>
| 4 | 2016-08-17T12:07:24Z | [
"python",
"packages"
] |
Add two variables in django templates? | 38,995,503 | <pre><code>{% with a=pro_details.product_quantity|add:product_details.product_quantity %}
</code></pre>
<p>I need to add two variables in django templates using with and add.</p>
 | -2 | 2016-08-17T11:34:20Z | 38,997,576 | <p>You can use a custom template tag to achieve this.</p>
<p><strong>The templatetags/custom_tags.py file:</strong></p>
<pre><code>from django import template
register = template.Library()
@register.simple_tag
def add(a, b):
    return a + b
</code></pre>
<p><strong>The template part, with our tag call:</strong></p>
<pre><code>{% load custom_tags %}
</code></pre>
<p>(where you want to use)</p>
<pre><code>{% add 5 6 %}
</code></pre>
<p>You can consult this link as well: <a href="https://docs.djangoproject.com/en/1.10/howto/custom-template-tags/" rel="nofollow">https://docs.djangoproject.com/en/1.10/howto/custom-template-tags/</a></p>
| 3 | 2016-08-17T13:08:28Z | [
"python",
"django",
"templates"
] |
How to detect when pytest test case got AssertionError? | 38,995,522 | <p>I am using pytest to automate project tests. I want to take some unique actions, like "save_snapshot()", only when a test case fails.</p>
<p>Do we have something like that in pytest?</p>
<p>I have tried to achieve this using teardown_method(), but this method is not getting executed when a test case fails.</p>
<ul>
<li>without using fixtures please.</li>
</ul>
 | 1 | 2016-08-17T11:35:22Z | 39,249,323 | <p>I found a solution for this issue by using a Python decorator for each test in the class:</p>
<pre><code>import datetime

def is_failed_decorator(func):
    def wrapper(*args, **kwargs):
        try:
            func(*args, **kwargs)
        except AssertionError:
            cls_obj = args[0]
            cur_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
            func_name = func.__name__
            # Save_snapshot().
            raise
    return wrapper

# Tests class
@is_failed_decorator
def test_fail(self):
    assert False
</code></pre>
<p>worked for me :D</p>
| 0 | 2016-08-31T12:12:12Z | [
"python",
"python-2.7",
"testing",
"py.test"
] |
Built a list of functions | 38,995,590 | <p>I post simple code in order to explain my problem. The while loop runs 2 cycles here, but in the real case it runs billions of cycles.</p>
<pre><code>def uno_trov():
    if(1==1):
        return True
    else:
        return False

def due_trov():
    if(1==0):
        return True
    else:
        return False

condizioneV = []
condizione = [1, 0] #1 or 0 inside can be changed by the user
first_time = False
i=0
while(i<2):
    if(first_time == False): #with the first cycle I build condizioneV (list of functions)
        if(condizione[0]==1):
            condizioneV.append(uno_trov)
        if(condizione[1]==1):
            condizioneV.append(due_trov)
        first_time = True
        print(condizioneV) #I expect [True]
        i+=1
    else: #second time condizioneV is already built and I suppose the process will be faster because the code no longer checks "if(condizione[1]==1)"
        print(condizioneV) #I expect [True]
        i+=1

#problem is that I obtain "[<function uno_trov at 0x0272DED0>]" two times.
</code></pre>
<p>I don't understand the reason, but I obtain "[]" two times. There are no errors, but I don't get a list with one or two True/False values.</p>
| -2 | 2016-08-17T11:38:55Z | 38,995,734 | <pre><code> condizioneV.append(uno_trov)
</code></pre>
<p>appends the function object itself; you're missing the <code>()</code></p>
<p>correction:</p>
<pre><code>condizioneV.append(uno_trov())
</code></pre>
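<p>The difference is easy to see in isolation: a bare name is just a reference to the function object, while the parentheses actually call it.</p>

```python
def uno_trov():
    return 1 == 1

refs = []
refs.append(uno_trov)    # stores the function object itself; nothing is called
refs.append(uno_trov())  # stores the result of calling it: True

is_func = callable(refs[0])  # True: a function object was stored
result = refs[0]()           # a stored reference can still be called later
```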
<p>General remark: your code is very poorly written and confusing. Look at your <code>first_time</code> condition, which is reversed, for instance. And the indentation is terrible.</p>
| 1 | 2016-08-17T11:45:39Z | [
"python",
"python-3.x"
] |
Built a list of functions | 38,995,590 | <p>I post simple code in order to explain my problem. The while loop runs 2 cycles here, but in the real case it runs billions of cycles.</p>
<pre><code>def uno_trov():
    if(1==1):
        return True
    else:
        return False

def due_trov():
    if(1==0):
        return True
    else:
        return False

condizioneV = []
condizione = [1, 0] #1 or 0 inside can be changed by the user
first_time = False
i=0
while(i<2):
    if(first_time == False): #with the first cycle I build condizioneV (list of functions)
        if(condizione[0]==1):
            condizioneV.append(uno_trov)
        if(condizione[1]==1):
            condizioneV.append(due_trov)
        first_time = True
        print(condizioneV) #I expect [True]
        i+=1
    else: #second time condizioneV is already built and I suppose the process will be faster because the code no longer checks "if(condizione[1]==1)"
        print(condizioneV) #I expect [True]
        i+=1

#problem is that I obtain "[<function uno_trov at 0x0272DED0>]" two times.
</code></pre>
<p>I don't understand the reason, but I obtain "[]" two times. There are no errors, but I don't get a list with one or two True/False values.</p>
| -2 | 2016-08-17T11:38:55Z | 38,995,770 | <p>That's some pretty convoluted code you have there. I can't imagine what it is trying to do. I'll presume you are a beginner.</p>
<p>But the reason your arrays contain functions is that these two lines append the functions themselves into the arrays; the functions are not being executed.</p>
<p>Instead of this:</p>
<pre><code> condizioneV.append(uno_trov)
condizioneV.append(due_trov)
</code></pre>
<p>You need to do this:</p>
<pre><code> condizioneV.append(uno_trov())
condizioneV.append(due_trov())
</code></pre>
| 0 | 2016-08-17T11:47:17Z | [
"python",
"python-3.x"
] |
Built a list of functions | 38,995,590 | <p>I post simple code in order to explain my problem. The while loop runs 2 cycles here, but in the real case it runs billions of cycles.</p>
<pre><code>def uno_trov():
    if(1==1):
        return True
    else:
        return False

def due_trov():
    if(1==0):
        return True
    else:
        return False

condizioneV = []
condizione = [1, 0] #1 or 0 inside can be changed by the user
first_time = False
i=0
while(i<2):
    if(first_time == False): #with the first cycle I build condizioneV (list of functions)
        if(condizione[0]==1):
            condizioneV.append(uno_trov)
        if(condizione[1]==1):
            condizioneV.append(due_trov)
        first_time = True
        print(condizioneV) #I expect [True]
        i+=1
    else: #second time condizioneV is already built and I suppose the process will be faster because the code no longer checks "if(condizione[1]==1)"
        print(condizioneV) #I expect [True]
        i+=1

#problem is that I obtain "[<function uno_trov at 0x0272DED0>]" two times.
</code></pre>
<p>I don't understand the reason, but I obtain "[]" two times. There are no errors, but I don't get a list with one or two True/False values.</p>
 | -2 | 2016-08-17T11:38:55Z | 38,996,709 | <p>Yes, this is a solution for one cycle, but if you try to use it by passing a parameter to a function it doesn't work anymore. I add "t=i" in the first if:</p>
<pre><code>def uno_trov(t):
    if(1==t):
        return True
    else:
        return False

def due_trov():
    if(1==0):
        return True
    else:
        return False

condizioneV = []
condizione = [1, 1] #1 or 0 inside can be changed by the user
first_time = False
i=0
while(i<2):
    if(first_time == False): #with the first cycle I build condizioneV
        if(condizione[0]==1):
            t=i
            condizioneV.append(uno_trov(t))
        if(condizione[1]==1):
            condizioneV.append(due_trov())
        first_time = True
        print(condizioneV) #I expect [False]
        i+=1
        t=i
    else: #second time condizioneV is already built but does not work properly
        print(condizioneV) #I expect [True] but instead I get [False]
        i+=1
        t=i
</code></pre>
<p>In other words, I would like the first cycle to build the list of functions, and then, from the second cycle to the end, to execute the list of functions.</p>
| 0 | 2016-08-17T12:29:24Z | [
"python",
"python-3.x"
] |
How to stop Python Websocket client "ws.run_forever" | 38,995,640 | <p>I'm starting my Python Websocket using "ws.run_forever", another <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.run_until_complete" rel="nofollow">source</a> stated that I should use "run_until_complete()" but these functions only seem available to Python asyncio.</p>
<p>How can I stop a websocket client? Or how can I start it without running forever?</p>
| 0 | 2016-08-17T11:40:29Z | 38,995,641 | <p>In python websockets, you can use "ws.keep_running = False" to stop the "forever running" websocket.</p>
<p>This may be a little unintuitive and you may choose another library which may work better overall.</p>
<p>The code below was working for me (using ws.keep_running = False).</p>
<pre><code>import threading
import time

import websocket

class testingThread(threading.Thread):
    def __init__(self, threadID):
        threading.Thread.__init__(self)
        self.threadID = threadID

    def run(self):
        print str(self.threadID) + " Starting thread"
        self.ws = websocket.WebSocketApp("ws://localhost/ws", on_error = self.on_error, on_close = self.on_close, on_message=self.on_message, on_open=self.on_open)
        self.ws.keep_running = True
        self.wst = threading.Thread(target=self.ws.run_forever)
        self.wst.daemon = True
        self.wst.start()
        running = True
        testNr = 0
        time.sleep(0.1)
        while running:
            testNr = testNr+1
            time.sleep(1.0)
            self.ws.send(str(self.threadID) + " Test: " + str(testNr))
        self.ws.keep_running = False
        print str(self.threadID) + " Exiting thread"
</code></pre>
| 0 | 2016-08-17T11:40:29Z | [
"python",
"websocket"
] |
ImportError: No module named 'ase.build' | 38,995,718 | <p>In Ubuntu 16.04 I installed Python and modules:</p>
<pre><code>sudo apt install python3 python3-scipy python3-numpy python3-ase
</code></pre>
<p>Then I try to follow <a href="https://wiki.fysik.dtu.dk/ase/tutorials/surface.html" rel="nofollow">the first tutorial</a> on the <a href="https://wiki.fysik.dtu.dk/ase/index.html" rel="nofollow">ASE homepage</a>. I run <code>python3</code> in a <code>bash</code> terminal, and can import other modules but not <code>ase.build</code>. It looks like this:</p>
<pre><code>>>> from ase.optimize import QuasiNewton
>>> from ase.build import fcc111, add_adsorbate
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'ase.build'
>>>
</code></pre>
<p>Using a python script throws an equivalent error.
What could be the problem? </p>
<p><strong>UPDATE & SOLUTION</strong>
Seems this was not really even a python problem. I seem to have had some package dependency errors probably due to not running <code>apt update</code> in a long time between program installations. I removed <code>python2.x</code> and <code>python 3.x</code>, then iterated <code>apt update</code>, <code>apt upgrade</code>, <code>apt autoremove</code>, then reinstalled only <code>python3</code>. I installed <code>python3-pip</code> and installed the numpy, scipy, and ase packages using the proper form <code>python3 -m pip install --upgrade <package></code>. Now everything works as expected. </p>
| 0 | 2016-08-17T11:44:43Z | 38,996,237 | <p>Check what version of the library you have.</p>
<pre><code>import ase
print(ase.__version__)
</code></pre>
<p>If the version is <code>3.10.0</code> then that is the problem since the <code>build</code> module appeared (as far as I know) in the <code>3.11.0</code> version.</p>
| 0 | 2016-08-17T12:08:35Z | [
"python",
"python-3.x"
] |
ImportError: No module named 'ase.build' | 38,995,718 | <p>In Ubuntu 16.04 I installed Python and modules:</p>
<pre><code>sudo apt install python3 python3-scipy python3-numpy python3-ase
</code></pre>
<p>Then I try to follow <a href="https://wiki.fysik.dtu.dk/ase/tutorials/surface.html" rel="nofollow">the first tutorial</a> on the <a href="https://wiki.fysik.dtu.dk/ase/index.html" rel="nofollow">ASE homepage</a>. I run <code>python3</code> in a <code>bash</code> terminal, and can import other modules but not <code>ase.build</code>. It looks like this:</p>
<pre><code>>>> from ase.optimize import QuasiNewton
>>> from ase.build import fcc111, add_adsorbate
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'ase.build'
>>>
</code></pre>
<p>Using a python script throws an equivalent error.
What could be the problem? </p>
<p><strong>UPDATE & SOLUTION</strong>
Seems this was not really even a python problem. I seem to have had some package dependency errors probably due to not running <code>apt update</code> in a long time between program installations. I removed <code>python2.x</code> and <code>python 3.x</code>, then iterated <code>apt update</code>, <code>apt upgrade</code>, <code>apt autoremove</code>, then reinstalled only <code>python3</code>. I installed <code>python3-pip</code> and installed the numpy, scipy, and ase packages using the proper form <code>python3 -m pip install --upgrade <package></code>. Now everything works as expected. </p>
 | 0 | 2016-08-17T11:44:43Z | 38,996,892 | <p>According to <a href="http://packages.ubuntu.com/search?keywords=python3-ase" rel="nofollow">this link</a>, you have installed the 3.9.1.4567-3 version on your computer.</p>
<p>But the <strong>ase.build</strong> module has been added in:</p>
<blockquote>
<p>commit 71c9563e423e2add645c26f8d0a722f3db13e135</p>
<p>Author: Jens Jørgen Mortensen </p>
<p>Date: Tue Apr 12 15:40:59 2016 +0200</p>
<p>Move stuff to ase.build module</p>
</blockquote>
<p>So, the module <strong>ase.build</strong> doesn't exist in your version (3.9 was released in 2015). You have to install a newer version of python3-ase.</p>
| 0 | 2016-08-17T12:38:13Z | [
"python",
"python-3.x"
] |
detecting wrong file format with csv reader | 38,995,968 | <p>I would like to read in a list of ASCII files (utf-8) with the csv reader.
For error handling I would like to detect if a user has selected by accident a file which cannot be read.
The source is like this:</p>
<pre><code>for File in Filenames:
    print ('... processing file :', File)
    with open(File, 'r') as csvfile:
        Reader = csv.reader(csvfile, delimiter = ';')
        for Line in Reader:
            print(Line)
</code></pre>
<p>If the user selects e.g. a file which is GZIPed, I get this message:</p>
<p>(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte</p>
<p>Which in the first place is OK, but the script crashes.
I didn't find out how to capture the error and force the script to jump to the next file in the list. I found a lot about dialects and other codecs, but my task is not to read the wrong file by just changing the codec.</p>
<p>Many thanks for any comment!</p>
| 0 | 2016-08-17T11:56:22Z | 38,996,039 | <p>How about this: </p>
<pre><code>for File in Filenames:
print ('... processing file :',File)
with open(File, 'r') as csvfile:
try:
Reader = csv.reader(csvfile, delimiter = ';')
for Line in Reader:
print(Line)
except UnicodeDecodeError as e:
        print("File {} cannot be read. Skipping...".format(File))
continue
</code></pre>
| 1 | 2016-08-17T11:59:44Z | [
"python",
"csv"
] |
detecting wrong file format with csv reader | 38,995,968 | <p>I'd like to read in a list of ASCII files (utf-8) with the csv reader.
For error handling I'd like to detect whether a user has accidentally selected a file which cannot be read.
The source is like this:</p>
<pre><code> for File in Filenames:
print ('... processing file :',File)
with open(File, 'r') as csvfile:
Reader = csv.reader(csvfile, delimiter = ';')
for Line in Reader:
print(Line)
</code></pre>
<p>If the user selects e.g. a file which is GZIPed I get the message:</p>
<p>(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte</p>
<p>Which in the first place is ok, but the script crashes.
I haven't found out how to capture the error and force the script to jump to the next file in the list. I found a lot about dialects and other codecs, but my task is not to read the wrong file by just changing the codec.</p>
<p>Many thanks for any comment!</p>
| 0 | 2016-08-17T11:56:22Z | 38,996,040 | <p>Use exception handling - <a href="https://docs.python.org/3/tutorial/errors.html" rel="nofollow">https://docs.python.org/3/tutorial/errors.html</a></p>
<p>Your code would then look like:</p>
<pre><code>for File in Filenames:
print ('... processing file :',File)
try:
with open(File, 'r', encoding='utf-8') as csvfile:
Reader = csv.reader(csvfile, delimiter = ';')
for Line in Reader:
print(Line)
except UnicodeDecodeError:
pass
</code></pre>
<p>It's good practice to include the encoding you expect when you open the file. If you put the same script on a Windows box, the default encoding would not be "utf-8".</p>
| 0 | 2016-08-17T11:59:47Z | [
"python",
"csv"
] |
Python : Compare two csv files and print out differences | 38,996,033 | <p>I need to compare two CSV files and print out differences in a third CSV file.
In my case, the first CSV is an old list of hashes named old.csv and the second CSV is the new list of hashes, which contains both old and new hashes.</p>
<p>Here is my code :</p>
<pre><code>import csv
t1 = open('old.csv', 'r')
t2 = open('new.csv', 'r')
fileone = t1.readlines()
filetwo = t2.readlines()
t1.close()
t2.close()
outFile = open('update.csv', 'w')
x = 0
for i in fileone:
if i != filetwo[x]:
outFile.write(filetwo[x])
x += 1
outFile.close()
</code></pre>
<p>The third file is a copy of the old one and not the update.
What's wrong? I hope you can help me, many thanks!</p>
<p>PS : i don't want to use diff </p>
| 1 | 2016-08-17T11:59:36Z | 38,996,349 | <p>I assumed your new file was just like your old one, except that some lines were added in between the old ones. The old lines in both files are stored in the same order.</p>
<p>Try this :</p>
<pre><code>with open('old.csv', 'r') as t1:
old_csv = t1.readlines()
with open('new.csv', 'r') as t2:
new_csv = t2.readlines()
with open('update.csv', 'w') as out_file:
line_in_new = 0
line_in_old = 0
while line_in_new < len(new_csv) and line_in_old < len(old_csv):
if old_csv[line_in_old] != new_csv[line_in_new]:
out_file.write(new_csv[line_in_new])
else:
line_in_old += 1
line_in_new += 1
</code></pre>
<ul>
<li>Note that I used the context manager <a class='doc-link' href="http://stackoverflow.com/documentation/python/928/context-managers-with-statement#t=201608171234082476867"><code>with</code></a> and some meaningful variable names, which makes it instantly easier to understand. And you don't need the <code>csv</code> package since you're not using any of its functionalities here.</li>
<li>About your code, you were almost doing the right thing, except that you must not go to the next line in your old CSV unless you are reading the same thing in both CSVs. That is to say, if you find a new line, keep reading the new file until you stumble upon an old one, and then you'll be able to continue reading.</li>
</ul>
<p><strong>UPDATE:</strong> This solution is not as pretty as <a href="http://stackoverflow.com/a/38996374/5018771">Chris Mueller's one</a> which is perfect and very Pythonic for small files, but it only reads the files once (keeping the idea of your original algorithm), thus it can be better if you have larger file.</p>
| 0 | 2016-08-17T12:13:24Z | [
"python",
"csv"
] |
Python : Compare two csv files and print out differences | 38,996,033 | <p>I need to compare two CSV files and print out differences in a third CSV file.
In my case, the first CSV is an old list of hashes named old.csv and the second CSV is the new list of hashes, which contains both old and new hashes.</p>
<p>Here is my code :</p>
<pre><code>import csv
t1 = open('old.csv', 'r')
t2 = open('new.csv', 'r')
fileone = t1.readlines()
filetwo = t2.readlines()
t1.close()
t2.close()
outFile = open('update.csv', 'w')
x = 0
for i in fileone:
if i != filetwo[x]:
outFile.write(filetwo[x])
x += 1
outFile.close()
</code></pre>
<p>The third file is a copy of the old one and not the update.
What's wrong? I hope you can help me, many thanks!</p>
<p>PS : i don't want to use diff </p>
| 1 | 2016-08-17T11:59:36Z | 38,996,374 | <p>The problem is that you are comparing each line in <code>fileone</code> to the same line in <code>filetwo</code>. As soon as there is an extra line in one file you will find that the lines are never equal again. Try this:</p>
<pre><code>with open('old.csv', 'r') as t1, open('new.csv', 'r') as t2:
fileone = t1.readlines()
filetwo = t2.readlines()
with open('update.csv', 'w') as outFile:
for line in filetwo:
if line not in fileone:
outFile.write(line)
</code></pre>
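<p>A small variation (my addition, not part of the original answer): for large files, <code>line not in fileone</code> scans the whole list for every line of the new file. Building a <code>set</code> from the old lines first makes each membership test O(1):</p>

```python
# Same logic as above, but with a set for fast membership tests.
old_lines = {'a,1\n', 'b,2\n'}           # stand-in for set(fileone)
new_lines = ['a,1\n', 'c,3\n', 'b,2\n']  # stand-in for filetwo

added = [line for line in new_lines if line not in old_lines]
print(added)  # ['c,3\n']
```

<p>The result still follows the order of the new file, so the update keeps the new file's line order.</p>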
| 1 | 2016-08-17T12:14:29Z | [
"python",
"csv"
] |
Reading python variables during a running job | 38,996,051 | <p>I have launched a python script that takes a long time to finish, and silly me I forgot to print out the values of important variables every now and then in my script, to for example estimate the progress of the computation I'm doing.
So now, I was wondering if there's a way to access the current values of certain set of variables in my code (e.g. a list), as the script is running? (I could of course just stop it and add the changes/prints to the code then relaunch, but since it has been running for a day now, it is a pity to lose the computed values so far)</p>
<p>Alternatively, can I crash it in a certain way (other than usual Ctrl-c keyboard interrupt) such that the variable values at the moment of crash are pasted somewhere given that I didn't plan for this in my script? (I am running Ubuntu, python 2.7 and the script is simply run from a terminal by 'python test.py')</p>
| 5 | 2016-08-17T12:00:26Z | 38,996,231 | <p>Without editing your program, you're going to have a bad time. What you are looking for is some form of remote debugger, but anything that gives you python specific things will probably have to be at least somehow given a hook into your program. That being said, if you feel like fiddling around in a stack, you can attach gdb to your program (<code>gdb -p <PID></code>) and see what you can find.</p>
<p>Edit: Well. This might actually be possible.</p>
<p>Following <a href="https://wiki.python.org/moin/DebuggingWithGdb" rel="nofollow">here</a>, with the python extensions for GDB installed, if you pop open a gdb shell with <code>gdb python <PID></code>, you should be able to run <code>py-print <name of the variable></code> to get its value, assuming it's in the scope of the program at that point.
Attempting to do this myself, with the trivial program</p>
<pre><code>import time
a = 10
time.sleep(1000)
</code></pre>
<p>I was able to open a GDB shell by finding the PID of the program (<code>ps aux | grep python</code>), running <code>sudo gdb python <PID></code> and then run <code>py-print a</code>, which produced "global 'a' = 10". Of course this assumes you are running in a *nix environment.</p>
<p>Trawling around in the GDB shell for a while, I found you can actually interact with the Python primitives. For example, to get the length of an array:</p>
<pre><code>(gdb) python-interactive
>>> frame = Frame.get_selected_python_frame()
>>> pyop_frame = frame.get_pyop()
>>> var, scope = pyop_frame.get_var_by_name('<variable name>')
>>> print(var.field('ob_size'))
</code></pre>
<p>Note the requirement to use the actual internal field names to get things (The actual values of the list can be found with 'ob_item', and then an index).</p>
<p>You can dump the array to a file in a similar way:</p>
<pre><code>length = int(str(var.field('ob_size')))
output = []
for i in range(length):
output.append(str(var[i]))
with open('dump', 'w') as f:
f.write(', '.join(output))
</code></pre>
| 7 | 2016-08-17T12:08:14Z | [
"python",
"shell"
] |
Python multiprocessing: process a list of input files with error logging | 38,996,137 | <p>I want to use python multiprocessing to conduct the following:</p>
<ul>
<li>process a long list of input files </li>
<li>include error logging </li>
<li>set a limit on the concurrent CPU cores in use (number of processes)</li>
</ul>
<p>The <a href="https://docs.python.org/3/howto/logging-cookbook.html#logging-to-a-single-file-from-multiple-processes" rel="nofollow">python logging cookbook</a> has two excellent examples for multiprocessing. In the code below, I've modified the second method ("logging in the main process, in a separate thread") which uses multiprocessing.Queue. Both for myself and new users, I have added detailed notes, and created example input and output files.</p>
<p>Where I'm stuck is that the code iterates through the number of CPU cores, NOT through the number of items in my list. </p>
<p>How can I apply the function to all my input files, without exceeding the limit on the number of concurrent processes?</p>
<pre><code>import json
import logging
import multiprocessing
import numpy as np
import os
import pandas as pd
import threading
import time
def create_10_infiles():
"""Creates 10 csv files with 4x4 array of floats, + occasional strings"""
list_csv_in = []
for i in range(1,11):
csv_in = "{:02d}_in.csv".format(i)
# create a 4 row, 4 column dataframe with random values centered around i
df = pd.DataFrame(np.random.rand(16).reshape(4,4) * i)
# add a string to one of the arrays (as a reason to need error logging)
if i == 2 or i == 8:
df.loc[2,2] = "Oops, array contains a string. Welcome to data science."
# save to csv, and append filename to list of inputfiles
df.to_csv(csv_in)
list_csv_in.append(csv_in)
return list_csv_in
def logger_thread(queue):
"""Listener process that logs output received from other processes?"""
while True:
record = queue.get()
if record is None:
break
logger = logging.getLogger(record.name)
logger.handle(record)
def worker_process(queue, infile):
"""Worker process that used to run tasks.
Each process is isolated, so it starts by setting up logging."""
# set up a handle to hold the logger output?
queue_handle = logging.handlers.QueueHandler(queue)
# creates a new logger called "process logger" (printed in each line)
logger = logging.getLogger("process logger")
# sets the logging level to DEBUG, so logger.info messages are printed.
logger.setLevel(logging.DEBUG)
# connects logger to handle defined above?
logger.addHandler(queue_handle)
# here you can run your desired program, in the hope that the time saved from parallel
# processing is greater than the overhead of setting up all those processes and loggers:)
normalise_array_to_mean_and_save(infile, logger)
def normalise_array_to_mean_and_save(csv_in, logger):
"""Opens csv with array, checks dtypes, calculates mean, saves output csv."""
# check if file exists
if os.path.isfile(csv_in):
# open as pandas dataframe
df = pd.read_csv(csv_in)
# if none of the columns contain mixed datatypes (i.e, a string)
if not pd.np.dtype('object') in df.dtypes.tolist():
# calc mean over whole dataframe
mean = df.stack().mean()
logger.info("{}, Mean = {:0.2f}".format(csv_in, mean))
# normalise all values to mean. Save as "01_out.csv", "02_out.csv" etc
df = df / mean
csv_out = csv_in[:-6] + "out.csv"
df.to_csv(csv_out)
else:
logger.info("{}, Mean not calculated. Non-float values found.".format(csv_in))
if __name__ == '__main__':
os.chdir(r"D:\data")
# import your favourite json logging settings (collapsed for brevity)
logsettings = json.dumps({"version": 1, "root": {"handlers": ["console", "file"], "level": "DEBUG"}, "formatters": {"detailed": {"class": "logging.Formatter", "format": "%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s"}}, "handlers": {"console": {"class": "logging.StreamHandler", "level": "DEBUG"}, "file": {"mode": "w", "formatter": "detailed", "class": "logging.FileHandler", "filename": "my_multiprocessing_logfile.log"}}})
config = json.loads(logsettings)
# replace default logfile with a filename containing the exact time
config['handlers']['file']['filename'] = time.strftime("%Y%m%d_%H_%M_%S") + "_mp_logfile.txt"
# load the logging settings
logging.config.dictConfig(config)
queue = multiprocessing.Queue()
workers = []
# set the number of concurrent processes created (i.e. CPU cores used)
num_processes = 4
# create 10 csv files with data, and return the list of filepaths
list_10_infiles = create_10_infiles()
# set up a process for each CPU core (e.g. 4)
for i in range(num_processes):
wp = multiprocessing.Process(target=worker_process,
name='worker_{}'.format(i+1),
args=(queue, list_10_infiles[i]))
workers.append(wp)
wp.start()
# set up a thread as the logger_process
logger_process = threading.Thread(target=logger_thread, args=(queue,))
logger_process.start()
#At this point, the main process could do some useful work of its own
#Once it's done that, it can wait for the workers to terminate...
for wp in workers:
wp.join()
# set logger for main process if desired
root = logging.getLogger("main")
root.setLevel(logging.DEBUG)
logger = logging.getLogger("main logger")
logger.info("CPUs used = {}/{}".format(num_processes, multiprocessing.cpu_count()))
logger.info('Program is finished. All files analysed.')
# And now tell the logging thread to finish up, too
queue.put(None)
logger_process.join()
</code></pre>
<p>Note: I've tried dividing the list of input files into chunks depending on the number of CPU cores. This processed the files, but was very slow.</p>
| 0 | 2016-08-17T12:04:17Z | 39,966,621 | <p>I found that using python multiprocessing Pool instead of Queue allowed me to process a long list of files, and limit the number of concurrent cores.</p>
<p>Although logging is not compatible with Pool, I found that it is possible to collect the return values. The return values can be logged after all files have been processed, assuming the code doesn't throw an exception.</p>
<p>Maybe someone here can give me a more elegant solution, but for the moment this solves the problem.</p>
<pre><code>from multiprocessing import Pool
from time import strftime
import logging
def function_to_process_files(file):
#..check file integrity, etc..
if file_gives_an_error:
return "{} file {} gave an error".format(strftime("%Y%m%d_%H_%M_%S"), file)
#..do stuff without using the logging module..
#.. for slow, irregular processes, printing to console is possible..
return "{} file {} processed correctly".format(strftime("%Y%m%d_%H_%M_%S"), file)
if __name__ == "__main__":
list_of_files_to_process = define_your_file_list_somehow()
logging = logging.setup_regular_logging_to_file_as_desired()
# define the number of CPU cores to be used concurrently
n_processes = 4
with Pool(processes=n_processes) as pool:
list_of_return_statements = pool.map(function_to_process_files, list_of_files_to_process)
# now transfer the list of return statements to the logfile
for return_statement in list_of_return_statements:
logging.info(return_statement)
</code></pre>
| 0 | 2016-10-10T20:56:13Z | [
"python",
"logging",
"python-multiprocessing",
"error-logging"
] |
Merging of two lists following certain patterns | 38,996,316 | <p>I have two lists:</p>
<pre><code>a = ['a', 'b', 'c', 'd']
b = ['e', 'f', 'g', 'h']
</code></pre>
<p>which I want to merge to one list which contains element nr. 1 of list a as first element, element nr.1 of list b as second element, element nr. 2 of list a as third element and so on, looking like this:</p>
<pre><code>c = ['a', 'e', 'b', 'f', 'c', 'g', 'd', 'h']
</code></pre>
<p>What is the easiest way to do so, possibly without using loops?</p>
| 0 | 2016-08-17T12:12:17Z | 38,996,411 | <p>Borrowing from <a href="http://stackoverflow.com/a/952946/2285236">this answer</a>:</p>
<pre><code>list(sum(zip(a, b), ()))
Out: ['a', 'e', 'b', 'f', 'c', 'g', 'd', 'h']
</code></pre>
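<p>For intuition (my addition): <code>sum</code> with a tuple start value simply concatenates the pairs produced by <code>zip</code>, one tuple at a time, which is why the result comes out interleaved:</p>

```python
# Step-by-step view of list(sum(zip(a, b), ()))
a = ['a', 'b', 'c', 'd']
b = ['e', 'f', 'g', 'h']

pairs = list(zip(a, b))  # [('a', 'e'), ('b', 'f'), ('c', 'g'), ('d', 'h')]
flat = sum(pairs, ())    # ('a', 'e') + ('b', 'f') + ... one big tuple
c = list(flat)
print(c)  # ['a', 'e', 'b', 'f', 'c', 'g', 'd', 'h']
```

<p>Note that each <code>+</code> builds a new tuple, so this is quadratic in the number of pairs; the <code>itertools.chain</code> approach in the other answer avoids that.</p>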
| 9 | 2016-08-17T12:16:13Z | [
"python"
] |
Merging of two lists following certain patterns | 38,996,316 | <p>I have two lists:</p>
<pre><code>a = ['a', 'b', 'c', 'd']
b = ['e', 'f', 'g', 'h']
</code></pre>
<p>which I want to merge to one list which contains element nr. 1 of list a as first element, element nr.1 of list b as second element, element nr. 2 of list a as third element and so on, looking like this:</p>
<pre><code>c = ['a', 'e', 'b', 'f', 'c', 'g', 'd', 'h']
</code></pre>
<p>What is the easiest way to do so, possibly without using loops?</p>
| 0 | 2016-08-17T12:12:17Z | 38,996,416 | <p>Just <a href="https://docs.python.org/3/library/functions.html#zip"><code>zip</code></a> them into pairs and then flatten the list using <a href="https://docs.python.org/3.5/library/itertools.html#itertools.chain.from_iterable"><code>itertools.chain.from_iterable</code></a>:</p>
<pre><code>In [1]: a=['a','b','c','d']
In [2]: b=['e','f','g','h']
In [3]: from itertools import chain
In [4]: chain.from_iterable(zip(a, b))
Out[4]: <itertools.chain at 0x7fbcf2335ef0>
In [5]: list(chain.from_iterable(zip(a, b)))
Out[5]: ['a', 'e', 'b', 'f', 'c', 'g', 'd', 'h']
</code></pre>
| 5 | 2016-08-17T12:16:36Z | [
"python"
] |
Merging of two lists following certain patterns | 38,996,316 | <p>I have two lists:</p>
<pre><code>a = ['a', 'b', 'c', 'd']
b = ['e', 'f', 'g', 'h']
</code></pre>
<p>which I want to merge to one list which contains element nr. 1 of list a as first element, element nr.1 of list b as second element, element nr. 2 of list a as third element and so on, looking like this:</p>
<pre><code>c = ['a', 'e', 'b', 'f', 'c', 'g', 'd', 'h']
</code></pre>
<p>What is the easiest way to do so, possibly without using loops?</p>
| 0 | 2016-08-17T12:12:17Z | 38,997,009 | <p>Here's an answer comparing some of the possible methods with 2 differents datasets, one will consist of many little arrays, the other one will be few large arrays:</p>
<p>import timeit
import random
from itertools import chain</p>
<pre><code>def f1(a, b):
return list(chain.from_iterable(zip(a, b)))
def f2(a, b):
return list(sum(zip(a, b), ()))
def f3(a, b):
result = []
for (e1, e2) in zip(a, b):
result += [e1, e2]
return result
def f4(a, b):
result = []
len_result = min(len(a), len(b))
result = []
i = 0
while i < len_result:
result.append(a[i])
result.append(b[i])
i += 1
return result
# Small benchmark
N = 5000000
a_small = ['a', 'b', 'c', 'd']
b_small = ['e', 'f', 'g', 'h']
benchmark1 = [
timeit.timeit(
'f1(a_small, b_small)', setup='from __main__ import f1, a_small,b_small', number=N),
timeit.timeit(
'f2(a_small, b_small)', setup='from __main__ import f2, a_small,b_small', number=N),
timeit.timeit(
'f3(a_small, b_small)', setup='from __main__ import f3, a_small,b_small', number=N),
timeit.timeit(
'f4(a_small, b_small)', setup='from __main__ import f4, a_small,b_small', number=N)
]
for index, value in enumerate(benchmark1):
print " - Small sample with {0} elements -> f{1}={2}".format(len(a_small), index + 1, value)
# Large benchmark
N = 5000
K = 100000
P = 1000
a_large = random.sample(range(K), P)
b_large = random.sample(range(K), P)
benchmark2 = [
timeit.timeit(
'f1(a_large, b_large)', setup='from __main__ import f1, a_large,b_large', number=N),
timeit.timeit(
'f2(a_large, b_large)', setup='from __main__ import f2, a_large,b_large', number=N),
timeit.timeit(
'f3(a_large, b_large)', setup='from __main__ import f3, a_large,b_large', number=N),
timeit.timeit(
'f4(a_large, b_large)', setup='from __main__ import f4, a_large,b_large', number=N)
]
for index, value in enumerate(benchmark2):
print " - Large sample with {0} elements -> f{1}={2}".format(K, index + 1, value)
</code></pre>
<ul>
<li>Small sample with 4 elements -> f1=7.50175959666</li>
<li>Small sample with 4 elements -> f2=5.52386084127</li>
<li>Small sample with 4 elements -> f3=7.12457549607</li>
<li>Small sample with 4 elements -> f4=7.24530968309</li>
<li>Large sample with 100000 elements -> f1=0.512278885906</li>
<li>Large sample with 100000 elements -> f2=28.0679210232</li>
<li>Large sample with 100000 elements -> f3=1.05977378475</li>
<li>Large sample with 100000 elements -> f4=1.17144886156</li>
</ul>
<p>Conclusion: It seems the f2 function is slightly faster when N is big and the lists are little. When the arrays are large and the number is little, f1 is the winner though.</p>
<p>Specs: Python2.7.11(64) , N=5000000 on a i-7 2.6Ghz</p>
| 2 | 2016-08-17T12:43:41Z | [
"python"
] |
Pearson multiple correlation with Scipy | 38,996,366 | <p>I am trying to do something quite simple: compute a Pearson correlation matrix of several variables that are given as columns of a DataFrame. I want it to ignore nans and provide also the p-values. <code>scipy.stats.pearsonr</code> is insufficient because it works only for two variables and cannot account for nans. There should be something better than that...</p>
<p>For example,</p>
<pre><code> df = pd.DataFrame([[1,2,3],[6,5,4],[1,None,9]])
0 1 2
0 1 2.0 3
1 6 5.0 4
2 1 NaN 9
</code></pre>
<p>The columns of df are the variables and the rows are observations. I would like a command that returns a 3x3 correlation matrix, along with a 3x3 matrix of corresponding p-values. I want it to omit the None. That is, the correlation between [1,6,1],[2,5,NaN] should be the correlation between [1,6] and [2,5]. </p>
<p>There must be a nice Pythonic way to do that, can anyone please suggest?</p>
| 0 | 2016-08-17T12:14:19Z | 38,999,002 | <p>If you have your data in a pandas DataFrame, you can simply use <code>df.corr()</code>.</p>
<p>From the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html" rel="nofollow">docs</a>:</p>
<pre><code>DataFrame.corr(method='pearson', min_periods=1)
Compute pairwise correlation of columns, excluding NA/null values
</code></pre>
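<p>If the p-values are also needed (as the question asks), one way (my sketch, assuming scipy is available) is to pair <code>df.corr()</code> with a loop that drops NaNs pairwise, mirroring <code>df.corr()</code>'s pairwise deletion, and calls <code>scipy.stats.pearsonr</code> on each column pair:</p>

```python
# Sketch: a pairwise p-value matrix to accompany df.corr().
# NaNs are dropped per column pair, matching df.corr()'s behaviour.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame([[1, 2, 3], [6, 5, 4], [1, None, 9]])

pvals = pd.DataFrame(index=df.columns, columns=df.columns, dtype=float)
for c1 in df.columns:
    for c2 in df.columns:
        if c1 == c2:
            pvals.loc[c1, c2] = 0.0  # correlation of a column with itself
            continue
        pair = df[[c1, c2]].dropna()  # pairwise NaN removal
        r, p = pearsonr(pair[c1], pair[c2])
        pvals.loc[c1, c2] = p
```

<p>For the example frame, the (0, 1) entry is computed from only the two complete rows, exactly as in the question's [1,6] vs [2,5] case.</p>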
| 1 | 2016-08-17T14:07:28Z | [
"python",
"scipy",
"pearson-correlation"
] |
sorting list of tuple based on engineering unit | 38,996,396 | <p>I am trying to sort a list in the following way.</p>
<pre><code>data = [('18.3G', 'pgaur'), ('29.6G', 'adsoni'), ('5.51G', 'keyur'), ('10.8G', 'adityac')]
sorted(data, key= lambda x:x[0])
</code></pre>
<p>but it is not sorting the data the way I expect.</p>
| 1 | 2016-08-17T12:15:28Z | 38,996,622 | <p>To sort numbers in the way that makes sense to humans, you have to make sure that they are presented as numbers and not as text:</p>
<ul>
<li>e.g., '5' is a string representing a number, not a number itself, so
<code>example_list = ['5', '10']</code>, if sorted, would yield <code>['10', '5']</code>,
because what's compared is '5' against '1': one character at a time.</li>
</ul>
<p>To get the result you probably want, you have to do the following:</p>
<pre><code>data = [('18.3G', 'pgaur'), ('29.6G', 'adsoni'), ('5.51G', 'keyur'), ('10.8G', 'adityac')]
sorted_list = sorted(data, key=lambda x: float(x[0][:-1]))
print(sorted_list)
# prints [('5.51G', 'keyur'), ('10.8G', 'adityac'), ('18.3G', 'pgaur'), ('29.6G', 'adsoni')]
</code></pre>
<p>Notice the slicing on the <code>x[0]</code>. It takes all characters in <code>x[0]</code> apart from the last one (<code>'G'</code>) since that would mess up the sorting of the number. Then the sliced <code>x[0]</code> is converted to float with <code>float()</code> and used for the sorting. Finally the results are saved in a new list.</p>
| 2 | 2016-08-17T12:25:34Z | [
"python",
"python-2.7"
] |
sorting list of tuple based on engineering unit | 38,996,396 | <p>I am trying to sort a list in the following way.</p>
<pre><code>data = [('18.3G', 'pgaur'), ('29.6G', 'adsoni'), ('5.51G', 'keyur'), ('10.8G', 'adityac')]
sorted(data, key= lambda x:x[0])
</code></pre>
<p>but it is not sorting data.</p>
| 1 | 2016-08-17T12:15:28Z | 38,996,737 | <pre><code>sorted(data, key= lambda x:float(x[0][:][:-1]))
</code></pre>
<p>Will give you what you want.</p>
<p>This sorts by the first element of the tuple, <code>x[0]</code>:
<code>[:]</code> just makes a (redundant) copy of the string, and <code>[:-1]</code> takes everything up to the last character (excluding the 'G'), which <code>float()</code> then converts.</p>
| 1 | 2016-08-17T12:30:50Z | [
"python",
"python-2.7"
] |
sorting list of tuple based on engineering unit | 38,996,396 | <p>I am trying to sort list into following way.</p>
<pre><code>data = [('18.3G', 'pgaur'), ('29.6G', 'adsoni'), ('5.51G', 'keyur'), ('10.8G', 'adityac')]
sorted(data, key= lambda x:x[0])
</code></pre>
<p>but it is not sorting the data the way I expect.</p>
| 1 | 2016-08-17T12:15:28Z | 38,996,750 | <p>Ideally you need a key function that returns a numeric value. Let's suppose that you can use <code>k</code>, <code>m</code> and <code>g</code> as multipliers. This code does no error checking which is, as usual, left as an exercise for the reader.</p>
<pre><code>def sortkey(pair):
num = pair[0][:-1]
mult = pair[0][-1].lower()
num = float(num)
mult = {'k': 1000, 'm': 1000000, 'g': 1000000000}[mult]
return num*mult
</code></pre>
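<p>For illustration (my addition, assuming every value ends in one of k/m/g), applying it to the data from the question:</p>

```python
# Condensed version of the key function above, plus a usage example.
def sortkey(pair):
    num = float(pair[0][:-1])
    mult = {'k': 1000, 'm': 1000000, 'g': 1000000000}[pair[0][-1].lower()]
    return num * mult

data = [('18.3G', 'pgaur'), ('29.6G', 'adsoni'),
        ('5.51G', 'keyur'), ('10.8G', 'adityac')]
print(sorted(data, key=sortkey))
# [('5.51G', 'keyur'), ('10.8G', 'adityac'), ('18.3G', 'pgaur'), ('29.6G', 'adsoni')]
```

<p>Because the key converts everything to a plain number of bytes, mixed units (say '900k' and '1.2G') would also sort correctly.</p>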
| 1 | 2016-08-17T12:31:25Z | [
"python",
"python-2.7"
] |
Parameter restrictions for Nelson-Siegel yield curve in quantlib | 38,996,422 | <p>I am using QL in Python and have translated parts of the example file
<a href="http://quantlib.org/reference/_fitted_bond_curve_8cpp-example.html#_a25" rel="nofollow">http://quantlib.org/reference/_fitted_bond_curve_8cpp-example.html#_a25</a>;
of how to fit a yield curve with bonds in order to fit a Nelson-Siegel
yield curve to a set of given calibration bonds.</p>
<p>As usual when performing such a non-linear fit, the results depend strongly
on the initial conditions and many (economically meaningless) minima of the
objective function exist. This is why putting constraints on the parameters
is essential for success. To give an example, at times I get negative
tau/lambda parameters and my yield curve diverges.</p>
<p>I did not find how these parameter constraints can be specified in
the NelsonSiegelFitting or the FittedBondDiscountCurve classes. I could
imagine that anyone performing NS fitting in QL will encounter the same
issue.</p>
| 1 | 2016-08-17T12:16:46Z | 39,012,011 | <p>Thanks to Andres Hernandez for the answer:</p>
<p>Currently it is not possible. However, it is very easy to extend QL to allow it, but I think it needs to be done on the c++. So even though you are using QL in python, can you modify the c++ code and export a new binding? If yes, then you can use the following code, if not then I could just check it into the code, but it will take some time for the pull request to be accepted. In case you can touch the code, you can add something like this:</p>
<p>in nonlinearfittingmethods.hpp:</p>
<pre><code> class NelsonSiegelConstrainedFitting
: public FittedBondDiscountCurve::FittingMethod {
public:
NelsonSiegelConstrainedFitting(const Array& lower, const Array& upper,
const Array& weights = Array(),
boost::shared_ptr<OptimizationMethod> optimizationMethod
= boost::shared_ptr<OptimizationMethod>());
std::auto_ptr<FittedBondDiscountCurve::FittingMethod> clone() const;
private:
Size size() const;
DiscountFactor discountFunction(const Array& x, Time t) const;
Array lower_, upper_;
};
</code></pre>
<p>in nonlinearfittingmethods.cpp:</p>
<pre><code>NelsonSiegelConstrainedFitting::NelsonSiegelConstrainedFitting(
const Array& lower, const Array& upper, const Array& weights,
boost::shared_ptr<OptimizationMethod> optimizationMethod)
: FittedBondDiscountCurve::FittingMethod(true, weights, optimizationMethod),
lower_(lower), upper_(upper){
QL_REQUIRE(lower_.size() == 4, "Lower constraint must have 4 elements");
    QL_REQUIRE(upper_.size() == 4, "Upper constraint must have 4 elements");
}
std::auto_ptr<FittedBondDiscountCurve::FittingMethod>
NelsonSiegelConstrainedFitting::clone() const {
return std::auto_ptr<FittedBondDiscountCurve::FittingMethod>(
        new NelsonSiegelConstrainedFitting(*this));
}
Size NelsonSiegelConstrainedFitting::size() const {
return 4;
}
DiscountFactor NelsonSiegelConstrainedFitting::discountFunction(const Array& x,
Time t) const {
///extreme values of kappa result in colinear behaviour of x[1] and x[2], so it should be constrained not only
///to be positive, but also not very extreme
Real kappa = lower_[3] + upper_[3]/(1.0+exp(-x[3]));
Real x0 = lower_[0] + upper_[0]/(1.0+exp(-x[0])),
x1 = lower_[1] + upper_[1]/(1.0+exp(-x[1])),
         x2 = lower_[2] + upper_[2]/(1.0+exp(-x[2]));
Real zeroRate = x0 + (x1 + x2)*
(1.0 - std::exp(-kappa*t))/
((kappa+QL_EPSILON)*(t+QL_EPSILON)) -
x2*std::exp(-kappa*t);
DiscountFactor d = std::exp(-zeroRate * t) ;
return d;
}
</code></pre>
<p>You then need to add it to the swig interface, but it should be trivial to do so.</p>
| 1 | 2016-08-18T07:13:26Z | [
"python",
"quantlib"
] |
Error with backquotes when import parser from dateutil with Python3.5 | 38,996,464 | <p>I used Tensorflow. It was working.
After I installed Caffe (with all dependencies) my old TF projects stopped working.</p>
<p>The root cause is:</p>
<pre><code>from dateutil import parser as _date_parser
</code></pre>
<p>gives</p>
<pre><code>/usr/bin/python3.5 /data/PycharmProjects/tensorflow/test/test1.py
Traceback (most recent call last):
File "/data/PycharmProjects/tensorflow/test/test1.py", line 1, in <module>
from dateutil import parser as _date_parser
File "/usr/local/lib/python3.5/dist-packages/dateutil/parser.py", line 158
l.append("%s=%s" % (attr, `value`))
^
SyntaxError: invalid syntax
Process finished with exit code 1
</code></pre>
<p>as a result</p>
<pre><code>import tensorflow as tf
</code></pre>
<p>doesn't work because of dependencies</p>
<p>Why did this happen? It was working before the Caffe installation.</p>
| 0 | 2016-08-17T12:18:25Z | 38,996,589 | <p>Is it possible that installing your Caffe updated Python? A L-O-O-N-G time ago Python used backticks as a shortcut for calling the <code>repr</code> function on its argument. Replacing the backtick-quoted expression with <code>repr(value)</code> might be all you need.</p>
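<p>For reference (my addition): backticks were old Python 1/2 shorthand for <code>repr()</code>, so the portable spelling of the failing line works on both Python 2 and 3:</p>

```python
# Python 2 wrote:  l.append("%s=%s" % (attr, `value`))
# The portable equivalent calls repr() explicitly:
attr = 'weekday'   # hypothetical attribute name
value = [1, 2]
line = "%s=%s" % (attr, repr(value))
print(line)  # weekday=[1, 2]
```

<p>Note that editing the installed package is only a workaround; upgrading to a Python-3-compatible dateutil is the cleaner fix.</p>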
| 0 | 2016-08-17T12:24:19Z | [
"python",
"parsing",
"tensorflow",
"python-module"
] |
How do I access a custom reverse model manager in a template? | 38,996,580 | <p>Given the following models:</p>
<pre><code>class Author(models.Model):
name = models.CharField(max_length=20)
class CustomQuerySet1(models.QuerySet):
def some_method(self):
return 'custom queryset 1'
class CustomQuerySet2(models.QuerySet):
def some_method(self):
return 'custom queryset 2'
class Book(models.Model):
author = models.ForeignKey(Author, related_name='books')
title = models.CharField(max_length=50)
objects = models.Manager()
custom1 = CustomQuerySet1.as_manager()
custom2 = CustomQuerySet2.as_manager()
</code></pre>
<p>In my REPL I can access the custom related managers like so:</p>
<pre><code>>>> author = Author.objects.create(name='John')
>>> book = Book.objects.create(author=author, title='Hello')
>>> author.books(manager='custom1').some_method()
'custom queryset 1'
>>> author.books(manager='custom2').some_method()
'custom queryset 2'
</code></pre>
<p>And in our templates we normally access related objects via the default manager like so:</p>
<pre><code>{% for book in author.books.all %}
{{ book.title }}
{% endfor %}
</code></pre>
<p>We obviously aren't allowed to make function calls and pass in parameters within templates:</p>
<pre><code>{% for book in author.books(manager='custom1').some_method %}
{% endfor %}
</code></pre>
<p>So is there a native way to access these custom related model managers in our templates I'm not aware of? Was something like this never intended? Any outside the box solutions?</p>
<p>Edit:
In case there was some confusion within the comments, I wasn't suggesting the following:</p>
<pre><code>def some_view(request):
context = {}
template = 'my_template.html'
context['author'] = Author
return render(request, template, context)
</code></pre>
<p>and then calling <code>author.objects.all</code> but instead:</p>
<pre><code>def some_view(request):
context = {}
template = 'my_template.html'
author = Author.objects.get(name='John')
context['author'] = author
return render(request, template, context)
</code></pre>
<p>and then accessing <code>author.books.custom1.some_method</code>. Note the <code>related_name</code> on <code>author</code> defined on the <code>Book</code> model.</p>
| 1 | 2016-08-17T12:23:48Z | 38,996,852 | <p>There is none, and I don't think there ever will be (nor should there be).
The template is where data is finally presented, so the data should all be prepared beforehand, with the template markup used only to present it in a convenient way.</p>
<p>However, it can be a nice idea for a template filter.</p>
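<p>As a rough illustration of that idea: a filter is just a function that receives the model instance, so it can forward a manager name. This sketch is my own, not from Django's docs; a real version would live in a <code>templatetags</code> module and be registered with <code>@register.filter</code>. A stub object stands in for the model instance here so the call can be exercised without Django installed:</p>

```python
# Sketch of a template filter exposing a named related manager.
# In a real Django project this would be decorated:
#     from django import template
#     register = template.Library()
#     @register.filter
def books_via(author, manager_name):
    """Would allow {{ author|books_via:"custom1" }} in a template."""
    return author.books(manager=manager_name)

# Minimal stand-in for a model instance, just to exercise the call:
class FakeAuthor:
    def books(self, manager="objects"):
        return "queryset from %s" % manager

result = books_via(FakeAuthor(), "custom1")
print(result)  # queryset from custom1
```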
| 0 | 2016-08-17T12:36:37Z | [
"python",
"django"
] |
Quick sort using median as pivot working (but shows incorrect count for comparisons) | 38,996,606 | <p>I am using quicksort in Python to sort a list of numbers. This has to be done using the median as the pivot.
How am I finding the median?
It is the second largest number (out of a list of 3 numbers, which are the first, middle and last elements of the original list).</p>
<p>If the list is of even length (2j), the middle number will be the jth element.</p>
<p>If the list is of odd length (say 5), the middle number will be the 3rd element.</p>
<p>My code works successfully. However, the total number of comparisons made is incorrect (tracked using the variable <strong>tot_comparisons</strong>).</p>
<p>What am I doing?</p>
<p>1. Finding the pivot (which is the median of the first, middle and last elements).</p>
<ol start="2">
<li>Swapping the pivot found above with the first element (because I already have the code ready for quicksort with the first element as pivot).</li>
</ol>
<p>I do not need the code. I have my code posted below, which works successfully (except for the total number of comparisons). I only need to know the mistake in my code.</p>
<pre><code>def swap(A,i,k):
temp=A[i]
print "temp is "
print temp
A[i]=A[k]
A[k]=temp
def find_median(input_list,start,end):
#print 'input list is'
#print input_list
temp_list1=[]
temp_list1.append(input_list[start])
if len(input_list) % 2 ==0:
middle_element_index=(len(input_list) / 2)-1
middle_element=input_list[middle_element_index]
temp_list1.append(middle_element)
else:
middle_element_index=len(input_list)/2
middle_element=input_list[middle_element_index]
temp_list1.append(middle_element)
temp_list1.append(input_list[end])
temp_list1.sort()
#print temp_list1
if len(temp_list1)==3:
print temp_list1[1]
return temp_list1[1]
elif len(temp_list1)==2:
print temp_list1[1]
return temp_list1[1]
else:
print temp_list1[0]
return temp_list1[0]
def partition(A,start,end,median_index):
swap(A,start,median_index)
pivot=A[start]
pivot_index=start + 1
for i in range(start+1,end+1):
if A[i] < pivot:
swap(A,i,pivot_index)
pivot_index+=1
swap(A,pivot_index-1,start)
return pivot_index-1
def quicksort(A,start,end):
global tot_comparisons
if start<end:
median=find_median(A, start, end)
median_index=A.index(median)
pivot_index=partition(A,start,end,median_index)
tot_comparisons+=end-start
print "pivot_index"
print pivot_index
print "ENDS"
quicksort(A, start,pivot_index-1)
#tot_comparisons+=end-pivot_index
#quicksort(A, pivot_index, end)
quicksort(A, pivot_index+1, end)
#A=[45,21,23,4,65]
#A=[21,23,19,22,1,3,7,88,110]
#A=[1,22,3,4,66,7]
#A=[1, 3, 7, 19, 21, 22, 23, 88, 110]
#A=[7,2,1,6,8,5,3,4]
temp_list=[]
f=open('temp_list.txt','r')
for line in f:
temp_list.append(int(line.strip()))
f.close()
print 'list is '
#print temp_list
print 'list ends'
tot_comparisons=0
#quicksort(A, 0, 7)
quicksort(temp_list, 0, 9999)
#quicksort(temp_list, 0, len(temp_list))
print 'hhh'
print temp_list
print tot_comparisons
#print A
</code></pre>
| 0 | 2016-08-17T12:24:54Z | 39,024,597 | <p>I believe I've found the critical problem: <strong>find_median</strong> grabs the left and right elements of the given list, but then adds the middle element of the <em>original</em> list. Thus, if you're looking to sort list positions 5:7, you grab elements 5, 7, and 3. The last should be element 6. This is likely to force you through extra sorting work.</p>
<p>The middle element has index </p>
<blockquote>
<p>(start+end) / 2</p>
</blockquote>
<p>(integer division; it looks like you're using Python 2.x). You don't need separate cases for odd and even lengths. The position depends on the given sublist, <em>not</em> the original length, <strong>len(input_list)</strong>.</p>
<p>Just make a list of the three needed elements, sort it, and return the middle element. Since there's only one temp list, which is obviously a list, let's shorten that name.</p>
<pre><code>temp = [
input_list[start],
input_list[end],
input_list[(start+end) / 2]
]
temp.sort()
return temp[1]
</code></pre>
<p>You can reduce this to a one-command function. I'll rename <strong>input_list</strong> to simply <strong>lst</strong> to help readability (note that <strong>in</strong> itself is a reserved keyword in Python, so it cannot be used as a variable name).</p>
<pre><code>return sorted( [ lst[start], lst[end], lst[(start+end) / 2] ] )[1]
</code></pre>
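<p>The fix can be checked standalone. This is a minimal sketch (the helper name is mine, not from the question) of median-of-three selection that always derives the middle index from the current subrange, never from the full list:</p>

```python
# Standalone sketch of median-of-three pivot selection over a subrange.
# The middle index is computed from start and end, not from len(lst),
# which is the bug the answer above points out.
def median_of_three(lst, start, end):
    mid = (start + end) // 2  # midpoint of the subrange being sorted
    return sorted([lst[start], lst[mid], lst[end]])[1]

data = [7, 2, 1, 6, 8, 5, 3, 4]
print(median_of_three(data, 0, 7))   # median of 7, 6, 4 -> 6
print(median_of_three(data, 4, 6))   # median of 8, 5, 3 -> 5
```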
| 1 | 2016-08-18T17:51:04Z | [
"python",
"recursion",
"quicksort",
"median"
] |
What does _com_interfaces_ do? | 38,996,617 | <p>I'm trying to understand the COM server examples of <a href="/questions/tagged/pywin32" class="post-tag" title="show questions tagged 'pywin32'" rel="tag">pywin32</a>, and in <code>win32comext/shell/demos/servers/icon_handler.py</code> I saw the line</p>
<pre><code>_com_interfaces_ = [shell.IID_IExtractIcon, pythoncom.IID_IPersistFile]
</code></pre>
<p>While that pretty clearly refers to an <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/cc144122(v=vs.85).aspx" rel="nofollow"><code>IconHandler</code></a> having to implement the <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/bb761854(v=vs.85).aspx" rel="nofollow"><code>IExtractIcon</code></a> and <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms687223(v=vs.85).aspx" rel="nofollow"><code>IPersistFile</code></a> interfaces, I cannot find any documentation <em>where</em> <code>_com_interfaces_</code> is actually used. It is nowhere to be seen in <code>win32com.server.register</code> or <code>win32com.server.localserver.py</code>, so neither registration nor the server call seem to actually use this. Is there any documentation on <code>_com_interfaces_</code>?</p>
| 1 | 2016-08-17T12:25:21Z | 38,996,764 | <p><code>_com_interfaces_</code> is an optional attribute a <a href="http://docs.activestate.com/activepython/2.6/pywin32/html/com/win32com/HTML/QuickStartServerCom.html#Policies" rel="nofollow">Server Policy</a> looks for:</p>
<blockquote>
<p><code>_com_interfaces_</code></p>
<p>Optional list of IIDs exposed by this object. If this attribute is missing, <code>IID_IDispatch</code> is assumed (i.e., if not supplied, the COM object will be created as a normal Automation object).</p>
</blockquote>
<p>The list is used to answer <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms682521(v=vs.85).aspx" rel="nofollow"><code>QueryInterface</code> enquiries</a>; see the <a href="https://github.com/SublimeText/Pywin32/blob/753322f9ac4b943c2c04ddd88605e68bc742dbb4/lib/x64/win32com/server/policy.py" rel="nofollow"><code>win32com.server.policy</code> module</a> to see how this is being used, specifically the <a href="https://github.com/SublimeText/Pywin32/blob/753322f9ac4b943c2c04ddd88605e68bc742dbb4/lib/x64/win32com/server/policy.py#L207" rel="nofollow"><code>BasicPolicy._wrap()</code></a> and <a href="https://github.com/SublimeText/Pywin32/blob/753322f9ac4b943c2c04ddd88605e68bc742dbb4/lib/x64/win32com/server/policy.py#L247" rel="nofollow"><code>BasicPolicy._QueryInterface_</code></a> methods.</p>
| 2 | 2016-08-17T12:32:04Z | [
"python",
"pywin32",
"shell-extensions",
"com-server",
"icon-handler"
] |
Python mmap object complains of a string pattern on Python 3.5.2 (not in Python 2.6.6) | 38,996,646 | <p>I have the following code:</p>
<pre><code>def grep(pattern, file_path):
with io.open(file_path, "r", encoding="utf-8") as f:
file_size = os.path.getsize(file_path)
mm = mmap.mmap(f.fileno(), file_size, access=mmap.ACCESS_READ)
return re.search(pattern, mm)
</code></pre>
<p>With Python 2.6.6, I can use an <code>r'approved="no"'</code> pattern.<br>
With Python 3.5.2, I have to use a <code>b'approved="no"'</code> pattern. Otherwise, I get a <code>TypeError: cannot use a string pattern on a bytes-like object</code></p>
<p>Is there a way to use the raw string pattern with Python 3.5.2? I have code that uses the same raw string patterns that I pass to this function using mmap, so I would like to reuse those patterns.</p>
<p>I have tried reading the mmap object into a string but that considerably slows down the performance on Windows (not so much on Linux)</p>
<pre><code>data = str(mm.read(file_size))
return re.search(pattern, data)
</code></pre>
<h2>Results</h2>
<p>Working set: 405 Xliff files, 3,860,117 lines in total.<br>
Time measured with Python <code>(time.time() - start_time)</code><br>
<strong>Reading the mmap object into a string</strong>: 29s<br>
<strong>Using a binary pattern and the mmap object directly in the regex</strong>: 3s</p>
| 2 | 2016-08-17T12:26:53Z | 39,007,514 | <p>The simplest way would probably be to just encode to utf-8:</p>
<pre><code>def grep(pattern, file_path):
pattern = pattern.encode("utf-8")
with io.open(file_path, "r", encoding="utf-8") as f:
file_size = os.path.getsize(file_path)
mm = mmap.mmap(f.fileno(), file_size, access=mmap.ACCESS_READ)
return re.search(pattern, mm)
</code></pre>
<p>It will give you bytes on Python 3, and as I commented, there is no difference between str and bytes on Python 2.</p>
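<p>The behavior difference can be reproduced without mmap, since the constraint comes from <code>re</code> itself: on Python 3 a str pattern cannot be matched against a bytes-like object, while the encoded pattern works. A small illustration (not tied to any particular file):</p>

```python
import re

data = b'<file approved="no" />'   # stands in for the bytes-like mmap object

# A str pattern against bytes raises TypeError on Python 3:
try:
    re.search(r'approved="no"', data)
except TypeError as exc:
    print("str pattern failed:", exc)

# Encoding the same pattern to UTF-8 bytes works:
pattern = r'approved="no"'.encode("utf-8")
match = re.search(pattern, data)
print(match is not None)  # True
```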
| 2 | 2016-08-17T23:01:24Z | [
"python",
"regex",
"performance",
"mmap"
] |
Python mmap object complains of a string pattern on Python 3.5.2 (not in Python 2.6.6) | 38,996,646 | <p>I have the following code:</p>
<pre><code>def grep(pattern, file_path):
with io.open(file_path, "r", encoding="utf-8") as f:
file_size = os.path.getsize(file_path)
mm = mmap.mmap(f.fileno(), file_size, access=mmap.ACCESS_READ)
return re.search(pattern, mm)
</code></pre>
<p>With Python 2.6.6, I can use an <code>r'approved="no"'</code> pattern.<br>
With Python 3.5.2, I have to use a <code>b'approved="no"'</code> pattern. Otherwise, I get a <code>TypeError: cannot use a string pattern on a bytes-like object</code></p>
<p>Is there a way to use the raw string pattern with Python 3.5.2? I have code that uses the same raw string patterns that I pass to this function using mmap, so I would like to reuse those patterns.</p>
<p>I have tried reading the mmap object into a string but that considerably slows down the performance on Windows (not so much on Linux)</p>
<pre><code>data = str(mm.read(file_size))
return re.search(pattern, data)
</code></pre>
<h2>Results</h2>
<p>Working set: 405 Xliff files, 3,860,117 lines in total.<br>
Time measured with Python <code>(time.time() - start_time)</code><br>
<strong>Reading the mmap object into a string</strong>: 29s<br>
<strong>Using a binary pattern and the mmap object directly in the regex</strong>: 3s</p>
| 2 | 2016-08-17T12:26:53Z | 39,537,491 | <p><code>bytes</code> can be raw literals too. If your pattern is only used with the <code>mmap</code> (or other <code>bytes</code>-like things), you can just use <code>br'approved="no"'</code>, which is supported on 2.6 and later just fine (the <code>b</code> is redundant on Py2.x normally, but it means something to the <code>2to3</code> converter, and will undo the effect of <code>from __future__ import unicode_literals</code> on that specific literal).</p>
<p>Sadly, on Py2.x (and Py3.2 and below), the prefix order matters, <code>br'approved="no"'</code> is fine, <code>rb'approved="no"'</code> is a syntax error. In Py 3.3 and later, either order is accepted (I prefer the latter myself, since it reads naturally in my head as "raw bytes", vs. the more awkward "bytes raw", but portability takes precedence).</p>
<p>Obviously, if your other uses are for strict text (Py2 <code>unicode</code>, Py3 <code>str</code>), Padraic's answer of encoding (or storing as <code>bytes</code> and decoding in other places) is the only option to avoid storing two copies of each pattern.</p>
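<p>A quick check of what the raw-bytes prefix actually does: backslashes stay literal, and the literal is already bytes, so it can be matched against a bytes-like object directly:</p>

```python
import re

# br'...' / rb'...' produce bytes with backslashes left uninterpreted.
print(br'approved="no"' == 'approved="no"'.encode("utf-8"))  # True
print(rb'\n' == b'\\n')   # True: the raw prefix keeps the backslash literal
print(len(rb'\n'))        # 2, whereas len(b'\n') == 1

match = re.search(br'approved="no"', b'x approved="no" y')
print(match is not None)  # True
```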
| 1 | 2016-09-16T17:54:41Z | [
"python",
"regex",
"performance",
"mmap"
] |
From a sentence, group all words that start with the same letter and sort them according to the first character of the word | 38,996,697 | <p>Suppose I take a string input from the user. I will then group all the words in that sentence with respect to the first character of each word, and later display the output as a dictionary. No repetition of words is allowed.
For example, consider the input string <code>a="A cat ran after the dog and died due to injury"</code></p>
<p>Then o/p should be :</p>
<pre><code>{'A': ['A'], 'a': ['after', 'and'], 'c': ['cat'], 'd': ['died', 'dog', 'due'], 'i': ['injury'], 'r': ['ran'], 't': ['the', 'to']}
</code></pre>
<p>Here I am making a list for each starting character, containing all the words that start with that character.
I have made this program:</p>
<pre><code>a="A cat ran after the dog and died due to injury"
b=[]
c={}
b=list(sorted(set(a.split())))
for x in b:
e=x[0]
c.setdefault(e,[])
c[e].append(x)
print (c)
</code></pre>
<p>Can you suggest a different way to do this? I am not satisfied with this approach of mine. Please provide an algorithmic approach.
The language I am using is Python 3.</p>
| 1 | 2016-08-17T12:28:46Z | 38,996,889 | <p>You could simplify your code a little but apart from that it's ok</p>
<pre><code>from collections import defaultdict
a="A cat ran after the dog and died due to injury"
c=defaultdict(list)
for x in set(a.split()): # no need to sort unless you create an OrderedDict which you didn't
c[x[0]].append(x)
print (c)
</code></pre>
<p>You can replace the loop by</p>
<pre><code>any(map(lambda x : c[x[0]].append(x),set(a.split())))
</code></pre>
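<p>If the sorted output of the original code is wanted (a plain <code>defaultdict</code> does not order the keys or the words within each group), <code>itertools.groupby</code> over the sorted unique words is another option. This sketch is my own variant, not part of the answer above:</p>

```python
from itertools import groupby

a = "A cat ran after the dog and died due to injury"
# Sorting the unique words first means groupby sees each first letter
# as one contiguous run, and each group comes out already sorted.
words = sorted(set(a.split()))
grouped = {k: list(g) for k, g in groupby(words, key=lambda w: w[0])}
print(grouped)
```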
| 3 | 2016-08-17T12:37:54Z | [
"python",
"algorithm",
"python-3.x"
] |
From a sentence, group all words that start with the same letter and sort them according to the first character of the word | 38,996,697 | <p>Suppose I take a string input from the user. I will then group all the words in that sentence with respect to the first character of each word, and later display the output as a dictionary. No repetition of words is allowed.
For example, consider the input string <code>a="A cat ran after the dog and died due to injury"</code></p>
<p>Then o/p should be :</p>
<pre><code>{'A': ['A'], 'a': ['after', 'and'], 'c': ['cat'], 'd': ['died', 'dog', 'due'], 'i': ['injury'], 'r': ['ran'], 't': ['the', 'to']}
</code></pre>
<p>Here I am making a list for each starting character, containing all the words that start with that character.
I have made this program:</p>
<pre><code>a="A cat ran after the dog and died due to injury"
b=[]
c={}
b=list(sorted(set(a.split())))
for x in b:
e=x[0]
c.setdefault(e,[])
c[e].append(x)
print (c)
</code></pre>
<p>Can you suggest a different way to do this? I am not satisfied with this approach of mine. Please provide an algorithmic approach.
The language I am using is Python 3.</p>
| 1 | 2016-08-17T12:28:46Z | 38,997,510 | <pre><code>res = {}
for a in "A cat ran after the dog and died due to injury".split():
k = a[0]
try: res[k].add(a)
except KeyError: res[k] = {a}
print(res)
</code></pre>
| 1 | 2016-08-17T13:05:29Z | [
"python",
"algorithm",
"python-3.x"
] |
Find list items in excel sheet with Python | 38,996,708 | <p>I've the following code below which finds non-blank values in Column J of an Excel worksheet. It does some things with it, including getting the value's email address in column K. Then it emails the member using smtp.</p>
<p>What I'd like instead is to get the person's email from a Python list, which can be declared in the beginning of the code. I just can't figure out how to find the matching names in column J in the worksheet per the list, and then get the resulting email address from the list.</p>
<p>Please excuse any horrible syntax...this is my first stab at a major python project.</p>
<pre><code>memlist = {'John Frank':'email@email.com',
'Liz Poe':'email2@email.com'}
try:
for i in os.listdir(os.getcwd()):
if i.endswith(".xlsx") or i.endswith(".xls"):
workbook = load_workbook(i, data_only=True)
ws = workbook.get_sheet_by_name(wsinput)
cell_range = ws['j3':'j7']
for row in cell_range: # This is iterating through rows 1-7
#for matching names in memlist
for cell in row: # This iterates through the columns(cells) in that row
value = cell.value
if cell.value:
if cell.offset(row=0, column =-9).value.date() == (datetime.now().date() + timedelta(days=7)):
#print(cell.value)
email = cell.offset(row=0, column=1).value
name = cell.value.split(',',1)[0]
</code></pre>
| 0 | 2016-08-17T12:29:23Z | 38,997,154 | <p>This is my attempt at an answer.</p>
<p><code>memlist</code> is not a <code>list</code>, rather it is a <code>dict</code> because it contains <code>key : value</code> pairs.</p>
<p>If you want to check that a certain key exists in a <code>dict</code>, you can use <code>dict.has_key(key)</code> method.</p>
<p>In <code>memlist</code> , the name is the <code>key</code> and the corresponding email is the <code>value</code>.</p>
<p>In your code, you could do this:</p>
<pre><code>if memlist.has_key(cell.value): # For Python 2
if ... # From your code
email = memlist[cell.value]
</code></pre>
<p>In case you're using Python 3, you can search for the key like this:</p>
<pre><code>if cell.value in memlist: # For Python 3
</code></pre>
<p>See if this works for you as I couldn't fully comprehend your question.</p>
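<p>The lookup itself is easy to verify in isolation. Here the names and emails are the sample values from the question, and the Python 3 membership test is used, since <code>dict.has_key()</code> was removed in Python 3:</p>

```python
memlist = {'John Frank': 'email@email.com',
           'Liz Poe': 'email2@email.com'}

def email_for(name):
    # Python 3 membership test; dict.has_key() no longer exists there.
    if name in memlist:
        return memlist[name]
    return None  # no match for this cell value

print(email_for('Liz Poe'))   # email2@email.com
print(email_for('Unknown'))   # None
```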
| 0 | 2016-08-17T12:51:06Z | [
"python",
"excel",
"smtp"
] |
Find list items in excel sheet with Python | 38,996,708 | <p>I've the following code below which finds non-blank values in Column J of an Excel worksheet. It does some things with it, including getting the value's email address in column K. Then it emails the member using smtp.</p>
<p>What I'd like instead is to get the person's email from a Python list, which can be declared in the beginning of the code. I just can't figure out how to find the matching names in column J in the worksheet per the list, and then get the resulting email address from the list.</p>
<p>Please excuse any horrible syntax...this is my first stab at a major python project.</p>
<pre><code>memlist = {'John Frank':'email@email.com',
'Liz Poe':'email2@email.com'}
try:
for i in os.listdir(os.getcwd()):
if i.endswith(".xlsx") or i.endswith(".xls"):
workbook = load_workbook(i, data_only=True)
ws = workbook.get_sheet_by_name(wsinput)
cell_range = ws['j3':'j7']
for row in cell_range: # This is iterating through rows 1-7
#for matching names in memlist
for cell in row: # This iterates through the columns(cells) in that row
value = cell.value
if cell.value:
if cell.offset(row=0, column =-9).value.date() == (datetime.now().date() + timedelta(days=7)):
#print(cell.value)
email = cell.offset(row=0, column=1).value
name = cell.value.split(',',1)[0]
</code></pre>
| 0 | 2016-08-17T12:29:23Z | 38,998,760 | <p>Shubham,
I used a part of your response in finding my own answer. Instead of the has_key method, I just used another for/in statement with a subsequent if statement.</p>
<p>My fear, however, is that with these multiple for's and if's, the code takes a long time to run and may not be the most efficient/optimal. But that's a worry for another day.</p>
<pre><code>try:
for i in os.listdir(os.getcwd()):
if i.endswith(".xlsx") or i.endswith(".xls"):
workbook = load_workbook(i, data_only=True)
ws = workbook.get_sheet_by_name(wsinput)
cell_range = ws['j3':'j7']
for row in cell_range: # This is iterating through rows 1-7
for cell in row: # This iterates through the columns(cells) in that row
value = cell.value
if cell.value:
if cell.offset(row=0, column =-9).value.date() == (datetime.now().date() + timedelta(days=7)):
for name, email in memlist.items():
if cell.value == name:
#send the email
</code></pre>
| 0 | 2016-08-17T13:56:42Z | [
"python",
"excel",
"smtp"
] |
Django 1.10 RequestContext processors not working | 38,996,878 | <p>After updating to Django 1.10, processors stopped working.</p>
<p>This menu context for many views:</p>
<pre><code>def menu_category(request):
category_parent = Category.objects.filter(parent__isnull=True, is_active=True).order_by('mass')
category_child = Category.objects.filter(parent__isnull=False, is_active=True).order_by('mass')
return {'category_parent': category_parent, 'category_child': category_child}
</code></pre>
<p>This view for django 1.9 (in django 1.10 processors=[menu_category] empty context):</p>
<pre><code>def news_main(request):
posts = Post.objects.filter(
Q(date_completion__gt=timezone.now()) | Q(date_completion=None),
date_published__lte=timezone.now(),
is_active=True).order_by('-date_published')
return render_to_response('news/news_main.html',
{'posts': posts},
RequestContext(request, processors=[menu_category]))
</code></pre>
| 1 | 2016-08-17T12:37:24Z | 38,997,190 | <p>You should never have been passing a RequestContext to <code>render_to_response</code>, and Django 1.10 has made that an error. Instead use the <code>render</code> shortcut:</p>
<pre><code>return render(request, 'news/news_main.html', {'posts': posts})
</code></pre>
<p>Note that whenever you upgrade a version you should always make sure to read the release notes; in this case the change is noted under <a href="https://docs.djangoproject.com/en/1.10/releases/1.10/#features-removed-in-1-10" rel="nofollow">Features removed in 1.10</a>.</p>
| 1 | 2016-08-17T12:52:29Z | [
"python",
"django"
] |
Tensorflow CSV decode error | 38,996,904 | <p>I am using TensorFlow 0.10.0rc0. I have CUDA Driver Version = 7.5 and CUDNN 4 on Ubuntu 14.04.</p>
<p>I have a simple CSV file which has a single line like this:</p>
<pre><code>"field with
newline",0
</code></pre>
<p>where the newline has been added by pressing the enter key in VIM on Ubuntu.
I am able to read this file in <code>pandas</code> using the <code>read_csv</code> function, where the text field is shown as containing a single <code>\n</code> character.</p>
<p>But when I try to read it in TensorFlow, I get the following error:</p>
<pre><code>tensorflow.python.framework.errors.InvalidArgumentError: Quoted field has to end with quote followed by delim or end
</code></pre>
<p>My tensor flow code to read CSV uses this function to read a single row:</p>
<pre><code>def read_single_example(filename_queue, skip_header_lines, record_defaults, feature_index, label_index):
reader = tf.TextLineReader(skip_header_lines=skip_header_lines)
key, value = reader.read(filename_queue)
record = tf.decode_csv(
value,
record_defaults=record_defaults)
features, label = record[feature_index], record[label_index]
return features, label
</code></pre>
<p>If I read using <code>pandas</code> and replace all newlines with spaces, the TensorFlow code is able to parse the CSV successfully.</p>
<p>But it will be really helpful if newlines can be handled within the TensorFlow CSV pipeline itself.</p>
| 0 | 2016-08-17T12:38:45Z | 38,998,494 | <p>TensorFlow's CSV reader is pretty strict, in my experience with it, with regards to RFC4180.</p>
<p>Making sure your files use CRLF at the end of each line, as well as in quoted fields, should allow processing.</p>
<p>Note: I have been using this up to 0.9 so far. I did not try on RCs from 0.10.</p>
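<p>One way to apply this advice is to normalize line endings to CRLF before handing the file to TensorFlow. A minimal sketch (the helper and its file handling are my own, not part of the TensorFlow API):</p>

```python
# Normalize all line endings in CSV data to CRLF, as RFC 4180 expects.
# First collapse any existing CRLF to LF so the second replace does not
# double the CR, then expand every LF to CRLF.
def to_crlf(data: bytes) -> bytes:
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

sample = b'"field with\nnewline",0\n'   # the file from the question
print(to_crlf(sample))  # b'"field with\r\nnewline",0\r\n'
```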
| 0 | 2016-08-17T13:47:00Z | [
"python",
"csv",
"tensorflow"
] |
Set a condition from inside of a loop Jinja2 | 38,996,912 | <pre><code> <ul>
{% for bm in user.bookmarks %}
<li>
<a href="{{ bm.url }}">{{ bm.description }}</a>
</li>
{% else %}
<li>This user has not added any bookmarks yet.</li>
{% endfor %}
</ul>
</code></pre>
<p>Is there a way to set a condition from inside the loop?
I mean, if the for loop doesn't produce any results, the ul tag should not be generated in the page.</p>
<p>The whole idea is to prevent putting an empty tag in the page.
I know I can put another if expression outside, but it is too complicated to maintain.</p>
| 0 | 2016-08-17T12:38:59Z | 38,997,000 | <p>You can put the <code>ul</code> tags inside the loop, and use the <code>loop.first</code> and <code>loop.last</code> variables to control them.</p>
<pre><code>{% for bm in user.bookmarks %}
{% if loop.first %}
<ul>
{% endif %}
<li>
<a href="{{ bm.url }}">{{ bm.description }}</a>
</li>
{% if loop.last %}
</ul>
{% endif %}
{% else %}
This user has not added any bookmarks yet.
{% endfor %}
</code></pre>
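<p>The guard logic in this answer is the usual first/last pattern; it can be sanity-checked outside Jinja2 with a plain Python loop (a rough equivalent of the template above, not the template engine itself):</p>

```python
# Plain-Python equivalent of the loop.first / loop.last guards:
# the <ul> wrapper is only emitted when the list is non-empty.
def render_list(items):
    if not items:
        return "This user has not added any bookmarks yet."
    parts = []
    for i, item in enumerate(items):
        if i == 0:                      # loop.first
            parts.append("<ul>")
        parts.append("<li>%s</li>" % item)
        if i == len(items) - 1:         # loop.last
            parts.append("</ul>")
    return "".join(parts)

print(render_list(["a", "b"]))  # <ul><li>a</li><li>b</li></ul>
print(render_list([]))          # This user has not added any bookmarks yet.
```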
| 1 | 2016-08-17T12:43:22Z | [
"python",
"templates",
"jinja2"
] |
Set a condition from inside of a loop Jinja2 | 38,996,912 | <pre><code> <ul>
{% for bm in user.bookmarks %}
<li>
<a href="{{ bm.url }}">{{ bm.description }}</a>
</li>
{% else %}
<li>This user has not added any bookmarks yet.</li>
{% endfor %}
</ul>
</code></pre>
<p>Is there a way to set a condition from inside the loop?
I mean, if the for loop doesn't produce any results, the ul tag should not be generated in the page.</p>
<p>The whole idea is to prevent putting an empty tag in the page.
I know I can put another if expression outside, but it is too complicated to maintain.</p>
| 0 | 2016-08-17T12:38:59Z | 39,208,578 | <pre><code>{% if user.bookmarks %}
<ul>
{% for bm in user.bookmarks %}
<li>
<a href="{{ bm.url }}">{{ bm.description }}</a>
</li>
{% else %}
<li>This user has not added any bookmarks yet.</li>
{% endfor %}
</ul>
{% endif %}
</code></pre>
| 0 | 2016-08-29T14:21:37Z | [
"python",
"templates",
"jinja2"
] |
Pandas: get some data from dataframe | 38,996,921 | <p>I have a dataframe:</p>
<pre><code>ID,"month","type"
0896cbe25bb8aec86ff93dd1bf20fa80,2013-12,desktop
0896cbe25bb8aec86ff93dd1bf20fa80,2014-01,desktop
0896cbe25bb8aec86ff93dd1bf20fa80,2014-02,desktop
0896cbe25bb8aec86ff93dd1bf20fa80,2014-03,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-06,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-07,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-08,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-09,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-10,desktop
</code></pre>
<p>And I have another dataframe:</p>
<pre><code>idp year month
5663b84ee164ed2628f4df6ed6ffe89b 2015 11
d156e747fb3e715a13ac850ca3e4c0e5 2014 7
0ce926c4c33e63aeef04a55dc204cb1a 2014 10
142068cd70ec3541698c919b023caf1c 2014 3
24fa9c75cc86187937f4fea0c06a6513 2014 12
3e3906343b235e6eac743be65da1dcbb 2014 6
757bf2f08a1de8383e24509d5f105ce7 2015 8
</code></pre>
<p>I need the following: if an <code>idp</code> from the second dataframe appears as an ID in the first dataframe, and the month in the first dataframe is equal to, or earlier than, the year/month in the second dataframe, I should get the data for this ID.
I need to get:</p>
<pre><code>ID, month, type
0ce926c4c33e63aeef04a55dc204cb1a,2014-06,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-07,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-08,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-09,desktop
0ce926c4c33e63aeef04a55dc204cb1a,2014-10,desktop
</code></pre>
<p>How can I write this condition?</p>
| -1 | 2016-08-17T12:39:39Z | 38,997,498 | <p>Not sure it'll be very effective on big dataframes,
but you may add a special "merge" column to df2:</p>
<pre><code> >>> df2['year_month'] = df2.year.astype(str) + '-' + df2.month.astype(int).apply(lambda s: '%02d' % s)
>>> df2
idp year month year_month
0 5663b84ee164ed2628f4df6ed6ffe89b 2015 11 2015-11
1 d156e747fb3e715a13ac850ca3e4c0e5 2014 7 2014-07
2 0ce926c4c33e63aeef04a55dc204cb1a 2014 10 2014-10
3 142068cd70ec3541698c919b023caf1c 2014 3 2014-03
4 24fa9c75cc86187937f4fea0c06a6513 2014 12 2014-12
5 3e3906343b235e6eac743be65da1dcbb 2014 6 2014-06
6 757bf2f08a1de8383e24509d5f105ce7 2015 8 2015-08
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/merging.html" rel="nofollow">Merge</a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html#pandas.DataFrame.query" rel="nofollow">query</a> to take only interesting rows and columns:</p>
<pre><code> >>> pd.merge(df1, df2, how='left', left_on='ID', right_on='idp', suffixes=('', '_df2')) \
.query('month <= year_month') \
[df1.columns]
id year_month type
4 0ce926c4c33e63aeef04a55dc204cb1a 2014-06 desktop
5 0ce926c4c33e63aeef04a55dc204cb1a 2014-07 desktop
6 0ce926c4c33e63aeef04a55dc204cb1a 2014-08 desktop
7 0ce926c4c33e63aeef04a55dc204cb1a 2014-09 desktop
8 0ce926c4c33e63aeef04a55dc204cb1a 2014-10 desktop
</code></pre>
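<p>The <code>query('month &lt;= year_month')</code> step works because zero-padded <code>'YYYY-MM'</code> strings compare correctly in lexicographic order; without the <code>%02d</code> padding the comparison would break. A quick standalone check (no pandas needed):</p>

```python
# Zero-padded 'YYYY-MM' strings sort the same way as the dates they
# encode, which is why the merged frame can be filtered with a plain
# string comparison.
padded = '%d-%02d' % (2014, 6)
print(padded)                  # 2014-06
print(padded <= '2014-10')     # True: June is before October
print('2014-9' <= '2014-10')   # False: unpadded months break the order
```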
| 0 | 2016-08-17T13:05:01Z | [
"python",
"pandas"
] |
Django multiple sites: Prevent cross-site authentication | 38,997,004 | <p>I'm currently developing 2 different sites at the same time: one of them is a heavily customized django-admin interface, and the other a "stand-alone" website that will share its database with the previous one.</p>
<p>Even though they are related, I'd like my users' sessions not to be shared loosely between the two sites: each site should be able to be independent of the other.</p>
<p>However, a problem arises when someone is logged in on the "admin" site: when they go to the other website, they are automatically logged in. It won't happen the other way around unless I allow it, though, as the admin site requires special permissions in the User model.</p>
<p>I already created a UserProfile that can tell whether a user belongs to one of the sites or to both.</p>
<p>So, knowing all this, how can I make sure that the customers of the admin site don't get authenticated on the other website (without, of course, logging them out from the first one)?</p>
<p>Thanks!</p>
<p><strong>EDIT:</strong> To format it better, here is what I've got, summed up:</p>
<ul>
<li>One admin application / site</li>
<li>One related application / site</li>
<li>Both running on the same server, sharing settings and urls.py</li>
</ul>
<p>If someone is logged in to <strong>admin</strong>, I want to require them to create a new session to log on to <strong>[related site]</strong>: this, without logging them off the admin site.</p>
<p>What changes should I make to this configuration to achieve this?</p>
| 0 | 2016-08-17T12:43:30Z | 38,998,750 | <p>Put a different <code>SESSION_COOKIE_DOMAIN</code> and <code>SESSION_COOKIE_NAME</code> for each application. Hope this solves your issue.</p>
<pre><code>SESSION_COOKIE_DOMAIN = 'site1.com' #site2.com for other
SESSION_COOKIE_NAME = 'sid1' #sid2 for other
</code></pre>
| 1 | 2016-08-17T13:56:16Z | [
"python",
"django",
"django-admin",
"django-authentication"
] |
sqlalchemy insert data does not work | 38,997,016 | <p>In models.py I have define:</p>
<pre><code>class slidephoto(db.Model):
__tablename__ = 'slide_photo'
id = db.Column(db.Integer, primary_key=True)
uid = db.Column(db.Integer, nullable=False)
photo = db.Column(db.String(collation='utf8_bin'), nullable=False)
def __init__(self, uid, photo):
self.uid = uid
self.photo = photo
def __repr__(self):
return "{'photo': " + str(self.photo) + "}"
</code></pre>
<p>I select data like this (for example):</p>
<pre><code>@app.route('/index/')
def index():
user_photo = slidephoto.query.filter_by(uid=5).all()
</code></pre>
<p>Now I want to know how to insert data. I tried this:</p>
<pre><code>@app.route('/insert/')
def insert():
act = slidephoto.query.insert().execute(uid='2016', photo='niloofar.jpg')
return 'done'
</code></pre>
<p>But it does not do what I need. What should I do?</p>
<p>I have read and tested other answers and solutions, but none of them was useful for my script.</p>
<p>================ update ================</p>
<p>I don't know if it helps... but here are all the imports and configs in app.py:</p>
<pre><code>import os, sys
from niloofar import *
from flask import Flask, request, url_for, render_template, make_response, redirect
from flask_sqlalchemy import SQLAlchemy
from werkzeug.utils import secure_filename
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://myusername:mypassword@localhost/mydbname'
db = SQLAlchemy(app)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
</code></pre>
| 2 | 2016-08-17T12:44:02Z | 39,014,748 | <p>I wrote a simple demo that does the insert work; you can take it as a reference:</p>
<pre><code>from flask import Flask
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class FirstTest(db.Model):
    __tablename__ = "first_test"
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)

    def __init__(self, name):
        self.name = name

# Fill your db info here
mysql_info = {
    "user": "",
    "pwd": "",
    "host": "",
    "port": 3306,
    "db": "",
}

app = Flask(__name__)
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = True
# Here I use pymysql
app.config["SQLALCHEMY_DATABASE_URI"] = "mysql+pymysql://{0}:{1}@{2}:{3}/{4}".format(
    mysql_info["user"], mysql_info["pwd"], mysql_info["host"],
    mysql_info["port"], mysql_info["db"])
db.init_app(app)  # the idiomatic way to bind a deferred SQLAlchemy object to an app

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    with app.test_request_context("/"):
        record = FirstTest("test")
        db.session.add(record)
        db.session.commit()
</code></pre>
| 0 | 2016-08-18T09:36:17Z | [
"python",
"database",
"insert",
"sqlalchemy"
] |
sqlalchemy insert data does not work | 38,997,016 | <p>In models.py I have defined:</p>
<pre><code>class slidephoto(db.Model):
    __tablename__ = 'slide_photo'
    id = db.Column(db.Integer, primary_key=True)
    uid = db.Column(db.Integer, nullable=False)
    photo = db.Column(db.String(collation='utf8_bin'), nullable=False)

    def __init__(self, uid, photo):
        self.uid = uid
        self.photo = photo

    def __repr__(self):
        return "{'photo': " + str(self.photo) + "}"
</code></pre>
<p>I select data like this (for example):</p>
<pre><code>@app.route('/index/')
def index():
    user_photo = slidephoto.query.filter_by(uid=5).all()
</code></pre>
<p>Now I want to know how to insert data. I tried this:</p>
<pre><code>@app.route('/insert/')
def insert():
    act = slidephoto.query.insert().execute(uid='2016', photo='niloofar.jpg')
    return 'done'
</code></pre>
<p>But it does not do what I need. What should I do?</p>
<p>I have read and tested other answers and solutions, but none of them was useful for my script.</p>
<p>================ update ================</p>
<p>I don't know if it helps... but here are all the imports and configs in app.py:</p>
<pre><code>import os, sys
from niloofar import *
from flask import Flask, request, url_for, render_template, make_response, redirect
from flask_sqlalchemy import SQLAlchemy
from werkzeug.utils import secure_filename
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://myusername:mypassword@localhost/mydbname'
db = SQLAlchemy(app)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
</code></pre>
| 2 | 2016-08-17T12:44:02Z | 39,014,848 | <p>You should use query as a method. Like 'query()'</p>
| 0 | 2016-08-18T09:40:45Z | [
"python",
"database",
"insert",
"sqlalchemy"
] |
sqlalchemy insert data does not work | 38,997,016 | <p>In models.py I have defined:</p>
<pre><code>class slidephoto(db.Model):
    __tablename__ = 'slide_photo'
    id = db.Column(db.Integer, primary_key=True)
    uid = db.Column(db.Integer, nullable=False)
    photo = db.Column(db.String(collation='utf8_bin'), nullable=False)

    def __init__(self, uid, photo):
        self.uid = uid
        self.photo = photo

    def __repr__(self):
        return "{'photo': " + str(self.photo) + "}"
</code></pre>
<p>I select data like this (for example):</p>
<pre><code>@app.route('/index/')
def index():
    user_photo = slidephoto.query.filter_by(uid=5).all()
</code></pre>
<p>Now I want to know how to insert data. I tried this:</p>
<pre><code>@app.route('/insert/')
def insert():
    act = slidephoto.query.insert().execute(uid='2016', photo='niloofar.jpg')
    return 'done'
</code></pre>
<p>But it does not do what I need. What should I do?</p>
<p>I have read and tested other answers and solutions, but none of them was useful for my script.</p>
<p>================ update ================</p>
<p>I don't know if it helps... but here are all the imports and configs in app.py:</p>
<pre><code>import os, sys
from niloofar import *
from flask import Flask, request, url_for, render_template, make_response, redirect
from flask_sqlalchemy import SQLAlchemy
from werkzeug.utils import secure_filename
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://myusername:mypassword@localhost/mydbname'
db = SQLAlchemy(app)
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
</code></pre>
| 2 | 2016-08-17T12:44:02Z | 39,019,137 | <p>I hope that my answer will help you solve the problem.</p>
<pre><code>from sqlalchemy import create_engine, MetaData, Table, insert
# I have tested this using my local postgres db.
engine = create_engine('postgresql://localhost/db', convert_unicode=True)
metadata = MetaData(bind=engine)
con = engine.connect()
act = insert(slidephoto).values(uid='2016', photo='niloofer.jpg')
con.execute(act)
</code></pre>
| 0 | 2016-08-18T13:09:49Z | [
"python",
"database",
"insert",
"sqlalchemy"
] |
Profit and loss using pandas | 38,997,043 | <p>I have a dataframe containing data like this:</p>
<pre><code> open close signal
date_time
2011-01-03 01:04:00 1.5560 1.5556 0.0
2011-01-03 01:05:00 1.5557 1.5556 0.0
2011-01-03 01:06:00 1.5557 1.5556 1.0
2011-01-03 01:07:00 1.5556 1.5545 1.0
2011-01-03 01:08:00 1.5546 1.5548 1.0
2011-01-03 01:09:00 1.5549 1.5547 0.0
2011-01-03 01:10:00 1.5548 1.5549 0.0
2011-01-03 01:11:00 1.5549 1.5551 1.0
2011-01-03 01:12:00 1.5550 1.5552 1.0
2011-01-03 01:13:00 1.5553 1.5553 0.0
2011-01-03 01:14:00 1.5552 1.5553 0.0
</code></pre>
<p>Which is a fairly standard way of representing a financial timeseries in Python. </p>
<p>Now, I want to use the signal column to trade. When signal == 1 then buy, when it gets back to 0 then sell. The signal is known at the end of the current minute, so when we say 'Buy', it really means 'buy at the open of the next minute'.</p>
<p>Let's say the initial value of our portfolio is 1.0. I want a timeseries that outputs:</p>
<pre><code> pnl
date_time
2011-01-03 01:04:00 1.0
2011-01-03 01:05:00 1.0
2011-01-03 01:06:00 1.0
2011-01-03 01:07:00 0.999292877 # Buy: pnl = (1.0 * 1.5545 / 1.5556)
2011-01-03 01:08:00 0.999485729 # Hold: pnl = (1.0 * 1.5548 / 1.5556)
2011-01-03 01:09:00 0.999421445 # Hold: pnl = (1.0 * 1.5547 / 1.5556)
2011-01-03 01:10:00 0.999485729 # Sell: pnl = (1.0 * 1.5548 / 1.5556)
2011-01-03 01:11:00 0.999485729 # Wait
2011-01-03 01:12:00 0.999614280 # Buy: pnl = (0.999485729 * 1.5552 / 1.5550)
2011-01-03 01:13:00 0.999678556 # Hold: pnl = (0.999485729 * 1.5553 / 1.5550)
2011-01-03 01:14:00 0.999614280 # Sell: pnl = (0.999485729 * 1.5552 / 1.5550)
2011-01-03 01:15:00 0.999614280 # Wait
</code></pre>
<p>Any idea how to do this with pandas <strong>without looping through the dataframe?</strong></p>
| 1 | 2016-08-17T12:45:18Z | 39,005,976 | <p>I don't quite understand the buy/sell hold (perhaps you have an error in your sell time?), but this should get you close as an idea. The key is to compute an array 'units' that indicates if you are holding stock or not. Then the rest should work OK. Each day you either change value (.99 or 1.01, e.g. based on stock close price), or hold value (1.0). The cumprod() function then accumulates those changes. Since you are buying at the open, you will need to add some complexity. You could create a 'buy' array like <code>signal[1:]-signal[0:-1]</code> if you need to do something special at those times.</p>
<pre><code>#!/usr/bin/env python
import pandas as pd
import numpy as np
df=pd.DataFrame([[ 1.5560, 1.5556, 0.0],
[ 1.5557, 1.5556, 0.0],
[ 1.5557, 1.5556, 1.0],
[ 1.5556, 1.5545, 1.0],
[ 1.5546, 1.5548, 1.0],
[ 1.5549, 1.5547, 0.0],
[ 1.5548, 1.5549, 0.0],
[ 1.5549, 1.5551, 1.0],
[ 1.5550, 1.5552, 1.0],
[ 1.5553, 1.5553, 0.0],
[ 1.5552, 1.5553, 0.0]], columns=['open','close','signal'])
#You will need to adjust units based on your exact buy/sell times. Assuming here that
#units are signal delayed by 1 time slot.
units=np.insert(df['signal'].values,0,[0])[0:-1]
#change is relative change in price from day before. Insert 1.0 in first day to represent start
change_close=np.insert(df['close'].values[1:]/df['close'].values[0:-1],0,[1])
#hold is 1,0 flag whether you are holding stock
hold=(units>0)
#relative change in value is either change_close or 1.0 (no change)
change_value=hold*change_close + ~hold*1.0
#cumulative product of changes gives current value
pnl=change_value.cumprod()
#insert back into dataframe as new column
df['pnl']=pnl
df
open close signal pnl
0 1.5560 1.5556 0.0 1.000000
1 1.5557 1.5556 0.0 1.000000
2 1.5557 1.5556 1.0 1.000000
3 1.5556 1.5545 1.0 0.999293
4 1.5546 1.5548 1.0 0.999486
5 1.5549 1.5547 0.0 0.999421
6 1.5548 1.5549 0.0 0.999421
7 1.5549 1.5551 1.0 0.999421
8 1.5550 1.5552 1.0 0.999486
9 1.5553 1.5553 0.0 0.999550
10 1.5552 1.5553 0.0 0.999550
</code></pre>
<p>Perhaps if you posted looping code that does what you want, it might be easier for someone to vectorize.</p>
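<p>As a sanity check on the vectorised version, the same accumulate-relative-changes idea can be written in plain Python over the first six rows of the example data (a sketch, not part of the original answer):</p>

```python
# First six closes and signals from the question's data.
closes = [1.5556, 1.5556, 1.5556, 1.5545, 1.5548, 1.5547]
signal = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]

# units[i] is the signal delayed by one slot: whether we hold during minute i.
units = [0.0] + signal[:-1]

value = 1.0
pnl = []
for i, close in enumerate(closes):
    if i > 0 and units[i] > 0:
        value *= close / closes[i - 1]  # apply the relative change while holding
    pnl.append(value)

print([round(v, 6) for v in pnl])
# [1.0, 1.0, 1.0, 0.999293, 0.999486, 0.999421] -- matches the pnl column above
```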
| 0 | 2016-08-17T20:45:30Z | [
"python",
"pandas",
"numpy"
] |
add new column and remove duplicates in that replace null values column wise | 38,997,069 | <pre><code>Duplication type:
Check this column only (default)
Check other columns only
Check all columns
Use Last Value:
True - retain the last duplicate value
False - retain the first of the duplicates (default)
</code></pre>
<p>This rule should add a new column to the dataframe which contains the same as the source column for any unique columns and is null for any duplicate columns.</p>
<p>basic code is df.loc[df.duplicated(),get_unique_column_name(df, "clean")] = df[get_column_name(df, column)] with the parameters for duplicated() set based on the duplication type</p>
<p>See reference for this function above: <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html</a></p>
<p>You should specify the columns in the subset parameter based on the setting of duplication_type</p>
<p>You should specify use_last_value based on use_last_value above</p>
<p>This is my file.</p>
<pre><code>Jason Miller 42 4 25
Tina Ali 36 31 57
Jake Milner 24 2 62
Jason Miller 42 4 25
Jake Milner 24 2 62
Amy Cooze 73 3 70
Jason Miller 42 4 25
Jason Miller 42 4 25
Jake Milner 24 2 62
Jake Miller 42 4 25
</code></pre>
<p>I want to get it like this using pandas. In the file below I have chosen 2 columns.</p>
<pre><code>Jason Miller 42 4 25
Jake Ali 36 31 57
Jake Milner 24 2 62
Jason Miller 4 25
Jake Milner 2 62
Jake Cooze 73 3 70
Jason Miller 4 25
Jason Miller 4 25
Jake Milner 2 62
Jake Miller 4 25
</code></pre>
<p>Please anybody reply to my query.</p>
| 4 | 2016-08-17T12:46:37Z | 38,998,853 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="nofollow"><code>DF.duplicated</code></a> and assign the values of column C where the first occurrence of values appears along columns A and B.</p>
<p>You could then fill the <code>Nans</code> produced with empty strings to produce the required dataframe.</p>
<pre><code>df = pd.read_csv(data, delim_whitespace=True, header=None, names=['A','B','C','D','E'])
df.loc[~df.duplicated(), "C'"] = df['C']
df.fillna('', inplace=True)
df = df[["A","B", "C'","D","E"]]
print(df)
A B C' D E
0 Jason Miller 42 4 25
1 Tina Ali 36 31 57
2 Jake Milner 24 2 62
3 Jason Miller 4 25
4 Jake Milner 2 62
5 Amy Cooze 73 3 70
6 Jason Miller 4 25
7 Jason Miller 4 25
8 Jake Milner 2 62
9 Jake Miller 42 4 25
</code></pre>
<p>Another way of doing this would be to take a subset of the duplicated columns and replace the concerned column with empty strings. Then, you could use <a href="http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.update.html" rel="nofollow"><code>update</code></a> to modify the dataframe in place with the original, <code>df</code>.</p>
<pre><code>In [2]: duplicated_cols = df[df.duplicated(subset=['C', 'D', 'E'])]
In [3]: duplicated_cols
Out[3]:
A B C D E
3 Jason Miller 42 4 25
4 Jake Milner 24 2 62
6 Jason Miller 42 4 25
7 Jason Miller 42 4 25
8 Jake Milner 24 2 62
9 Jake Miller 42 4 25
In [4]: duplicated_cols.loc[:,'C'] = ''
In [5]: df.update(duplicated_cols)
In [6]: df
Out[6]:
A B C D E
0 Jason Miller 42 4.0 25.0
1 Tina Ali 36 31.0 57.0
2 Jake Milner 24 2.0 62.0
3 Jason Miller 4.0 25.0
4 Jake Milner 2.0 62.0
5 Amy Cooze 73 3.0 70.0
6 Jason Miller 4.0 25.0
7 Jason Miller 4.0 25.0
8 Jake Milner 2.0 62.0
9 Jake Miller 4.0 25.0
</code></pre>
| 1 | 2016-08-17T14:00:43Z | [
"python",
"pandas"
] |
Differences between `open(fname, 'r').close()` and `os.path.isfile(fname)` | 38,997,213 | <p>I have to check the presence and readability of multiple files. Which is the most efficient way to do it?</p>
<pre><code>list_of_files = [fname1,fname2,fname3]
for fname in list_of_files:
    try:
        open(fname, 'r').close()
    except IOError:
        raise YourCustomError
</code></pre>
<p>or</p>
<pre><code>list_of_files = [fname1,fname2,fname3]
for fname in list_of_files:
    if not os.path.isfile(fname):
        raise YourCustomError
</code></pre>
| 1 | 2016-08-17T12:53:11Z | 38,997,305 | <p>There is a common philosophy in Python - it is easier to ask for forgiveness afterwards than to ask for permission beforehand (EAFP).</p>
<p>The first approach tries to open the file in readable mode and fails if it doesn't exist or not readable. Here we're asking for forgiveness after the operation fails. </p>
<p>In the second approach, we are just asking for permission first. Your second example however just checks if it's a file, doesn't check permission. </p>
<p>However, instead of manually closing the file handler, using context managers (<code>with open(filename, "r")</code>) might be a much better approach. </p>
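<p>A small self-contained illustration of both styles (a sketch, not from the original answer; it uses a temporary file so it can actually run):</p>

```python
import os
import tempfile

# Create a real file to test against.
fd, path = tempfile.mkstemp()
os.close(fd)

# LBYL ("look before you leap"): ask for permission first.
if os.path.isfile(path):
    print('LBYL: file exists')

# EAFP: just try it and handle the failure; the context manager
# closes the file even if an exception is raised inside the block.
try:
    with open(path, 'r'):
        print('EAFP: file is readable')
except IOError:
    print('EAFP: could not open file')

os.remove(path)
```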
| 1 | 2016-08-17T12:57:34Z | [
"python",
"file-processing"
] |
Differences between `open(fname, 'r').close()` and `os.path.isfile(fname)` | 38,997,213 | <p>I have to check the presence and readability of multiple files. Which is the most efficient way to do it?</p>
<pre><code>list_of_files = [fname1,fname2,fname3]
for fname in list_of_files:
    try:
        open(fname, 'r').close()
    except IOError:
        raise YourCustomError
</code></pre>
<p>or</p>
<pre><code>list_of_files = [fname1,fname2,fname3]
for fname in list_of_files:
    if not os.path.isfile(fname):
        raise YourCustomError
</code></pre>
| 1 | 2016-08-17T12:53:11Z | 38,997,311 | <blockquote>
<p>I have to check the presence and readability of multiple files. Which is the most efficient way to do it?</p>
</blockquote>
<p>It's not just a question of efficiency, as your two approaches don't do the same thing. A file might <a href="https://www.linux.com/learn/understanding-linux-file-permissions" rel="nofollow">exist and be unreadable to you</a>, say, because of permissions. Your first approach is the only one of the two that actually checks that the file is open and readable (at least at some point in time - both your approaches have race conditions).</p>
| 1 | 2016-08-17T12:57:43Z | [
"python",
"file-processing"
] |
Differences between `open(fname, 'r').close()` and `os.path.isfile(fname)` | 38,997,213 | <p>I have to check the presence and readability of multiple files. Which is the most efficient way to do it?</p>
<pre><code>list_of_files = [fname1,fname2,fname3]
for fname in list_of_files:
    try:
        open(fname, 'r').close()
    except IOError:
        raise YourCustomError
</code></pre>
<p>or</p>
<pre><code>list_of_files = [fname1,fname2,fname3]
for fname in list_of_files:
    if not os.path.isfile(fname):
        raise YourCustomError
</code></pre>
| 1 | 2016-08-17T12:53:11Z | 38,997,577 | <p>If you plan on <em>using</em> those files then <strong>neither</strong>. Files may be deleted or permissions changed between your call and when you use the file, making that information obsolete. Instead just try to use the file and handle the exception there:</p>
<pre><code>try:
    with open(fname, 'r') as f:
        ...  # use f
except FileNotFoundError as e:
    ...  # file was deleted
except IsADirectoryError as e:
    ...  # path exists but is a directory
except PermissionError as e:
    ...  # you don't have permissions to read the file
except OSError as e:
    ...  # other error
</code></pre>
<p>If however you are writing a tool that is displaying the information about permissions to the user, then it makes sense to use the methods and functions provided for specifically this purpose, hence you should use <a href="https://docs.python.org/3.5/library/os.path.html#os.path.exists" rel="nofollow"><code>os.path.exists</code></a> and <a href="https://docs.python.org/3.5/library/os.html#os.DirEntry.is_file" rel="nofollow"><code>os.is_file</code></a> and <a href="https://docs.python.org/3.5/library/os.html#os.DirEntry.is_dir" rel="nofollow"><code>os.is_dir</code></a>.</p>
<p>If you are dealing with entire directories note that it is more efficient to use <a href="https://docs.python.org/3.5/library/os.html#os.scandir" rel="nofollow"><code>os.scandir</code></a> and check the methods on the <code>DirEntry</code>s objects.</p>
| 2 | 2016-08-17T13:08:29Z | [
"python",
"file-processing"
] |
Python: Convert map in kilometres to degrees | 38,997,228 | <p>I have a pandas Dataframe with a few million rows, each with an X and Y attribute with their location in kilometres according to the WGS 1984 World Mercator projection (created using ArcGIS).</p>
<p>What is the easiest way to project these points back to degrees, without leaving the Python/pandas environment?</p>
| -2 | 2016-08-17T12:53:43Z | 39,000,572 | <p>There is already a python module that can do these kind of transformations for you called <a href="https://github.com/jswhit/pyproj" rel="nofollow">pyproj</a>. I will agree it is actually not the simplest module to find via google. Some examples of its use can be seen <a href="http://gis.stackexchange.com/questions/78838/how-to-convert-projected-coordinates-to-lat-lon-using-python">here</a></p>
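<p>If installing an extra dependency is not an option, the spherical Web Mercator inverse is simple enough to apply directly (a sketch based on the standard EPSG:3857 formulas; note that ArcGIS's "WGS 1984 World Mercator" may actually be the ellipsoidal EPSG:3395, which needs a different latitude inverse, so verify against pyproj before trusting this):</p>

```python
import math

R = 6378137.0  # WGS84 semi-major axis in metres

def web_mercator_km_to_degrees(x_km, y_km):
    """Invert spherical Web Mercator (EPSG:3857) coordinates given in km."""
    lon = math.degrees(x_km * 1000.0 / R)
    lat = math.degrees(2.0 * math.atan(math.exp(y_km * 1000.0 / R)) - math.pi / 2.0)
    return lon, lat

# With pandas, the numpy equivalents (np.degrees, np.arctan, np.exp) would be
# needed so the formula broadcasts over whole X/Y columns at once.
```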
| 0 | 2016-08-17T15:19:51Z | [
"python",
"pandas",
"projection",
"degrees"
] |
Django-tinyMCE Submit button not working | 38,997,237 | <p>I've implemented TinyMCE with the django-tinymce package. However, my submit button which worked fine without TinyMCE now has become rather useless since I can't submit the form, once everything is filled out. </p>
<p>I can use Ctrl + S inside of TinyMCE (I discovered that by accident) and everything will get submitted correctly. Also, I can use the save-button of the TinyMCE "save" plugin to submit.. Do I have to configure the submit button to make it work with TinyMCE?</p>
<p>Template:</p>
<pre><code>{% extends 'medisearch/header.html' %}
{% load crispy_forms_tags %}
{% block header %}
{{ form.media }}
{% endblock %}
{% block content %}
<form action="{{ url }}" method="post">
    <div class="form-group">
        {% csrf_token %}
        {{ form|crispy }}
    </div>
    <input type="submit" class="btn btn-primary" value="Speichern" />
</form>
{% endblock %}
</code></pre>
<p>views.py</p>
<pre><code>class EntryDetail(DetailView):
    model = Mediwiki
    slug_field = 'non_proprietary_name'
    template_name = 'mediwiki/entry.html'

class MediwikiForm(FormView):
    template_name = 'mediwiki/create.html'
    form_class = MediwikiForm
    success_url = "/"  # TODO: redirect the user to the page he's created

    def form_valid(self, form):
        form.save()
        return super(MediwikiForm, self).form_valid(form)

class EntryDisplay(View):
    def get(self, request, *args, **kwargs):
        try:
            view = EntryDetail.as_view()
            return view(request, *args, **kwargs)
        except Http404:  # If there's no entry in db:
            if check_user_editor(request.user) == True:
                view = MediwikiForm.as_view()
                return view(request, *args, **kwargs)
            else:
                pass

    def post(self, request, *args, **kwargs):
        view = MediwikiForm.as_view()
        return view(request, *args, **kwargs)
</code></pre>
<p>forms.py</p>
<pre><code>class MediwikiForm(ModelForm):
    wiki_page = forms.CharField(widget=TinyMCE(attrs={'cols': 80, 'rows': 30}))

    class Meta:
        model = Mediwiki
        fields = '__all__'
</code></pre>
<p>TinyMCE is in <code>urls.py</code> and under <code>INSTALLED_APPS</code>..</p>
| 0 | 2016-08-17T12:54:21Z | 39,836,162 | <p>I also had the same issue. I just removed the "wiki_page" CharField from the ModelForm subclass and put the TinyMCE widget in the Meta class instead:</p>
<pre><code>class MediwikiForm(ModelForm):
    class Meta:
        model = Mediwiki
        fields = '__all__'
        widgets = {
            'wiki_page': TinyMCE(attrs={'cols': 80, 'rows': 30})
        }
</code></pre>
| -1 | 2016-10-03T16:25:03Z | [
"javascript",
"jquery",
"python",
"django",
"django-tinymce"
] |
Permission error when installing textblob | 38,997,459 | <p>This command:</p>
<pre><code>python -m pip install textblob
</code></pre>
<p>gave this error: </p>
<blockquote>
<p>error: could not create '/Library/Python/2.7/site-packages/nltk': Permission denied</p>
</blockquote>
<p>and </p>
<p><code>Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/9z/kwrqy2qn49s1rt0zf5ft12h80000gn/T/pip-build-gyWGVB/nltk/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/9z/kwrqy2qn49s1rt0zf5ft12h80000gn/T/pip-1Cek94-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/9z/kwrqy2qn49s1rt0zf5ft12h80000gn/T/pip-build-gyWGVB/nltk/</code></p>
<p>I'm using:</p>
<ul>
<li>OS X 10.11.6 </li>
<li>python version 2.7.10 </li>
<li>pip 8.1.2 </li>
</ul>
| -2 | 2016-08-17T13:03:32Z | 38,997,526 | <p>Try it with elevated permissions:</p>
<pre><code>sudo pip install textblob
</code></pre>
| 0 | 2016-08-17T13:06:20Z | [
"python",
"python-2.7",
"pip",
"nltk"
] |
Permission error when installing textblob | 38,997,459 | <p>This command:</p>
<pre><code>python -m pip install textblob
</code></pre>
<p>gave this error: </p>
<blockquote>
<p>error: could not create '/Library/Python/2.7/site-packages/nltk': Permission denied</p>
</blockquote>
<p>and </p>
<p><code>Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/9z/kwrqy2qn49s1rt0zf5ft12h80000gn/T/pip-build-gyWGVB/nltk/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/9z/kwrqy2qn49s1rt0zf5ft12h80000gn/T/pip-1Cek94-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/9z/kwrqy2qn49s1rt0zf5ft12h80000gn/T/pip-build-gyWGVB/nltk/</code></p>
<p>I'm using:</p>
<ul>
<li>OS X 10.11.6 </li>
<li>python version 2.7.10 </li>
<li>pip 8.1.2 </li>
</ul>
| -2 | 2016-08-17T13:03:32Z | 38,997,566 | <p>The error is caused because the user you are logged in as doesn't have the appropriate permissions, so run the command with sudo or su:</p>
<pre><code>sudo pip install textblob
</code></pre>
<p>or you can first switch to the root user and then run the command:</p>
<pre><code>su
pip install textblob
</code></pre>
| 0 | 2016-08-17T13:07:57Z | [
"python",
"python-2.7",
"pip",
"nltk"
] |
mean of the probability activation for each day of a week with python | 38,997,548 | <p>I have a dataframe df with this structure : </p>
<pre><code>TIMESTAMP probab-activ1 probab-activ3 probab-activ5
2015-07-31 23:00:00 90.0 90.0 90.0
2015-07-31 23:10:00 0.0 0.0 0.0
2015-07-31 23:20:00 0.0 0.0 0.0
2015-07-31 23:30:00 0.0 0.0 0.0
2015-07-31 23:40:00 0.0 0.0 0.0
...
2015-10-31 23:20:00 0.0 0.0 0.0
2015-10-31 23:30:00 0.0 0.0 0.0
2015-10-31 23:40:00 0.0 0.0 0.0
</code></pre>
<p>I need to calculate, for each day of the week (Monday, Tuesday, ..., Sunday), the mean of the probabilities (probab-activ1, probab-activ3 and probab-activ5) during the last 2 months.</p>
<p>Any idea to solve this problem?</p>
<p>Thank you in advance</p>
| -3 | 2016-08-17T13:07:25Z | 38,997,985 | <p>You can use the <code>datetime</code> module and convert your timestamp to a format that is useful for your purpose. For example, you could do:</p>
<pre><code>import datetime
timestamp = '2015-07-31 23:00:00'
day_of_week = datetime.datetime.strptime(timestamp, '%Y-%m-%d %H:%M:%S').strftime('%a')
day_of_week
'Fri'
</code></pre>
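<p>Building on that, the per-weekday means the question asks for can be accumulated with a plain dictionary (a sketch with made-up sample values; with pandas itself one would more likely use something like <code>df.groupby(df.index.dayofweek).mean()</code>):</p>

```python
import datetime
from collections import defaultdict

rows = [  # (timestamp, probab-activ1) pairs -- made-up sample values
    ('2015-07-31 23:00:00', 90.0),  # a Friday
    ('2015-08-07 23:00:00', 30.0),  # also a Friday
    ('2015-08-01 10:00:00', 10.0),  # a Saturday
]

sums = defaultdict(float)
counts = defaultdict(int)
for ts, value in rows:
    day = datetime.datetime.strptime(ts, '%Y-%m-%d %H:%M:%S').strftime('%a')
    sums[day] += value
    counts[day] += 1

means = {day: sums[day] / counts[day] for day in sums}
print(means)  # {'Fri': 60.0, 'Sat': 10.0}
```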
| 0 | 2016-08-17T13:25:05Z | [
"python",
"dataframe"
] |
Reuse a logistic regression object for different fitted models | 38,997,591 | <p>I have a <code>Pipeline</code> object that I want to fit on different combinations of training and test labels and thus using the <code>fit</code> objects, create different predictions. But I believe that <code>fit</code> using the same classifier object gets rid of previous <code>fit</code> objects.</p>
<p>An example of my code is:</p>
<pre><code>text_clf = Pipeline([('vect', CountVectorizer(analyzer="word",tokenizer=None,preprocessor=None,stop_words=None,max_features=5000)),
('tfidf', TfidfTransformer(use_idf=True,norm='l2',sublinear_tf=True)),
('clf',LogisticRegression(solver='newton-cg',class_weight='balanced', multi_class='multinomial',fit_intercept=True),
)])
print "Fitting the open multinomial BoW logistic regression model for probability models...\n"
open_multi_logit_words = text_clf.fit(train_wordlist, train_property_labels)
print "Fitting the open multinomial BoW logistic regression model w/ ",threshold," MAPE threshold...\n"
open_multi_logit_threshold_words = (text_clf.copy.deepcopy()).fit(train_wordlist, train_property_labels_threshold)
</code></pre>
<p>However, classifier objects do not have <code>deepcopy()</code> methods. How can I achieve what I need without having to define:</p>
<pre><code>text_clf_open_multi_logit = Pipeline([('vect', CountVectorizer(analyzer="word",tokenizer=None,preprocessor=None,stop_words=None,max_features=5000)),
('tfidf', TfidfTransformer(use_idf=True,norm='l2',sublinear_tf=True)),
('clf',LogisticRegression(solver='newton-cg',class_weight='balanced', multi_class='multinomial',fit_intercept=True),
)])
</code></pre>
<p>For all of my 16 classifier combinations?</p>
| 0 | 2016-08-17T13:09:24Z | 38,998,464 | <p>I would try</p>
<pre><code>text_clf0=copy.deepcopy(text_clf)
open_multi_logit_threshold_words = text_clf0.fit(train_wordlist, train_property_labels_threshold)
</code></pre>
<p>EDIT: you can use a list</p>
<pre><code>text_clf_list=[copy.deepcopy(text_clf) for _ in range(16)]
</code></pre>
<p>or directly</p>
<pre><code>copy.deepcopy(text_clf).fit(train_wordlist, train_property_labels_threshold)
</code></pre>
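<p>To see why the deep copies are safe to fit independently, here is a minimal stand-in demonstration with a plain nested dict (mutating one copy leaves the original and its siblings untouched). For scikit-learn estimators specifically, <code>sklearn.base.clone</code> is another option: it returns a fresh unfitted estimator with the same parameters.</p>

```python
import copy

# Stand-in for a Pipeline: a mutable object with nested state.
base = {'clf': {'solver': 'newton-cg'}, 'fitted': False}

# One independent copy per training-label combination.
copies = [copy.deepcopy(base) for _ in range(3)]

copies[0]['fitted'] = True
copies[0]['clf']['solver'] = 'lbfgs'

# The original and the other copies are untouched.
print(base['fitted'], base['clf']['solver'])            # False newton-cg
print(copies[1]['fitted'], copies[1]['clf']['solver'])  # False newton-cg
```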
| 1 | 2016-08-17T13:45:51Z | [
"python",
"scikit-learn",
"logistic-regression",
"text-classification"
] |
Django tutorial part 3 - NoReverseMatch at /polls/ | 38,997,600 | <p>I have been following the <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial03/" rel="nofollow">Django tutorial part 3</a> and am getting the following error when I attempt to view <a href="http://localhost:8000/polls/" rel="nofollow">http://localhost:8000/polls/</a>:</p>
<pre><code>**Reverse for 'detail' with arguments '('',)' and keyword arguments '{}' not found. 1 pattern(s) tried: [u'polls/(?P<question_id>[0-9]+)/$']**
</code></pre>
<p>My files are as follows:</p>
<p>mysite/urls.py:</p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^polls/', include('polls.urls', namespace="polls")),
url(r'^admin/', admin.site.urls),
]
</code></pre>
<p>polls/urls.py:</p>
<pre><code>from django.conf.urls import url
from . import views
app_name = 'polls'
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^(?P<question_id>[0-9]+)/$', views.detail, name='detail'),
url(r'^(?P<question_id>[0-9]+)/results/$', views.results, name='results'),
url(r'^(?P<question_id>[0-9]+)/vote/$', views.vote, name='vote'),
]
</code></pre>
<p>polls/detail.html:</p>
<pre><code><h1>{{ question.question_text }}</h1>
<ul>
{% for choice in question.choice_set.all %}
<li>{{ choice.choice_text }}</li>
{% endfor %}
</ul>
</code></pre>
<p>polls/templates/polls/index.html:</p>
<pre><code><li><a href="{% url 'polls:detail' question.id %}">{{ question.question_text }}</a></li>
</code></pre>
<p><strong>What does this error mean?</strong></p>
<p><strong>How do I debug it?</strong></p>
<p><strong>Can you suggest a fix?</strong></p>
<p>N.b. I have seen and tried the answers to the similar questions:</p>
<p><a href="http://stackoverflow.com/questions/33151816/noreversematch-at-polls-django-tutorial?rq=1">NoReverseMatch at /polls/ (django tutorial)</a>
<a href="http://stackoverflow.com/questions/31103954/django-1-8-2-tutorial-chapter-3-error-noreversematch-at-polls-python">Django 1.8.2 -- Tutorial Chapter 3 -- Error: NoReverseMatch at /polls/ -- Python 3.4.3</a>
<a href="http://stackoverflow.com/questions/27645132/noreversematch-django-1-7-beginners-tutorial?rq=1">NoReverseMatch - Django 1.7 Beginners tutorial</a>
<a href="http://stackoverflow.com/questions/19336076/django-reverse-for-detail-with-arguments-and-keyword-arguments-n#19336837">Django: Reverse for 'detail' with arguments '('',)' and keyword arguments '{}' not found</a>
<a href="https://groups.google.com/forum/#!msg/django-users/etSR78dgKBo/euSYcSyMCgAJ" rel="nofollow">https://groups.google.com/forum/#!msg/django-users/etSR78dgKBo/euSYcSyMCgAJ</a>
<a href="http://stackoverflow.com/questions/33151816/noreversematch-at-polls-django-tutorial?rq=1">NoReverseMatch at /polls/ (django tutorial)</a>
<a href="https://www.reddit.com/r/django/comments/3d43gb/noreversematch_at_polls1results_in_django/" rel="nofollow">https://www.reddit.com/r/django/comments/3d43gb/noreversematch_at_polls1results_in_django/</a></p>
<p>Edit, I initially missed the following question. Its excellent answer partially answers my question (how to debug) but does not cover my specific problem.</p>
<p><a href="http://stackoverflow.com/questions/38390177/what-is-a-noreversematch-error-and-how-do-i-fix-it">What is a NoReverseMatch error, and how do I fix it?</a></p>
| 2 | 2016-08-17T13:09:37Z | 38,999,003 | <p>This was the problem:</p>
<p>polls/templates/polls/index.html should have been:</p>
<pre><code>{% if latest_question_list %}
<ul>
{% for question in latest_question_list %}
<li><a href="{% url 'polls:detail' question.id %}">{{ question.question_text }}</a></li>
{% endfor %}
</ul>
{% else %}
<p>No polls are available.</p>
{% endif %}
</code></pre>
<p>I had inadvertantly replaced the entire file with the following line, rather than just updating the relevant line (implied <a href="https://docs.djangoproject.com/en/1.10/intro/tutorial03/#namespacing-url-names" rel="nofollow">here</a>):</p>
<pre><code><li><a href="{% url 'polls:detail' question.id %}">{{ question.question_text }}</a></li>
</code></pre>
<p>As stated by @Sayse in the comments, this would mean that question.id is empty resulting in the error.</p>
| 1 | 2016-08-17T14:07:30Z | [
"python",
"html",
"django",
"python-2.7"
] |
NLTK Lemmatizing with list comprehension | 38,997,658 | <p>How can I verify whether I am correctly using the NLTK lemmatizer in this list comprehension, specifically whether it is taking account of the POS tags?</p>
<pre><code>clean_article_string = (article_db.loc[0,'clean_text']) # pandas dataframe cell containing string.
tokens = word_tokenize(clean_article_string)
treebank_tagged_tokens = tagger.tag(tokens)
wordnet_tagged_tokens = [(w,get_wordnet_pos(t)) for (w, t) in treebank_tagged_tokens]
lemmatized_tokens = [(lemmatizer.lemmatize(w).lower(),t) for (w,t) in wordnet_tagged_tokens]
print(len(set(wordnet_tagged_tokens)),(len(set(lemmatized_tokens))))
423 384
</code></pre>
<p>I'm using a converter I found on Stackoverflow to switch from treebank to Wordnet tokens, and it works fine. My issue is whether for <code>lemmatized_tokens</code> the lemmatizer is actually taking both the word and the tag of my <code>(w,t)</code> tuple into account, or if it is just looking at the <code>w</code> and lemmatizing based on that (presuming everything to be a noun). I tried...</p>
<pre><code>lemmatized_tokens = [(lemmatizer.lemmatize(w,t)) for (w,t) in wordnet_tagged_tokens]
</code></pre>
<p>and</p>
<pre><code>lemmatized_tokens = [(lemmatizer.lemmatize(w, pos=t)) for (w,t) in wordnet_tagged_tokens]
</code></pre>
<p>which produces a <code>KeyError: ''</code> in the WordNet lemmatize function. So the initial code actually functions, but I don't know if it is using the POS tag or not. Does anyone know whether the lemmatizer will be taking it into account in the working code, and/or how I can verify that it is?</p>
| 0 | 2016-08-17T13:12:00Z | 39,081,102 | <p>Answer by ewcz in comments. Labelled as community wiki. This helped me, might help others.</p>
<hr>
<p>If you use <code>lemmatizer.lemmatize(w)</code>, it will use the default POS tag <code>n</code>. The error suggests that some of the tags are empty; in this case, perhaps one could fall back to nouns, i.e., use
<code>lemmatizer.lemmatize(w, pos=t if t else 'n')</code></p>
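<p>A runnable sketch of that fallback pattern, using a hypothetical stand-in <code>lemmatize()</code> so it can be tried without NLTK or the WordNet corpus (the real call would be <code>nltk.stem.WordNetLemmatizer().lemmatize(w, pos=t)</code>):</p>

```python
# Hypothetical stand-in for WordNetLemmatizer.lemmatize, to illustrate the
# fallback; the stemming rules below are deliberately crude.
def lemmatize(word, pos='n'):
    if pos not in ('n', 'v', 'a', 'r'):        # WordNet's four POS keys
        raise KeyError(pos)                    # mirrors the KeyError: '' above
    if pos == 'v' and word.endswith('ing'):
        return word[:-3]                       # crude verb stand-in
    if pos == 'n' and word.endswith('s'):
        return word[:-1]                       # crude plural stand-in
    return word

# One tag is '' because its Treebank tag had no WordNet equivalent.
wordnet_tagged_tokens = [('testing', 'v'), ('dogs', 'n'), ('the', '')]

# Falling back to 'n' for empty tags avoids the KeyError:
lemmatized = [lemmatize(w, pos=t if t else 'n') for (w, t) in wordnet_tagged_tokens]
print(lemmatized)  # ['test', 'dog', 'the']
```

<p>Without the <code>t if t else 'n'</code> guard, the empty tag on <code>'the'</code> would raise <code>KeyError: ''</code>, which matches the error seen in the question.</p>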
| 0 | 2016-08-22T13:38:14Z | [
"python",
"nltk",
"wordnet"
] |
Else not working in IPython | 38,997,873 | <p><a href="http://i.stack.imgur.com/C8XLQ.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/C8XLQ.jpg" alt="enter image description here"></a>I'm new to Python. I'm trying to use an else statement in IPython but I get an error message:
"Identation error: unindent does not match any outer identation level." I have looked for similar questions with Python, but the issues were linked to a mix use of tabulation key and spaces to increment or to supplementary spaces/characters. Here, I'm using 4 spaces in front of each print statement without any supplementary (hidden) character (except of course one carriage return by line). The 'if' part alone works but not with the 'else' part.</p>
<pre><code>x = True
if x == True:
print('ok')
else:
print('nok')
</code></pre>
<p>Update: Here is a screenshot showing the IPython version, code, and error message.</p>
| -4 | 2016-08-17T13:20:00Z | 38,998,030 | <p>IPython auto-indents blocks for compound statements, so to enter the <code>else:</code> line you have to manually <em>unindent</em> again.</p>
<p>I can only reproduce your error if I do not un-indent properly; hitting backspace between 1 and 3 times, instead of 4.</p>
<p>Note the extra space before <code>else:</code> in this error example, where I used backspace 3 times:</p>
<pre><code>In [2]: if x == True:
...: print('ok')
...: else:
File "<ipython-input-2-915b4c0eb5ea>", line 3
else:
^
IndentationError: unindent does not match any outer indentation level
</code></pre>
<p>IPython allows you to edit your block; just use the Up arrow key to recall the failed piece of text, and remove the extraneous spaces at the start of the <code>else:</code> line.</p>
<p>If you are not typing in code by hand, don't copy-and-paste code directly. Use the <code>%paste</code> command instead, and IPython will paste the code <em>for you</em>, avoiding any auto-indent issues:</p>
<pre><code>In [3]: %paste
x = True
if x == True:
print('ok')
else:
print('nok')
## -- End pasted text --
ok
</code></pre>
<p>See <code>%paste?</code> for more information.</p>
| 1 | 2016-08-17T13:27:02Z | [
"python",
"ipython"
] |