title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
How to Query from salesforce the OpportunityFieldHistory with python | 39,043,710 | <p>I am having an issue figuring out how to start a query on the OpportunityFieldHistory from Salesforce.</p>
<p>The code I usually use for querying Opportunity or Lead records works fine, but I do not know how it should be written for the FieldHistory.</p>
<p>When I want to query the opportunity or Lead I use the following:</p>
<pre><code>oppty1 = sf.opportunity.get('00658000002vFo3')
lead1 = sf.lead.get('00658000002vFo3')
</code></pre>
<p>and then do the proper query code with the access codes...</p>
<p>The problem arises when I want to do the analysis on the OpportunityFieldHistory, I tried the following:</p>
<pre><code>opptyhist = sf.opportunityfieldhistory.get('xxx')
</code></pre>
<p>Guess what: it does not work. Do you have any clue what I should write between <code>sf.</code> and <code>.get</code>?</p>
<p>Thanks in advance</p>
| 1 | 2016-08-19T16:21:59Z | 39,045,357 | <p>Looking at the <a href="https://github.com/simple-salesforce/simple-salesforce/blob/master/simple_salesforce/api.py" rel="nofollow">simple-salesforce API</a>, it appears that the <code>get</code> method accepts an ID which you are passing correctly. However, a quick search in the <a href="https://developer.salesforce.com/forums/?id=906F00000009BM7IAM" rel="nofollow">Salesforce API reference</a> seems to indicate that the OpportunityFieldHistory may need to be obtained by another function such as <code>get_by_custom_id(self, custom_id_field, custom_id)</code>.</p>
<blockquote>
<pre><code> (OpportunityFieldHistory){
Id = None
CreatedDate = 2012-08-27 12:00:03
Field = "StageName"
NewValue = "3.0 - Presentation & Demo"
OldValue = "2.0 - Qualification & Discovery"
OpportunityId = "0067000000RFCDkAAP"
},
</code></pre>
</blockquote>
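One approach worth noting (an addition, not part of the original answer): history objects are usually read with a SOQL query rather than a REST `get` by ID. A sketch, assuming `sf` is an authenticated `simple_salesforce.Salesforce` instance:

```python
# Sketch: read an opportunity's field history with SOQL instead of .get().
# The helper below only builds the query string; running it requires an
# authenticated simple_salesforce.Salesforce instance named `sf`.
def opportunity_history_soql(opportunity_id):
    """Build a SOQL query for one opportunity's field history rows."""
    return (
        "SELECT Id, Field, OldValue, NewValue, CreatedDate "
        "FROM OpportunityFieldHistory "
        "WHERE OpportunityId = '%s'" % opportunity_id
    )

# result = sf.query(opportunity_history_soql('0067000000RFCDkAAP'))
# for record in result['records']:
#     print(record['Field'], record['OldValue'], '->', record['NewValue'])
```

The commented-out call uses simple-salesforce's `query` method, which returns a dict whose `records` key holds the matching rows.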
| 0 | 2016-08-19T18:10:25Z | [
"python",
"salesforce"
] |
Use WTForms QuerySelectField with Pyramid 1.7's db session | 39,043,798 | <p>I use wtforms_sqlalchemy in my pyramid apps and define several <code>QuerySelectField</code>s. The query factory uses the imported <code>DBSession</code> object to make the query.</p>
<pre><code>from wtforms.form import Form
from wtforms_sqlalchemy.fields import QuerySelectField
from myapp.models import DBSession, MyModel
def mymodel_choices():
choices = DBSession.query(MyModel)
return choices
class MyForm(Form):
mymod = QuerySelectField(u'Field', query_factory=mymodel_choices)
</code></pre>
<p>Pyramid 1.7 introduced a new SQLAlchemy scaffold that attaches the db session object to each request. Using the new scaffold, <code>mymodel_choices</code> has to use <code>request</code> from my view to access the db session. The field doesn't have access to the request object, though, and doesn't know to call the factory with it.</p>
<p>My idea was to update <code>query_factory</code> directly from the view but that doesn't seem to be a logical way to do it. How can I use <code>QuerySelectField</code> when the db session is part of the request object?</p>
| 2 | 2016-08-19T16:27:14Z | 39,067,122 | <p>You can try something like this (although it's not the cleanest solution):</p>
<pre><code>from myapp.models import MyModel
from pyramid import threadlocal
def mymodel_choices(request=None):
request = request or threadlocal.get_current_request()
choices = request.DBSession.query(MyModel)
return choices
</code></pre>
<p>For more details, please see: <a href="http://docs.pylonsproject.org/projects/pyramid/en/latest/api/threadlocal.html" rel="nofollow">http://docs.pylonsproject.org/projects/pyramid/en/latest/api/threadlocal.html</a></p>
| 1 | 2016-08-21T17:46:59Z | [
"python",
"sqlalchemy",
"pyramid",
"wtforms"
] |
Use WTForms QuerySelectField with Pyramid 1.7's db session | 39,043,798 | <p>I use wtforms_sqlalchemy in my pyramid apps and define several <code>QuerySelectField</code>s. The query factory uses the imported <code>DBSession</code> object to make the query.</p>
<pre><code>from wtforms.form import Form
from wtforms_sqlalchemy.fields import QuerySelectField
from myapp.models import DBSession, MyModel
def mymodel_choices():
choices = DBSession.query(MyModel)
return choices
class MyForm(Form):
mymod = QuerySelectField(u'Field', query_factory=mymodel_choices)
</code></pre>
<p>Pyramid 1.7 introduced a new SQLAlchemy scaffold that attaches the db session object to each request. Using the new scaffold, <code>mymodel_choices</code> has to use <code>request</code> from my view to access the db session. The field doesn't have access to the request object, though, and doesn't know to call the factory with it.</p>
<p>My idea was to update <code>query_factory</code> directly from the view but that doesn't seem to be a logical way to do it. How can I use <code>QuerySelectField</code> when the db session is part of the request object?</p>
| 2 | 2016-08-19T16:27:14Z | 39,171,644 | <p><code>query_factory</code> only specifies the <em>default</em> query to use, but <a href="https://github.com/wtforms/wtforms-sqlalchemy/blob/master/wtforms_sqlalchemy/fields.py#L98" rel="nofollow"><code>QuerySelectField</code> will prefer the <code>query</code> property if it is set.</a> This is useful for Pyramid, which discourages interacting with <code>threadlocal</code> directly. </p>
<p>Change the factory to accept a db session. Set <code>query</code> to the result of calling your factory with the request's db session.</p>
<pre><code>def mymodel_choices(session):
return session.query(MyModel)
f = MyForm(request.POST)
f.mymod.query = mymodel_choices(request.db_session)
</code></pre>
<p>Since this is a bit inconvenient, you can create a subclass of <code>Form</code> that takes the <code>request</code>, pulls out the appropriate form data, and calls each <code>QuerySelectField's</code> query factory with the request.</p>
<pre><code>class PyramidForm(Form):
def __init__(self, request, **kwargs):
if 'formdata' not in kwargs and request.method == 'POST':
kwargs['formdata'] = request.POST
super().__init__(**kwargs)
for field in self:
if isinstance(field, QuerySelectField) and field.query_factory is not None and field.query is None:
field.query = field.query_factory(request.db_session)
class MyForm(PyramidForm):
...
f = MyForm(request)
</code></pre>
| 1 | 2016-08-26T17:22:08Z | [
"python",
"sqlalchemy",
"pyramid",
"wtforms"
] |
python regex find match that spans multiple lines | 39,043,801 | <p>So I am trying to grab a string from a BibTeX entry using regex in Python. Here is part of my string:</p>
<pre><code>a = '''title = {The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts},
year = {2010},
volume = {114},'''
</code></pre>
<p>I want to grab the string for the title, which is:</p>
<pre><code>The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts
</code></pre>
<p>I currently have this code:</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*|\n?)},\s*\n', re.DOTALL|re.I)
pattern.findall(a)
</code></pre>
<p>But it only gives me:</p>
<pre><code>['The Origin ({S},\n {Se}, and {Te})- {TiO$_2$} Photocatalysts},\n year = {2010']
</code></pre>
<p>How can I get the whole title string without the <code>year</code> information?
Many times, <code>year</code> is not right after <code>title</code>. So I cannot use:</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*|\n?)},\s*\n.*year', re.DOTALL|re.I)
pattern.findall(a)
</code></pre>
| 2 | 2016-08-19T16:27:25Z | 39,043,864 | <p>A quick solution would be to modify your regex pattern: make the quantifier non-greedy and anchor the match with a lookahead for the next <code>key = </code> entry:</p>
<pre><code>pattern = re.compile(r'title\s*=\s*{(.*?)},\s*\n(?=\s*\w+\s*=)', re.DOTALL|re.I)
</code></pre>
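One way to sanity-check a pattern like this against the sample string from the question (a workable variant, assuming the next entry always looks like `key = `):

```python
import re

a = '''title = {The Origin ({S},
     {Se}, and {Te})- {TiO$_2$} Photocatalysts},
 year = {2010},
 volume = {114},'''

# Non-greedy capture, anchored by a lookahead for the next "key =" entry.
pattern = re.compile(r'title\s*=\s*{(.*?)},\s*\n(?=\s*\w+\s*=)', re.DOTALL | re.I)
m = pattern.search(a)
print(m.group(1))
```

The capture group holds the multi-line title without the trailing `},` and without the `year` entry.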
| 1 | 2016-08-19T16:31:34Z | [
"python",
"regex"
] |
python regex find match that spans multiple lines | 39,043,801 | <p>So I am trying to grab a string from a BibTeX entry using regex in Python. Here is part of my string:</p>
<pre><code>a = '''title = {The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts},
year = {2010},
volume = {114},'''
</code></pre>
<p>I want to grab the string for the title, which is:</p>
<pre><code>The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts
</code></pre>
<p>I currently have this code:</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*|\n?)},\s*\n', re.DOTALL|re.I)
pattern.findall(a)
</code></pre>
<p>But it only gives me:</p>
<pre><code>['The Origin ({S},\n {Se}, and {Te})- {TiO$_2$} Photocatalysts},\n year = {2010']
</code></pre>
<p>How can I get the whole title string without the <code>year</code> information?
Many times, <code>year</code> is not right after <code>title</code>. So I cannot use:</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*|\n?)},\s*\n.*year', re.DOTALL|re.I)
pattern.findall(a)
</code></pre>
| 2 | 2016-08-19T16:27:25Z | 39,043,891 | <p>Depends on how general you want your regex to be. I guess you want your string to be able to contain { and }, so using that to mark the ending of the pattern will cause issues. Also there could be multiple brackets.</p>
<p>Here's an idea: what if you look for the word <code>year</code> at the end of the regex, assuming that's constant?</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*?)},\s*\n\s*year', re.DOTALL|re.I)
</code></pre>
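Applied to the sample string from the question, this pattern captures the full title and stops at the `},` that precedes the `year` entry:

```python
import re

a = '''title = {The Origin ({S},
     {Se}, and {Te})- {TiO$_2$} Photocatalysts},
 year = {2010},
 volume = {114},'''

# Non-greedy capture, anchored on the literal word "year".
pattern = re.compile(r'title\s*=\s*{(.*?)},\s*\n\s*year', re.DOTALL | re.I)
matches = pattern.findall(a)
```

As the answer notes, this relies on `year` following `title`, which the question says is not always the case.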
| 1 | 2016-08-19T16:33:32Z | [
"python",
"regex"
] |
python regex find match that spans multiple lines | 39,043,801 | <p>So I am trying to grab a string from a BibTeX entry using regex in Python. Here is part of my string:</p>
<pre><code>a = '''title = {The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts},
year = {2010},
volume = {114},'''
</code></pre>
<p>I want to grab the string for the title, which is:</p>
<pre><code>The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts
</code></pre>
<p>I currently have this code:</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*|\n?)},\s*\n', re.DOTALL|re.I)
pattern.findall(a)
</code></pre>
<p>But it only gives me:</p>
<pre><code>['The Origin ({S},\n {Se}, and {Te})- {TiO$_2$} Photocatalysts},\n year = {2010']
</code></pre>
<p>How can I get the whole title string without the <code>year</code> information?
Many times, <code>year</code> is not right after <code>title</code>. So I cannot use:</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*|\n?)},\s*\n.*year', re.DOTALL|re.I)
pattern.findall(a)
</code></pre>
| 2 | 2016-08-19T16:27:25Z | 39,043,962 | <p><a href="https://docs.python.org/2/library/textwrap.html" rel="nofollow">textwrap</a> can be useful:</p>
<pre><code>import textwrap
a = '''title = {The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts},
year = {2010},
volume = {114},'''
indent = " "
print(textwrap.dedent(indent + a))
</code></pre>
| 0 | 2016-08-19T16:38:14Z | [
"python",
"regex"
] |
python regex find match that spans multiple lines | 39,043,801 | <p>So I am trying to grab a string from a BibTeX entry using regex in Python. Here is part of my string:</p>
<pre><code>a = '''title = {The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts},
year = {2010},
volume = {114},'''
</code></pre>
<p>I want to grab the string for the title, which is:</p>
<pre><code>The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts
</code></pre>
<p>I currently have this code:</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*|\n?)},\s*\n', re.DOTALL|re.I)
pattern.findall(a)
</code></pre>
<p>But it only gives me:</p>
<pre><code>['The Origin ({S},\n {Se}, and {Te})- {TiO$_2$} Photocatalysts},\n year = {2010']
</code></pre>
<p>How can I get the whole title string without the <code>year</code> information?
Many times, <code>year</code> is not right after <code>title</code>. So I cannot use:</p>
<pre><code>pattern = re.compile('title\s*=\s*{(.*|\n?)},\s*\n.*year', re.DOTALL|re.I)
pattern.findall(a)
</code></pre>
| 2 | 2016-08-19T16:27:25Z | 39,045,425 | <p>Use the newer <a href="https://pypi.python.org/pypi/regex" rel="nofollow"><strong><code>regex module</code></strong></a>:</p>
<pre><code>import regex as re
rx = re.compile(r'''
(?(DEFINE)
(?<part>\w+\ =\ \{)
(?<end>\},)
(?<title>title\ =\ \{)
)
(?&title)(?P<t>(?:(?!(?&part))[\s\S])+)(?&end)
''', re.VERBOSE)
string = '''
title = {The Origin ({S},
{Se}, and {Te})- {TiO$_2$} Photocatalysts},
year = {2010},
volume = {114},
'''
title = rx.search(string).group('t')
print(title)
# The Origin ({S},
# {Se}, and {Te})- {TiO$_2$} Photocatalysts
</code></pre>
<p>Though it is not really needed, it provides an alternative solution.</p>
| 1 | 2016-08-19T18:15:30Z | [
"python",
"regex"
] |
python pandas groupby about categorial variables | 39,043,832 | <p>Here is what my dataframe looks like:</p>
<pre><code>df = pd.DataFrame([
['01', 'aa', '1+', 1200],
['01', 'ab', '1+', 1500],
['01', 'jn', '1+', 1600],
['02', 'bb', '2', 2100],
['02', 'ji', '2', 785],
['03', 'oo', '2', 5234],
['04', 'hg', '5-', 1231],
['04', 'kf', '5-', 454],
['05', 'mn', '6', 45],
], columns=['faculty_id', 'sub_id', 'default_grade', 'sum'])
df
</code></pre>
<p><a href="http://i.stack.imgur.com/EW31G.png" rel="nofollow"><img src="http://i.stack.imgur.com/EW31G.png" alt="enter image description here"></a></p>
<p>I want to group by <code>faculty_id</code>, ignore <code>sub_id</code>, aggregate <code>sum</code>, and assign one <code>default_grade</code> to each <code>faculty_id</code>. How do I do that? I know how to group by <code>faculty_id</code> and aggregate <code>sum</code>, but I'm not sure how to assign the <code>default_grade</code> to each group.</p>
<p>Thanks a lot! </p>
| 1 | 2016-08-19T16:29:38Z | 39,044,741 | <p>You can apply different functions by column in a groupby using dictionary syntax.</p>
<pre><code>df.groupby('faculty_id').agg({'default_grade': 'first', 'sum': 'sum'})
</code></pre>
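With the sample frame from the question (trimmed to two faculties here), this produces one row per `faculty_id`:

```python
import pandas as pd

df = pd.DataFrame([
    ['01', 'aa', '1+', 1200],
    ['01', 'ab', '1+', 1500],
    ['02', 'bb', '2', 2100],
    ['02', 'ji', '2', 785],
], columns=['faculty_id', 'sub_id', 'default_grade', 'sum'])

# 'first' keeps one default_grade per group; 'sum' totals the sum column.
out = df.groupby('faculty_id').agg({'default_grade': 'first', 'sum': 'sum'})
print(out)
```

Using `'first'` is safe here because every row in a group carries the same `default_grade`.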
| 1 | 2016-08-19T17:30:13Z | [
"python",
"python-2.7",
"pandas",
"dataframe",
"group-by"
] |
Python 3 __getattribute__ vs dot access behaviour | 39,043,912 | <p>I read a bit on python's object attribute lookup (here: <a href="https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/#object-attribute-lookup" rel="nofollow">https://blog.ionelmc.ro/2015/02/09/understanding-python-metaclasses/#object-attribute-lookup</a>).</p>
<p>Seems pretty straight forward, so I tried it out (python3):</p>
<pre><code>class A:
def __getattr__(self, attr):
return (1,2,3)
a = A()
a.foobar #returns (1,2,3) as expected
a.__getattribute__('foobar') # raises AttributeError
</code></pre>
<p>My question is: aren't the two supposed to be identical?</p>
<p>Why does the second one raise an attribute error?</p>
<p>So apparently the answer is that the logic for <code>a.foobar</code> IS different from the logic for <code>a.__getattribute__("foobar")</code>. According to the <a href="https://docs.python.org/3.5/reference/datamodel.html#object.__getattribute__" rel="nofollow">data model</a>: <code>a.foobar</code> calls <code>a.__getattribute__("foobar")</code> and, if that raises an AttributeError, it calls <code>a.__getattr__('foobar')</code></p>
<p>So it seems the article has a mistake in its diagram. Is this correct?</p>
<p>And another question: where does the real logic for <code>a.foobar</code> sit? I thought it was in <code>__getattribute__</code>, but apparently not entirely.</p>
<p>Edit:
<strong>Not a duplicate of</strong></p>
<p><a href="http://stackoverflow.com/questions/3278077/difference-between-getattr-vs-getattribute">Difference between __getattr__ vs __getattribute__</a>.
I am asking here what the difference is between <code>object.foo</code> and <code>object.__getattribute__("foo")</code>. This is different from <code>__getattr__</code> vs <code>__getattribute__</code>, which is trivial...</p>
| 0 | 2016-08-19T16:35:23Z | 39,044,897 | <p>It's easy to get the impression that <code>__getattribute__</code> is responsible for more than it really is. <code>thing.attr</code> doesn't directly translate to <code>thing.__getattribute__('attr')</code>, and <code>__getattribute__</code> is not responsible for calling <code>__getattr__</code>.</p>
<p>The fallback to <code>__getattr__</code> happens in the part of the attribute access machinery that lies outside <code>__getattribute__</code>. The attribute lookup process works like this:</p>
<ul>
<li>Find the <code>__getattribute__</code> method through a direct search of the object's type's <a href="https://www.python.org/download/releases/2.3/mro/" rel="nofollow">MRO</a>, bypassing the regular attribute lookup process.</li>
<li>Try <code>__getattribute__</code>.
<ul>
<li>If <code>__getattribute__</code> returned something, the attribute lookup process is complete, and that's the attribute value.</li>
<li>If <code>__getattribute__</code> raised a non-AttributeError, the attribute lookup process is complete, and the exception propagates out of the lookup.</li>
<li>Otherwise, <code>__getattribute__</code> raised an AttributeError. The lookup continues.</li>
</ul></li>
<li>Find the <code>__getattr__</code> method the same way we found <code>__getattribute__</code>.
<ul>
<li>If there is no <code>__getattr__</code>, the attribute lookup process is complete, and the AttributeError from <code>__getattribute__</code> propagates.</li>
</ul></li>
<li>Try <code>__getattr__</code>, and return or raise whatever <code>__getattr__</code> returns or raises.</li>
</ul>
<p>At least, in terms of the language semantics, it works like that. In terms of the low-level implementation, some of these steps may be optimized out in cases where they're unnecessary, and there are C hooks like <code>tp_getattro</code> that I haven't described. You don't need to worry about that kind of thing unless you want to dive into the CPython interpreter source code.</p>
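The asymmetry described above is easy to verify with the class from the question:

```python
class A:
    def __getattr__(self, attr):
        # fallback hook: only reached after __getattribute__ fails
        return (1, 2, 3)

a = A()

# dot access runs the full lookup, including the __getattr__ fallback
dot_result = a.foobar

# calling __getattribute__ directly skips the fallback machinery,
# so the AttributeError it raises propagates to the caller
fell_through = False
try:
    a.__getattribute__('foobar')
except AttributeError:
    fell_through = True
```

The lookup of the name `__getattribute__` itself succeeds; it is the subsequent call that raises, and by then the dot-access machinery that would have tried `__getattr__` is no longer involved.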
| 3 | 2016-08-19T17:40:28Z | [
"python",
"python-3.x"
] |
Convert curl to python request | 39,043,938 | <p>This works on my system:</p>
<pre><code>curl https://api.serverpilot.io/v1/servers -u KEY
</code></pre>
<p>I'm trying to convert it to Python and have tried several variations on this code.</p>
<pre><code>params = {"u" :KEY}
# params = {"u" :json.dumps(KEY)}
restUrl = "https://api.serverpilot.io/v1/servers"
response = requests.get(restUrl, data=params, headers=headers)
parsed = json.loads(response.content)
print params
print response
print json.dumps(parsed, indent=4, sort_keys=True)
</code></pre>
| -2 | 2016-08-19T16:36:48Z | 39,044,025 | <p>If your response header really is 'application/json; charset=utf8' (or another charset),</p>
<pre><code>assert response.headers['content-type'] == 'application/json; charset=utf8'
</code></pre>
<p>you can use:</p>
<pre><code>parsed = response.json()
</code></pre>
| -1 | 2016-08-19T16:41:38Z | [
"python",
"curl",
"python-requests"
] |
Convert curl to python request | 39,043,938 | <p>This works on my system:</p>
<pre><code>curl https://api.serverpilot.io/v1/servers -u KEY
</code></pre>
<p>I'm trying to convert it to Python and have tried several variations on this code.</p>
<pre><code>params = {"u" :KEY}
# params = {"u" :json.dumps(KEY)}
restUrl = "https://api.serverpilot.io/v1/servers"
response = requests.get(restUrl, data=params, headers=headers)
parsed = json.loads(response.content)
print params
print response
print json.dumps(parsed, indent=4, sort_keys=True)
</code></pre>
| -2 | 2016-08-19T16:36:48Z | 39,044,076 | <p>If you check the documentation for curl, you'll see that -u specifies a user. <a href="http://linux.die.net/man/1/curl" rel="nofollow">http://linux.die.net/man/1/curl</a></p>
<p>You can use the verbose options of curl to get a printout of the request being made.</p>
<p>If you checkout the requests documentation, you'll see that it supports different auth methods through the auth keyword parameter. <a href="http://docs.python-requests.org/en/master/user/authentication/" rel="nofollow">http://docs.python-requests.org/en/master/user/authentication/</a> </p>
<p>Essentially, your username (or key code) should not be a GET parameter, it is a different portion of the HTTP request.</p>
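For illustration (an addition, not part of the original answer): `curl -u KEY` sends HTTP Basic auth, i.e. an `Authorization` header built from `base64(user:password)`, which is exactly what requests' `auth=(KEY, '')` produces for you. The header value can be reconstructed with only the standard library:

```python
import base64

def basic_auth_header(username, password=''):
    """Reproduce the Authorization header value that `curl -u` sends."""
    token = base64.b64encode(('%s:%s' % (username, password)).encode()).decode()
    return 'Basic ' + token

# requests equivalent (not executed here; needs network access and a real key):
# response = requests.get('https://api.serverpilot.io/v1/servers', auth=(KEY, ''))
```

With an empty password, `curl -u KEY` and `auth=(KEY, '')` both encode `KEY:`.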
| 1 | 2016-08-19T16:44:50Z | [
"python",
"curl",
"python-requests"
] |
How to max out bandwidth using concurrent requests? | 39,043,948 | <p>I want to send approximately 6e6 POST requests to a web server. Content is fetched only in the absence of a redirect status code.
The problem arises when traversing a stretch of data that produces redirects; the bandwidth use is very low! (Around 10% of the available bandwidth.)</p>
<p>I was using the <code>multiprocessing.dummy</code> module first, then switched to <code>asyncio</code>, but even then the requests don't utilize the whole bandwidth.</p>
<h3>Note</h3>
<p>Even though <a href="http://stackoverflow.com/questions/23318419/how-can-i-effectively-max-out-concurrent-http-requests">this</a> is exactly the problem, I don't understand Go, so I am asking here for a solution in Python.
<a href="http://stackoverflow.com/questions/38831322/making-1-milion-requests-with-aiohttp-asyncio-literally?rq=1">This</a> is not the question I want to ask; I get around that problem by working with a subset of the data at a time.</p>
| 0 | 2016-08-19T16:37:15Z | 39,044,081 | <p>It may not even be connected with programming.
6e6 requests is a lot, so throughput could be limited by the hardware itself, even a weak network card.
One solution is to run a stress-test utility to determine whether your hardware can send that number of requests per second.
For example, use the <code>ab</code> (ApacheBench) utility, like:
<code>ab -k -n 6000000 -c 1000 http://your-site.com</code></p>
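On the software side, the usual pattern for this (a sketch I'm adding, not part of the original answer) is bounded concurrency: run tasks through a semaphore so a fixed number of requests are in flight at any moment, rather than scheduling all 6e6 at once. Here `asyncio.sleep` stands in for the real aiohttp call:

```python
import asyncio

async def fetch(url, semaphore):
    async with semaphore:
        # a real implementation would await an aiohttp request here
        await asyncio.sleep(0)
        return url

async def main(urls, limit=1000):
    # at most `limit` coroutines pass the semaphore at the same time
    semaphore = asyncio.Semaphore(limit)
    tasks = [fetch(u, semaphore) for u in urls]
    return await asyncio.gather(*tasks)

loop = asyncio.new_event_loop()
results = loop.run_until_complete(main(['url-%d' % i for i in range(50)]))
loop.close()
```

Tuning `limit` up or down is how you trade bandwidth utilization against memory and file-descriptor pressure.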
| 1 | 2016-08-19T16:45:06Z | [
"python",
"python-3.5",
"python-multithreading",
"python-asyncio",
"aiohttp"
] |
Django - Looping through two queries for the same data, and adding cost field for each | 39,044,020 | <p>It's likely the title could be worded better, but I'm struggling with it.</p>
<p>Basically, I am trying to tally up the cost of circuits per site from my models. The circuit model references the site model via a foreign key, as per below:</p>
<p>models:</p>
<pre><code>class ShowroomConfigData(models.Model):
location = models.CharField(max_length=50)
class CircuitInfoData(models.Model):
showroom_config_data = models.ForeignKey(ShowroomConfigData,verbose_name="Install Showroom")
major_site_info = models.ForeignKey(MajorSiteInfoData,verbose_name="Install Site")
circuit_type = models.CharField(max_length=100,choices=settings.CIRCUIT_CHOICES)
circuit_speed = models.IntegerField(blank=True)
cost_per_month = models.DecimalField(decimal_places=2,max_digits=8)
</code></pre>
<p>I know this probably could be done with queries, but I tried that in a previous question and it seems I've hit a bug, so I'm trying to do it manually.</p>
<p>sample data:</p>
<pre><code>site a | 1
site a | 2
site a | 5
site b | 100
site b | 2
site d | 666
site d | 1
</code></pre>
<p>So I want to produce:</p>
<p>site a | 8
site b | 102
site d | 667</p>
<p>I tried this as a test:</p>
<pre><code>circuits = CircuitInfoData.objects.all()
showrooms = ShowroomConfigData.objects.only('location')
for sdata in showrooms:
for cdata in circuits:
while cdata.showroom_config_data.location == sdata.location:
print sdata.location
print cdata.cost
</code></pre>
<p>This just churned out "site a" and 8 over and over, so I don't know how I should go about this instead.</p>
<p>Thanks</p>
| 0 | 2016-08-19T16:41:20Z | 39,044,320 | <p>The most efficient way to do this is using queries. This can be accomplished using <a href="https://docs.djangoproject.com/en/1.10/topics/db/aggregation/" rel="nofollow">annotations</a>, in the following way:</p>
<pre><code>from django.db.models import Sum
CircuitInfoData.objects.values("showroom_config_data__location") \
.annotate(cost=Sum("cost_per_month"))
</code></pre>
<p>E.g. this will return data in the form</p>
<pre><code>[{'showroom_config_data__location': u'site a', 'cost': Decimal('30.00')},
{'showroom_config_data__location': u'site b', 'cost': Decimal('5.00')}]
</code></pre>
<p>You can then format this output</p>
<pre><code>for entry in output:
print entry["showroom_config_data__location"], " | ", entry["cost"]
</code></pre>
<p>To get</p>
<pre><code>site a | 30.00
site b | 5.00
</code></pre>
| 1 | 2016-08-19T17:00:46Z | [
"python",
"django"
] |
Django - Looping through two queries for the same data, and adding cost field for each | 39,044,020 | <p>It's likely the title could be worded better, but I'm struggling with it.</p>
<p>Basically, I am trying to tally up the cost of circuits per site from my models. The circuit model references the site model via a foreign key, as per below:</p>
<p>models:</p>
<pre><code>class ShowroomConfigData(models.Model):
location = models.CharField(max_length=50)
class CircuitInfoData(models.Model):
showroom_config_data = models.ForeignKey(ShowroomConfigData,verbose_name="Install Showroom")
major_site_info = models.ForeignKey(MajorSiteInfoData,verbose_name="Install Site")
circuit_type = models.CharField(max_length=100,choices=settings.CIRCUIT_CHOICES)
circuit_speed = models.IntegerField(blank=True)
cost_per_month = models.DecimalField(decimal_places=2,max_digits=8)
</code></pre>
<p>I know this probably could be done with queries, but I tried that in a previous question and it seems I've hit a bug, so I'm trying to do it manually.</p>
<p>sample data:</p>
<pre><code>site a | 1
site a | 2
site a | 5
site b | 100
site b | 2
site d | 666
site d | 1
</code></pre>
<p>So I want to produce:</p>
<p>site a | 8
site b | 102
site d | 667</p>
<p>I tried this as a test:</p>
<pre><code>circuits = CircuitInfoData.objects.all()
showrooms = ShowroomConfigData.objects.only('location')
for sdata in showrooms:
for cdata in circuits:
while cdata.showroom_config_data.location == sdata.location:
print sdata.location
print cdata.cost
</code></pre>
<p>This just churned out "site a" and 8 over and over, so I don't know how I should go about this instead.</p>
<p>Thanks</p>
| 0 | 2016-08-19T16:41:20Z | 39,044,448 | <p>What if you just used a dictionary? Each circuit will have access to the showroom, right? So if you build a dictionary keyed by location, you can check whether the location is already a key and either add to the stored value or create it. Here is an example of this being done.</p>
<pre><code>my_dictionary = dict()
for circuit in circuits:
if circuit.showroom_config_data.location in my_dictionary:
my_dictionary[circuit.showroom_config_data.location] += circuit.cost_per_month
else:
my_dictionary[circuit.showroom_config_data.location] = circuit.cost_per_month
</code></pre>
<p>May not work exactly but the concept behind the code will give you the desired output you are looking for.</p>
| 0 | 2016-08-19T17:09:08Z | [
"python",
"django"
] |
Pass a closure from Cython to C++ | 39,044,063 | <p>I have a C++ function that accepts a callback, like this:</p>
<pre><code>void func(std::function<void(A, B)> callback) { ... }
</code></pre>
<p>I want to call this function from Cython by giving it a closure, i.e. something I would have done with a lambda if I was calling it from C++. If this was a C function, it would have some extra <code>void*</code> arguments:</p>
<pre><code>typedef void(*callback_t)(int, int, void*);
void func(callback_t callback, void *user_data) {
callback(1, 2, user_data);
}
</code></pre>
<p>and then I would just pass <code>PyObject*</code> as <code>user_data</code> (there is a more detailed <a href="https://github.com/cython/cython/tree/master/Demos/callback" rel="nofollow">example here</a>).</p>
<p>Is there way to do this more in C++ way, without having to resort to explicit <code>user_data</code>?</p>
| 2 | 2016-08-19T16:44:15Z | 39,052,204 | <p>What I believe you're aiming to do is pass a callable Python object to something accepting a <code>std::function</code>. You need to create a bit of C++ code to make it happen, but it's reasonably straightforward.</p>
<p>Start by defining "accepts_std_function.hpp", kept as simple as possible to provide an illustrative example:</p>
<pre><code>#include <functional>
#include <string>
inline void call_some_std_func(std::function<void(int,const std::string&)> callback) {
callback(5,std::string("hello"));
}
</code></pre>
<p>The trick is then to create a wrapper class that holds a <code>PyObject*</code> and defines <code>operator()</code>. Defining <code>operator()</code> allows it to be converted to a <code>std::function</code>. Most of the class is just refcounting. "py_obj_wrapper.hpp":</p>
<pre><code>#include <Python.h>
#include <string>
#include "call_obj.h" // cython helper file
class PyObjWrapper {
public:
// constructors and destructors mostly do reference counting
PyObjWrapper(PyObject* o): held(o) {
Py_XINCREF(o);
}
PyObjWrapper(const PyObjWrapper& rhs): PyObjWrapper(rhs.held) { // C++11 onwards only
}
PyObjWrapper(PyObjWrapper&& rhs): held(rhs.held) {
rhs.held = 0;
}
// need no-arg constructor to stack allocate in Cython
PyObjWrapper(): PyObjWrapper(nullptr) {
}
~PyObjWrapper() {
Py_XDECREF(held);
}
PyObjWrapper& operator=(const PyObjWrapper& rhs) {
PyObjWrapper tmp = rhs;
return (*this = std::move(tmp));
}
PyObjWrapper& operator=(PyObjWrapper&& rhs) {
held = rhs.held;
rhs.held = 0;
return *this;
}
void operator()(int a, const std::string& b) {
if (held) { // nullptr check
call_obj(held,a,b); // note, no way of checking for errors until you return to Python
}
}
private:
PyObject* held;
};
</code></pre>
<p>This file uses a very short Cython file to do the conversions from C++ types to Python types. "call_obj.pyx":</p>
<pre><code>from libcpp.string cimport string
cdef public void call_obj(obj, int a, const string& b):
obj(a,b)
</code></pre>
<p>You then just need to create the Cython code wraps these types. Compile this module and call <code>test_func</code> to run this. ("simple_version.pyx":)</p>
<pre><code>cdef extern from "py_obj_wrapper.hpp":
cdef cppclass PyObjWrapper:
PyObjWrapper()
PyObjWrapper(object) # define a constructor that takes a Python object
# note - doesn't match c++ signature - that's fine!
cdef extern from "accepts_std_func.hpp":
void call_some_std_func(PyObjWrapper) except +
# here I lie about the signature
# because C++ does an automatic conversion to function pointer
# for classes that define operator(), but Cython doesn't know that
def example(a,b):
print(a,b)
def test_call():
cdef PyObjWrapper f = PyObjWrapper(example)
call_some_std_func(f)
</code></pre>
<hr>
<p>The above version works but is somewhat limited in that if you want to do this with a different <code>std::function</code> specialization you need to rewrite some of it (and the conversion from C++ to Python types doesn't naturally lend itself to a template implementation). One easy way round this is to use the Boost Python library <code>object</code> class, which has a templated <code>operator()</code>. This comes at the cost of introducing an extra library dependency.</p>
<p>First defining the header "boost_wrapper.hpp" to simplify <a href="http://www.boost.org/doc/libs/1_60_0/libs/python/doc/html/tutorial/tutorial/object.html#tutorial.object.creating_python_object" rel="nofollow">the conversion from <code>PyObject*</code> to <code>boost::python::object</code></a></p>
<pre><code>#include <boost/python/object.hpp>
inline boost::python::object get_as_bpo(PyObject* o) {
return boost::python::object(boost::python::handle<>(boost::python::borrowed(o)));
}
</code></pre>
<p>You then just need to Cython code to wrap this class ("boost_version.pyx"). Again, call <code>test_func</code></p>
<pre><code>cdef extern from "boost_wrapper.hpp":
cdef cppclass bpo "boost::python::object":
# manually set name (it'll conflict with "object" otherwise
bpo()
bpo get_as_bpo(object)
cdef extern from "accepts_std_func.hpp":
void call_some_std_func(bpo) except + # again, lie about signature
def example(a,b):
print(a,b)
def test_call():
cdef bpo f = get_as_bpo(example)
call_some_std_func(f)
</code></pre>
<hr>
<p>A "setup.py"</p>
<pre><code>from distutils.core import setup, Extension
from Cython.Build import cythonize
extensions = [
    Extension(
        "simple_version",  # the extension name
        sources=["simple_version.pyx", "call_obj.pyx"],
        language="c++",  # generate and compile C++ code
    ),
    Extension(
        "boost_version",  # the extension name
        sources=["boost_version.pyx"],
        libraries=['boost_python'],
        language="c++",  # generate and compile C++ code
    )
]

setup(ext_modules = cythonize(extensions))
</code></pre>
<hr>
<p>(A final option is to use <code>ctypes</code> to generate a C function pointer from a Python callable. See <a href="http://stackoverflow.com/questions/34878942/using-function-pointers-to-methods-of-classes-without-the-gil/34900829#34900829">Using function pointers to methods of classes without the gil</a> (bottom half of answer) and <a href="http://osdir.com/ml/python-cython-devel/2009-10/msg00202.html" rel="nofollow">http://osdir.com/ml/python-cython-devel/2009-10/msg00202.html</a>. I'm not going to go into detail about this here.)</p>
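<p>For completeness, the core of that <code>ctypes</code> trick looks roughly like this. This is a minimal sketch: the <code>(int, int)</code> signature is purely illustrative, and in the real use case you would hand the resulting function pointer to C/C++ code rather than call it from Python:</p>

```python
import ctypes

# Build a C function pointer type: return type first, then argument types.
# The signature here is just an illustration.
CALLBACK = ctypes.CFUNCTYPE(None, ctypes.c_int, ctypes.c_int)

results = []

def example(a, b):
    results.append((a, b))

# Wrapping a Python callable in the CFUNCTYPE produces an object that is
# callable through the C calling convention; ctypes handles re-acquiring
# the GIL when the callback is invoked.
c_callable = CALLBACK(example)
c_callable(1, 2)
print(results)  # [(1, 2)]
```

<p>Keep a reference to <code>c_callable</code> alive for as long as C code may invoke it, otherwise the trampoline is garbage-collected.</p>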
| 1 | 2016-08-20T08:40:09Z | [
"python",
"c++",
"cython"
] |
How to create a SkewNormal stochastic in pymc3? | 39,044,126 | <p>How would one use <code>DensityDist</code> to create a SkewNormal distribution for pymc3? There are several dead links to github pages explaining how to create custom Stochastic that are floating around.</p>
<p><code>exp</code> is implemented in theano, but I don't think the Normal Cumulative distribution or the <code>erf</code> functions are.</p>
<p>I presume I couldn't just use something like:</p>
<pre><code>F = DensityDist('F', lambda value: pymc.skew_normal_like(value, um, std, a), shape = N)
</code></pre>
<p>Where I import the skew_normal_like distribution from pymc2?</p>
| 0 | 2016-08-19T16:47:34Z | 39,098,684 | <p>The <code>erf</code> function is implemented in Theano. Try this
</p>
<pre><code>skn = pm.DensityDist('skn', lambda value:
    tt.log(1 + tt.erf(((value - mu) * tt.sqrt(tau) * alpha) / tt.sqrt(2)))
    + (-tau * (value - mu)**2 + tt.log(tau / np.pi / 2.)) / 2.)
</code></pre>
<p>where:
</p>
<pre><code>tau = sd**-2
</code></pre>
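<p>For reference, the same log-density can be sketched in plain Python (using <code>math.erf</code> in place of <code>tt.erf</code>). Note that when <code>alpha = 0</code> the erf term vanishes and the expression reduces to the ordinary Normal(mu, sd) log-density:</p>

```python
import math

def skew_normal_logp(value, mu, sd, alpha):
    # Same expression as the Theano lambda above, with tau = sd**-2
    tau = sd ** -2
    return (math.log(1 + math.erf(((value - mu) * math.sqrt(tau) * alpha) / math.sqrt(2)))
            + (-tau * (value - mu) ** 2 + math.log(tau / math.pi / 2.0)) / 2.0)

# alpha = 0 gives the plain normal log-pdf; alpha != 0 skews the density
print(skew_normal_logp(0.5, 0.0, 1.0, 0.0))
print(skew_normal_logp(0.5, 0.0, 1.0, 3.0))
```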
<p><strong>Update:</strong></p>
<p>The SkewNormal is now included as part of the ready-to-use PyMC3's distributions.</p>
| 1 | 2016-08-23T10:29:44Z | [
"python",
"theano",
"pymc",
"pymc3"
] |
Coding style (PEP8) - Module level "dunders" | 39,044,343 | <p><em>Definition of "Dunder" (<strong>D</strong>ouble <strong>under</strong>score): <a href="http://www.urbandictionary.com/define.php?term=Dunder" rel="nofollow">http://www.urbandictionary.com/define.php?term=Dunder</a></em></p>
<hr>
<p>I have a question according the placement of module level "dunders" (like <code>__all__</code>, <code>__version__</code>, <code>__author__</code> etc.) in Python code.</p>
<p>The question came up to me while reading through <a href="https://www.python.org/dev/peps/pep-0008" rel="nofollow">PEP8</a> and seeing <a href="http://stackoverflow.com/questions/24741141/pycharm-have-author-appear-before-imports">this</a> Stack Overflow question.</p>
<p>The accepted answer says:</p>
<blockquote>
<p><code>__author__</code> is a global "variable" and should therefore appear below the imports.</p>
</blockquote>
<p>But in the PEP8 section <a href="https://www.python.org/dev/peps/pep-0008/#module-level-dunder-names" rel="nofollow">Module level dunder names</a> I read the following:</p>
<blockquote>
<p>Module level "dunders" (i.e. names with two leading and two trailing
underscores) such as <code>__all__</code> , <code>__author__</code> , <code>__version__</code> , etc. should
be placed after the module docstring but before any import statements
except from <code>__future__</code> imports. Python mandates that future-imports
must appear in the module before any other code except docstrings.</p>
</blockquote>
<p>The authors also give a code example:</p>
<pre><code>"""This is the example module.

This module does stuff.
"""

from __future__ import barry_as_FLUFL

__all__ = ['a', 'b', 'c']
__version__ = '0.1'
__author__ = 'Cardinal Biggles'

import os
import sys
</code></pre>
<p>But when I put the above into PyCharm, I see this warning (also see the screenshot):</p>
<blockquote>
<p>PEP8: module level import not at top of file</p>
</blockquote>
<p><a href="http://i.stack.imgur.com/EWkeW.png" rel="nofollow"><img src="http://i.stack.imgur.com/EWkeW.png" alt="PyCharm ignoring PEP8?"></a></p>
<p><strong>Question: What is the correct way/place to store these variables with double underscores?</strong></p>
| 12 | 2016-08-19T17:01:55Z | 39,044,408 | <p>PEP 8 recently was <em>updated</em> to put the location before the imports. See <a href="https://hg.python.org/peps/rev/cf8e888b9555">revision cf8e888b9555</a>, committed on June 7th, 2016:</p>
<blockquote>
<p>Relax <code>__all__</code> location.</p>
<p>Put all module level dunders together in the same location, and remove
the redundant version bookkeeping information.</p>
<p>Closes #27187. Patch by Ian Lee.</p>
</blockquote>
<p>The text was <a href="https://hg.python.org/peps/rev/c451868df657">further updated the next day</a> to address the <code>from __future__ import ...</code> caveat.</p>
<p>The patch links to <a href="http://bugs.python.org/issue27187">issue #27187</a>, which in turn references <a href="https://github.com/PyCQA/pycodestyle/issues/394">this <code>pycodestyle</code> issue</a>, where it was discovered PEP 8 was unclear.</p>
<p>Before this change, as there was no clear guideline on module-level dunder globals, so PyCharm and the other answer were correct <em>at the time</em>. I'm not sure how PyCharm implements their PEP 8 checks; if they use the <a href="https://pycodestyle.readthedocs.io/en/latest/">pycodestyle project</a> (the defacto Python style checker), then I'm sure it'll be fixed automatically. Otherwise, perhaps file a bug with them to see this fixed.</p>
| 13 | 2016-08-19T17:06:14Z | [
"python",
"pycharm",
"pep8",
"pep"
] |
Why is AWS telling me BucketAlreadyExists when it doesn't? | 39,044,362 | <p>I'm in the process of automating the setup of some AWS services using AWS SDK for Python (boto3) and running into a very simple problem of creating an S3 bucket. </p>
<p>I've double-checked the following:</p>
<ul>
<li>In <code>~/.aws/credentials</code>, I have an Access Key ID and Secret Access Key set.</li>
<li><p>This access key ID/secret access key is for an account that is part of a group with the following policy attached:</p>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}
</code></pre></li>
<li>There is no existing bucket with the name I'm trying to create the bucket with</li>
</ul>
<p>Yet when I try to run this very simple operation, it fails: </p>
<pre><code>>>> import boto3
>>> client = boto3.client('s3')
>>> response = client.create_bucket('staging')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/yiqing/Repos/ansible-home/roles/osx/files/virtualenvs/obaku/lib/python2.7/site-packages/botocore/client.py", line 157, in _api_call
"%s() only accepts keyword arguments." % py_operation_name)
TypeError: create_bucket() only accepts keyword arguments.
>>> response = client.create_bucket(Bucket='staging')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/yiqing/Repos/ansible-home/roles/osx/files/virtualenvs/obaku/lib/python2.7/site-packages/botocore/client.py", line 159, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/yiqing/Repos/ansible-home/roles/osx/files/virtualenvs/obaku/lib/python2.7/site-packages/botocore/client.py", line 494, in _make_api_call
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
</code></pre>
<p>I feel like I'm missing something very silly but can't for the life of me think of what it might be or what I'm doing wrong. </p>
| 0 | 2016-08-19T17:03:21Z | 39,044,418 | <p>An S3 bucket name is globally unique across all AWS accounts, not just within your own account; the error message says as much ("The bucket namespace is shared by all users of the system").
So you need to choose a name that doesn't already exist anywhere. I recommend using a project-specific prefix.</p>
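<p>A sketch of that prefix idea (the <code>mycompany-staging</code> prefix and the random-suffix scheme are just one convention, not an AWS requirement):</p>

```python
import uuid

def unique_bucket_name(prefix):
    # S3 bucket names share one global namespace across all accounts, so
    # combine a project-specific prefix with a random suffix to make
    # collisions with other users' buckets unlikely.
    return '{}-{}'.format(prefix.lower(), uuid.uuid4().hex[:8])

name = unique_bucket_name('mycompany-staging')
# then: boto3.client('s3').create_bucket(Bucket=name)
```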
| 4 | 2016-08-19T17:07:10Z | [
"python",
"amazon-web-services",
"amazon-s3",
"boto3"
] |
How does one refresh subplots created using Python(x,y) QT Designer Matplotlib Widget? | 39,044,402 | <p>I am using the MPL widget that comes with QT in the Python(x,y) package. I am attempting to refresh or redraw my plot with new data. The method fig.canvas.draw() refreshes the main plot only. The subplots are where my issue lies. The previous subplots and everything associated with them (axes scale, annotations, etc) are left on the chart. When I refresh, new subplot data are plotted over the old data, creating quite a mess. Everything associated with the main plot gets redrawn correctly. I have tried everything that I know to try, including clf, and cla. How does one refresh subplots with the QT MPL Widget that is included in Python(x,y)?</p>
<p>Here's my code:</p>
<pre><code>def mpl_plot(self, plot_page, replot = 0):  #Data stored in lists
    if plot_page == 1:                       #Plot 1st Page
        plt = self.mplwidget.axes
        fig = self.mplwidget.figure          #Add a figure
    if plot_page == 2:                       #Plot 2nd Page
        plt = self.mplwidget_2.axes
        fig = self.mplwidget_2.figure        #Add a figure
    if plot_page == 3:                       #Plot 3rd Page
        plt = self.mplwidget_3.axes
        fig = self.mplwidget_3.figure        #Add a figure

    if replot == 1:
        #self.mplwidget_2.figure.clear()
        print replot

    par1 = fig.add_subplot(111)
    par2 = fig.add_subplot(111)
    #Add Axes
    ax1 = par1.twinx()
    ax2 = par2.twinx()

    impeller = str(self.comboBox_impellers.currentText())  #Get Impeller
    fac_curves = self.mpl_factory_specs(impeller)
    fac_lift = fac_curves[0]
    fac_power = fac_curves[1]
    fac_flow = fac_curves[2]
    fac_eff = fac_curves[3]
    fac_max_eff = fac_curves[4]
    fac_max_eff_bpd = fac_curves[5]
    fac_ranges = self.mpl_factory_ranges()
    min_range = fac_ranges[0]
    max_range = fac_ranges[1]
    #bep = fac_ranges[2]

    #Plot Chart
    plt.hold(False)  #Has to be included for multiple curves
    #Plot Factory Pressure
    plt.plot(fac_flow, fac_lift, 'b', linestyle = "dashed", linewidth = 1)
    #Plot Factory Power
    ax1.plot(fac_flow, fac_power, 'r', linestyle = "dashed", linewidth = 1)
    ax2.plot(fac_flow, fac_eff, 'g', linestyle = "dashed", linewidth = 1)

    #Move spines
    ax2.spines["right"].set_position(("outward", 25))
    self.make_patch_spines_invisible(ax2)
    ax2.spines["right"].set_visible(True)

    #Plot x axis minor tick marks
    minorLocatorx = AutoMinorLocator()
    ax1.xaxis.set_minor_locator(minorLocatorx)
    ax1.tick_params(which='both', width= 0.5)
    ax1.tick_params(which='major', length=7)
    ax1.tick_params(which='minor', length=4, color='k')

    #Plot y axis minor tick marks
    minorLocatory = AutoMinorLocator()
    plt.yaxis.set_minor_locator(minorLocatory)
    plt.tick_params(which='both', width= 0.5)
    plt.tick_params(which='major', length=7)
    plt.tick_params(which='minor', length=4, color='k')

    #Make Border of Chart White
    #Plot Grid
    plt.grid(b=True, which='both', color='k', linestyle='-')
    #set shaded Area
    plt.axvspan(min_range, max_range, facecolor='#9BE2FA', alpha=0.5)  #Yellow rectangular shaded area
    #Set Vertical Lines
    plt.axvline(fac_max_eff_bpd, color = '#69767A')

    #BEP MARKER *** Can change marker style if needed
    bep = fac_max_eff * 0.90        #bep is 90% of maximum efficiency point
    bep_corrected = bep * 0.90      # We knock off another 10% to place the arrow correctly on chart
    ax2.annotate('BEP', xy=(fac_max_eff_bpd, bep_corrected), xycoords='data',  #Subtract 2.5 shows up correctly on chart
                 xytext=(-50, 30), textcoords='offset points',
                 bbox=dict(boxstyle="round", fc="0.8"),
                 arrowprops=dict(arrowstyle="-|>",
                                 shrinkA=0, shrinkB=10,
                                 connectionstyle="angle,angleA=0,angleB=90,rad=10"),
                 )

    #Set Scales
    plt.set_ylim(0,max(fac_lift) + (max(fac_lift) * 0.40))    #Pressure
    #plt.set_xlim(0,max(fac_flow))
    ax1.set_ylim(0,max(fac_power) + (max(fac_power) * 0.40))  #Power
    ax2.set_ylim(0,max(fac_eff) + (max(fac_eff) * 0.40))      #Effiency

    # Set Axes Colors
    plt.tick_params(axis='y', colors='b')
    ax1.tick_params(axis='y', colors='r')
    ax2.tick_params(axis='y', colors='g')

    # Set Chart Labels
    plt.set_xlabel("BPD")
    plt.set_ylabel("Feet" , color = 'b')

    #To redraw plot
    fig.canvas.draw()
</code></pre>
| 0 | 2016-08-19T17:05:44Z | 39,053,082 | <p>From the example it is not clear how <code>self.mpl_widget.axes</code> is created. It is not used and as explained below, drawing to an old reference of an axes instance might cause the problem.</p>
<p>Because <code>mpl_widget.axes</code> is not used at all, I would suggest not holding a reference to the axes. Also, the use of <code>twinx</code> in the example is not correct. The following could work:</p>
<pre><code>if plot_page == 1:   #Plot 1st Page
    widget = self.mplwidget
if plot_page == 2:   #Plot 2nd Page
    widget = self.mplwidget_2
if plot_page == 3:   #Plot 3rd Page
    widget = self.mplwidget_3

if replot == 1:
    widget.figure.clear()

ax1 = widget.figure.add_subplot(111)
ax2 = ax1.twinx()
...
widget.figure.canvas.draw()
</code></pre>
<p>Another issue is that <code>self.mpl_widgetXX.axes</code> is assigned to <code>plt</code> and further below in the example <code>plt</code> is used to draw the new data. Because <code>self.mpl_widgetXX.axes</code> is not updated to contain one of the newly created axes instances, the example code draws to an old axes, thus potentially leading to the effects described in the question.
You should only use <code>ax1</code> and <code>ax2</code> for plotting and tick setup and <code>widget.figure</code> to access the figure instance.</p>
| 1 | 2016-08-20T10:31:21Z | [
"python",
"python-2.7",
"matplotlib",
"matplotlib-widget"
] |
List comprehension works but not for loop - why? | 39,044,403 | <p>I'm a bit annoyed with myself because I can't understand why one solution to a problem worked but another didn't. As in, it points to a deficient understanding of (basic) pandas on my part, and that makes me mad!</p>
<p>Anyway, my problem was simple: I had a list of 'bad' values ('bad_index'); these corresponded to row indexes on a dataframe ('data_clean1') for which I wanted to delete the corresponding rows. However, as the values will change with each new dataset, I didn't want to plug the bad values directly into the code. Here's what I did first:</p>
<pre><code>bad_index = [2, 7, 8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 29]
for i in bad_index:
    dataclean2 = dataclean1.drop([i]).reset_index(level = 0, drop = True)
</code></pre>
<p>But this didn't work; the data_clean2 remained the exact same as data_clean1. My second idea was to use list comprehensions (as below); this worked out fine.</p>
<pre><code>bad_index = [2, 7, 8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 29]
data_clean2 = data_clean1.drop([x for x in bad_index]).reset_index(level = 0, drop = True)
</code></pre>
<p>Now, why did the list comprehension method work and not the 'for' loop? I've been coding for a few months, and I feel that I shouldn't be making these kinds of errors.</p>
<p>Thanks!</p>
| 5 | 2016-08-19T17:05:46Z | 39,044,503 | <p><strong>EDIT: it turns out this is not your problem ... but if you did not have the problem mentioned in the other answer by Deepspace, you would have run into this one</strong></p>
<pre><code>for i in bad_index:
    dataclean2 = dataclean1.drop([i]).reset_index(level = 0, drop = True)
</code></pre>
<p>imagine your bad index is <code>[1,2,3]</code> and your dataclean is <code>[4,5,6,7,8]</code> </p>
<p><strong>now let's step through what actually happens</strong></p>
<p>initial: <code>dataclean == [4,5,6,7,8]</code></p>
<p>loop0 : i == 1 => drop index 1 ==>dataclean = <code>[4,6,7,8]</code></p>
<p>loop1 : i == 2 => drop index 2 ==> dataclean = <code>[4,6,8]</code></p>
<p>loop2 : i ==3 ==> drop index 3 !!!! uh oh there is no index 3</p>
<hr>
<p>You could, I guess, do this instead:</p>
<pre><code>for i in reversed(bad_index):
...
</code></pre>
<p>this way if you remove index3 first it will not affect index 1 and 2</p>
<p><strong>but in general you should not mutate a list/dict as you iterate over it</strong></p>
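<p>To see concretely why reversing helps when you really do delete in place, here is the same idea with plain lists (no pandas needed):</p>

```python
data = [4, 5, 6, 7, 8]
bad_index = [1, 2, 3]

# Deleting the highest indexes first means the positions of the
# not-yet-deleted lower indexes never shift underneath you.
for i in sorted(bad_index, reverse=True):
    del data[i]

print(data)  # [4, 8]
```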
| 1 | 2016-08-19T17:12:39Z | [
"python",
"pandas",
"for-loop",
"indexing",
"list-comprehension"
] |
List comprehension works but not for loop - why? | 39,044,403 | <p>I'm a bit annoyed with myself because I can't understand why one solution to a problem worked but another didn't. As in, it points to a deficient understanding of (basic) pandas on my part, and that makes me mad!</p>
<p>Anyway, my problem was simple: I had a list of 'bad' values ('bad_index'); these corresponded to row indexes on a dataframe ('data_clean1') for which I wanted to delete the corresponding rows. However, as the values will change with each new dataset, I didn't want to plug the bad values directly into the code. Here's what I did first:</p>
<pre><code>bad_index = [2, 7, 8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 29]
for i in bad_index:
    dataclean2 = dataclean1.drop([i]).reset_index(level = 0, drop = True)
</code></pre>
<p>But this didn't work; the data_clean2 remained the exact same as data_clean1. My second idea was to use list comprehensions (as below); this worked out fine.</p>
<pre><code>bad_index = [2, 7, 8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24, 29]
data_clean2 = data_clean1.drop([x for x in bad_index]).reset_index(level = 0, drop = True)
</code></pre>
<p>Now, why did the list comprehension method work and not the 'for' loop? I've been coding for a few months, and I feel that I shouldn't be making these kinds of errors.</p>
<p>Thanks!</p>
| 5 | 2016-08-19T17:05:46Z | 39,044,516 | <p><code>data_clean1.drop([x for x in bad_index]).reset_index(level = 0, drop = True)</code> is equivalent to simply passing the <code>bad_index</code> list to <code>drop</code>:</p>
<p><code>data_clean1.drop(bad_index).reset_index(level = 0, drop = True)</code></p>
<p><code>drop</code> accepts a list, and drops every index present in the list.</p>
<p>Your explicit <code>for</code> loop didn't work because in every iteration you simply dropped a different index from the <code>dataclean1</code> dataframe without saving the intermediate dataframes, so by the last iteration <code>dataclean2</code> was simply the result of executing<br>
<code>dataclean2 = dataclean1.drop(29).reset_index(level = 0, drop = True)</code></p>
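<p>The same effect can be reproduced with plain lists, which makes it easy to see why only the last iteration's result survives (a sketch, no pandas needed):</p>

```python
data = [10, 20, 30, 40, 50]
bad_index = [1, 3]

# The buggy pattern: every iteration starts again from the ORIGINAL data,
# so each assignment throws away the previous iteration's result.
for i in bad_index:
    result = [v for j, v in enumerate(data) if j != i]
print(result)  # [10, 20, 30, 50] -- only index 3 was dropped

# The working pattern: remove all bad indexes in a single pass, which is
# what data_clean1.drop(bad_index) does.
result = [v for j, v in enumerate(data) if j not in bad_index]
print(result)  # [10, 30, 50]
```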
| 8 | 2016-08-19T17:13:29Z | [
"python",
"pandas",
"for-loop",
"indexing",
"list-comprehension"
] |
Use qcut pandas for multiple variable categorizing | 39,044,434 | <p>I am trying to use two values from two columns from a dataframe and perform <code>qcut</code> categorization. </p>
<p>Single-value categorizing is quite simple, but categorizing two variables as a pair (one vs. the other) is what I am trying to get. </p>
<p>Input: </p>
<pre><code>date,startTime,endTime,day,c_count,u_count
2004-01-05,22:00:00,23:00:00,Mon,18944,790
2004-01-05,23:00:00,00:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
2004-01-06,03:00:00,04:00:00,Tue,13589,757
2004-01-06,04:00:00,05:00:00,Tue,16053,735
2004-01-06,05:00:00,06:00:00,Tue,11440,636
2004-01-06,06:00:00,07:00:00,Tue,5972,513
2004-01-06,07:00:00,08:00:00,Tue,3424,382
2004-01-06,08:00:00,09:00:00,Tue,2696,303
2004-01-06,09:00:00,10:00:00,Tue,2350,262
2004-01-06,10:00:00,11:00:00,Tue,2309,254
</code></pre>
<p>This is the code in pure Python, but I am trying to do the same in pandas. </p>
<pre><code>for row in csv.reader(inp):
    if int(row[1]) > (0.80*c_count) and int(row[2]) > (0.80*u_count):
        val = 'highly active'
    elif int(row[1]) >= (0.60*c_count) and int(row[2]) <= (0.60*u_count):
        val = 'active'
    elif int(row[1]) <= (0.40*c_count) and int(row[2]) >= (0.40*u_count):
        val = 'event based'
    elif int(row[1]) < (0.20*c_count) and int(row[2]) < (0.20*u_count):
        val = 'situational'
    else:
        val = 'viewers'
</code></pre>
<p>What I am trying to find:</p>
<ol>
<li><code>c_count</code> and <code>u_count</code> both </li>
<li>Like in the above code <code>c_count</code> vs <code>u_count</code> </li>
</ol>
| 1 | 2016-08-19T17:08:07Z | 39,045,050 | <p>You can create a Series for each quantile group:</p>
<pre><code>q = df[['c_count', 'u_count']].apply(lambda x: pd.qcut(x, np.linspace(0, 1, 6),
                                                       labels=np.arange(5)))
q
Out:
    c_count  u_count
0         4        4
1         3        3
2         3        2
3         4        4
4         4        4
5         2        3
6         2        2
7         2        2
8         1        1
9         1        1
10        0        0
11        0        0
12        0        0
</code></pre>
<p>0 is for the first 20%, 1 is for 20%-40% and goes on. </p>
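<p>To make the bucketing concrete, here is a rough rank-based sketch of what those quintile labels mean. Note this is only an approximation of <code>pd.qcut</code>, which uses interpolated quantile edges, so ties and edge cases can come out slightly differently:</p>

```python
def quintile_labels(values):
    # Rank each value, then map ranks 0..n-1 onto the five buckets 0..4:
    # bucket 0 holds roughly the lowest 20% of values, bucket 4 the top 20%.
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    labels = [0] * n
    for rank, idx in enumerate(order):
        labels[idx] = min(rank * 5 // n, 4)
    return labels

print(quintile_labels([18944, 17534, 17262, 19072, 2350]))  # [3, 2, 1, 4, 0]
```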
<p>Now the if logic works a little differently here. For the else part, first populate the column:</p>
<pre><code>df['val'] = 'viewers'
</code></pre>
<p>Anything we do afterwards will overwrite the values in this column wherever its condition is satisfied, so a later operation takes precedence over an earlier one. From bottom to top:</p>
<pre><code>df.ix[(q['c_count'] < 1) & (q['u_count'] < 1), 'val'] = 'situational'
df.ix[(q['c_count'] < 2) & (q['u_count'] > 1), 'val'] = 'event_based'
df.ix[(q['c_count'] > 2) & (q['u_count'] < 2), 'val'] = 'active'
df.ix[(q['c_count'] > 3) & (q['u_count'] > 3), 'val'] = 'highly active'
</code></pre>
<p>The first condition checks whether both c_count and u_count are in the first 20%. If so, changes the corresponding rows at 'val' column to situational. The remaining ones work in a similar manner. You might need to adjust comparison operators a little bit (greater vs greater than or equal to).</p>
| 1 | 2016-08-19T17:50:22Z | [
"python",
"validation",
"csv",
"pandas"
] |
I get 'ImportError: No module named web' despite the fact that it is installed | 39,044,472 | <p>I would like to run a simple 'Hello world' app. Every time I run it I get </p>
<pre><code>'ImportError: No module named web'
</code></pre>
<p>I installed web.py using pip and using easy_install several times. I tried uninstalling it and installing again. I tried installing it as sudo. Nothing seems to work. I use OS X. </p>
<p>Code of the application:</p>
<pre><code>import web

urls = (
    '/', 'index'
)

app = web.application(urls, globals())

class index:
    def GET(self):
        greeting = "Hello World"
        return greeting

if __name__ == "__main__":
    app.run()
</code></pre>
<p>I tried to run this app using this command:</p>
<pre><code>python /Users/mptorz/webversion/bin/app.py http://0.0.0.0:8080/
</code></pre>
<p>However I know that this is not an issue of the code, because I am basically doing this course <a href="http://learnpythonthehardway.org/book/ex50.html" rel="nofollow">http://learnpythonthehardway.org/book/ex50.html</a>.</p>
<p>I thought this might be interesting: I have just tried again to reinstall web.py and got this error:</p>
<pre><code>pc7:~ mptorz$ pip uninstall web.py
Uninstalling web.py-0.40.dev0:
/usr/local/lib/python2.7/site-packages/web.py-0.40.dev0-py2.7.egg
Proceed (y/n)? y
Exception:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/usr/local/lib/python2.7/site-packages/pip/commands/uninstall.py", line 76, in run
requirement_set.uninstall(auto_confirm=options.yes)
File "/usr/local/lib/python2.7/site-packages/pip/req/req_set.py", line 336, in uninstall
req.uninstall(auto_confirm=auto_confirm)
File "/usr/local/lib/python2.7/site-packages/pip/req/req_install.py", line 742, in uninstall
paths_to_remove.remove(auto_confirm)
File "/usr/local/lib/python2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/usr/local/lib/python2.7/site-packages/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 300, in move
rmtree(src)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 247, in rmtree
rmtree(fullname, ignore_errors, onerror)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 252, in rmtree
onerror(os.remove, fullname, sys.exc_info())
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 250, in rmtree
os.remove(fullname)
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/site-packages/web.py-0.40.dev0-py2.7.egg/EGG-INFO/dependency_links.txt'
</code></pre>
<p>I tried doing the same with sudo and I got this error.</p>
<pre><code>pc7:~ mptorz$ sudo pip uninstall web.py
Password:
The directory '/Users/mptorz/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Uninstalling web.py-0.40.dev0:
/usr/local/lib/python2.7/site-packages/web.py-0.40.dev0-py2.7.egg
Proceed (y/n)? y
Successfully uninstalled web.py-0.40.dev0
The directory '/Users/mptorz/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
</code></pre>
<p>Then I tried to do this:</p>
<pre><code>pc7:~ mptorz$ sudo pip install web.py
The directory '/Users/mptorz/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/mptorz/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting web.py
Downloading web.py-0.38.tar.gz (91kB)
    100% |████████████████████████████████| 92kB 199kB/s
Installing collected packages: web.py
Running setup.py install for web.py ... done
Successfully installed web.py-0.38
</code></pre>
<p>When I run the app I still get the same error 'ImportError: No module named web'.</p>
<p>I was asked to add the result of <code>python -c "print(__import__('sys').path)"</code>:</p>
<pre><code>pc7:~ mptorz$ python -c "print(__import__('sys').path)"
['', '/Library/Python/2.7/site-packages/pip-8.1.2-py2.7.egg', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python27.zip', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload', '/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC', '/Library/Python/2.7/site-packages']
</code></pre>
<p>Someone also suggested trying <code>python -m pip install web.py</code>.
It resulted in this error:</p>
<pre><code>pc7:~ mptorz$ python -m pip install web.py
Collecting web.py
Downloading web.py-0.38.tar.gz (91kB)
    100% |████████████████████████████████| 92kB 215kB/s
Installing collected packages: web.py
Running setup.py install for web.py ... error
Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/w6/f08g43wn1zg6ny5b_lq414cr0000gn/T/pip-build-pn7SCD/web.py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/w6/f08g43wn1zg6ny5b_lq414cr0000gn/T/pip-tBblX5-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/web
copying web/__init__.py -> build/lib/web
copying web/application.py -> build/lib/web
copying web/browser.py -> build/lib/web
copying web/db.py -> build/lib/web
copying web/debugerror.py -> build/lib/web
copying web/form.py -> build/lib/web
copying web/http.py -> build/lib/web
copying web/httpserver.py -> build/lib/web
copying web/net.py -> build/lib/web
copying web/python23.py -> build/lib/web
copying web/session.py -> build/lib/web
copying web/template.py -> build/lib/web
copying web/test.py -> build/lib/web
copying web/utils.py -> build/lib/web
copying web/webapi.py -> build/lib/web
copying web/webopenid.py -> build/lib/web
copying web/wsgi.py -> build/lib/web
creating build/lib/web/wsgiserver
copying web/wsgiserver/__init__.py -> build/lib/web/wsgiserver
copying web/wsgiserver/ssl_builtin.py -> build/lib/web/wsgiserver
copying web/wsgiserver/ssl_pyopenssl.py -> build/lib/web/wsgiserver
creating build/lib/web/contrib
copying web/contrib/__init__.py -> build/lib/web/contrib
copying web/contrib/template.py -> build/lib/web/contrib
running install_lib
creating /Library/Python/2.7/site-packages/web
error: could not create '/Library/Python/2.7/site-packages/web': Permission denied
----------------------------------------
Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/w6/f08g43wn1zg6ny5b_lq414cr0000gn/T/pip-build-pn7SCD/web.py/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/w6/f08g43wn1zg6ny5b_lq414cr0000gn/T/pip-tBblX5-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/w6/f08g43wn1zg6ny5b_lq414cr0000gn/T/pip-build-pn7SCD/web.py/
</code></pre>
<p>After doing everything I was advised here, I tried running the app again and I have got this:</p>
<pre><code>pc7:~ mptorz$ python /Users/mptorz/webversion/bin/app.py http://0.0.0.0:8080/
Traceback (most recent call last):
File "/Users/mptorz/webversion/bin/app.py", line 15, in <module>
app.run()
File "/Library/Python/2.7/site-packages/web/application.py", line 313, in run
return wsgi.runwsgi(self.wsgifunc(*middleware))
File "/Library/Python/2.7/site-packages/web/wsgi.py", line 55, in runwsgi
server_addr = validip(listget(sys.argv, 1, ''))
File "/Library/Python/2.7/site-packages/web/net.py", line 125, in validip
port = int(port)
ValueError: invalid literal for int() with base 10: '//0.0.0.0:8080/'
</code></pre>
<p>I have just tried the home-brew solution suggested by @PhilipTzou. After I did that the output of running the app is:</p>
<pre><code>pc7:~ mptorz$ python /Users/mptorz/webversion/bin/app.py http://0.0.0.0:8080/
Traceback (most recent call last):
File "/Users/mptorz/webversion/bin/app.py", line 1, in <module>
import web
File "/usr/local/lib/python2.7/site-packages/web/__init__.py", line 14, in <module>
import utils, db, net, wsgi, http, webapi, httpserver, debugerror
File "/usr/local/lib/python2.7/site-packages/web/wsgi.py", line 8, in <module>
import http
File "/usr/local/lib/python2.7/site-packages/web/http.py", line 16, in <module>
import net, utils, webapi as web
File "/usr/local/lib/python2.7/site-packages/web/webapi.py", line 31, in <module>
import sys, cgi, Cookie, pprint, urlparse, urllib
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/cgi.py", line 50, in <module>
import mimetools
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/mimetools.py", line 6, in <module>
import tempfile
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py", line 32, in <module>
import io as _io
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/io.py", line 51, in <module>
import _io
ImportError: dlopen(/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so, 2): Symbol not found: __PyCodecInfo_GetIncrementalDecoder
Referenced from: /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so
Expected in: flat namespace
in /usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so
</code></pre>
<p>SOLUTION
Finally, it worked. After running the solution suggested by @PhilipTzou and entering <code>python /Users/mptorz/webversion/bin/app.py 8080</code> app.py is finally working and I can continue my python course. Thank you all for your help :)</p>
| 0 | 2016-08-19T17:10:41Z | 39,046,428 | <p>According to the output of your <code>sys.path</code>, you are probably using the OSX system Python to run your script (<code>app.py</code>) but the Homebrew Python to execute the pip command. The solution is pretty simple:</p>
<pre class="lang-sh prettyprint-override"><code>brew link --overwrite python
</code></pre>
<p>This will <a href="http://stackoverflow.com/questions/5157678/python-homebrew-by-default">change your default Python to the Homebrew one</a>. After executing this command, you can verify that you are using the Homebrew Python by typing this command:</p>
<pre class="lang-sh prettyprint-override"><code>which -a python
</code></pre>
<p>The result should be like this (sequence matters):</p>
<pre><code>/usr/local/bin/python
/usr/bin/python
</code></pre>
| 0 | 2016-08-19T19:29:35Z | [
"python",
"web",
"module",
"importerror",
"web.py"
] |
Convert Buffer to Desired Array? | 39,044,487 | <p>I'm using <a href="https://github.com/copitux/python-github3" rel="nofollow"><code>pygithub3</code></a> to call an api to receive an organization's repositories, but am getting this result:</p>
<pre><code><pygithub3.core.result.smart.Result object at 0x7f97e9c03d50>
</code></pre>
<p>I believe this is a buffer object. Why is this happening? I want the result to be</p>
<pre><code>['express-file', 'users']
</code></pre>
<p>My code looks somewhat like this:</p>
<pre><code>import pygithub3
auth = dict(username="******", password="******") # I hashed these for SO.
gh = pygithub3.Github(**auth)
repos = gh.repos.list_by_org(org="incrosoft", type="all")
print repos
</code></pre>
<p>How would I get my desired output? Is it possible? Is there something to turn it into my desired array?</p>
| 0 | 2016-08-19T17:11:21Z | 39,044,565 | <p>If you look at the docstring of <a href="https://github.com/copitux/python-github3/blob/master/pygithub3/core/result/smart.py" rel="nofollow">the <code>Result</code> class</a>, you'll see that one can obtain a list by calling <code>your_result.all()</code>. </p>
<p>If you type <code>help(pygithub3.core.result.smart.Result)</code> in your Python interpreter session (with <code>pygithub3</code> imported), you'll see this docstring printed, so you don't need to check the source each time.</p>
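To make the idea concrete, here is a minimal stand-in sketch (this is NOT the real pygithub3 class, just an illustration of what `.all()` does):

```python
# Minimal stand-in sketch (NOT the real pygithub3 Result class): .all()
# walks every page of results and flattens them into one plain list.
class FakeResult:
    def __init__(self, pages):
        self._pages = pages

    def all(self):
        # concatenate every page into a single Python list
        return [repo for page in self._pages for repo in page]

repos = FakeResult([["express-file"], ["users"]])
print(repos.all())  # ['express-file', 'users']
```

With the real object, calling `gh.repos.list_by_org(org="incrosoft", type="all").all()` should give you the plain list you were after.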
| 1 | 2016-08-19T17:16:24Z | [
"python",
"pygithub"
] |
.any()/.all() when checking array elements in list | 39,044,501 | <p>Hi I'm currently stuck trying to solve this problem:</p>
<pre><code>a = [array([1,3]),array([11,3])]
b = [array([1,7]),array([1,9])]
c = [[array([1,3]),array([11,3])], [array([2,6]),array([9,9])]]
if b not in c:
    c.append(b)
if a not in c:
    c.append(a)
</code></pre>
<p>I keep getting an error message telling me that I have to correct my code using <code>any()</code> or <code>all()</code>. How do I check if an array element is already in the list or not using <code>any()</code>/<code>all()</code>?</p>
| -2 | 2016-08-19T17:12:34Z | 39,045,850 | <p>You should provide a <a href="http://stackoverflow.com/help/mcve">mcve</a>, in any case, here's a starting point so you can continue experimenting by yourself:</p>
<pre><code>a = [[1, 3], [11, 3]]
b = [[1, 7], [1, 9]]
c = [[[1, 3], [11, 3]], [[2, 6], [9, 9]]]
print a in c
print b in c
print all([a in c, b in c])
print any([a in c, b in c])
</code></pre>
<p>This is just a simple example showing how to use any & all.</p>
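The error in the question itself comes from numpy arrays: `in` compares arrays element-wise, and the resulting boolean array has no single truth value. Assuming `array` in the question is numpy's, one explicit way to do the membership test looks like this (a sketch, not the only way):

```python
import numpy as np

a = [np.array([1, 3]), np.array([11, 3])]
c = [[np.array([1, 3]), np.array([11, 3])],
     [np.array([2, 6]), np.array([9, 9])]]

def group_in(group, groups):
    # a group matches when every array in it equals the corresponding array
    return any(
        len(group) == len(g)
        and all(np.array_equal(x, y) for x, y in zip(group, g))
        for g in groups
    )

if not group_in(a, c):
    c.append(a)
print(len(c))  # still 2: a was already present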
| 0 | 2016-08-19T18:43:32Z | [
"python"
] |
PuLP-OR: Can you delete variables already created? | 39,044,577 | <p>My question is very simple. Is it possible to delete a variable that I have already created?</p>
<p>Or my only hope is to not create the variable to begin with?</p>
<p>I guess if you can print a variable then using:</p>
<pre><code>del prob.variable
</code></pre>
<p>would delete it. But I can't find it exactly, Python says that:</p>
<pre><code>prob.variables
</code></pre>
<p>is not subscriptable.</p>
| 0 | 2016-08-19T17:17:48Z | 39,045,254 | <p><code>del</code> will delete your variable. To simply clear, or unset your variable, use <code>None</code>.</p>
<h1>Using <code>del</code></h1>
<pre><code>>>> variable = "Hello World!"
>>> print variable
Hello World!
>>> del variable
>>> print variable
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'variable' is not defined
>>>
</code></pre>
<h1>Using <code>None</code></h1>
<pre><code>>>> variable = "Hello World!"
>>> print variable
Hello World!
>>> variable = None
>>> print variable
None
>>>
</code></pre>
| 0 | 2016-08-19T18:03:30Z | [
"python",
"optimization",
"integer-programming",
"pulp",
"operations-research"
] |
Computing the least distance between coordinate pairs | 39,044,599 | <p>First dataframe df1 contains id and their corresponding two coordinates. For each coordinate pair in the first dataframe, i have to loop through the second dataframe to find the one with the least distance. I tried taking the individual coordinates and finding the distance between them but it does not work as expected. I believe it has to be taken as a pair when finding the distance between them. Not sure whether Python offers some methods to achieve this.</p>
<p>For eg: df1 </p>
<pre><code>Id Co1 Co2
334 30.371353 -95.384010
337 39.497448 -119.789623
</code></pre>
<p>df2</p>
<pre><code>Id Co1 Co2
339 40.914585 -73.892456
441 34.760395 -77.999260
dfloc3 =[[38.991512,-77.441536],
[40.89869,-72.37637],
[40.936115,-72.31452],
[30.371353,-95.38401],
[39.84819,-75.37162],
[36.929306,-76.20035],
[40.682342,-73.979645]]
dfloc4 = [[40.914585,-73.892456],
[41.741543,-71.406334],
[50.154522,-96.88806],
[39.743565,-121.795761],
[30.027597,-89.91014],
[36.51881,-82.560844],
[30.449587,-84.23629],
[42.920475,-85.8208]]
</code></pre>
| 1 | 2016-08-19T17:19:27Z | 39,044,999 | <p>The code below creates a new column in <code>df1</code> showing the Id of the nearest point in <code>df2</code>. (I can't tell from the question if this is what you want.) I'm assuming the coordinates are in a Euclidean space, i.e., that the distance between points is given by the Pythagorean Theorem. If not, you could easily use some other calculation instead of <code>dist_squared</code>.</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame(dict(Id=[334, 337], Co1=[30.371353, 39.497448], Co2=[-95.384010, -119.789623]))
df2 = pd.DataFrame(dict(Id=[339, 441], Co1=[40.914585, 34.760395], Co2=[-73.892456, -77.999260]))
def nearest(row, df):
    # calculate euclidean distance from given row to all rows of df
    dist_squared = (row.Co1 - df.Co1) ** 2 + (row.Co2 - df.Co2) ** 2
    # find the closest row of df
    smallest_idx = dist_squared.argmin()
    # return the Id for the closest row of df
    return df.loc[smallest_idx, 'Id']
near = df1.apply(nearest, args=(df2,), axis=1)
df1['nearest'] = near
</code></pre>
| 1 | 2016-08-19T17:46:51Z | [
"python",
"dataframe"
] |
Computing the least distance between coordinate pairs | 39,044,599 | <p>First dataframe df1 contains id and their corresponding two coordinates. For each coordinate pair in the first dataframe, i have to loop through the second dataframe to find the one with the least distance. I tried taking the individual coordinates and finding the distance between them but it does not work as expected. I believe it has to be taken as a pair when finding the distance between them. Not sure whether Python offers some methods to achieve this.</p>
<p>For eg: df1 </p>
<pre><code>Id Co1 Co2
334 30.371353 -95.384010
337 39.497448 -119.789623
</code></pre>
<p>df2</p>
<pre><code>Id Co1 Co2
339 40.914585 -73.892456
441 34.760395 -77.999260
dfloc3 =[[38.991512,-77.441536],
[40.89869,-72.37637],
[40.936115,-72.31452],
[30.371353,-95.38401],
[39.84819,-75.37162],
[36.929306,-76.20035],
[40.682342,-73.979645]]
dfloc4 = [[40.914585,-73.892456],
[41.741543,-71.406334],
[50.154522,-96.88806],
[39.743565,-121.795761],
[30.027597,-89.91014],
[36.51881,-82.560844],
[30.449587,-84.23629],
[42.920475,-85.8208]]
</code></pre>
| 1 | 2016-08-19T17:19:27Z | 39,045,028 | <p>Given you can get your points into a list like so...</p>
<pre><code>df1 = [[30.371353, -95.384010], [39.497448, -119.789623]]
df2 = [[40.914585, -73.892456], [34.760395, -77.999260]]
</code></pre>
<p>Import math then create a function to make finding the distance easier:</p>
<pre><code>import math
def distance(pt1, pt2):
    return math.sqrt((pt1[0] - pt2[0])**2 + (pt1[1] - pt2[1])**2)
</code></pre>
<p>Then simply traverse your list, saving the closest points:</p>
<pre><code>for pt1 in df1:
    closestPoints = [pt1, df2[0]]
    for pt2 in df2:
        if distance(pt1, pt2) < distance(closestPoints[0], closestPoints[1]):
            closestPoints = [pt1, pt2]
    print ("Point: " + str(closestPoints[0]) + " is closest to " + str(closestPoints[1]))
</code></pre>
<p>Outputs: </p>
<pre><code>Point: [30.371353, -95.38401] is closest to [34.760395, -77.99926]
Point: [39.497448, -119.789623] is closest to [34.760395, -77.99926]
</code></pre>
| 1 | 2016-08-19T17:48:36Z | [
"python",
"dataframe"
] |
How to pass argument to scoring function in scikit-learn's LogisticRegressionCV call | 39,044,686 | <p><strong>Problem</strong></p>
<p>I am trying to use <em>scikit-learn</em>'s <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html" rel="nofollow"><code>LogisticRegressionCV</code></a> with <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html" rel="nofollow"><code>roc_auc_score</code></a> as the scoring metric.</p>
<pre><code>from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
clf = LogisticRegressionCV(scoring=roc_auc_score)
</code></pre>
<p>But when I attempt to fit the model (<code>clf.fit(X, y)</code>), it throws an error.</p>
<pre><code> ValueError: average has to be one of (None, 'micro', 'macro', 'weighted', 'samples')
</code></pre>
<p>That's cool. It's clear what's going on: <code>roc_auc_score</code> needs to be called with the <code>average</code> argument specified, per <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html" rel="nofollow">its documentation</a> and the error above. So I tried that.</p>
<pre><code>clf = LogisticRegressionCV(scoring=roc_auc_score(average='weighted'))
</code></pre>
<p>But it turns out that <code>roc_auc_score</code> can't be called with an optional argument alone, because this throws another error.</p>
<pre><code>TypeError: roc_auc_score() takes at least 2 arguments (1 given)
</code></pre>
<p><strong>Question</strong></p>
<p>Any thoughts on how I can use <code>roc_auc_score</code> as the scoring metric for <code>LogisticRegressionCV</code> in a way that I can specify an argument for the scoring function?</p>
<p>I can't find an SO question on this issue or a discussion of this issue in <em>scikit-learn</em>'s GitHub repo, but surely someone has run into this before?</p>
| 2 | 2016-08-19T17:26:16Z | 39,045,082 | <p>I found a way to solve this problem!</p>
<p><em>scikit-learn</em> offers a <code>make_scorer</code> function in its <code>metrics</code> module that allows a user to create a scoring object from one of its native scoring functions <strong>with arguments specified to non-default values</strong> (see <a href="http://scikit-learn.org/stable/modules/model_evaluation.html#defining-your-scoring-strategy-from-metric-functions" rel="nofollow">here</a> for more information on this function from the <em>scikit-learn</em> docs).</p>
<p>So, I created a scoring object with the <code>average</code> argument specified.</p>
<pre><code>roc_auc_weighted = sk.metrics.make_scorer(sk.metrics.roc_auc_score, average='weighted')
</code></pre>
<p>Then, I passed that object in the call to <code>LogisticRegressionCV</code> and it ran without any issues!</p>
<pre><code>clf = LogisticRegressionCV(scoring=roc_auc_weighted)
</code></pre>
| 0 | 2016-08-19T17:52:36Z | [
"python",
"function",
"arguments",
"scikit-learn",
"scoring"
] |
How to pass argument to scoring function in scikit-learn's LogisticRegressionCV call | 39,044,686 | <p><strong>Problem</strong></p>
<p>I am trying to use <em>scikit-learn</em>'s <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html" rel="nofollow"><code>LogisticRegressionCV</code></a> with <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html" rel="nofollow"><code>roc_auc_score</code></a> as the scoring metric.</p>
<pre><code>from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
clf = LogisticRegressionCV(scoring=roc_auc_score)
</code></pre>
<p>But when I attempt to fit the model (<code>clf.fit(X, y)</code>), it throws an error.</p>
<pre><code> ValueError: average has to be one of (None, 'micro', 'macro', 'weighted', 'samples')
</code></pre>
<p>That's cool. It's clear what's going on: <code>roc_auc_score</code> needs to be called with the <code>average</code> argument specified, per <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html" rel="nofollow">its documentation</a> and the error above. So I tried that.</p>
<pre><code>clf = LogisticRegressionCV(scoring=roc_auc_score(average='weighted'))
</code></pre>
<p>But it turns out that <code>roc_auc_score</code> can't be called with an optional argument alone, because this throws another error.</p>
<pre><code>TypeError: roc_auc_score() takes at least 2 arguments (1 given)
</code></pre>
<p><strong>Question</strong></p>
<p>Any thoughts on how I can use <code>roc_auc_score</code> as the scoring metric for <code>LogisticRegressionCV</code> in a way that I can specify an argument for the scoring function?</p>
<p>I can't find an SO question on this issue or a discussion of this issue in <em>scikit-learn</em>'s GitHub repo, but surely someone has run into this before?</p>
| 2 | 2016-08-19T17:26:16Z | 39,047,049 | <p>You can use <code>make_scorer</code>, e.g.</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score, make_scorer
from sklearn.datasets import make_classification
# some example data
X, y = make_classification()
# little hack to filter out Proba(y==1)
def roc_auc_score_proba(y_true, proba):
    return roc_auc_score(y_true, proba[:, 1])
# define your scorer
auc = make_scorer(roc_auc_score_proba, needs_proba=True)
# define your classifier
clf = LogisticRegressionCV(scoring=auc)
# train
clf.fit(X, y)
# have look at the scores
print clf.scores_
</code></pre>
| 1 | 2016-08-19T20:16:06Z | [
"python",
"function",
"arguments",
"scikit-learn",
"scoring"
] |
Django Rest Framework - Getting a model instance after serialization | 39,044,759 | <p>I've made a serializer, and I'm trying to create a <code>Booking</code> instance from the booking field in the serializer, after I've validated the POST data. However, because the <code>Booking</code> object has foreign keys, I'm getting the error: <code>ValueError: Cannot assign "4": "Booking.activity" must be a "Activity" instance.</code></p>
<p>Here's my view function:</p>
<pre><code>@api_view(['POST'])
def customer_charge(request):
    serializer = ChargeCustomerRequestSerializer(data=request.data)
    serializer.is_valid(raise_exception=True)
    # trying to create an instance using the ReturnDict from the serializer
    booking = Booking(**serializer.data['booking'])
    booking.save()
</code></pre>
<p>Serializers.py where <code>BookingSerializer</code> is a ModelSerializer</p>
<pre><code>class ChargeCustomerRequestSerializer(serializers.Serializer):
    booking = BookingSerializer()
    customer = serializers.CharField(max_length=255)

class BookingSerializer(serializers.ModelSerializer):
    class Meta:
        model = Booking
        fields = '__all__'
        # I wanted to view the instances with the nested information available
        # but this breaks the serializer validation if it's just given a foreign key
        # depth = 1
</code></pre>
<p>What is the correct way to create a model instance from a nested serializer?</p>
| 0 | 2016-08-19T17:31:34Z | 39,047,676 | <p><code>serializer.validated_data</code> should be used, not <code>serializer.data</code></p>
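The distinction matters because `.data` re-serializes everything back to primitives (so a foreign key comes out as the raw pk `4`), while `.validated_data` holds the values produced by validation. A toy sketch of the difference, with no real DRF involved (the class names here are stand-ins, not DRF's):

```python
# Toy stand-in (not real DRF) contrasting .data with .validated_data
class Activity:
    def __init__(self, pk):
        self.pk = pk

class FakeSerializer:
    def __init__(self, data):
        self.initial_data = data

    def is_valid(self):
        # validation turns the incoming pk into a model instance
        self.validated_data = {"activity": Activity(self.initial_data["activity"])}
        return True

    @property
    def data(self):
        # .data renders back to primitives, losing the instance
        return {"activity": self.validated_data["activity"].pk}

s = FakeSerializer({"activity": 4})
s.is_valid()
print(type(s.validated_data["activity"]).__name__)  # Activity
print(s.data)  # {'activity': 4}
```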
| 0 | 2016-08-19T21:06:17Z | [
"python",
"django",
"django-rest-framework"
] |
Executing a python script via php | 39,044,876 | <p>So I'm trying to execute a python script from php using this code</p>
<pre><code>exec('C:\Python27\python.exe C:\WEBSITE_DATA\script.py');
</code></pre>
<p>or with double backslashes...</p>
<pre><code>exec('C:\\Python27\\python.exe C:\\WEBSITE_DATA\\script.py');
</code></pre>
<p>the script is functional and generates a shape file when run normally, so the issue is not with the .py file. </p>
<p>I changed permissions to allow everyone full control from the python27 folder and the website data folder just to check if it was a permissions issue</p>
<p>the php log is absent of any errors so I can't chase down any bugs or problems with the code</p>
<p>I'm running IIS on Windows Server 2012 R2. Is this my issue? Is what I'm trying to do even possible?</p>
<p>Also, if it's possible, can I exchange variables from php to python and then back?</p>
| 0 | 2016-08-19T17:39:16Z | 39,045,033 | <p>That is a very bad thing to do. It is a very hacky and unsafe solution. Keep this in mind.</p>
<p>With this said, you should be looking for php <code>system</code> function that executes the command in console, not <code>exec</code>.</p>
<p>There are multiple ways to transfer variables from a php script to Python. You can write data to a MySQL database and then read it from the Python script. You can pack all the variables into JSON using PHP's json_encode function, save the JSON to a file on disk, and then read it from the same location in the Python script. </p>
<p>The simplest way though is to do this in your <code>system</code> call:</p>
<pre><code>system("C:\Python27\python.exe /path/to/script.py $a $b $c")
</code></pre>
<p>This way the variables will be passed as arguments to the Python script. There you can just read them from the <code>sys.argv</code> array like that:</p>
<pre><code>import sys
a = sys.argv[1] # Note that the first argument is [1], not [0]
b = sys.argv[2]
...
</code></pre>
<p>To transfer values back to PHP script I suggest the same thing - saving them to JSON on the disk and then reading and parsing this JSON in PHP script after the execution of Python script is finished.</p>
<p>Now, although I told you how to do that, you absolutely should not. A much better way is to write a CGI script in Python that will process the request and do the work directly. You can read more here - <a href="https://www.linux.com/blog/configuring-apache2-run-python-scripts" rel="nofollow">https://www.linux.com/blog/configuring-apache2-run-python-scripts</a></p>
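The round trip back to PHP described above can be sketched on the Python side like this (the argument values and keys here are invented for illustration):

```python
import json
import sys

def compute(args):
    # args arrive as strings from the PHP system() call
    a, b = float(args[0]), float(args[1])
    return {"sum": a + b, "product": a * b}

# In the real script you would do:
#   print(json.dumps(compute(sys.argv[1:3])))
# and the PHP side would json_decode() the captured output.
print(json.dumps(compute(["2", "3"])))
```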
| 1 | 2016-08-19T17:48:54Z | [
"php",
"python"
] |
Confusion about django.shortcuts.render() method signature in 1.10 | 39,044,909 | <p>The Django 1.10 release notes (<a href="https://docs.djangoproject.com/en/1.10/releases/1.10/#features-removed-in-1-10" rel="nofollow">https://docs.djangoproject.com/en/1.10/releases/1.10/#features-removed-in-1-10</a>) say:</p>
<ul>
<li>The dictionary and context_instance parameters for the following functions are removed:
<ul>
<li><code>django.shortcuts.render()</code></li>
<li>...</li>
</ul></li>
</ul>
<p>However, the documentation for <code>render()</code> in 1.10 (<a href="https://docs.djangoproject.com/en/1.10/topics/http/shortcuts/#render" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/http/shortcuts/#render</a>) still lists <code>context</code> as an argument of type <code>dictionary</code>:</p>
<h3>context</h3>
<p>A dictionary of values to add to the template context. By default, this is an empty dictionary. If a value in the dictionary is callable, the view will call it just before rendering the template.</p>
<hr>
<p>My question, to put it frankly, is what gives? Normally this would be an academic question, but the following code:</p>
<pre><code>def index(request):
    context = RequestContext(request, {})
    return render(request, 'maintenance/maintenance.html', context)
</code></pre>
<p>yields this error:</p>
<pre><code>TypeError: dict expected at most 1 arguments, got 3
</code></pre>
<p>and this was the best lead I could find as to what the problem might be. I should also mention that this error appeared after updating Django from 1.8 to 1.10.</p>
| 2 | 2016-08-19T17:41:21Z | 39,045,070 | <p>You are confusing <code>context</code> and <code>context_instance</code>, which are two separate arguments. The <code>context_instance</code> argument has been removed in Django 1.10, but <code>context</code> remains.</p>
<p>As the docs say, <code>context</code> should be a <em>dictionary</em> of values. You are getting the error because you are passing a <code>RequestContext</code> instance instead of a dictionary. You can fix your example view by changing it to:</p>
<pre><code>def index(request):
    context = {}
    return render(request, 'maintenance/maintenance.html', context)
</code></pre>
| 3 | 2016-08-19T17:51:54Z | [
"python",
"django",
"dictionary"
] |
Filtering out row in Pyspark when there is a word from other table in any column | 39,044,993 | <p>I am new to pyspark and I want to write a query something like,</p>
<p><code>select * from table1 where column like '%word1%'</code> which we write in sql or hive.</p>
<p>I am writing the following command,</p>
<pre><code>data = sqlCtx.sql('select * from table1 where column like '%word1%')
</code></pre>
<p>But I am getting errors such as,</p>
<pre><code>NameError: name 'word1' is not defined
</code></pre>
<p>I am ideally thinking of having a condition like,</p>
<pre><code>select word_name from table2;
</code></pre>
<p>would give a list of words and whenever those words occur in table1 in any column, I want to filter out those entries and give out the remaining rows and place it in a dataframe.</p>
<p>Can anybody help me in doing this?</p>
<p>Thanks</p>
| 0 | 2016-08-19T17:46:35Z | 39,059,728 | <p>Well, the "like" function works in pyspark just fine, just as it does in SQL, with both the DataFrame API and the SQL API.
Examples:</p>
<pre><code>import statsmodels.api as sm
duncan_prestige = sm.datasets.get_rdataset("Duncan", "car")
df = sqlContext.createDataFrame(duncan_prestige.data.reset_index())
index type income education prestige
0 accountant prof 62 86 82
1 pilot prof 72 76 83
2 architect prof 75 92 90
3 author prof 55 90 76
</code></pre>
<p>DataFrame API:</p>
<pre><code>df.filter(df['index'].like('%ilo%')).toPandas()
index type income education prestige
0 pilot prof 72 76 83
</code></pre>
<p>Or with SQL</p>
<pre><code>df.registerTempTable('df')
sqlContext.sql("select * from df d where d.index like '%ilo%' ").toPandas()
</code></pre>
<p>And with join (silly but to prove the point)</p>
<pre><code>qry = """
select d1.*
from df d1 join df d2
on ( d1.index = d2.index)
where d1.index like '%ilo%' and d2.index like concat('%', d1.index , '%')
"""
sqlContext.sql(qry).toPandas()
</code></pre>
| 0 | 2016-08-21T00:01:09Z | [
"python",
"sql",
"apache-spark",
"dataframe",
"pyspark"
] |
Python SQL Object selecting data by dictionary and date | 39,045,017 | <p>I am using SQLObject, a wrapper for python to manage SQL Queries, with Python 2.7. I know I can select data by using a dictionary such as:</p>
<pre><code>restrictions = { ... }
selection = sql_table.selectBy(**restrictions).orderBy('-createdtime')
</code></pre>
<p>I can also query for date by doing:</p>
<pre><code>selection = sql_table.select(sql_table.q.creatdtime>=datetime.datetime(year, month, day, 0, 0, 0, 0)
</code></pre>
<p>However, I want to use both together to sort by the date as well as the dictionary pairings. When I try to put them together like this:</p>
<pre><code>selection = sql_table.select(**restrictions, sql_table.q.creatdtime>=datetime.datetime(year, month, day, 0, 0, 0, 0)
</code></pre>
<p>It doesn't work. Is there any way to filter the SQL query by a range of datetime and by the dictionary pairings?</p>
| 0 | 2016-08-19T17:47:57Z | 39,151,432 | <p>Fixed the issue. In case you are here facing the same problem, here is the solution:</p>
<p>Since the SQLObject wrapper for python supports entering straight SQL queries, I opted to build it myself. First, I unpack all the restrictions as a query</p>
<pre><code>select_string = " AND ".join(str(key) + "=" + str(restrictions[key]) for key in restrictions.keys())
</code></pre>
<p>I then wanted to add a restriction based on my dates. I knew that the column in my database that stored date and time was called createdtime, so I put the string as </p>
<pre><code>select_string += " AND " + 'createdtime>=' + '"' + str(datetime.datetime(year, month, day, 0, 0, 0, 0)) + '"'
</code></pre>
<p>Note the quotation marks around the datetime object: even though it has been cast as a string, it still needs the quotation marks to work. </p>
<p>Therefore, my final code looks like this:</p>
<pre><code>select_string = " AND ".join(str(key) + "=" + str(restrictions[key]) for key in restrictions.keys())
if date1:
    select_string += " AND " + 'createdtime>=' + '"' + str(datetime.datetime(year, month, day, 0, 0, 0, 0)) + '"'
if date2:
    select_string += " AND " + 'createdtime<=' + '"' + str(datetime.datetime(year2, month2, day2, 0, 0, 0, 0)) + '"'
selection = sql_table.select(select_string).orderBy('-createdtime')
return selection
</code></pre>
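A quick way to sanity-check the clause being assembled is to run the same string-building outside SQLObject (the restriction values and date below are invented for illustration):

```python
import datetime

# Sample restrictions, just to see the assembled WHERE clause
restrictions = {"status": 1}
select_string = " AND ".join(str(key) + "=" + str(restrictions[key])
                             for key in restrictions.keys())
select_string += (" AND " + 'createdtime>=' + '"'
                  + str(datetime.datetime(2016, 8, 19, 0, 0, 0, 0)) + '"')
print(select_string)  # status=1 AND createdtime>="2016-08-19 00:00:00"
```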
| 0 | 2016-08-25T17:31:33Z | [
"python",
"mysql",
"python-2.7",
"dictionary",
"sqlobject"
] |
Arcpy, select features based on part of a string | 39,045,084 | <p>So for my example, I have a large shapefile of state parks where some of them are actual parks and others are just trails. However there is no column defining which are trails vs actual parks, and I would like to select those that are trails and remove them. I DO have a column for the name of each feature, that usually contains the word "trail" somewhere in the string. It's not always at the beginning or end however. </p>
<p>I'm only familiar with Python at a basic level and while I could go through manually selecting the ones I want, I was curious to see if it could be automated. I've been using arcpy.Select_analysis and tried using "LIKE" in my where_clause and have seen examples using slicing, but have not been able to get a working solution. I've also tried using the 'is in' function but I'm not sure I'm using it right with the where_clause. I might just not have a good enough grasp of the proper terms to use when asking and searching. Any help is appreciated. I've been using the Python Window in ArcMap 10.3. </p>
<p>Currently I'm at:</p>
<blockquote>
<blockquote>
<blockquote>
<p>arcpy.Select_analysis ("stateparks", "notrails", ''trail' is in \"SITE_NAME\"')</p>
</blockquote>
</blockquote>
</blockquote>
| -1 | 2016-08-19T17:52:42Z | 39,060,971 | <p>Although using the Select tool is a good choice, the syntax for the SQL expression can be a challenge. Consider using an <a href="http://resources.arcgis.com/en/help/main/10.2/index.html#//018w00000014000000" rel="nofollow">Update Cursor</a> to tackle this problem.</p>
<pre><code>import arcpy
stateparks = r"C:\path\to\your\shapefile.shp"
notrails = r"C:\path\to\your\shapefile_without_trails.shp"
# Make a copy of your shapefile
arcpy.CopyFeatures_management(stateparks, notrails)
# Check if "trail" exists in the string--delete row if so
with arcpy.da.UpdateCursor(notrails, "SITE_NAME") as cursor:
    for row in cursor:
        if "trail" in row[0]:  # row[0] is the current row's "SITE_NAME" value
            cursor.deleteRow()  # Delete the row if condition is true
</code></pre>
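The substring test itself is plain Python, so it can be sanity-checked without arcpy (the site names below are invented):

```python
# Plain-Python check of the same filter, no arcpy required
site_names = ["Big Bend Trail", "Lakeview Park", "North Rim trail", "Oak Grove"]

# lower-case first so "Trail" and "trail" are both caught
kept = [name for name in site_names if "trail" not in name.lower()]
print(kept)  # ['Lakeview Park', 'Oak Grove']
```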
| 0 | 2016-08-21T05:02:17Z | [
"python",
"string",
"select",
"arcpy"
] |
scipy.io.loadmat reads MATLAB (R2016a) structs incorrectly | 39,045,089 | <p>Instead of loading a MATLAB struct as a dict (as described in <a href="http://docs.scipy.org/doc/scipy/reference/tutorial/io.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/tutorial/io.html</a> and other related questions), scipy.io.loadmat is loading it as a strange ndarray, where the values are an array of arrays, and the field names are taken to be the dtype. Minimal example:</p>
<p>(MATLAB):</p>
<pre><code>>> a = struct('b',0)
a =
b: 0
>> save('simple_struct.mat','a')
</code></pre>
<p>(Python):</p>
<pre><code>In[1]:
import scipy.io as sio
matfile = sio.loadmat('simple_struct.mat')
a = matfile['a']
a
Out[1]:
array([[([[0]],)]],
dtype=[('b', 'O')])
</code></pre>
<p>This problem persists in Python 2 and 3.</p>
| -2 | 2016-08-19T17:53:01Z | 39,059,775 | <p>This is expected behavior. Numpy is just showing you how MATLAB is storing your data under the hood.</p>
<p>MATLAB structs are 2+D cell arrays where one dimension is mapped to a sequence of strings. In Numpy, this same data structure is called a "record array", and the dtype is used to store the name. And since MATLAB matrices must be at least 2D, the <code>0</code> you stored in MATLAB is really a 2D matrix with dimensions <code>(1, 1)</code>.</p>
<p>So what you are seeing in the <code>scipy.io.loadmat</code> is how MATLAB is storing your data (minus the dtype bit, MATLAB doesn't have such a thing). Specifically, it is a 2D <code>[1, 1]</code> array (that is what Numpy calls cell arrays), where one dimension is mapped to a string, containing a <code>[1, 1]</code> 2D array. MATLAB hides some of these details from you, but numpy doesn't.</p>
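<p>A short sketch of digging the scalar back out (the in-memory <code>savemat</code> round-trip below stands in for the <code>.mat</code> file from the question; passing <code>squeeze_me=True</code> to <code>loadmat</code> is the usual shortcut for stripping the singleton dimensions):</p>

```python
import io
import scipy.io as sio

# Recreate the question's file in memory: a dict is written as a MATLAB struct
buf = io.BytesIO()
sio.savemat(buf, {'a': {'b': 0}})
buf.seek(0)

a = sio.loadmat(buf)['a']   # (1, 1) record array, dtype=[('b', 'O')]
inner = a['b'][0, 0]        # the (1, 1) matrix MATLAB uses for a scalar
print(inner[0, 0])          # the original scalar 0
```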
| 0 | 2016-08-21T00:11:39Z | [
"python",
"matlab"
] |
Python AttributeError: module 'runAnalytics' has no attribute 'run' | 39,045,091 | <p>I am trying to grab an input from my main.py file using tkinter and then use that input in runAnalytics.py</p>
<p>main.py</p>
<pre><code>import runAnalytics
import tkinter
import os
import centerWindow
loadApplication = tkinter.Tk()
loadApplication.title("Stock Analytics")
loadApplication.geometry("1080x720")
label1 = tkinter.Label(loadApplication, text = "Ticker")
input1 = tkinter.Entry(loadApplication)
loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run)
centerWindow.center(loadApplication)
loadAnalytics.pack()
label1.pack()
input1.pack()
loadApplication.mainloop()
</code></pre>
<p>runAnalytics.py</p>
<pre><code>from yahoo_finance import Share
from main import input1
import tkinter
import os
import centerWindow
def run():
ticker = input1
loadAnalytics = tkinter.Tk()
loadAnalytics.title("$" + ticker + " Data")
loadAnalytics.geometry("1080x720")
print ("Price per share: " + ticker.get_price())
ticker.refresh()
print ("Price per share: " + ticker.get_price())
print("The dividend yield is: " + ticker.get_dividend_yield())
print("The 52 week low is: " + ticker.get_year_low())
print("The 52 week high is: " + ticker.get_year_high())
print("The volume is: " + ticker.get_volume())
print("The previous close was: " + ticker.get_prev_close())
print("The previous open was: " + ticker.get_open())
loadAnalytics.mainloop()
</code></pre>
<p>My error message reads as follows;</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\MyName\Documents\Python Projects\MarketData\main.py", line 1, in <module>
import runAnalytics
File "C:\Users\MyName\Documents\Python Projects\MarketData\runAnalytics.py", line 2, in <module>
from main import input1
File "C:\Users\MyName\Documents\Python Projects\MarketData\main.py", line 13, in <module>
loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run)
AttributeError: module 'runAnalytics' has no attribute 'run'
</code></pre>
| 1 | 2016-08-19T17:53:14Z | 39,045,161 | <p>Try adding () to the end of runAnalytics.run. You are currently telling it to look for a <code>run</code> attribute which it doesn't have instead of a function</p>
<pre><code>loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run())
</code></pre>
| -2 | 2016-08-19T17:58:35Z | [
"python",
"input",
"tkinter",
"python-3.5",
"attributeerror"
] |
Python AttributeError: module 'runAnalytics' has no attribute 'run' | 39,045,091 | <p>I am trying to grab an input from my main.py file using tkinter and then use that input in runAnalytics.py</p>
<p>main.py</p>
<pre><code>import runAnalytics
import tkinter
import os
import centerWindow
loadApplication = tkinter.Tk()
loadApplication.title("Stock Analytics")
loadApplication.geometry("1080x720")
label1 = tkinter.Label(loadApplication, text = "Ticker")
input1 = tkinter.Entry(loadApplication)
loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run)
centerWindow.center(loadApplication)
loadAnalytics.pack()
label1.pack()
input1.pack()
loadApplication.mainloop()
</code></pre>
<p>runAnalytics.py</p>
<pre><code>from yahoo_finance import Share
from main import input1
import tkinter
import os
import centerWindow
def run():
ticker = input1
loadAnalytics = tkinter.Tk()
loadAnalytics.title("$" + ticker + " Data")
loadAnalytics.geometry("1080x720")
print ("Price per share: " + ticker.get_price())
ticker.refresh()
print ("Price per share: " + ticker.get_price())
print("The dividend yield is: " + ticker.get_dividend_yield())
print("The 52 week low is: " + ticker.get_year_low())
print("The 52 week high is: " + ticker.get_year_high())
print("The volume is: " + ticker.get_volume())
print("The previous close was: " + ticker.get_prev_close())
print("The previous open was: " + ticker.get_open())
loadAnalytics.mainloop()
</code></pre>
<p>My error message reads as follows;</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\MyName\Documents\Python Projects\MarketData\main.py", line 1, in <module>
import runAnalytics
File "C:\Users\MyName\Documents\Python Projects\MarketData\runAnalytics.py", line 2, in <module>
from main import input1
File "C:\Users\MyName\Documents\Python Projects\MarketData\main.py", line 13, in <module>
loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run)
AttributeError: module 'runAnalytics' has no attribute 'run'
</code></pre>
| 1 | 2016-08-19T17:53:14Z | 39,045,216 | <pre><code>from runAnalytics import run
loadAnalytics = tkinter.Button(loadApplication, text="Load Analytics", command=run)
</code></pre>
<p>You don't want to start another <code>mainloop</code> of <code>tk</code>. Instead you should pass the <code>root</code> and create a toplevel window.</p>
<pre><code>def run(root):
ticker = input1
parent = Toplevel(root)
parent.title("$" + ticker + " Data")
# the rest of your code
</code></pre>
| 1 | 2016-08-19T18:01:42Z | [
"python",
"input",
"tkinter",
"python-3.5",
"attributeerror"
] |
Python AttributeError: module 'runAnalytics' has no attribute 'run' | 39,045,091 | <p>I am trying to grab an input from my main.py file using tkinter and then use that input in runAnalytics.py</p>
<p>main.py</p>
<pre><code>import runAnalytics
import tkinter
import os
import centerWindow
loadApplication = tkinter.Tk()
loadApplication.title("Stock Analytics")
loadApplication.geometry("1080x720")
label1 = tkinter.Label(loadApplication, text = "Ticker")
input1 = tkinter.Entry(loadApplication)
loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run)
centerWindow.center(loadApplication)
loadAnalytics.pack()
label1.pack()
input1.pack()
loadApplication.mainloop()
</code></pre>
<p>runAnalytics.py</p>
<pre><code>from yahoo_finance import Share
from main import input1
import tkinter
import os
import centerWindow
def run():
ticker = input1
loadAnalytics = tkinter.Tk()
loadAnalytics.title("$" + ticker + " Data")
loadAnalytics.geometry("1080x720")
print ("Price per share: " + ticker.get_price())
ticker.refresh()
print ("Price per share: " + ticker.get_price())
print("The dividend yield is: " + ticker.get_dividend_yield())
print("The 52 week low is: " + ticker.get_year_low())
print("The 52 week high is: " + ticker.get_year_high())
print("The volume is: " + ticker.get_volume())
print("The previous close was: " + ticker.get_prev_close())
print("The previous open was: " + ticker.get_open())
loadAnalytics.mainloop()
</code></pre>
<p>My error message reads as follows;</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\MyName\Documents\Python Projects\MarketData\main.py", line 1, in <module>
import runAnalytics
File "C:\Users\MyName\Documents\Python Projects\MarketData\runAnalytics.py", line 2, in <module>
from main import input1
File "C:\Users\MyName\Documents\Python Projects\MarketData\main.py", line 13, in <module>
loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run)
AttributeError: module 'runAnalytics' has no attribute 'run'
</code></pre>
| 1 | 2016-08-19T17:53:14Z | 39,045,286 | <p>You have a circular import:</p>
<pre><code>import runAnalytics
# ..
from main import input1
</code></pre>
<p>By the time <code>main</code> is being imported again, <code>runAnalytics</code> has not yet had a chance to execute the <code>def run():</code>.. section.</p>
<p>Resolve this by <em>removing</em> the <code>from main import input1</code> line, and pass that object in as an argument instead:</p>
<pre><code>def run(input1):
</code></pre>
<p>passing this in from the <code>main.py</code> module when you call the function:</p>
<pre><code>loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = lambda: runAnalytics.run(input1))
</code></pre>
<p>Apart from the circular import, there is also the issue that whatever file you run as the main script in Python will be stored as the <code>__main__</code> module. Importing that same script again will lead to a <em>second</em> module being created, now under the name <code>main</code>, and any objects created in that module are distinct from those in <code>__main__</code>.</p>
<p>Next, you'll want to <em>remove</em> the <code>loadAnalytics.mainloop()</code> call from <code>run</code> as you should not start a new mainloop from an already running loop. You probably also want to create a new <code>TopLevel</code> window instead of creating another <code>Tk()</code> root. You'd have to pass in <code>loadApplication</code> to <code>run</code> too if you go this way.</p>
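<p>The <code>command=</code> confusion is worth a standalone illustration: a callback must be handed over as a callable, and a <code>lambda</code> lets you bind arguments without calling the function on the spot. Plain functions here stand in for the tkinter pieces:</p>

```python
def run(ticker):
    return "$" + ticker + " Data"

input1 = "GOOG"  # stands in for the Entry widget's contents

bad = run(input1)           # calls immediately; a Button would store the *result*
good = lambda: run(input1)  # stores a callable; invoked only when clicked

print(bad)     # $GOOG Data
print(good())  # $GOOG Data, but only produced when actually invoked
```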
| 3 | 2016-08-19T18:05:33Z | [
"python",
"input",
"tkinter",
"python-3.5",
"attributeerror"
] |
Access file system from Python on boot-up cron | 39,045,143 | <p>Please consider the following problem:</p>
<p>I have a Python script that runs on a linux machine (Raspberry pi 3, running Rasbian Jessie) on boot.
This script has been added to <code>sudo crontab -e</code></p>
<p>The script itself starts with no problem but is unable to load in a certain file that is in the same directory as the script (Desktop), I have any print statements/issues going into a log file. Which reads as follows:</p>
<pre><code>Traceback (most recent call last):
File "/home/pi/Desktop/mainServ.py", line 18, in <module>
mouth_detector = dlib.simple_object_detector(mouth_detector_path)
RuntimeError: Unable to open mouthDetector.svm
</code></pre>
<p>I <em>assume</em> this is because the script has no access to a filesystem, or perhaps LXDE/Desktop at boot time? I could very well be wrong on this.</p>
<p>Any solutions to this problem will be greatly appreciated.</p>
| 0 | 2016-08-19T17:57:32Z | 39,047,268 | <p>Whenever you execute a script via crontab, be ready for environment variables to be different. In this case you can simply use the whole path to the file you are trying to reference.</p>
<p>To see what the current environment variables are from within Python, use:</p>
<pre><code> import os
os.environ
</code></pre>
<p>You may find there are other differences between the crontab environment and whatever interpreter environment you are using for testing.</p>
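<p>One such difference is the working directory: cron jobs typically start in the invoking user's home directory, so a relative name like <code>mouthDetector.svm</code> no longer resolves to the Desktop. A small sketch using the paths from the question's traceback:</p>

```python
import os

# cron starts the job from the user's home directory, not from Desktop,
# so spell the data file's location out explicitly
script_dir = "/home/pi/Desktop"  # where mainServ.py lives
mouth_detector_path = os.path.join(script_dir, "mouthDetector.svm")

print(mouth_detector_path)
print(os.path.isabs(mouth_detector_path))  # True
```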
| 1 | 2016-08-19T20:34:52Z | [
"python",
"linux",
"cron",
"x11"
] |
OpenCV 3.0 installation error | 39,045,256 | <p>While trying to install OpenCV 3.0 on a Ubuntu 16.04 machine I get the following errors:</p>
<pre><code>/home/isenses/Documents/opencv-3.0.0-rc1/modules/videoio/src/cap_gstreamer.cpp:53:21: fatal error: gst/gst.h: No such file or directory
compilation terminated
modules/videoio/CMakeFiles/opencv_videoio.dir/build.make:182: recipe for target 'modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_gstreamer.cpp.o' failed
make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_gstreamer.cpp.o] Error 1
CMakeFiles/Makefile2:4295: recipe for target 'modules/videoio/CMakeFiles/opencv_videoio.dir/all' failed
make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2
Makefile:149: recipe for target 'all' failed
make: *** [all] Error 2
</code></pre>
<p>Please advise.</p>
| -2 | 2016-08-19T18:03:54Z | 39,045,299 | <p>you're missing gstreamer includes. You have to install GStreamer SDK first</p>
<p><a href="http://gstreamer.com/" rel="nofollow">from here</a></p>
| 0 | 2016-08-19T18:06:42Z | [
"python",
"c++",
"opencv",
"ubuntu-16.04"
] |
Where does the performance boost of map or list comprehension implementations over calling a function over a loop come from? | 39,045,396 | <p>I understand that you could be more efficient with memory in the implementation of map than in how you might do it over a loop. However, I see that using a map function over calling a function iterating over a loop has a speed boost as well.</p>
<p>Is this coming from some optimization in where to store the memory? One example of what I mean by this is placement of memory done in such a way that it is contiguous. Also I can see that if the operations were being in run in parallel then there would also be a speed boost, but I don't think this is the case. Examples from any known optimizations for map implementations from any language/package are very welcome!</p>
<p>-- Edit: I feel as if the example I had previously did not illustrate my question very well.</p>
<p>MAY NOT BE COMPLETELY FAIR COMPARISON.
For example, I test a loop implementation, a list comprehension, and the map function. Would you have any ideas of what would make one faster than the other? NOT NECESSARILY PYTHON; this is more a question about implementation of more efficient algorithms for applying a function over an iterable object. A valid answer might be that "typically for every map/list comprehension style of code, you would be able to make a loop implementation faster than that". Or in some cases, the list comprehension is faster, but this relates to the implementation details and that is what I am interested in.</p>
<pre><code>import time
import numpy as np
def square2(x):
return x*x
def main():
foobar = np.linspace(1, 300, 300)
start_time = time.time()
res = [0] * len(foobar)
for i, foo in enumerate(foobar):
res[i] = square2(foo)
print("{} {} runtime seconds {}".format("-"*8, time.time()-start_time, "-"*8))
res = [0] * len(foobar)
start_time = time.time()
res = [square2(foo) for foo in foobar]
print("{} {} runtime seconds {}".format("-"*8, time.time()-start_time, "-"*8))
start_time = time.time()
res = list(map(square2, foobar))
print("{} {} runtime seconds {}".format("-"*8, time.time()-start_time, "-"*8))
</code></pre>
<p>The output is:</p>
<pre><code>-------- 6.175041198730469e-05 runtime seconds --------
-------- 5.984306335449219e-05 runtime seconds --------
-------- 5.316734313964844e-05 runtime seconds --------
</code></pre>
| 2 | 2016-08-19T18:13:41Z | 39,045,648 | <p>The difference is due to append, not map. Try this:</p>
<pre><code>res = []
res = [square2(foo) for foo in foobar]
</code></pre>
| 0 | 2016-08-19T18:30:45Z | [
"python",
"algorithm",
"performance",
"functional-programming"
] |
Where does the performance boost of map or list comprehension implementations over calling a function over a loop come from? | 39,045,396 | <p>I understand that you could be more efficient with memory in the implementation of map than in how you might do it over a loop. However, I see that using a map function over calling a function iterating over a loop has a speed boost as well.</p>
<p>Is this coming from some optimization in where to store the memory? One example of what I mean by this is placement of memory done in such a way that it is contiguous. Also I can see that if the operations were being in run in parallel then there would also be a speed boost, but I don't think this is the case. Examples from any known optimizations for map implementations from any language/package are very welcome!</p>
<p>-- Edit: I feel as if the example I had previously did not illustrate my question very well.</p>
<p>MAY NOT BE COMPLETELY FAIR COMPARISON.
For example, I test a loop implementation, a list comprehension, and the map function. Would you have any ideas of what would make one faster than the other? NOT NECESSARILY PYTHON; this is more a question about implementation of more efficient algorithms for applying a function over an iterable object. A valid answer might be that "typically for every map/list comprehension style of code, you would be able to make a loop implementation faster than that". Or in some cases, the list comprehension is faster, but this relates to the implementation details and that is what I am interested in.</p>
<pre><code>import time
import numpy as np
def square2(x):
return x*x
def main():
foobar = np.linspace(1, 300, 300)
start_time = time.time()
res = [0] * len(foobar)
for i, foo in enumerate(foobar):
res[i] = square2(foo)
print("{} {} runtime seconds {}".format("-"*8, time.time()-start_time, "-"*8))
res = [0] * len(foobar)
start_time = time.time()
res = [square2(foo) for foo in foobar]
print("{} {} runtime seconds {}".format("-"*8, time.time()-start_time, "-"*8))
start_time = time.time()
res = list(map(square2, foobar))
print("{} {} runtime seconds {}".format("-"*8, time.time()-start_time, "-"*8))
</code></pre>
<p>The output is:</p>
<pre><code>-------- 6.175041198730469e-05 runtime seconds --------
-------- 5.984306335449219e-05 runtime seconds --------
-------- 5.316734313964844e-05 runtime seconds --------
</code></pre>
| 2 | 2016-08-19T18:13:41Z | 39,046,254 | <p>So, the issue with function-calls in loops, in a dynamic language like Python, is that the interpreter has to evalute the refernces each time, and that is costly, especially for global variables. However, notice what happens when you make things local:</p>
<pre><code>import time
def square(x):
return x*x
def test_loop(x):
res = []
for i in x:
res.append(square(i))
return res
def test_map(x):
return list(map(square,x))
def test_map_local(x, square=square):
return list(map(square,x))
def test_loop_local(x, square=square):
res = []
for i in x:
res.append(square(i))
return res
def test_loop_local_local(x, square=square):
res = []
append = res.append
for i in x:
append(square(i))
return res
def test_comprehension(x):
return [square(i) for i in x]
def test_comprehension_local(x, square=square):
return [square(i) for i in x]
x = range(1,10000000)
start = time.time()
test_loop(x)
stop = time.time()
print("Loop:", stop - start,"seconds")
start = time.time()
test_loop_local(x)
stop = time.time()
print("Loop-local:", stop - start, "seconds")
start = time.time()
test_loop_local_local(x)
stop = time.time()
print("Loop-local-local:", stop - start, "seconds")
start = time.time()
test_map(x)
stop = time.time()
print("Map:", stop - start, "seconds")
start = time.time()
test_map_local(x)
stop = time.time()
print("Map-local:", stop - start, "seconds")
start = time.time()
test_comprehension(x)
stop = time.time()
print("Comprehesion:", stop - start, "seconds")
start = time.time()
test_comprehension_local(x)
stop = time.time()
print("Comprehesion-local:", stop - start, "seconds")
</code></pre>
<p>Results:</p>
<pre><code>Loop: 3.9749317169189453 seconds
Loop-local: 3.686530828475952 seconds
Loop-local-local: 3.006138563156128 seconds
Map: 3.1068732738494873 seconds
Map-local: 3.1318843364715576 seconds
Comprehesion: 2.973804235458374 seconds
Comprehesion-local: 2.7370445728302 seconds
</code></pre>
<p>So, making the function look-up in map local doesn't really help, as I would expect because map does a single look-up at the beginning. What really surprised me is that there seems to be a non-negligible difference between a comprehension and a comprehension with the function made local. Not sure if this is just noise, however.</p>
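<p>To rule out that noise, <code>timeit</code> is a safer harness than single <code>time.time()</code> samples: it runs each statement many times, and repeating the measurement shows how stable the numbers are. A minimal sketch:</p>

```python
import timeit

def square(x):
    return x * x

x = range(1, 100000)

# number= runs each statement many times; timeit.repeat would additionally
# expose run-to-run variance across independent measurements
t_comp = timeit.timeit(lambda: [square(i) for i in x], number=20)
t_map = timeit.timeit(lambda: list(map(square, x)), number=20)

print(t_comp, t_map)
```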
| 2 | 2016-08-19T19:15:07Z | [
"python",
"algorithm",
"performance",
"functional-programming"
] |
How do I group lat-lon pairings in a Pandas DataFrame? | 39,045,481 | <p>I have a dataframe that looks like:</p>
<pre><code>lon lat
-77.487 39.044
-77.487 39.044
-122.031 37.354
-77.487 39.044
</code></pre>
<p>I want to group these lon-lat pairings with a resulting count, like so:</p>
<pre><code>lon lat count
-77.487 39.044 3
-122.031 37.354 1
</code></pre>
<p>How can I do this? The <code>group()</code> function only appears to allow for grouping by one column.</p>
| 2 | 2016-08-19T18:20:10Z | 39,045,543 | <p>Please find the documentation from the following link </p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html</a></p>
<pre><code>df_data=df_data.groupby(['lon','lat']).size()
print df_data
</code></pre>
| 1 | 2016-08-19T18:24:03Z | [
"python",
"pandas",
"dataframe"
] |
How do I group lat-lon pairings in a Pandas DataFrame? | 39,045,481 | <p>I have a dataframe that looks like:</p>
<pre><code>lon lat
-77.487 39.044
-77.487 39.044
-122.031 37.354
-77.487 39.044
</code></pre>
<p>I want to group these lon-lat pairings with a resulting count, like so:</p>
<pre><code>lon lat count
-77.487 39.044 3
-122.031 37.354 1
</code></pre>
<p>How can I do this? The <code>group()</code> function only appears to allow for grouping by one column.</p>
| 2 | 2016-08-19T18:20:10Z | 39,045,790 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>groupby.size</code></a> and rename the column created followed by <code>reset_index</code> to get back the desired <code>dataframe</code>.</p>
<pre><code>print(df.groupby(['lon', 'lat']).size().rename('count').reset_index())
lon lat count
0 -122.031 37.354 1
1 -77.487 39.044 3
</code></pre>
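<p>On recent pandas the rename step can be folded into <code>reset_index</code> via its <code>name</code> argument (a sketch with the question's data):</p>

```python
import pandas as pd

df = pd.DataFrame({'lon': [-77.487, -77.487, -122.031, -77.487],
                   'lat': [39.044, 39.044, 37.354, 39.044]})

# name= labels the size column while the index is being reset
counts = df.groupby(['lon', 'lat']).size().reset_index(name='count')
print(counts)
```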
| 4 | 2016-08-19T18:39:22Z | [
"python",
"pandas",
"dataframe"
] |
Feature engineering using PostgreSQL on a large dataset (~3M entries) | 39,045,544 | <p>I have a dataset of ~3 million chess games (existing columns include names of players, date, result and tournament name). I want to use Random Forest to predict results of chess games.</p>
<p>To this end, I want to do some feature engineering. There are several variables that I believe would be strong predictors, e.g. players' results so far in the tournament, number of games 90 days prior to the game. </p>
<p>Columns: </p>
<pre><code> - date DATE
- namew TEXT
- nameb TEXT
- whiterank INTEGER
- blackrank INTEGER
- tournament TEXT
- t_round INTEGER
- result REAL
- id BIGINT
- chess_data2_pkey(id)
</code></pre>
<p>Indices: </p>
<pre><code>game_index INDEX chess_data2 (namew ASC, tournament ASC, date ASC)
</code></pre>
<p>Unfortunately, my queries were rather slow (I wrote 14 and tested them on a smaller dataset, not even 1 was completed in 8 days). Below is a simplified version, which I put on 2 hours ago and still have no results.</p>
<pre><code>SELECT Sum(result)
INTO temp
FROM chess_data2 t1
WHERE id IN (SELECT t2.id
FROM chess_data2 t2
WHERE t1.tournament = t2.tournament
AND t1.namew = t2.namew
AND t1.date < t2.date)
</code></pre>
<p>My questions: </p>
<ol>
<li><strong>Can I make this work faster in SQL</strong> (faster as in, complete 14 similar queries in less than 10 days on my i7-4710HQ and 12gb of RAM?). Perhaps by explicitly sorting it beforehand? </li>
<li><strong>Any other way I could achieve my goal faster?</strong> I tried to naively code this using loops in Python and the performance was even worse, but I heard C is better for this stuff - but just how much better?</li>
</ol>
<p>I use Python 3.5 for estimation and psycopg2 to deal with SQL. </p>
<p><strong>EDIT:</strong> Thank you all for helpful responses. I managed to successfully use indexes to make some of the queries extremely fast, e.g. this one:</p>
<pre><code># Number of points that the white player has so far accrued throughout the tournament
(SELECT coalesce(SUM(result),0) from chess_data2 t2
where (t1.namew = t2.namew) and t1.tournament = t2.tournament
and t1.date > t2.date and t1.date < t2.date + 90)
+ (SELECT coalesce(SUM(1-result),0) from chess_data2 t2
 where (t1.namew = t2.nameb) and t1.tournament = t2.tournament
 and t1.date > t2.date and t1.date < t2.date + 90) AS result_in_t_w
from chess_data2 t1
</code></pre>
<p>Takes only ~60 seconds now, which is more than acceptable. However, for some reason, the counting selects like this one take more than half an hour (I didn't wait longer) to compute:</p>
<pre><code># Number of games that the white player has so far played in the tournament
(SELECT count(*) from chess_data2 t2 where (t1.namew = t2.namew) and
t1.tournament = t2.tournament and t1.date > t2.date and t1.date < t2.date + 90)
+ (SELECT coalesce(count(*),0) from chess_data2 t2
where (t1.namew = t2.nameb) and t1.tournament = t2.tournament
and t1.date > t2.date and t1.date < t2.date + 90) AS games_t_w from chess_data2 t1
</code></pre>
<p>I guess I'm using the indexes in the wrong way but I have no idea what's wrong, it's basically the same thing as previously but instead of summing result column I calculate sum of rows... Does it make sense at all? </p>
| -2 | 2016-08-19T18:24:11Z | 39,045,626 | <p>If you want to speed up query execution you can create an index on the columns used for joining (foreign keys and columns used in WHERE clauses).
But an added index slows down inserting and updating, and increases the required disk space.</p>
| 1 | 2016-08-19T18:29:17Z | [
"python",
"sql",
"postgresql",
"machine-learning",
"bigdata"
] |
Feature engineering using PostgreSQL on a large dataset (~3M entries) | 39,045,544 | <p>I have a dataset of ~3 million chess games (existing columns include names of players, date, result and tournament name). I want to use Random Forest to predict results of chess games.</p>
<p>To this end, I want to do some feature engineering. There are several variables that I believe would be strong predictors, e.g. players' results so far in the tournament, number of games 90 days prior to the game. </p>
<p>Columns: </p>
<pre><code> - date DATE
- namew TEXT
- nameb TEXT
- whiterank INTEGER
- blackrank INTEGER
- tournament TEXT
- t_round INTEGER
- result REAL
- id BIGINT
- chess_data2_pkey(id)
</code></pre>
<p>Indices: </p>
<pre><code>game_index INDEX chess_data2 (namew ASC, tournament ASC, date ASC)
</code></pre>
<p>Unfortunately, my queries were rather slow (I wrote 14 and tested them on a smaller dataset, not even 1 was completed in 8 days). Below is a simplified version, which I put on 2 hours ago and still have no results.</p>
<pre><code>SELECT Sum(result)
INTO temp
FROM chess_data2 t1
WHERE id IN (SELECT t2.id
FROM chess_data2 t2
WHERE t1.tournament = t2.tournament
AND t1.namew = t2.namew
AND t1.date < t2.date)
</code></pre>
<p>My questions: </p>
<ol>
<li><strong>Can I make this work faster in SQL</strong> (faster as in, complete 14 similar queries in less than 10 days on my i7-4710HQ and 12gb of RAM?). Perhaps by explicitly sorting it beforehand? </li>
<li><strong>Any other way I could achieve my goal faster?</strong> I tried to naively code this using loops in Python and the performance was even worse, but I heard C is better for this stuff - but just how much better?</li>
</ol>
<p>I use Python 3.5 for estimation and psycopg2 to deal with SQL. </p>
<p><strong>EDIT:</strong> Thank you all for helpful responses. I managed to successfully use indexes to make some of the queries extremely fast, e.g. this one:</p>
<pre><code># Number of points that the white player has so far accrued throughout the tournament
(SELECT coalesce(SUM(result),0) from chess_data2 t2
where (t1.namew = t2.namew) and t1.tournament = t2.tournament
and t1.date > t2.date and t1.date < t2.date + 90)
+ (SELECT coalesce(SUM(1-result),0) from chess_data2 t2
 where (t1.namew = t2.nameb) and t1.tournament = t2.tournament
 and t1.date > t2.date and t1.date < t2.date + 90) AS result_in_t_w
from chess_data2 t1
</code></pre>
<p>Takes only ~60 seconds now, which is more than acceptable. However, for some reason, the counting selects like this one take more than half an hour (I didn't wait longer) to compute:</p>
<pre><code># Number of games that the white player has so far played in the tournament
(SELECT count(*) from chess_data2 t2 where (t1.namew = t2.namew) and
t1.tournament = t2.tournament and t1.date > t2.date and t1.date < t2.date + 90)
+ (SELECT coalesce(count(*),0) from chess_data2 t2
where (t1.namew = t2.nameb) and t1.tournament = t2.tournament
and t1.date > t2.date and t1.date < t2.date + 90) AS games_t_w from chess_data2 t1
</code></pre>
<p>I guess I'm using the indexes in the wrong way but I have no idea what's wrong, it's basically the same thing as previously but instead of summing result column I calculate sum of rows... Does it make sense at all? </p>
| -2 | 2016-08-19T18:24:11Z | 39,045,725 | <p>Not sure why you use that IN. I think you tried to simplify your query and lost some of the logic.</p>
<p>I believe that is equivalent to</p>
<pre><code>SELECT sum(result) INTO temp
FROM chess_data2 t1
</code></pre>
<p>You probably want </p>
<pre><code>SELECT tournament, namew, sum(result)
FROM chess_data2 t1
GROUP BY tournament, namew
</code></pre>
<p>or</p>
<pre><code>SELECT tournament, namew, sum(result)
FROM chess_data2 t1
WHERE tournament = @tournament
AND namew = @namew
</code></pre>
| 1 | 2016-08-19T18:35:22Z | [
"python",
"sql",
"postgresql",
"machine-learning",
"bigdata"
] |
How can I only replace some of the specific characters in a python string? | 39,045,594 | <p>I want to replace characters in a string but not all of the characters at once. For example:</p>
<pre><code>s = "abac"
</code></pre>
<p>I would like to replace the string with all these options</p>
<pre><code>"Xbac"
"abXc"
"XbXc"
</code></pre>
<p>I only know the normal s.replace() function that would replace all occurrences of that character. Does anyone have any clever code that would replace all the possible options of characters in the string?</p>
| -2 | 2016-08-19T18:26:57Z | 39,045,702 | <pre><code>string.replace(s, old, new[, maxreplace])
</code></pre>
<p>Return a copy of string s with all occurrences of substring old replaced by new. If the optional argument maxreplace is given, the first maxreplace occurrences are replaced.
So, if <code>maxreplace=1</code>, only the first occurrence is replaced.</p>
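<p>For example, with the question's string:</p>

```python
s = "abac"

print(s.replace("a", "X", 1))  # Xbac  (only the first occurrence)
print(s.replace("a", "X"))     # XbXc  (every occurrence)
```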
| 0 | 2016-08-19T18:34:19Z | [
"python",
"string",
"replace"
] |
How can I only replace some of the specific characters in a python string? | 39,045,594 | <p>I want to replace characters in a string but not all of the characters at once. For example:</p>
<pre><code>s = "abac"
</code></pre>
<p>I would like to replace the string with all these options</p>
<pre><code>"Xbac"
"abXc"
"XbXc"
</code></pre>
<p>I only know the normal s.replace() function that would replace all occurrences of that character. Does anyone have any clever code that would replace all the possible options of characters in the string?</p>
| -2 | 2016-08-19T18:26:57Z | 39,045,740 | <p><a href="https://docs.python.org/2/library/string.html#string.replace" rel="nofollow">You can still use <code>replace()</code></a> to only replace the first <em>k</em> occurrences of a character, rather than all of them at once.</p>
<blockquote>
<p><em>string.replace(s, old, new[, maxreplace])</em> </p>
<p>Return a copy of string <code>s</code> with all occurrences of substring <code>old</code> replaced by <code>new</code>. <strong>If the optional argument <code>maxreplace</code> is given, the first <code>maxreplace</code> occurrences are replaced.</strong></p>
</blockquote>
<p>So your example could be accomplished by:</p>
<pre><code>'abac'.replace('a','X',1).replace('a','X',1)
>>> 'XbXc'
</code></pre>
<hr>
<p>More complicated patterns can be accomplished by using the <code>re</code> module in Python, which allows for pattern matching using regular expressions, notably with <a href="https://docs.python.org/2/library/re.html#re.sub" rel="nofollow"><code>re.sub()</code></a>.</p>
| 0 | 2016-08-19T18:36:14Z | [
"python",
"string",
"replace"
] |
How can I only replace some of the specific characters in a python string? | 39,045,594 | <p>I want to replace characters in a string but not all of the characters at once. For example:</p>
<pre><code>s = "abac"
</code></pre>
<p>I would like to replace the string with all these options</p>
<pre><code>"Xbac"
"abXc"
"XbXc"
</code></pre>
<p>I only know the normal s.replace() function that would replace all occurrences of that character. Does anyone have any clever code that would replace all the possible options of characters in the string?</p>
| -2 | 2016-08-19T18:26:57Z | 39,045,746 | <blockquote>
<p>Does anyone have any clever code that would replace all the possible
options of characters in the string?</p>
</blockquote>
<p>Take a look at <a href="https://docs.python.org/3.4/library/re.html" rel="nofollow">regular expressions</a> in Python.</p>
<p>Here's an example:</p>
<pre><code>import re
s = "abac"
re.sub('^a', 'X', s)
# output = 'Xbac'
</code></pre>
<p>The <code>^</code> anchor means only an <code>a</code> at the start of the string is matched, so just that leading <code>a</code> is replaced with an <code>X</code>.</p>
| 0 | 2016-08-19T18:36:25Z | [
"python",
"string",
"replace"
] |
How can I only replace some of the specific characters in a python string? | 39,045,594 | <p>I want to replace characters in a string but not all of the characters at once. For example:</p>
<pre><code>s = "abac"
</code></pre>
<p>I would like to replace the string with all these options</p>
<pre><code>"Xbac"
"abXc"
"XbXc"
</code></pre>
<p>I only know the normal s.replace() function that would replace all occurrences of that character. Does anyone have any clever code that would replace all the possible options of characters in the string?</p>
| -2 | 2016-08-19T18:26:57Z | 39,047,992 | <p>I have managed to replace all the characters in the string with all the different combinations. Although the code isn't the most efficient, it does what I wanted it to do.</p>
<pre><code>def replaceall(s, n):
occurence = s.count(n)
alt = []
temp = s
for i in range(occurence):
temp2 = temp
for j in range(i,occurence):
temp2 = temp2.replace(n,"x",1)
alt.append(temp2)
temp = temp.replace(n,"!",1)
for i in range(len(alt)):
alt[i] = alt[i].replace("!",n)
return alt
</code></pre>
| 0 | 2016-08-19T21:35:23Z | [
"python",
"string",
"replace"
] |
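A different way to generate every combination (including the untouched original) is to split on the target character and rebuild the string with <code>itertools.product</code>; this is only a sketch, and the helper name <code>all_replacements</code> is invented for the example:

```python
from itertools import product

def all_replacements(s, old, new):
    """Return every string obtainable by replacing any subset
    of the occurrences of `old` in `s` with `new`."""
    parts = s.split(old)
    results = []
    # For each gap between parts, choose either the original
    # character or its replacement.
    for combo in product([old, new], repeat=len(parts) - 1):
        pieces = [parts[0]]
        for sep, part in zip(combo, parts[1:]):
            pieces.append(sep)
            pieces.append(part)
        results.append("".join(pieces))
    return results

print(all_replacements("abac", "a", "X"))
# ['abac', 'abXc', 'Xbac', 'XbXc']
```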
RDD basics about partitions | 39,045,609 | <p>I am reading <a href="http://spark.apache.org/docs/1.6.2/programming-guide.html#rdd-operations" rel="nofollow">Spark: RDD operations</a> and I am executing:</p>
<pre><code>In [7]: lines = sc.textFile("data")
In [8]: lines.getNumPartitions()
Out[8]: 1000
In [9]: lineLengths = lines.map(lambda s: len(s))
In [10]: lineLengths.getNumPartitions()
Out[10]: 1000
In [11]: len(lineLengths.collect())
Out[11]: 508524
</code></pre>
<p>but I would expect my dataset to get split into parts; how many? As many as the number of partitions, i.e. 1000.</p>
<p>Then the <code>map()</code> would run on every partition and return a <em>local</em> result (which then should be reduced), but if this is the case I would expect <code>lineLengths</code>, which is a list of numbers, to have length <em>equal</em> to the number of partitions, which is not the case.</p>
<p>What am I missing?</p>
| 0 | 2016-08-19T18:28:08Z | 39,046,070 | <p><code>len(lineLengths.collect())</code> or <code>lineLengths.count()</code> tells you the number of rows in your rdd. <code>lineLengths.getNumPartitions()</code>, as you noted, is the number of partitions your rdd is distributed over. Each partition of the rdd contains many of its rows. </p>
| 2 | 2016-08-19T19:00:28Z | [
"python",
"apache-spark",
"distributed-computing",
"partitioning",
"rdd"
] |
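The row-vs-partition distinction can be sketched without Spark at all: think of each partition as a chunk of the data. <code>map</code> produces one result per row regardless of partitioning, while a per-partition local result (what Spark's <code>mapPartitions</code> would give you) has one entry per partition. A plain-Python analogy, with arbitrary data and partition count:

```python
lines = ["spark", "rdd", "partitioning", "example", "data", "set"]

# Split the dataset into 3 "partitions" (chunks).
num_partitions = 3
partitions = [lines[i::num_partitions] for i in range(num_partitions)]

# map(len) runs per element: one result per row, regardless of partitioning.
line_lengths = [len(s) for part in partitions for s in part]
print(len(line_lengths))  # 6, the number of rows

# A per-partition local result has one entry per partition instead.
local_sums = [sum(len(s) for s in part) for part in partitions]
print(len(local_sums))    # 3, the number of partitions
print(sum(local_sums))    # equals sum(line_lengths)
```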
Jupyter Notebook: need to review all cells in a data frame | 39,045,754 | <p>I am using Jupyter notebook, having a python data frame of 100 columns and 200 rows. I need to manually review every cell in the data frame, but in the browser it just shows "..." when the row or column counts are large. Can I force it to show every row and column instead of just showing "..."?</p>
<p>Thanks!</p>
| 1 | 2016-08-19T18:36:43Z | 39,045,775 | <p>You need to specify how many rows you want shown. You can configure those options in a notebook like this...</p>
<pre><code>pd.set_option('display.max_columns', 999)
pd.set_option('display.max_rows', 999)
</code></pre>
<p>where 999 is the number of rows/columns to be shown</p>
| 1 | 2016-08-19T18:38:10Z | [
"python",
"dataframe",
"jupyter-notebook"
] |
Jupyter Notebook: need to review all cells in a data frame | 39,045,754 | <p>I am using Jupyter notebook, having a python data frame of 100 columns and 200 rows. I need to manually review every cell in the data frame, but in the browser it just shows "..." when the row or column counts are large. Can I force it to show every row and column instead of just showing "..."?</p>
<p>Thanks!</p>
| 1 | 2016-08-19T18:36:43Z | 39,045,787 | <p>You must configure the <code>display.max_rows</code> and/or <code>display.max_columns</code> options using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.set_option.html" rel="nofollow"><code>pd.set_option()</code></a>.</p>
<p>I.e.:</p>
<pre><code>def print_all(x):
pd.set_option('display.max_rows', len(x))
pd.set_option('display.max_columns', len(x.columns))
print(x)
pd.reset_option('display.max_rows')
pd.reset_option('display.max_columns')
</code></pre>
| 1 | 2016-08-19T18:39:02Z | [
"python",
"dataframe",
"jupyter-notebook"
] |
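If you would rather not change the options globally, pandas also provides <code>pd.option_context</code>, which restores the previous values when the with-block exits. A sketch, assuming pandas is importable (the DataFrame is a throwaway example):

```python
import pandas as pd

df = pd.DataFrame({"col%d" % i: range(5) for i in range(20)})
before = pd.get_option("display.max_columns")

# Options changed inside the with-block are restored automatically on exit.
with pd.option_context("display.max_rows", None, "display.max_columns", None):
    print(df)  # renders every row and column

print(pd.get_option("display.max_columns") == before)  # True
```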
PyDev Interactive console not initiating | 39,045,762 | <p>When I try to start an interactive console in Eclipse PyDev, I get the following error:</p>
<blockquote>
<p>'Create Interactive Console' has encountered a problem.</p>
<p>Error Initialising console.</p>
<p>Error initializing console.
Unexpected error connecting to console.
Failed to recive suitable Hello response from pydevconsole. Last msg received: Console already exited with value: 1 while waiting for an answer.</p>
</blockquote>
<p>I have already tried the following solutions given by others for similar problems to no avail:</p>
<p><a href="http://stackoverflow.com/questions/22355359/pydev-interactive-console">PyDev interactive console</a></p>
<p><a href="http://stackoverflow.com/questions/22355359/pydev-interactive-console/35953122#35953122">PyDev interactive console</a></p>
<p><a href="http://stackoverflow.com/questions/27197887/pydev-error-initializing-console">Pydev: error initializing console</a></p>
<p><a href="http://stackoverflow.com/questions/31507500/error-during-runfile-in-eclipse-with-pydev-error-initializing-console">Error during runfile in Eclipse with PyDev/ error initializing console</a></p>
<p>I had had PyDev working before, but I had to upgrade my python from Python 2.7 32bit to Python 2.7 64bit to use a library developed by a colleague. I know that my Python is not the problem since this library works fine in the generic Python shell, is there something going on with my <a href="http://i.stack.imgur.com/DdPvX.jpg" rel="nofollow">interpreter configuration settings...?</a></p>
<p>I have also tried the obvious remove and reconfigure the same interpreter, which also didn't work. I'm running out of ideas, but really enjoyed using PyDev's environment, so any help - would be really appreciated!</p>
| 0 | 2016-08-19T18:37:01Z | 39,050,045 | <p>Make sure your python path is set in
Window>Preferences>Interpreters>Python Interpreter </p>
<p>Make sure your java is 64bit</p>
| 0 | 2016-08-20T03:01:00Z | [
"python",
"eclipse",
"python-2.7",
"64bit",
"pydev"
] |
PyDev Interactive console not initiating | 39,045,762 | <p>When I try to start an interactive console in Eclipse PyDev, I get the following error:</p>
<blockquote>
<p>'Create Interactive Console' has encountered a problem.</p>
<p>Error Initialising console.</p>
<p>Error initializing console.
Unexpected error connecting to console.
Failed to recive suitable Hello response from pydevconsole. Last msg received: Console already exited with value: 1 while waiting for an answer.</p>
</blockquote>
<p>I have already tried the following solutions given by others for similar problems to no avail:</p>
<p><a href="http://stackoverflow.com/questions/22355359/pydev-interactive-console">PyDev interactive console</a></p>
<p><a href="http://stackoverflow.com/questions/22355359/pydev-interactive-console/35953122#35953122">PyDev interactive console</a></p>
<p><a href="http://stackoverflow.com/questions/27197887/pydev-error-initializing-console">Pydev: error initializing console</a></p>
<p><a href="http://stackoverflow.com/questions/31507500/error-during-runfile-in-eclipse-with-pydev-error-initializing-console">Error during runfile in Eclipse with PyDev/ error initializing console</a></p>
<p>I had had PyDev working before, but I had to upgrade my python from Python 2.7 32bit to Python 2.7 64bit to use a library developed by a colleague. I know that my Python is not the problem since this library works fine in the generic Python shell, is there something going on with my <a href="http://i.stack.imgur.com/DdPvX.jpg" rel="nofollow">interpreter configuration settings...?</a></p>
<p>I have also tried the obvious remove and reconfigure the same interpreter, which also didn't work. I'm running out of ideas, but really enjoyed using PyDev's environment, so any help - would be really appreciated!</p>
| 0 | 2016-08-19T18:37:01Z | 39,086,455 | <p>For anyone struggling with the same issue, I ended up reinstalling everything from scratch. This was the most useful link.</p>
<p><a href="http://halfanhour.blogspot.ca/2010/05/setting-up-pydev-on-eclipse-for-64-bit.html" rel="nofollow">http://halfanhour.blogspot.ca/2010/05/setting-up-pydev-on-eclipse-for-64-bit.html</a></p>
<p>The only thing I did differently from that site was downloading the 64-bit Neon that is now offered; I guess back when that post was written, the 64-bit build wasn't a good option.</p>
<p>This was annoying and nothing out there seemed to help, but it now works.</p>
| 0 | 2016-08-22T18:34:25Z | [
"python",
"eclipse",
"python-2.7",
"64bit",
"pydev"
] |
Why use tuples when inserting vars into strings in Python? | 39,045,778 | <p>I feel like this would have been asked before, but could not find anything.</p>
<p>In python, if you want to insert a var into a string, there are (at least) two ways of doing so.</p>
<p>Using the <code>+</code> operator</p>
<pre><code>place = "Alaska"
adjective = "cold"
sentence = "I live in "+place+"! It's very "+adjective+"!"
# "I live in Alaska! It's very cold!"
</code></pre>
<p>And using a tuple</p>
<pre><code>place = "Houston"
adjective = "humid"
sentence = "I live in %s! It's very %s!" % (place, adjective)
# "I live in Houston! It's very humid!"
</code></pre>
<p>Why would one use the tuple method over using <code>+</code>? The tuple format seems a lot more obfuscated. Does it provide an advantage in some cases?</p>
<p>The only advantage I can think of is you don't have to cast types with the latter method, you can use <code>%s</code> to refer to <code>a</code>, where <code>a = 42</code>, and it will just print it as a string, as opposed to using <code>str(a)</code>. Still that hardly seems like a significant reason to sacrifice readability.</p>
| 2 | 2016-08-19T18:38:25Z | 39,045,824 | <p>For a lot of reasons. One good reason is that not always you can or want to use separated strings, because they are already neatly arranged in a list or tuple:</p>
<pre><code>strings = ["one", "two", "three"]
print "{}, {} and {}".format(*strings)
> one, two and three
</code></pre>
| 4 | 2016-08-19T18:41:54Z | [
"python",
"string",
"tuples",
"readability"
] |
Why use tuples when inserting vars into strings in Python? | 39,045,778 | <p>I feel like this would have been asked before, but could not find anything.</p>
<p>In python, if you want to insert a var into a string, there are (at least) two ways of doing so.</p>
<p>Using the <code>+</code> operator</p>
<pre><code>place = "Alaska"
adjective = "cold"
sentence = "I live in "+place+"! It's very "+adjective+"!"
# "I live in Alaska! It's very cold!"
</code></pre>
<p>And using a tuple</p>
<pre><code>place = "Houston"
adjective = "humid"
sentence = "I live in %s! It's very %s!" % (place, adjective)
# "I live in Houston! It's very humid!"
</code></pre>
<p>Why would one use the tuple method over using <code>+</code>? The tuple format seems a lot more obfuscated. Does it provide an advantage in some cases?</p>
<p>The only advantage I can think of is you don't have to cast types with the latter method, you can use <code>%s</code> to refer to <code>a</code>, where <code>a = 42</code>, and it will just print it as a string, as opposed to using <code>str(a)</code>. Still that hardly seems like a significant reason to sacrifice readability.</p>
| 2 | 2016-08-19T18:38:25Z | 39,045,837 | <p>The <code>string % (value, value, ..)</code> syntax is called a <a href="https://docs.python.org/2/library/stdtypes.html#string-formatting" rel="nofollow"><em>string formatting operation</em></a>, and you can also apply a <em>dictionary</em>, it is not limited to just tuples. These days you'd actually want to use the newer <a href="https://docs.python.org/2/library/stdtypes.html#str.format" rel="nofollow"><code>str.format()</code> method</a> as it expands on the offered functionality.</p>
<p>String formatting is about much more than just inserting strings in-between other strings. You'd use it because</p>
<ul>
<li><p>you can configure each interpolation, based on the type of object you are trying to insert into the string. You can configure how floating point numbers are formatted, how dates are formatted (with <code>str.format()</code>), etc.</p></li>
<li><p>you can adjust how the values are padded or aligned; you could create columns with values all neatly right-aligned in fixed-width columns, for example.</p></li>
<li><p>especially with <code>str.format()</code>, the various aspects you can control can either be hardcoded in the string template, or taken from additional variables (the older <code>%</code> string formatting operation only allows for field width and numeric precision can be made dynamic).</p></li>
<li><p>you can define and store the string templates independently, applying values you want to interpolate separately:</p>
<pre><code>template = 'Hello {name}! How do you find {country} today?'
result = template.format(**user_information)
</code></pre>
<p>What fields are available can be larger than what you actually use in the template.</p></li>
<li><p>it can be faster; each <code>+</code> string concatenation has to create a new string object. Add up enough <code>+</code> concatenations and you end up creating a lot of new string objects that are then discarded again. String formatting only has to create one final output string.</p></li>
<li><p>string formatting is actually far more readable than using concatenation, as well as being more maintainable. Try to use <code>+</code> or string formatting on a <em>multiline</em> string with half a dozen different values to be interpolated. Compare:</p>
<pre><code>result = '''\
Hello {firstname}!
I wanted to send you a brief message to inform you of your updated
scores:
Enemies engaged: {enemies.count:>{width}d}
Enemies killed: {enemies.killed:>{width}d}
Kill ratio: {enemies.ratio:>{width}.2%}
Monthly score: {scores.month_to_date:0{width}d}
Hope you are having a great time!
{additional_promotion}
'''.format(width=10, **email_info)
</code></pre>
<p>with</p>
<pre><code>result = '''\
Hello ''' + firstname + '''!
I wanted to send you a brief message to inform you of your updated
scores:
Enemies engaged: ''' + str(email_info['enemies'].count).rjust(width) + '''
Enemies killed: ''' + str(email_info['enemies'].killed).rjust(width) + '''
Kill ratio: ''' + str(round(email_info['enemies'].ratio * 100, 2)).rjust(width - 1) + '''%
Monthly score: ''' + str(email_info['scores'].month_to_date).zfill(width) + '''
Hope you are having a great time!
''' + email_info['additional_promotion'] + '''
'''
</code></pre>
<p>Now imagine having to re-arrange the fields, add some extra text and a few new fields. Would you rather do that in the first or second example?</p></li>
</ul>
| 4 | 2016-08-19T18:42:46Z | [
"python",
"string",
"tuples",
"readability"
] |
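A small sketch of the per-field control described in the answer above, combining alignment, float precision and percent formatting in one template (the field names and values are invented for the example):

```python
row = "{name:<10} {score:>8.2f} {ratio:>7.1%}"

# Left-pad the name to 10 chars, right-align the score in 8 chars with
# 2 decimals, and render the ratio as a percentage in 7 chars.
line = row.format(name="alice", score=3.14159, ratio=0.3333)
print(line)
```

Reproducing the same padding and rounding with `+` concatenation would take several `str()`, `rjust()` and `round()` calls per field.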
Conditional assignment of tensor values in TensorFlow | 39,045,797 | <p>I want to replicate the following <code>numpy</code> code in <code>tensorflow</code>. For example, I want to assign a <code>0</code> to all tensor indices that previously had a value of <code>1</code>.</p>
<pre><code>a = np.array([1, 2, 3, 1])
a[a==1] = 0
# a should be [0, 2, 3, 0]
</code></pre>
<p>If I write similar code in <code>tensorflow</code> I get the following error.</p>
<pre><code>TypeError: 'Tensor' object does not support item assignment
</code></pre>
<p>The condition in the square brackets should be arbitrary as in <code>a[a<1] = 0</code>.</p>
<p>Is there a way to realize this "conditional assignment" (for lack of a better name) in <code>tensorflow</code>?</p>
| 2 | 2016-08-19T18:40:02Z | 39,047,066 | <p><a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/control_flow_ops.html#comparison-operators" rel="nofollow">Several comparison operators</a> are available within TensorFlow API. </p>
<p>However, there is nothing equivalent to the concise NumPy syntax when it comes to manipulating the tensors directly. You have to make use of individual <code>comparison</code>, <code>select</code> and <code>assign</code> operators to perform the same action.</p>
<p>Equivalent code to your NumPy example is this:</p>
<pre><code>import tensorflow as tf
init_a = tf.constant( [1,2,3,1] )
a = tf.Variable( init_a )
start_op = tf.initialize_all_variables()
comparison = tf.equal( a, tf.constant( 1 ) )
conditional_assignment_op = a.assign( tf.select(comparison, tf.zeros_like(a), a) )
with tf.Session() as session:
# Equivalent to: a = np.array( [1, 2, 3, 1] )
session.run( start_op )
print( a.eval() )
# Equivalent to: a[a==1] = 0
session.run( conditional_assignment_op )
print( a.eval() )
# Output is:
# [1 2 3 1]
# [0 2 3 0]
</code></pre>
<p>The print statements are of course optional, they are just there to demonstrate the code is performing correctly.</p>
| 3 | 2016-08-19T20:18:14Z | [
"python",
"numpy",
"tensorflow"
] |
Conditional assignment of tensor values in TensorFlow | 39,045,797 | <p>I want to replicate the following <code>numpy</code> code in <code>tensorflow</code>. For example, I want to assign a <code>0</code> to all tensor indices that previously had a value of <code>1</code>.</p>
<pre><code>a = np.array([1, 2, 3, 1])
a[a==1] = 0
# a should be [0, 2, 3, 0]
</code></pre>
<p>If I write similar code in <code>tensorflow</code> I get the following error.</p>
<pre><code>TypeError: 'Tensor' object does not support item assignment
</code></pre>
<p>The condition in the square brackets should be arbitrary as in <code>a[a<1] = 0</code>.</p>
<p>Is there a way to realize this "conditional assignment" (for lack of a better name) in <code>tensorflow</code>?</p>
| 2 | 2016-08-19T18:40:02Z | 39,047,590 | <p>I'm also just starting to use TensorFlow; maybe someone will find my approach more intuitive.</p>
<pre><code>import tensorflow as tf
conditionVal = 1
init_a = tf.constant([1, 2, 3, 1], dtype=tf.int32, name='init_a')
a = tf.Variable(init_a, dtype=tf.int32, name='a')
target = tf.fill(a.get_shape(), conditionVal, name='target')
init = tf.initialize_all_variables()
condition = tf.not_equal(a, target)
defaultValues = tf.zeros(a.get_shape(), dtype=a.dtype)
calculate = tf.select(condition, a, defaultValues)
with tf.Session() as session:
session.run(init)
session.run(calculate)
print(calculate.eval())
</code></pre>
<p>The main trouble is that it is difficult to implement "custom logic": if you cannot express your logic in terms of linear-algebra operations, you need to write a "custom op" library for TensorFlow (<a href="https://www.tensorflow.org/versions/r0.10/how_tos/adding_an_op/index.html" rel="nofollow">more details here</a>).</p>
| 0 | 2016-08-19T20:59:59Z | [
"python",
"numpy",
"tensorflow"
] |
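For comparison, the same conditional assignment outside TensorFlow is a one-liner with NumPy's <code>where</code>, which plays the role <code>tf.select</code> plays in the answers above (a sketch using the question's array, assuming NumPy is importable):

```python
import numpy as np

a = np.array([1, 2, 3, 1])

# np.where(condition, value_if_true, value_if_false), elementwise.
b = np.where(a == 1, 0, a)
print(b)  # [0 2 3 0]
```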
mplayer - How can I detect EOF in Python | 39,045,892 | <p>I have a playlist that changes during play, Mplayer doesn't reload the playlist at the end of the first track so what I need to do is capture the EOF, then reload mplayer to carry on playing. How can I detect EOF using mplayer and popen? Or is there an easier way that I'm missing? I've checked the suggested 'duplicate' question and I don't believe it gives me the answer as this is capturing the end of a track/playlist via popen. </p>
<pre><code>def play_music():
global myplaylist
global playflag
if not playflag:
mycommand = ["mplayer -really-quiet -slave -volume 1 -playlist /home/pi/scripts/playlist.txt"]
p = subprocess.Popen(mycommand, stdin=PIPE, stdout=PIPE, stderr=PIPE, shell=True)
playflag = True
else:
pass
</code></pre>
| 0 | 2016-08-19T18:46:32Z | 39,095,699 | <p>Well, I couldn't fix this issue; there must be a way of getting the mplayer status and then writing some code around that state. As it stands, I couldn't get mplayer to reload the playlist once it had a new track added, and when I did, it started back at the beginning; it's not often I let something beat me! </p>
<p>For future readers: I went down the MPD/MPC route. It wasn't where I wanted to go, but it has good playlist management, and using the <code>client.consume</code> property it actually removes tracks from the playlist as you go, which is great for a jukebox scenario. </p>
| 0 | 2016-08-23T08:09:14Z | [
"python",
"mplayer"
] |
Python program that crawls for external IP address | 39,045,946 | <p>I created a basic program to try and crawl a website for my external IP address with <code>BeautifulSoup 4</code>. Although, I keep getting an Attribute Error for my program because it can't obtain the string of a div class or whatever. It would appear as the specific div class does not exists and that it cannot therefore crawl it. I do know for a fact that it exists, even though it's saying it doesn't. Does anyone know what is wrong?</p>
<p><strong>Here is my code:</strong></p>
<pre><code>import requests, sys, io
from html.parser import HTMLParser
from bs4 import BeautifulSoup
url = "https://www.iplocation.net/find-ip-address"
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, "cp437", "backslashreplace")
sourcecode = requests.get(url)
plaintext = sourcecode.text
soup = BeautifulSoup(plaintext, "html.parser")
tag = soup.find("span", {"style": "font-weight: bold; color:green;"})
print(tag)
ip = tag.string
print(ip)
</code></pre>
| 1 | 2016-08-19T18:51:07Z | 39,047,945 | <p>It has nothing to do with Javascript, if you look at the source returned you can see:</p>
<pre><code><html style="height:100%"><head><META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW"><meta name="format-detection" content="telephone=no"><meta name="viewport" content="initial-scale=1.0"><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"></head><body style="margin:0px;height:100%"><iframe src="/_Incapsula_Resource?CWUDNSAI=24&xinfo=9-52943897-0 0NNN RT(1471643127529 69) q(0 -1 -1 -1) r(0 -1) B12(8,881022,0) U10000&incident_id=198001480102412051-472966643371608393&edet=12&cinfo=08000000" frameborder=0 width="100%" height="100%" marginheight="0px" marginwidth="0px">Request unsuccessful. Incapsula incident ID: 198001480102412051-472966643371608393</iframe></body></html>
</code></pre>
<p>They have detected that you are a bot and don't give you the source you expect.</p>
<p>You can get your ip and a lot more info using <em>wtfismyip.com</em> in json format:</p>
<pre><code>url = "http://wtfismyip.com/json"
js = requests.get(url).json()
print(js)
</code></pre>
<p>Or just your ip using <em>httpbin</em>:</p>
<pre><code>url = "http://httpbin.org/ip"
js = requests.get(url).json()
print(js)
</code></pre>
| 2 | 2016-08-19T21:30:53Z | [
"python",
"python-3.x",
"beautifulsoup",
"ip",
"web-crawler"
] |
Python program that crawls for external IP address | 39,045,946 | <p>I created a basic program to try and crawl a website for my external IP address with <code>BeautifulSoup 4</code>. Although, I keep getting an Attribute Error for my program because it can't obtain the string of a div class or whatever. It would appear as the specific div class does not exists and that it cannot therefore crawl it. I do know for a fact that it exists, even though it's saying it doesn't. Does anyone know what is wrong?</p>
<p><strong>Here is my code:</strong></p>
<pre><code>import requests, sys, io
from html.parser import HTMLParser
from bs4 import BeautifulSoup
url = "https://www.iplocation.net/find-ip-address"
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, "cp437", "backslashreplace")
sourcecode = requests.get(url)
plaintext = sourcecode.text
soup = BeautifulSoup(plaintext, "html.parser")
tag = soup.find("span", {"style": "font-weight: bold; color:green;"})
print(tag)
ip = tag.string
print(ip)
</code></pre>
| 1 | 2016-08-19T18:51:07Z | 39,050,273 | <p>As explained above, they have a bot-detection mechanism on their servers: if you try <code>requests.get</code>, it returns "Request unsuccessful. Incapsula incident ID: 415000500153648966-193432437842182947", and since the real source code is never loaded, you cannot find the required info.
If you still want to do it with BeautifulSoup, you can get it with the help of Selenium; here is sample code:</p>
<p>If Selenium is not installed, first run "pip install selenium", then download chromedriver from "<a href="https://sites.google.com/a/chromium.org/chromedriver/downloads" rel="nofollow">https://sites.google.com/a/chromium.org/chromedriver/downloads</a>"</p>
<pre><code>from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Chrome("**Path to chrome driver**\chromedriver.exe")
driver.get('https://www.iplocation.net/find-ip-address')
content = driver.page_source.encode('utf-8').strip()
soup = BeautifulSoup(content,"html.parser")
tag = soup.find("span", {"style": "font-weight: bold; color:green;"}).text
print(tag)
</code></pre>
<p>It will print: xxx.xx.xxx.xxx</p>
<p>Note: sometimes, the first time you launch the script on a new machine, it might ask for a captcha; enter it manually and then the script will work.</p>
| 1 | 2016-08-20T03:48:22Z | [
"python",
"python-3.x",
"beautifulsoup",
"ip",
"web-crawler"
] |
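Separately from the bot-detection problem, the extraction step itself doesn't strictly need BeautifulSoup: the stdlib <code>html.parser</code> can pull out a span by its style attribute. A sketch against a static snippet (the <code>SpanFinder</code> class and the sample IP are invented for illustration; the style string is the one from the question):

```python
from html.parser import HTMLParser

class SpanFinder(HTMLParser):
    """Collect the text of <span> tags carrying a given style attribute."""
    def __init__(self, style):
        super().__init__()
        self.style = style
        self.capture = False
        self.matches = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("style", self.style) in attrs:
            self.capture = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.capture = False

    def handle_data(self, data):
        if self.capture:
            self.matches.append(data)

html = '<div><span style="font-weight: bold; color:green;">203.0.113.7</span></div>'
finder = SpanFinder("font-weight: bold; color:green;")
finder.feed(html)
print(finder.matches)  # ['203.0.113.7']
```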
Pandas: get multiindex level as series | 39,045,948 | <p>I have a dataframe with multiple levels, eg:</p>
<pre><code>import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product((['foo', 'bar'], ['one', 'five', 'three', 'four']),
names=['first', 'second'])
df = pd.DataFrame({'A': [np.nan, 12, np.nan, 11, 16, 12, 11, np.nan]}, index=idx).dropna().astype(int)
A
first second
foo five 12
four 11
bar one 16
five 12
three 11
</code></pre>
<p>I want to create a new column using the index level titled <code>second</code>, so that I get</p>
<pre><code> A B
first second
foo five 12 five
four 11 four
bar one 16 one
five 12 five
three 11 three
</code></pre>
<p>I can do this by resetting the index, copying the column, then re-applying, but that seems more round-about. </p>
<p>I tried <code>df.index.levels[1]</code>, but that creates a sorted list, it doesn't preserve the order. </p>
<p>If it was a single index, I would use <code>df.index</code> but in a multiindex that creates a column of tuples. </p>
<p>If this is resolved elsewhere, please share as I haven't had any luck searching the stackoverflow archives.</p>
| 4 | 2016-08-19T18:51:11Z | 39,046,141 | <pre><code>df['B'] = idx.to_series().str[1]
</code></pre>
| 2 | 2016-08-19T19:06:11Z | [
"python",
"pandas",
"multi-index"
] |
Pandas: get multiindex level as series | 39,045,948 | <p>I have a dataframe with multiple levels, eg:</p>
<pre><code>import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product((['foo', 'bar'], ['one', 'five', 'three', 'four']),
names=['first', 'second'])
df = pd.DataFrame({'A': [np.nan, 12, np.nan, 11, 16, 12, 11, np.nan]}, index=idx).dropna().astype(int)
A
first second
foo five 12
four 11
bar one 16
five 12
three 11
</code></pre>
<p>I want to create a new column using the index level titled <code>second</code>, so that I get</p>
<pre><code> A B
first second
foo five 12 five
four 11 four
bar one 16 one
five 12 five
three 11 three
</code></pre>
<p>I can do this by resetting the index, copying the column, then re-applying, but that seems more round-about. </p>
<p>I tried <code>df.index.levels[1]</code>, but that creates a sorted list, it doesn't preserve the order. </p>
<p>If it was a single index, I would use <code>df.index</code> but in a multiindex that creates a column of tuples. </p>
<p>If this is resolved elsewhere, please share as I haven't had any luck searching the stackoverflow archives.</p>
| 4 | 2016-08-19T18:51:11Z | 39,046,145 | <pre><code>df['B'] = df.index.get_level_values(level=1) # Zero based indexing.
# df['B'] = df.index.get_level_values(level='second') # This also works.
>>> df
               A      B
first second
foo   five    12   five
      four    11   four
bar   one     16    one
      five    12   five
      three   11  three
</code></pre>
| 4 | 2016-08-19T19:06:33Z | [
"python",
"pandas",
"multi-index"
] |
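The "sorted list" the question ran into is the difference between <code>index.levels</code> (the list of unique level values, not one per row) and <code>index.get_level_values</code> (the values in row order). A small sketch with throwaway data, assuming pandas is importable:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("foo", "two"), ("foo", "one"), ("bar", "two")],
    names=["first", "second"],
)
df = pd.DataFrame({"A": [12, 11, 16]}, index=idx)

# levels[1] holds the unique values of the level (two entries here)...
print(len(df.index.levels[1]))  # 2

# ...while get_level_values(1) preserves row order, one value per row.
df["B"] = df.index.get_level_values(1)
print(df["B"].tolist())  # ['two', 'one', 'two']
```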
Numpy: improve fancy indexing on arrays | 39,045,976 | <p>Looking for faster fancy indexing for numpy, the code I am running slows down, at <code>np.take()</code>. I tried <code>order=F/C</code> with <code>np.reshape()</code>, no improvement. Python <code>operator</code> works well without the double <code>transpose</code>, but with them is equal to <code>np.take().</code> </p>
<pre><code>p = np.random.randn(3500, 51)
rows = np.asarray(range(p.shape[0]))
cols = np.asarray([1,2,3,4,5,6,7,8,9,10,15,20,25,30,40,50])
%timeit p[rows][:, cols]
%timeit p.take(cols, axis = 1 )
%timeit np.asarray(operator.itemgetter(*cols)(p.T)).T
1000 loops, best of 3: 301 µs per loop
10000 loops, best of 3: 132 µs per loop
10000 loops, best of 3: 135 µs per loop
</code></pre>
| 0 | 2016-08-19T18:53:13Z | 39,046,433 | <p>A test of several options:</p>
<pre><code>In [3]: p[rows][:,cols].shape
Out[3]: (3500, 16)
In [4]: p[rows[:,None],cols].shape
Out[4]: (3500, 16)
In [5]: p[:,cols].shape
Out[5]: (3500, 16)
In [6]: p.take(cols,axis=1).shape
Out[6]: (3500, 16)
</code></pre>
<p>time tests - plain <code>p[:,cols]</code> is fastest. Use a slice where possible.</p>
<pre><code>In [7]: timeit p[rows][:,cols].shape
100 loops, best of 3: 2.78 ms per loop
In [8]: timeit p.take(cols,axis=1).shape
1000 loops, best of 3: 739 µs per loop
In [9]: timeit p[rows[:,None],cols].shape
1000 loops, best of 3: 1.43 ms per loop
In [10]: timeit p[:,cols].shape
1000 loops, best of 3: 649 µs per loop
</code></pre>
<p>I've seen <code>itemgetter</code> used for lists, but not arrays. It's a class that iterates of a set of indexes. These 2 lines are doing the same thing:</p>
<pre><code>In [23]: timeit np.asarray(operator.itemgetter(*cols)(p.T)).T.shape
1000 loops, best of 3: 738 µs per loop
In [24]: timeit np.array([p.T[c] for c in cols]).T.shape
1000 loops, best of 3: 748 µs per loop
</code></pre>
<p>Notice that <code>p.T[c]</code> is <code>p.T[c,:]</code> or <code>p[:,c].T</code>. With relatively few <code>cols</code>, and by ignoring advanced indexing with <code>rows</code>, it times close to <code>p[:,cols]</code>.</p>
| 2 | 2016-08-19T19:29:58Z | [
"python",
"numpy",
"optimization",
"slice"
] |
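As a sanity check, the candidates being timed above all return the same values; only their speed differs (a sketch, assuming NumPy is importable):

```python
import numpy as np

p = np.random.randn(3500, 51)
cols = np.array([1, 2, 3, 4, 5, 10, 50])

a = p[:, cols]                                # slice rows, fancy-index cols
b = p.take(cols, axis=1)                      # same result via np.take
c = p[np.arange(p.shape[0])[:, None], cols]   # broadcast row/col indexing

print(np.array_equal(a, b) and np.array_equal(a, c))  # True
```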
Adding up random ints as they are created | 39,045,980 | <p>I'm trying to get the computer to save its random outputs and then add them.</p>
<p>Example: </p>
<pre><code>ran_num = 1 ; ran_total = 1
ran_num = 3 ; ran_total = 4
</code></pre>
<p>And so on... </p>
<p>This is all in a <code>while</code> loop. </p>
<p>So until the <code>ran_total</code> reaches a certain number it will continue to add those numbers.</p>
<p>This is the part of the code I'm trying to fix. </p>
<pre><code>if player_bet <= Player.total_money_amount:
import random
computer_choice = random.randint(1, 5)
computer_choice_total =+computer_choice # Over Here
print('Computer choice: ', computer_choice)
print("Total Amount", computer_choice_total)
player_yes_or_no = input('Continue? Yes or No')
if player_yes_or_no == 'Yes':
</code></pre>
| -2 | 2016-08-19T18:53:26Z | 39,046,020 | <p>Your problem is most likely on this line (line 5):</p>
<pre><code>computer_choice_total =+computer_choice
</code></pre>
<p><code>=+</code> isn't doing what you want here: Python parses it as <code>= +computer_choice</code>, i.e. it assigns the value (with a unary plus) instead of adding it to the running total. Use the <code>+=</code> operator instead, like this:</p>
<pre><code>computer_choice_total += computer_choice
</code></pre>
<p>Also, you'll want to make sure your Python code is indented properly, or else it won't run as you intend it to.</p>
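<p>Put together, the accumulation pattern the question describes could look like this (the stop threshold of 20 is an arbitrary assumption for illustration):</p>

```python
import random

total = 0
while total < 20:                  # keep adding until the running total reaches 20
    roll = random.randint(1, 5)    # the computer's random choice
    total += roll                  # += accumulates; =+ would just reassign +roll
# after the loop, total is somewhere between 20 and 24
```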
| 2 | 2016-08-19T18:56:42Z | [
"python",
"random"
] |
OpenCV Identify rows in Table of variable size | 39,046,011 | <p>I have a bunch of scans of forms that all look like this:</p>
<p><a href="http://i.stack.imgur.com/PqyPq.png" rel="nofollow"><img src="http://i.stack.imgur.com/PqyPq.png" alt="enter image description here"></a></p>
<p>I'm trying to take out each row to make each on its own image (a row being the box with 10 all the way to the right of the form). I've written a function (in python) that will find all the boxes, OCR each box on its own (using tesseract), determine whether the ID label is present (in this blank form, just the 10), and use the height of the box and width of the whole table to pull out the row. </p>
<p>The problem with this process is the OCR; some of the tables are so pixelated that no text is detected at all, so that row doesn't get taken out of the table. I used the row rectangle boundaries of one form that had a good OCR result to take out the rows from all the forms, but, for some reason, some of the forms have differently sized headers, or the row height is larger or smaller than 'normal' (I've resized every table to be the same resolution). One thing that does not change is the general layout of the text windows within each row, though one form's rows might be taller or shorter relative to another table. </p>
<p>My question: how can I identify each row as a feature using one (or a set of) example(s), while accounting for the slight variation in the row position in various examples? I would appreciate any ideas you might have. </p>
<p>I'm working with Python 2.7, OpenCV 3.1.0 (on windows), and the same with scikit-image and scikit-learn on an ubuntu VM. </p>
| 0 | 2016-08-19T18:55:51Z | 39,046,108 | <p><code>HoughLines</code> helps you find the lines in the image. After that you need to filter out all the non-horizontal ones. You can use the remaining ones to find out where you need to split the image.</p>
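<p>A sketch of that filtering step, assuming <code>cv2.HoughLinesP</code> has already produced <code>(x1, y1, x2, y2)</code> segments (the segment values and the 5-pixel tolerance below are made up for illustration):</p>

```python
# hypothetical segments, in the (x1, y1, x2, y2) form HoughLinesP returns
segments = [(10, 50, 400, 52), (200, 0, 203, 300), (12, 150, 398, 149)]

def horizontal_rows(segments, tol=5):
    """Keep near-horizontal segments and return their sorted y positions."""
    ys = [(y1 + y2) // 2
          for (x1, y1, x2, y2) in segments
          if abs(y2 - y1) <= tol]   # nearly flat means horizontal
    return sorted(ys)

print(horizontal_rows(segments))  # [51, 149] -> split the image at these rows
```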
| 2 | 2016-08-19T19:03:37Z | [
"python",
"opencv",
"image-processing",
"scikit-image"
] |
PyQt - Location of the window | 39,046,059 | <pre><code>def location_on_the_screen(self):
fg = self.frameGeometry()
sbrp = QDesktopWidget().availableGeometry().bottomRight()
fg.moveBottomRight(sbrp)
self.move(fg.topLeft())
</code></pre>
<p>I can't place the window in the bottom right corner of the screen. frameGeometry() not working as it should. Help me, please, what can I do?</p>
| 0 | 2016-08-19T18:59:51Z | 39,052,495 | <p>I assume that your 'window' is a subclass of <code>QWidget</code>. If so, the following should fit your needs:</p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication, QWidget, QDesktopWidget
class MyWidget(QWidget):
def __init__(self):
super().__init__()
self.setFixedSize(400, 300)
def location_on_the_screen(self):
screen = QDesktopWidget().screenGeometry()
widget = self.geometry()
x = screen.width() - widget.width()
y = screen.height() - widget.height()
self.move(x, y)
if __name__ == '__main__':
app = QApplication(sys.argv)
widget = MyWidget()
widget.location_on_the_screen()
widget.show()
app.exec_()
</code></pre>
| 0 | 2016-08-20T09:23:18Z | [
"python",
"python-3.x",
"user-interface",
"pyqt",
"pyqt5"
] |
PyQt - Location of the window | 39,046,059 | <pre><code>def location_on_the_screen(self):
fg = self.frameGeometry()
sbrp = QDesktopWidget().availableGeometry().bottomRight()
fg.moveBottomRight(sbrp)
self.move(fg.topLeft())
</code></pre>
<p>I can't place the window in the bottom right corner of the screen. frameGeometry() not working as it should. Help me, please, what can I do?</p>
| 0 | 2016-08-19T18:59:51Z | 39,064,225 | <p>Here's a possible solution for windows:</p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication, QWidget, QDesktopWidget
class MyWidget(QWidget):
def __init__(self):
super().__init__()
self.setFixedSize(400, 300)
def location_on_the_screen(self):
ag = QDesktopWidget().availableGeometry()
sg = QDesktopWidget().screenGeometry()
widget = self.geometry()
x = ag.width() - widget.width()
y = 2 * ag.height() - sg.height() - widget.height()
self.move(x, y)
if __name__ == '__main__':
app = QApplication(sys.argv)
widget = MyWidget()
widget.location_on_the_screen()
widget.show()
app.exec_()
</code></pre>
| 0 | 2016-08-21T12:33:06Z | [
"python",
"python-3.x",
"user-interface",
"pyqt",
"pyqt5"
] |
Use scikit-image to read in an image buffer | 39,046,106 | <p>I already have an in-memory file of an image. I would like to use skimage's io.imread (or equivalent) to read in the image. However, skimage.io.imread() takes a file, not a buffer</p>
<p>example buffer:</p>
<pre><code> <Buffer ff d8 ff e0 00 10 4a 46 49 46 00 01 01 00 00 01 00 01 00 00 ff db 00 43 00 03 02 02 02 02 02 03 02 02 02 03 03 03 03 04 06 04 04 04 04 04 08 06 06 05 ... >
</code></pre>
<p>skimage.io.imread() just results in a numpy array.</p>
| 0 | 2016-08-19T19:03:31Z | 39,046,359 | <p>Try converting the buffer into StringIO and then read it using <code>skimage.io.imread()</code></p>
<pre><code>import cStringIO
img_stringIO = cStringIO.StringIO(buf)
img = skimage.io.imread(img_stringIO)
</code></pre>
<p>You can also read it as :</p>
<pre><code>from PIL import Image
img = Image.open(img_stringIO)
</code></pre>
| 2 | 2016-08-19T19:22:49Z | [
"python",
"numpy",
"io",
"buffer",
"scikit-image"
] |
Sublist to number | 39,046,121 | <p>I'm working on the solution to a fun problem I found. The code I have gives a bunch of sublists, like (1, 2, 3, 0, 0). Is there a way to turn that sublist into the number 12300 and append it to a new list, <code>perm2</code>? I would have to do this for quite a few sublists, so preferably it would be a function I could run on the whole list (i.e., it'd iterate through the list, do the conversion for each number, and append each new number to the new list, though the old list would stay exactly the same).</p>
<p>So far, I have the code</p>
<pre><code>import itertools
digits = [1,2,3,0,0]
perm = list(itertools.permutations(digits))
perm2 = []
print perm
def lst_var (lst):
for i in lst:
litem = lst[i]
#conversion takes place
perm2.append(v)
lst_var(perm)
</code></pre>
<p>But I really don't know how to do the conversion, and I can't find a solution anywhere. Any help would be appreciated.</p>
<p>Thanks!</p>
| 1 | 2016-08-19T19:04:28Z | 39,046,161 | <pre><code>digits = [1,2,3,0,0]
int(''.join([str(x) for x in digits]))
</code></pre>
| 0 | 2016-08-19T19:07:46Z | [
"python",
"list",
"function"
] |
Sublist to number | 39,046,121 | <p>I'm working on the solution to a fun problem I found. The code I have gives a bunch of sublists, like (1, 2, 3, 0, 0). Is there a way to turn that sublist into the number 12300 and append it to a new list, <code>perm2</code>? I would have to do this for quite a few sublists, so preferably it would be a function I could run on the whole list (i.e., it'd iterate through the list, do the conversion for each number, and append each new number to the new list, though the old list would stay exactly the same).</p>
<p>So far, I have the code</p>
<pre><code>import itertools
digits = [1,2,3,0,0]
perm = list(itertools.permutations(digits))
perm2 = []
print perm
def lst_var (lst):
for i in lst:
litem = lst[i]
#conversion takes place
perm2.append(v)
lst_var(perm)
</code></pre>
<p>But I really don't know how to do the conversion, and I can't find a solution anywhere. Any help would be appreciated.</p>
<p>Thanks!</p>
| 1 | 2016-08-19T19:04:28Z | 39,046,212 | <p>Here are a few different ways to solve this:</p>
<p><strong>1.</strong> <code>perm2 = [int(''.join(str(i) for i in sublist)) for sublist in perm]</code></p>
<p><strong>2.</strong> <code>perm2 = [int(''.join(map(str, sublist))) for sublist in perm]</code></p>
<p>A more <strong>performant</strong> mathematical version:</p>
<p><strong>3.</strong> <code>print [reduce(lambda x, y: 10 * x + y, sublist) for sublist in perm]</code></p>
<p><strong>4.</strong> <code>print map(lambda x: reduce(lambda x, y: 10 * x + y, x), perm)</code> </p>
<p>This method first converts the list to a string of the form <code>[1, 2, 3, 4, 5]</code> using <code>repr()</code>, then slices out just the digit characters.</p>
<p><strong>5.</strong> <code>print [int(repr(sublist)[1::3]) for sublist in perm]</code></p>
<p><strong>Sample Output:</strong></p>
<pre><code>>>> import itertools
>>> digits = [1,2,3,0,0]
>>> perm = list(itertools.permutations(digits))
>>> perm2 = [int(''.join(map(str, sublist))) for sublist in perm]
>>> print perm2
[12300, 12300, 12030, 12003, 12030, 12003, 13200, 13200, 13020, 13002, 13020, 13002, 10230, 10203, 10320, 10302, 10023, 10032, 10230, 10203, 10320, 10302, 10023, 10032, 21300, 21300, 21030, 21003, 21030, 21003, 23100, 23100, 23010, 23001, 23010, 23001, 20130, 20103, 20310, 20301, 20013, 20031, 20130, 20103, 20310, 20301, 20013, 20031, 31200, 31200, 31020, 31002, 31020, 31002, 32100, 32100, 32010, 32001, 32010, 32001, 30120, 30102, 30210, 30201, 30012, 30021, 30120, 30102, 30210, 30201, 30012, 30021, 1230, 1203, 1320, 1302, 1023, 1032, 2130, 2103, 2310, 2301, 2013, 2031, 3120, 3102, 3210, 3201, 3012, 3021, 123, 132, 213, 231, 312, 321, 1230, 1203, 1320, 1302, 1023, 1032, 2130, 2103, 2310, 2301, 2013, 2031, 3120, 3102, 3210, 3201, 3012, 3021, 123, 132, 213, 231, 312, 321]
</code></pre>
<p><strong>Some benchmarks:</strong></p>
<pre><code>from timeit import timeit
repeat = 1000000
print 'Solution 1 took ->', timeit("import itertools;[int(''.join(str(i) for i in sublist)) for sublist in list(itertools.permutations([1,2,3,0,0]))]", number=repeat), 'secs'
print 'Solution 2 took ->', timeit("import itertools;[int(''.join(map(str, sublist))) for sublist in list(itertools.permutations([1,2,3,0,0]))]", number=repeat), 'secs'
print 'Solution 3 took ->', timeit("import itertools;map(lambda x: reduce(lambda x, y: 10 * x + y, x), list(itertools.permutations([1,2,3,0,0])))", number=repeat), 'secs'
print 'Solution 4 took ->', timeit("import itertools;[reduce(lambda x, y: 10 * x + y, sublist) for sublist in list(itertools.permutations([1,2,3,0,0]))]", number=repeat), 'secs'
print 'Solution 5 took ->', timeit("import itertools;[int(repr(sublist)[1::3]) for sublist in list(itertools.permutations([1,2,3,0,0]))]", number=repeat), 'secs'
</code></pre>
<p><strong>Results (repeat = 1000000):</strong></p>
<pre><code>Solution 1 took -> 242.802856922 secs
Solution 2 took -> 153.20646596 secs
Solution 3 took -> 97.4842221737 secs
Solution 4 took -> 87.8391051292 secs
Solution 5 took -> 122.897110224 secs
</code></pre>
| 3 | 2016-08-19T19:11:25Z | [
"python",
"list",
"function"
] |
Sublist to number | 39,046,121 | <p>I'm working on the solution to a fun problem I found. The code I have gives a bunch of sublists, like (1, 2, 3, 0, 0). Is there a way to turn that sublist into the number 12300 and append it to a new list, <code>perm2</code>? I would have to do this for quite a few sublists, so preferably it would be a function I could run on the whole list (i.e., it'd iterate through the list, do the conversion for each number, and append each new number to the new list, though the old list would stay exactly the same).</p>
<p>So far, I have the code</p>
<pre><code>import itertools
digits = [1,2,3,0,0]
perm = list(itertools.permutations(digits))
perm2 = []
print perm
def lst_var (lst):
for i in lst:
litem = lst[i]
#conversion takes place
perm2.append(v)
lst_var(perm)
</code></pre>
<p>But I really don't know how to do the conversion, and I can't find a solution anywhere. Any help would be appreciated.</p>
<p>Thanks!</p>
| 1 | 2016-08-19T19:04:28Z | 39,046,230 | <p>Here is a function that creates an integer out of a list of numbers. The advantage of this function is that it does not convert the list of integers into a list of strings, which is a rather costly operation:</p>
<pre><code>def list_to_int(ls):
num = 0
for digit in ls:
num *= 10
num += digit
return num
</code></pre>
<p>Applied to your example:</p>
<pre><code>list_to_int([1,2,3,0,0])
</code></pre>
<blockquote>
<p>12300</p>
</blockquote>
<p>In order to apply it to a list of sublists, you can either use a list comprehension or, as I would personally prefer, <code>map</code>:</p>
<pre><code>sublists = [[7, 6, 6], [5, 7, 6], [9, 0, 9, 0], [8, 9, 5, 7, 8, 4]]
map(list_to_int, sublists)
</code></pre>
<blockquote>
<p>[766, 576, 9090, 895784]</p>
</blockquote>
<p>So, following that model your code would end up looking something like:</p>
<pre><code>digits = [1,2,3,0,0]
perm = map(list_to_int, itertools.permutations(digits))
</code></pre>
| 5 | 2016-08-19T19:12:44Z | [
"python",
"list",
"function"
] |
Nested for loops and data analysis across multiple data files | 39,046,193 | <p>I have written the following code and just have a few last issues that I could use some help with. Once this is polished up, I figure it could be really useful to people doing data analysis on point proximity in the future. </p>
<p>The purpose of this code is to read in two separate lists of data as individual points into a numpy array. From there, the nested for loop is intended to take point one in file1 and compare its angular separation to each point in file2, then take point two in file1 and compare it to each element in file2, and so on. </p>
<p>The code had worked beautifully for all of my test files that have only around 100 elements in each. I am pretty sure the angular separation in spherical coordinates is written properly, and I have converted the measurement to radians instead of degrees. </p>
<pre><code>import numpy as np
import math as ma
filename1 = "C:\Users\Justin\Desktop\file1.data"
data1 = np.genfromtxt(filename1,
skip_header=1,
usecols=(0, 1))
#dtype=[
#("x1", "f9"),
#("y1", "f9")])
#print "data1", data1
filename2 = "C:\Users\Justin\Desktop\file2.data"
data2 = np.genfromtxt(filename2,
skip_header=1,
usecols=(0, 1))
#dtype=[
#("x2", "f9"),
#("y2", "f9")])
#print "data2",data2
def d(a,b):
d = ma.acos(ma.sin(ma.radians(a[1]))*ma.sin(ma.radians(b[1]))
+ma.cos(ma.radians(a[1]))*ma.cos(ma.radians(b[1]))* (ma.cos(ma.radians((a[0]-b[0])))))
return d
for coor1 in data1:
for coor2 in data2:
n=0
a = [coor1[0], coor1[1]]
b = [coor2[0], coor2[1]]
#print "a", a
#print "b", b
if d(a, b) < 0.0174533: # if true what happens
n += 1
print 'True', d(a,b)
if n == 0: # if false what happens
print 'False', d(a,b)
</code></pre>
<p>Unfortunately I am now having issues with the much larger files (between 10,000-500,000 data points in each) and have narrowed it down to a few things but first here are my problems:
1.) When running, the output window states that <code>! Too much output to process</code> although plenty of results come out of it. ((Could this be a PyCharm issue?))
2.) The first line of my code is returning complete nonsense and the output changes every time without a t/f result. With more testing this seems specific to the Too much output to process issue. </p>
<p>What I think may be some potential issues that I can't seem to remedy or just don't understand:</p>
<p>1.) I have not properly defined <code>a = [coor1[0], coor1[1]] b = [coor2[0], coor2[1]]</code> or am not calling the coordinates properly. But again, this worked perfectly with other test files.</p>
<p>2.) Since I am running Windows, the .data files get corrupted from the original format that was from a Mac. I've tried renaming them to .txt and even opening them in Word, but the file just gets totally screwed up. I have been assured that this shouldn't matter, but I am still suspicious... Especially since I can't open them without corrupting the data format. </p>
<p>3.) the files are simply too big for my computer or pycharm/numpy to handle efficiently although I doubt this. </p>
<p>4.) Just to cover all bases: The possibility that my code sucks and I need to learn more? First big project here so if that's the case don't hesitate to point out anything that may be helpful.</p>
| 0 | 2016-08-19T19:10:01Z | 39,046,948 | <p>First, a suggestion: I have had great experiences with pandas.read_csv() to read large ASCII files (gigabyte size, 15 million data points). It is the fastest I've come across, close to hard-disk read speed and much faster than genfromtxt.</p>
<p>Regarding your nested loops: I think your entire double loop can be replaced. That could potentially speed it up by orders of magnitude.</p>
<pre><code>import numpy as np
import math as ma
#file import here
#define d here
#compute a 2d-array of separations using list comprehension
sep = [d(c1,c2) for c1 in data1 for c2 in data2]
sep = np.array(sep).reshape(data1.shape[0],data2.shape[0])
index = np.where(sep < 0.0174533)
n = len(index[0])
</code></pre>
<p>Also, instead of ma.sin() etc you should use np.sin() etc. All functions are there in numpy (ma.acos -> np.arccos, ma.radians -> np.deg2rad). Then you don't have to do the redefinition of each row as "a" and "b" each time.</p>
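<p>For reference, here is one possible vectorized version of d() that computes every pairwise separation at once via broadcasting (a sketch, not the OP's exact code; inputs are assumed to be (n, 2) arrays of [lon, lat] in degrees):</p>

```python
import numpy as np

def pairwise_sep(data1, data2):
    """Angular separation (radians) between every point in data1 and data2."""
    lon1, lat1 = np.deg2rad(data1[:, 0])[:, None], np.deg2rad(data1[:, 1])[:, None]
    lon2, lat2 = np.deg2rad(data2[:, 0])[None, :], np.deg2rad(data2[:, 1])[None, :]
    cosd = (np.sin(lat1) * np.sin(lat2)
            + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    return np.arccos(np.clip(cosd, -1.0, 1.0))  # clip guards against rounding

sep = pairwise_sep(np.array([[0.0, 0.0]]), np.array([[90.0, 0.0], [0.0, 0.0]]))
print(sep.shape)  # (1, 2); sep[0] is roughly [pi/2, 0]
```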
<p>Not sure if this is useful for you, depending on whether you want to execute further code in the if...else part?</p>
<p>Your "too much output" issue might simply be because of all the print statements. Have you tried commenting them out to see what happens? </p>
| 0 | 2016-08-19T20:07:36Z | [
"python",
"numpy",
"pycharm",
"nested-loops",
"data-analysis"
] |
Nested for loops and data analysis across multiple data files | 39,046,193 | <p>I have written the following code and just have a few last issues that I could use some help with. Once this is polished up, I figure it could be really useful to people doing data analysis on point proximity in the future. </p>
<p>The purpose of this code is to read in two separate lists of data as individual points into a numpy array. From there, the nested for loop is intended to take point one in file1 and compare its angular separation to each point in file2, then take point two in file1 and compare it to each element in file2, and so on. </p>
<p>The code had worked beautifully for all of my test files that have only around 100 elements in each. I am pretty sure the angular separation in spherical coordinates is written properly, and I have converted the measurement to radians instead of degrees. </p>
<pre><code>import numpy as np
import math as ma
filename1 = "C:\Users\Justin\Desktop\file1.data"
data1 = np.genfromtxt(filename1,
skip_header=1,
usecols=(0, 1))
#dtype=[
#("x1", "f9"),
#("y1", "f9")])
#print "data1", data1
filename2 = "C:\Users\Justin\Desktop\file2.data"
data2 = np.genfromtxt(filename2,
skip_header=1,
usecols=(0, 1))
#dtype=[
#("x2", "f9"),
#("y2", "f9")])
#print "data2",data2
def d(a,b):
d = ma.acos(ma.sin(ma.radians(a[1]))*ma.sin(ma.radians(b[1]))
+ma.cos(ma.radians(a[1]))*ma.cos(ma.radians(b[1]))* (ma.cos(ma.radians((a[0]-b[0])))))
return d
for coor1 in data1:
for coor2 in data2:
n=0
a = [coor1[0], coor1[1]]
b = [coor2[0], coor2[1]]
#print "a", a
#print "b", b
if d(a, b) < 0.0174533: # if true what happens
n += 1
print 'True', d(a,b)
if n == 0: # if false what happens
print 'False', d(a,b)
</code></pre>
<p>Unfortunately I am now having issues with the much larger files (between 10,000-500,000 data points in each) and have narrowed it down to a few things but first here are my problems:
1.) When running, the output window states that <code>! Too much output to process</code> although plenty of results come out of it. ((Could this be a PyCharm issue?))
2.) The first line of my code is returning complete nonsense and the output changes every time without a t/f result. With more testing this seems specific to the Too much output to process issue. </p>
<p>What I think may be some potential issues that I can't seem to remedy or just don't understand:</p>
<p>1.) I have not properly defined <code>a = [coor1[0], coor1[1]] b = [coor2[0], coor2[1]]</code> or am not calling the coordinates properly. But again, this worked perfectly with other test files.</p>
<p>2.) Since I am running Windows, the .data files get corrupted from the original format that was from a Mac. I've tried renaming them to .txt and even opening them in Word, but the file just gets totally screwed up. I have been assured that this shouldn't matter, but I am still suspicious... Especially since I can't open them without corrupting the data format. </p>
<p>3.) the files are simply too big for my computer or pycharm/numpy to handle efficiently although I doubt this. </p>
<p>4.) Just to cover all bases: The possibility that my code sucks and I need to learn more? First big project here so if that's the case don't hesitate to point out anything that may be helpful.</p>
| 0 | 2016-08-19T19:10:01Z | 39,150,299 | <p>After a little more research and getting some advice from colleagues about how to best test my code with different variables, I realized that the code I wrote is actually doing exactly what I want it to. And it seems (for now at least) everything is doing great. All I did was restrict my proximity search to a much smaller field since before it was returning 10,000 x 10,000 results while printing the distance, the coordinate points, and a T/F statement.</p>
<p>So when the search field is small enough, the code does exactly what I need it to, but as for why I was getting the <code>too much output to process</code> error, I am not sure; I will likely post another question on here trying to clarify that problem. Again, although the above code is perhaps brute-force and basic in approach, it is a pretty efficient way of analyzing proximity of points across multiple tables in a for loop. </p>
| 0 | 2016-08-25T16:22:39Z | [
"python",
"numpy",
"pycharm",
"nested-loops",
"data-analysis"
] |
Python: base case of a recursive function | 39,046,311 | <p>Currently I'm experimenting a little bit with recursive functions in Python. I've read some things on the internet about them and I have also built some simple functioning recursive functions myself. However, I'm still not sure how to use the base case.</p>
<p>I know that a well-designed recursive function satisfies the following rules:</p>
<ul>
<li>There is a base case.</li>
<li>The recursive steps work towards the base case.</li>
<li>The solutions of the subproblems provide a solution for the original problem.</li>
</ul>
<p>Now I want to come down to the question that I have: Is it allowed to make up a base case from multiple statements?</p>
<p>In other words, is the base case of the following self-written script valid?</p>
<pre><code>def checkstring(n, string):
if len(string) == 1:
if string == n:
return 1
else:
return 0
if string[-1:] == n:
return 1 + checkstring(n, string[0:len(string) - 1])
else:
return checkstring(n, string[0:len(string) - 1])
print(checkstring('l', 'hello'))
</code></pre>
| 2 | 2016-08-19T19:19:25Z | 39,046,370 | <p>Yes, of course it is: the only requirement on the base case is that it does not call the function recursively. Apart from that it can do anything it wants.</p>
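<p>For instance, the question's own base case can stay as two statements; here is a slightly condensed sketch of the OP's function to illustrate:</p>

```python
def checkstring(n, string):
    if len(string) == 1:        # base case built from several statements
        if string == n:
            return 1
        return 0
    tail = 1 if string[-1] == n else 0
    return tail + checkstring(n, string[:-1])

print(checkstring('l', 'hello'))  # 2
```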
| 2 | 2016-08-19T19:24:23Z | [
"python",
"function",
"recursion"
] |
Python: base case of a recursive function | 39,046,311 | <p>Currently I'm experimenting a little bit with recursive functions in Python. I've read some things on the internet about them and I have also built some simple functioning recursive functions myself. However, I'm still not sure how to use the base case.</p>
<p>I know that a well-designed recursive function satisfies the following rules:</p>
<ul>
<li>There is a base case.</li>
<li>The recursive steps work towards the base case.</li>
<li>The solutions of the subproblems provide a solution for the original problem.</li>
</ul>
<p>Now I want to come down to the question that I have: Is it allowed to make up a base case from multiple statements?</p>
<p>In other words, is the base case of the following self-written script valid?</p>
<pre><code>def checkstring(n, string):
if len(string) == 1:
if string == n:
return 1
else:
return 0
if string[-1:] == n:
return 1 + checkstring(n, string[0:len(string) - 1])
else:
return checkstring(n, string[0:len(string) - 1])
print(checkstring('l', 'hello'))
</code></pre>
| 2 | 2016-08-19T19:19:25Z | 39,046,388 | <p>That is an absolutely fine and valid function. Just remember that for any scenario the recursive function can be called from, there should be a base case reachable by the recursion flow. </p>
<p>For example, take a look at the following (stupid) recursive function:</p>
<pre><code>def f(n):
if n == 0:
return True
return f(n - 2)
</code></pre>
<p>This function will never reach its base case (n == 0) if it is called with an odd number, like 5. You want to avoid scenarios like that and think about all possible base cases the function can reach (in the example above, that would be 0 and 1). So you would do something like</p>
<pre><code>def f(n):
if n == 0:
return True
if n == 1:
return False
if n < 0:
return f(-n)
return f(n - 2)
</code></pre>
<p>Now, that is a correct function (with several ifs that check whether the number is even).</p>
<p>Also note that your function will be quite slow. The reason is that Python string slices are slow and take O(n) time, where n is the length of the sliced string. Thus, it is recommended to try a non-recursive solution so that you do not re-slice the string each time.</p>
<p>Also note that sometimes a function does not have a strict base case. For example, consider the following brute-force function that prints all combinations of 4 digits:</p>
<pre><code>def brute_force(a, current_digit):
if current_digit == 4:
# This means that we already chosen all 4 digits and
# we can just print the result
print a
else:
# Try to put each digit on the current_digit place and launch
# recursively
for i in range(10):
a[current_digit] = i
brute_force(a, current_digit + 1)
a = [0] * 4
brute_force(a, 0)
</code></pre>
<p>Here, the function does not return anything but just enumerates the options; the branch that prints when <code>current_digit == 4</code> is where the recursion stops, even though it returns no value.</p>
| 0 | 2016-08-19T19:25:38Z | [
"python",
"function",
"recursion"
] |
Python: base case of a recursive function | 39,046,311 | <p>Currently I'm experimenting a little bit with recursive functions in Python. I've read some things on the internet about them and I have also built some simple functioning recursive functions myself. However, I'm still not sure how to use the base case.</p>
<p>I know that a well-designed recursive function satisfies the following rules:</p>
<ul>
<li>There is a base case.</li>
<li>The recursive steps work towards the base case.</li>
<li>The solutions of the subproblems provide a solution for the original problem.</li>
</ul>
<p>Now I want to come down to the question that I have: Is it allowed to make up a base case from multiple statements?</p>
<p>In other words, is the base case of the following self-written script valid?</p>
<pre><code>def checkstring(n, string):
if len(string) == 1:
if string == n:
return 1
else:
return 0
if string[-1:] == n:
return 1 + checkstring(n, string[0:len(string) - 1])
else:
return checkstring(n, string[0:len(string) - 1])
print(checkstring('l', 'hello'))
</code></pre>
| 2 | 2016-08-19T19:19:25Z | 39,046,978 | <p>In simple terms, yes: as long as arriving at the base case does not itself require a recursive call. Everything else is allowed.</p>
| 1 | 2016-08-19T20:09:42Z | [
"python",
"function",
"recursion"
] |
How to convert sql table into a pyspark/python data structure and return back to sql in databricks notebook | 39,046,319 | <p>I am running a sql notebook on databricks. I would like to analyze a table with half a billion records in it. I can run simple sql queries on the data. However, I need to change the date column type from str to date.</p>
<p>Unfortunately, update/alter statements do not seem to be supported by sparkSQL so it seems I cannot modify the data in the table.</p>
<p><strong>What would be the one line of code that would allow me to convert the SQL table to a Python data structure (in pyspark) in the next cell?</strong>
Then I could modify the file and return it to SQL.</p>
| 0 | 2016-08-19T19:19:57Z | 39,046,382 | <pre><code>dataFrame = sqlContext.sql('select * from myTable')
</code></pre>
| 3 | 2016-08-19T19:25:11Z | [
"python",
"sql",
"apache-spark",
"databricks"
] |
python 2.7 and subprocess() not passing args correctly | 39,046,519 | <p>This works from the /bin/bash command line …</p>
<pre><code>$ /usr/bin/kafka-console-producer --topic AsIs-CalculatedMeasure --broker-list wrlmr4:9092 < /tmp/dataFile
</code></pre>
<p>'[' 0 -eq 0 ']'</p>
<p>When I invoke Python's subprocess, it chokes on my arguments. I've changed the arg order, and it always chokes on the first "--arg".</p>
<pre><code>kafkaProducer='/usr/bin/kafka-console-producer'
cmdLineArgs = []
cmdLineArgs.append(kafkaProducer)
cmdLineArgs.append("""--broker-list wrlmr4:9092""")
cmdLineArgs.append("""--topic %s""" % ('AsIs-CalculatedMeasure'))
print 'Calling subprocess(%s)'%(cmdLineArgs)
cmd = subprocess.Popen(cmdLineArgs, stdin=subprocess.PIPE)
# now write the input file to stdin ...
cmd.stdin.write(payload)
Calling subprocess(['/usr/bin/kafka-console-producer', '--broker-list wrlmr4:9092', '--topic AsIs-CalculatedMeasure'])
</code></pre>
<p>Stderr: broker-list wrlmr4:9092 is not a recognized option</p>
<p>subprocess seems to be eating the "--" from "--broker-list"... I've switched the arg order and it gives the same error (the "--" gets eaten); I also tried "--" to no avail.</p>
| 0 | 2016-08-19T19:37:10Z | 39,046,566 | <p>Either you pass one big string with all the arguments and let the shell split it for you (passing a single string requires <code>shell=True</code> on POSIX), like this:</p>
<pre><code>subprocess.Popen('/usr/bin/kafka-console-producer --broker-list wrlmr4:9092 --topic AsIs-CalculatedMeasure', shell=True, stdin=subprocess.PIPE)
</code></pre>
<p>or you split the command line properly. You passed two parameters as one list element; subprocess handed that element to the program as a single argument, and your called program failed to parse it.</p>
<p>When performing its getopt or whatever, your called program expected:</p>
<p><code>--broker-list</code> as argument n
<code>wrlmr4:9092</code> as argument n+1</p>
<p>But subprocess passed the whole element as one argument, space and all, so your called program received</p>
<p><code>--broker-list wrlmr4:9092</code> as argument n</p>
<p>and it did not like it at all :)</p>
<p>Fix your cmdLineArgs preparation like this:</p>
<pre><code>cmdLineArgs.extend(["--broker-list","wrlmr4:9092"])
cmdLineArgs.extend(["--topic","AsIs-CalculatedMeasure"])
</code></pre>
<p>I generally recommend the second approach, especially if the parameters come from a caller and may contain spaces. <code>subprocess.Popen</code> will pass each list element as a separate argument for you.</p>
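<p>One more hedged note, as a general pattern rather than anything kafka-specific: after cmd.stdin.write(payload), the child never sees end-of-input unless stdin is closed; communicate() handles the write, the close, and the wait in one call:</p>

```python
import subprocess
import sys

# Spawn a child that simply echoes its stdin back: a portable stand-in
# for the kafka producer, which also reads its payload from stdin.
child = subprocess.Popen(
    [sys.executable, '-c', 'import sys; sys.stdout.write(sys.stdin.read())'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

out, _ = child.communicate(b'line1\nline2\n')  # write payload, close stdin, wait
print(out == b'line1\nline2\n')  # True
```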
| 2 | 2016-08-19T19:39:46Z | [
"python",
"bash",
"python-2.7",
"subprocess"
] |
root.after unable to find function_name | 39,046,539 | <p>I am trying to put together a GUI that would read from a continuously updated TXT file and update every once in a while. So far I have succeeded with the first part, but I am failing to use 'root.after()' to loop the whole thing; it results in a NameError:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
class App:
def __init__(self, root):
frame = tk.Frame(root)
frame.pack()
iAutoInEN = 0
iAvailableEN = 0
self.tkAutoInEN = tk.StringVar()
self.tkAutoInEN.set(iAutoInEN)
self.tbAutoInEN = tk.Label(root, textvariable=self.tkAutoInEN)
self.tbAutoInEN.pack(side=tk.LEFT)
self.button = tk.Button(frame, text="Start", fg="red",
command=self.get_text)
self.button.pack(side=tk.LEFT)
def get_text(self):
fText = open("report.txt") #open a text file in the same folder
sContents = fText.read() #read the contents
fText.close()
# omitted working code that parses the text to lines and lines
# to items and marks them with numbers based on which they are
# allocated to a variable
if iLineCounter == 1 and iItemCounter == 3:
iAutoInEN = int(sItem)
self.tkAutoInEN.set(iAutoInEN)
root.after(1000,root,get_text(self))
app = App(root)
root.mainloop()
try:
root.destroy() # optional; see description below
except:
pass
</code></pre>
<p>The first instance runs without any problems and updates the value from 0 to the number in the TXT file, but it is accompanied by an error </p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\...\Python35\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:/.../pythonlab/GUI3.py", line 117, in get_text
self.after(1000,root,get_text())
NameError: name 'get_text' is not defined
</code></pre>
<p><strong>EDIT:</strong>
When changed to the recommended "self.after(1000,self.get_text)"</p>
<pre><code>class App:
...
def get_text(self):
fText = open("report.txt") #open a text file in the same folder
sContents = fText.read() #read the contents
fText.close()
# omitted code
if iLineCounter == 1 and iItemCounter == 3:
iAutoInEN = int(sItem)
self.tkAutoInEN.set(iAutoInEN)
self.after(1000,self.get_text)
</code></pre>
<p>Error changes</p>
<pre><code>Traceback (most recent call last):
File "C:/.../pythonlab/GUI3.py", line 6, in <module>
class App:
File "C:/.../pythonlab/GUI3.py", line 117, in App
self.after(1000, self.get_text)
NameError: name 'self' is not defined
</code></pre>
<p>Also please consider this is my very first programme (not only) in Python, so I would appreciate if you are a little bit more explicit with your answers (e.g. when pointing out an indentation error, please refer to an exact line of code).</p>
| 0 | 2016-08-19T19:38:16Z | 39,046,653 | <p>Because <code>get_text</code> is a method of the <code>App</code> class, you should call it as <code>self.get_text</code>.</p>
<p><code>after</code> is a tkinter method, so in this case you call it as <code>root.after</code>. <code>self</code> refers to the instance of the class you are in; because <code>get_text</code> is a method of the current class, you should reference it via <code>self</code>, much like <code>this</code> in other programming languages such as Java.</p>
<pre><code>...
root.after(1000, self.get_text)
...
</code></pre>
| 2 | 2016-08-19T19:46:11Z | [
"python",
"tkinter"
] |
root.after unable to find function_name | 39,046,539 | <p>I am trying to put together a GUI that reads from a continuously updated TXT file and refreshes every once in a while. So far I have succeeded with the first part, but I am failing to use 'root.after()' to loop the whole thing; it results in a NameError:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
class App:
def __init__(self, root):
frame = tk.Frame(root)
frame.pack()
iAutoInEN = 0
iAvailableEN = 0
self.tkAutoInEN = tk.StringVar()
self.tkAutoInEN.set(iAutoInEN)
self.tbAutoInEN = tk.Label(root, textvariable=self.tkAutoInEN)
self.tbAutoInEN.pack(side=tk.LEFT)
self.button = tk.Button(frame, text="Start", fg="red",
command=self.get_text)
self.button.pack(side=tk.LEFT)
def get_text(self):
fText = open("report.txt") #open a text file in the same folder
sContents = fText.read() #read the contents
fText.close()
# omitted working code that parses the text to lines and lines
# to items and marks them with numbers based on which they are
# allocated to a variable
if iLineCounter == 1 and iItemCounter == 3:
iAutoInEN = int(sItem)
self.tkAutoInEN.set(iAutoInEN)
root.after(1000,root,get_text(self))
app = App(root)
root.mainloop()
try:
root.destroy() # optional; see description below
except:
pass
</code></pre>
<p>The first instance runs without any problems and updates the value from 0 to the number in the TXT file, but it is accompanied by an error:</p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\...\Python35\lib\tkinter\__init__.py", line 1549, in __call__
return self.func(*args)
File "C:/.../pythonlab/GUI3.py", line 117, in get_text
self.after(1000,root,get_text())
NameError: name 'get_text' is not defined
</code></pre>
<p><strong>EDIT:</strong>
When changed to the recommended "self.after(1000,self.get_text)"</p>
<pre><code>class App:
...
def get_text(self):
fText = open("report.txt") #open a text file in the same folder
sContents = fText.read() #read the contents
fText.close()
# omitted code
if iLineCounter == 1 and iItemCounter == 3:
iAutoInEN = int(sItem)
self.tkAutoInEN.set(iAutoInEN)
self.after(1000,self.get_text)
</code></pre>
<p>Error changes</p>
<pre><code>Traceback (most recent call last):
File "C:/.../pythonlab/GUI3.py", line 6, in <module>
class App:
File "C:/.../pythonlab/GUI3.py", line 117, in App
self.after(1000, self.get_text)
NameError: name 'self' is not defined
</code></pre>
<p>Also please consider this is my very first programme (not only) in Python, so I would appreciate if you are a little bit more explicit with your answers (e.g. when pointing out an indentation error, please refer to an exact line of code).</p>
| 0 | 2016-08-19T19:38:16Z | 39,046,689 | <p>Firstly, like James commented, you should fix your indentation so that the functions are a part of the class.</p>
<p>Then, change this line </p>
<pre><code>root.after(1000,root,get_text(self))
</code></pre>
<p>to this</p>
<pre><code>root.after(1000, self.get_text)
</code></pre>
<p>Check out the answer to the following question, which uses the code I just gave you:
<a href="http://stackoverflow.com/questions/9342757/tkinter-executing-functions-over-time">Tkinter, executing functions over time</a></p>
| 0 | 2016-08-19T19:48:08Z | [
"python",
"tkinter"
] |
Remove html tag and string in between in Python | 39,046,564 | <p>I'm pretty new to regular expressions. Basically, I would like to use a regular expression to remove <code><sup> ... </sup></code> (including the string in between) from the string.</p>
<p>Input:</p>
<pre><code><b>something here</b><sup>1</sup><sup>,3</sup>, another here<sup>1</sup>
</code></pre>
<p>Output:</p>
<pre><code><b>something here</b>, another here
</code></pre>
<p>Is there a short way to do it, along with an explanation of how it works?</p>
<p><strong>note</strong> This question might be a duplicate. I tried but couldn't find a solution.</p>
| 0 | 2016-08-19T19:39:44Z | 39,046,697 | <p>You could do something like this:</p>
<pre><code>import re
s = "<b>something here</b><sup>1</sup><sup>,3</sup>, another here<sup>1</sup>"
s2 = re.sub(r'<sup>(.*?)</sup>',"", s)
print s2
# Prints: <b>something here</b>, another here
</code></pre>
<p>Remember to use <code>(.*?)</code>, as <code>(.*)</code> is what they call a greedy quantifier and you would obtain a different result:</p>
<pre><code>s2 = re.sub(r'<sup>(.*)</sup>',"", s)
print s2
# Prints: <b>something here</b>
</code></pre>
| 1 | 2016-08-19T19:48:43Z | [
"python",
"regex"
] |
Remove html tag and string in between in Python | 39,046,564 | <p>I'm pretty new to regular expressions. Basically, I would like to use a regular expression to remove <code><sup> ... </sup></code> (including the string in between) from the string.</p>
<p>Input:</p>
<pre><code><b>something here</b><sup>1</sup><sup>,3</sup>, another here<sup>1</sup>
</code></pre>
<p>Output:</p>
<pre><code><b>something here</b>, another here
</code></pre>
<p>Is there a short way to do it, along with an explanation of how it works?</p>
<p><strong>note</strong> This question might be a duplicate. I tried but couldn't find a solution.</p>
| 0 | 2016-08-19T19:39:44Z | 39,046,757 | <p>The hard part is knowing how to do a minimal rather than maximal match of the stuff between the tags. This works.</p>
<pre><code>import re
s0 = "<b>something here</b><sup>1</sup><sup>,3</sup>, another here<sup>1</sup>"
prog = re.compile('<sup>.*?</sup>')
s1 = re.sub(prog, '', s0)
print(s1)
# <b>something here</b>, another here
</code></pre>
| 1 | 2016-08-19T19:52:47Z | [
"python",
"regex"
] |
How to compare requirement file and actually installed Python modules? | 39,046,603 | <p>Given <code>requirements.txt</code> and a virtualenv environment, what is the best way to check <strong>from a script</strong> whether requirements are met and possibly provide details in case of mismatch?</p>
<p>Pip changes its internal API with major releases, so I have seen advice not to use its <code>parse_requirements</code> method.</p>
<p>There is the <code>pkg_resources.require(dependencies)</code> route, but then how do I parse the requirements file with all its fanciness, like github links, etc.?</p>
<p>This should be something pretty simple, but can't find any pointers.</p>
<p>UPDATE: programmatic solution is needed.</p>
| 0 | 2016-08-19T19:42:39Z | 39,047,472 | <p>You can save your virtualenv's currently installed packages with <code>pip freeze</code> to a file, say current.txt:</p>
<pre><code>pip freeze > current.txt
</code></pre>
<p>Then you can compare this to requirements.txt with difflib using a script like <a href="http://stackoverflow.com/a/15864963/5434629">this</a>:</p>
<pre><code>import difflib
req = open('requirements.txt')
current = open('current.txt')
diff = difflib.ndiff(req.readlines(), current.readlines())
delta = ''.join([x for x in diff if x.startswith('-')])
print(delta)
</code></pre>
<p>This should display only the packages that are in 'requirements.txt' that aren't in 'current.txt'.</p>
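<p>An order-insensitive sketch of the same idea (the package specs below are made up; in practice they would come from the two files): comparing the lines as sets avoids false positives when the files list the same packages in a different order.</p>

```python
# Hypothetical file contents; in practice these come from
# requirements.txt and the output of `pip freeze`.
required = {"requests==2.9.1", "six==1.10.0"}
installed = {"six==1.10.0", "pip==8.1.2"}

# Specs that are required but not currently installed.
missing = required - installed
print(sorted(missing))
```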
| 0 | 2016-08-19T20:51:26Z | [
"python",
"python-2.7",
"pip",
"requirements"
] |
receiving PNG file in django | 39,046,624 | <p>I have a falcon server that I am trying to port to django. One of the falcon endpoints processes a request that contains a PNG file sent with <code>content_type = 'application/octet-stream'</code>. It writes the data to a file maintaining the correct PNG structure. </p>
<p>The falcon code does this:</p>
<pre><code>form = cgi.FieldStorage(fp=req.stream, environ=req.env)
</code></pre>
<p>and then writes the png like this:</p>
<pre><code>fd.write(form[key].file.read())
</code></pre>
<p>I cannot figure out how to do the same thing in django. When my view is called the data in <code>request.POST[key]</code> has already been decoded to unicode text and it's no longer valid png data. </p>
<p>How can I do this with django? Should/can I use <code>cgi.FieldStorage</code>? The request I get (of type <code>django.core.handlers.wsgi.WSGIRequest</code>) does not have a stream method. I'm sure there's some way to do this, but I have not come up with anything googling. </p>
| 0 | 2016-08-19T19:44:15Z | 39,084,828 | <p>I solved this by changing the client to set the file and filename fields for each part of the multipart, and then I was able to iterate through request.FILES and successfully write the files as PNG.</p>
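<p>A hedged sketch of the Django-side pattern described above (the classes here are hand-rolled stand-ins for Django's request and <code>UploadedFile</code> objects, since the answer only outlines the approach):</p>

```python
class FakeUpload:
    """Stand-in for Django's UploadedFile (illustration only)."""
    def __init__(self, name, data):
        self.name = name
        self._data = data

    def chunks(self):
        # Django yields the raw bytes in chunks; one chunk suffices here.
        yield self._data


class FakeRequest:
    """Stand-in for a Django request carrying one uploaded PNG."""
    FILES = {"image": FakeUpload("pic.png", b"\x89PNG\r\n\x1a\n")}


def save_uploads(request, store):
    # Iterate request.FILES and write each file's bytes out unmodified,
    # which preserves the PNG structure.
    for field_name, uploaded in request.FILES.items():
        store[uploaded.name] = b"".join(uploaded.chunks())


store = {}
save_uploads(FakeRequest(), store)
print(store["pic.png"][:8])
```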
| 0 | 2016-08-22T16:51:21Z | [
"python",
"django",
"png",
"wsgi",
"falconframework"
] |
iterate over a list of lists with another list in python | 39,046,676 | <p>I am wondering if I can iterate over a list of lists with another list in python:</p>
<p>let's say </p>
<pre><code>lst_a = [x,y,z]
lst_b = [[a,b,c,d],[e,f,g,h],[i,j,k,l]]
</code></pre>
<p>where len(lst_a) = len(lst_b) </p>
<p>I am wondering how to get a new list like below: </p>
<pre><code>lst_c = [[x/a, x/b, x/c, x/d],[y/e, y/f, y/g, y/h],[z/i, z/j, z/k, z/l]]
</code></pre>
<p>thanks a lot! </p>
<p>tim</p>
| 2 | 2016-08-19T19:47:35Z | 39,046,730 | <p>You can use a nested list comprehension</p>
<pre><code>>>> lst_a = [1,2,3]
>>> lst_b = [[1,2,3,4],[2,3,4,5],[3,4,5,6]]
>>> lst_c = [[i/b for b in j] for i,j in zip(lst_a, lst_b)]
>>> lst_c
[[1.0, 0.5, 0.3333333333333333, 0.25], [1.0, 0.6666666666666666, 0.5, 0.4], [1.0, 0.75, 0.6, 0.5]]
</code></pre>
| 6 | 2016-08-19T19:51:14Z | [
"python",
"list"
] |
iterate over a list of lists with another list in python | 39,046,676 | <p>I am wondering if I can iterate over a list of lists with another list in python:</p>
<p>let's say </p>
<pre><code>lst_a = [x,y,z]
lst_b = [[a,b,c,d],[e,f,g,h],[i,j,k,l]]
</code></pre>
<p>where len(lst_a) = len(lst_b) </p>
<p>I am wondering how to get a new list like below: </p>
<pre><code>lst_c = [[x/a, x/b, x/c, x/d],[y/e, y/f, y/g, y/h],[z/i, z/j, z/k, z/l]]
</code></pre>
<p>thanks a lot! </p>
<p>tim</p>
| 2 | 2016-08-19T19:47:35Z | 39,048,467 | <pre><code>lst_a = [x,y,z]
lst_b = [[a,b,c,d],[e,f,g,h],[i,j,k,l]]
lst_c = []
for i in range(len(lst_a)):
lst_c.append([lst_a[i]/lst_b[i][0]])
for j in range(1, len(lst_b[i])):
lst_c[i].append(lst_a[i]/lst_b[i][j])
</code></pre>
| 0 | 2016-08-19T22:25:09Z | [
"python",
"list"
] |
iterate over a list of lists with another list in python | 39,046,676 | <p>I am wondering if I can iterate over a list of lists with another list in python:</p>
<p>let's say </p>
<pre><code>lst_a = [x,y,z]
lst_b = [[a,b,c,d],[e,f,g,h],[i,j,k,l]]
</code></pre>
<p>where len(lst_a) = len(lst_b) </p>
<p>I am wondering how to get a new list like below: </p>
<pre><code>lst_c = [[x/a, x/b, x/c, x/d],[y/e, y/f, y/g, y/h],[z/i, z/j, z/k, z/l]]
</code></pre>
<p>thanks a lot! </p>
<p>tim</p>
| 2 | 2016-08-19T19:47:35Z | 39,050,094 | <p>You can do this using numpy.</p>
<pre><code>import numpy
lst_a = [x,y,z]
lst_b = [[a,b,c,d],[e,f,g,h],[i,j,k,l]]
new_list = []
for i, item in enumerate(lst_a):
new_list_i = float(lst_a[i])/numpy.array(lst_b[i])
new_list.append(new_list_i.tolist())
print new_list
</code></pre>
| 0 | 2016-08-20T03:13:25Z | [
"python",
"list"
] |
How can I make the docstring in PyCharm as helpful as the one in the Jupyter Notebook? | 39,046,688 | <p><strong>Jupyter Notebook</strong>
Documentation upon typing the command and "()" and pressing Shift+Tab in the Jupyter Notebook (a nice docstring with all parameters explained and examples shows up):
<a href="http://i.stack.imgur.com/obrIU.png" rel="nofollow"><img src="http://i.stack.imgur.com/obrIU.png" alt="enter image description here"></a></p>
<p><strong>PyCharm</strong>
Documentation upon entering the command and pressing Ctrl+Q in PyCharm (only an autogenerated docstring with the inferred variable type shows up):
<a href="http://i.stack.imgur.com/30DtT.png" rel="nofollow"><img src="http://i.stack.imgur.com/30DtT.png" alt="enter image description here"></a></p>
<p><strong>Edit</strong>
This question deals with displaying the documentation of external libraries (e.g. matplotlib's or numpy's docstrings), not with how to write your own beautiful docstring.</p>
| 2 | 2016-08-19T19:48:05Z | 39,047,154 | <p>Do you mean "How to write the docstring of my own code.."?</p>
<p>Because, if you use <a href="http://www.sphinx-doc.org/en/stable/rest.html" rel="nofollow">Sphinx/reStructuredText</a> syntax, you can have beautiful documentation.</p>
<p>Here is a basic example:</p>
<pre><code>def axhline_demo(y=0, xmin=0, xmax=1, hold=None, **kwargs):
"""
Add a horizontal line across the axis.
Parameters
----------
:param y: scalar, optional, default: 0
y position in data coordinates of the horizontal line.
:param xmin: scalar, optional, default: 0
etc.
:param xmax: more documentation...
:param hold:
:param kwargs:
:return:
"""
</code></pre>
<p>You'll get:</p>
<p><a href="http://i.stack.imgur.com/Oo9VO.png" rel="nofollow"><img src="http://i.stack.imgur.com/Oo9VO.png" alt="enter image description here"></a></p>
<p>Use the Menu <strong>View</strong> => <strong>Quick Documentation</strong>.</p>
| 1 | 2016-08-19T20:25:45Z | [
"python",
"pycharm",
"jupyter-notebook"
] |
Pass array of function pointers via SWIG | 39,046,704 | <p>With the help of <a href="http://stackoverflow.com/a/22965961/353337">http://stackoverflow.com/a/22965961/353337</a>, I was able to create a simple example of how to pass one function pointer into a function via Python. Specifically, with</p>
<pre><code>double f(double x) {
return x*x;
}
double myfun(double (*f)(double x)) {
fprintf(stdout, "%g\n", f(2.0));
return -1.0;
}
</code></pre>
<pre><code>%module test
%{
#include "test.hpp"
%}
%pythoncallback;
double f(double);
%nopythoncallback;
%ignore f;
%include "test.hpp"
</code></pre>
<p>I can call</p>
<pre><code>import test
test.f(13)
test.myfun(test.f)
</code></pre>
<p>and get the expected results.</p>
<p>Now, I would like to change the signature of <code>myfun</code> to allow for an <em>array</em> of function pointers (all with the same signature), e.g.,</p>
<pre><code>double myfun(std::vector<double (*)(double)>)
</code></pre>
<p>How do I adapt the <code>.i</code> file?</p>
<p>Ideally, the Python call would be via a list</p>
<pre><code>test.myfun([test.f, test.g])
</code></pre>
| 4 | 2016-08-19T19:49:10Z | 39,057,342 | <p>I made the following test case to illustrate what you're trying to do. It has a real implementation of <code>myfun(const std::vector<double(*)(double)>&)</code> to make life a little more interesting:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <vector>
double g(double x) {
return -x;
}
double f(double x) {
return x*x;
}
typedef double(*pfn_t)(double);
std::vector<double> myfun(const std::vector<pfn_t>& funs, const double d) {
std::vector<double> ret;
ret.reserve(funs.size());
for(auto && fn : funs)
ret.emplace_back(fn(d));
return ret;
}
</code></pre>
<p>I expected that all we'd need to do to make this work is use:</p>
<pre><code>%include <std_vector.i>
%template(FunVec) std::vector<double(*)(double)>;
%template(DoubleVec) std::vector<double>;
%include "test.h"
</code></pre>
<p>However SWIG 3.0 (from Debian stable) doesn't handle this <code>FunVec</code> correctly and the resulting module doesn't compile. So I added a typemap as a workaround:</p>
<pre class="lang-cpp prettyprint-override"><code>%module test
%{
#include "test.h"
%}
%pythoncallback;
double f(double);
double g(double);
%nopythoncallback;
%ignore f;
%ignore g;
%typemap(in) const std::vector<pfn_t>& (std::vector<pfn_t> tmp) {
// Adapted from: https://docs.python.org/2/c-api/iter.html
PyObject *iterator = PyObject_GetIter($input);
PyObject *item;
if (iterator == NULL) {
assert(iterator);
SWIG_fail; // Do this properly
}
while ((item = PyIter_Next(iterator))) {
pfn_t f;
const int res = SWIG_ConvertFunctionPtr(item, (void**)(&f), $descriptor(double(*)(double)));
if (!SWIG_IsOK(res)) {
assert(false);
SWIG_exception_fail(SWIG_ArgError(res), "in method '" "foobar" "', argument " "1"" of type '" "pfn_t""'");
}
Py_DECREF(item);
tmp.push_back(f);
}
Py_DECREF(iterator);
$1 = &tmp;
}
%include <std_vector.i>
// Doesn't work:
//%template(FunVec) std::vector<double(*)(double)>;
%template(DoubleVec) std::vector<double>;
%include "test.h"
</code></pre>
<p>Basically all this does is add one 'in' typemap for the vector of function pointer types. That typemap just iterates over the input given from Python and builds a temporary <code>std::vector</code> from a Python iterable.</p>
<p>This is sufficient for the following Python to work as expected:</p>
<pre class="lang-py prettyprint-override"><code>import test
print test.g
print test.f
print test.g(666)
print test.f(666)
print test.myfun([test.g,test.f],123)
</code></pre>
| 1 | 2016-08-20T18:17:54Z | [
"python",
"c++",
"swig"
] |
Column order in pandas.concat | 39,046,931 | <p>I do as below:</p>
<pre><code>data1 = pd.DataFrame({ 'b' : [1, 1, 1], 'a' : [2, 2, 2]})
data2 = pd.DataFrame({ 'b' : [1, 1, 1], 'a' : [2, 2, 2]})
frames = [data1, data2]
data = pd.concat(frames)
data
a b
0 2 1
1 2 1
2 2 1
0 2 1
1 2 1
2 2 1
</code></pre>
<p>The data columns come out in alphabetical order. Why is it so, and how can I keep the original order?</p>
| 3 | 2016-08-19T20:06:44Z | 39,047,071 | <p>You are creating DataFrames out of dictionaries. Dictionaries are unordered, which means the keys do not have a specific order. So</p>
<pre><code>d1 = {'key_a': 'val_a', 'key_b': 'val_b'}
</code></pre>
<p>and </p>
<pre><code>d2 = {'key_b': 'val_b', 'key_a': 'val_a'}
</code></pre>
<p>are the same.</p>
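<p>A quick check in plain Python (no pandas needed) confirms that point:</p>

```python
d1 = {'key_a': 'val_a', 'key_b': 'val_b'}
d2 = {'key_b': 'val_b', 'key_a': 'val_a'}

# Dictionary equality ignores the order in which the keys were written.
print(d1 == d2)
```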
<p>In addition to that, pandas appears to sort the dictionary's keys alphabetically by default (unfortunately I did not find any hint in the docs to prove that assumption), leading to the behavior you encountered.</p>
<p>So the basic motivation would be to resort / reorder the columns in your DataFrame. You can do this <a href="http://stackoverflow.com/a/13148611/3991125">as follows</a>:</p>
<pre><code>import pandas as pd
data1 = pd.DataFrame({ 'b' : [1, 1, 1], 'a' : [2, 2, 2]})
data2 = pd.DataFrame({ 'b' : [1, 1, 1], 'a' : [2, 2, 2]})
frames = [data1, data2]
data = pd.concat(frames)
print(data)
cols = ['b' , 'a']
data = data[cols]
print(data)
</code></pre>
| 5 | 2016-08-19T20:18:50Z | [
"python",
"pandas",
"concat"
] |
Column order in pandas.concat | 39,046,931 | <p>I do as below:</p>
<pre><code>data1 = pd.DataFrame({ 'b' : [1, 1, 1], 'a' : [2, 2, 2]})
data2 = pd.DataFrame({ 'b' : [1, 1, 1], 'a' : [2, 2, 2]})
frames = [data1, data2]
data = pd.concat(frames)
data
a b
0 2 1
1 2 1
2 2 1
0 2 1
1 2 1
2 2 1
</code></pre>
<p>The data columns come out in alphabetical order. Why is it so, and how can I keep the original order?</p>
| 3 | 2016-08-19T20:06:44Z | 39,047,188 | <p>You can create the original DataFrames with OrderedDicts</p>
<pre><code>from collections import OrderedDict
odict = OrderedDict()
odict['b'] = [1, 1, 1]
odict['a'] = [2, 2, 2]
data1 = pd.DataFrame(odict)
data2 = pd.DataFrame(odict)
frames = [data1, data2]
data = pd.concat(frames)
data
b a
0 1 2
1 1 2
2 1 2
0 1 2
1 1 2
2 1 2
</code></pre>
| 1 | 2016-08-19T20:28:50Z | [
"python",
"pandas",
"concat"
] |
Synchronizing socket programming python | 39,047,029 | <p>I have a client-server application consisting of three rounds. In each round the client sends a file to the server, the server computes something and sends it back to the client, and the client then uses the received message to prepare the message for the next round, etc.</p>
<p>The application sometimes works smoothly, sometimes not. I guess the problem is some sort of lack of synchronization between the rounds. For example, before the client sends the message for the second round, the server has already started its second round, which creates problems.</p>
<p>I do not use any module for networking apart from sockets and ThreadedTCPHandler. How can I make my application wait for the other network entity to send its message before proceeding, without creating deadlocks?</p>
| -2 | 2016-08-19T20:14:11Z | 39,047,207 | <p>Have a look at <a href="http://zeromq.org/" rel="nofollow">ZeroMQ</a> and its Python client <a href="https://pyzmq.readthedocs.io/en/latest/" rel="nofollow">pyzmq</a>. It provides a somewhat easier way to write client/server or distributed applications.</p>
| 0 | 2016-08-19T20:30:17Z | [
"python",
"python-sockets"
] |
Python AttributeError: 'str' object has no attribute 'get_price' | 39,047,058 | <p>I am trying to take user input from main.py and then tailor the output to that input. Not only am I getting this error, but it also seems that my runAnalytics function runs when I start main.py, not when I hit the button that is supposed to trigger it.</p>
<p>main.py</p>
<pre><code>import runAnalytics
import tkinter
import os
import centerWindow
loadApplication = tkinter.Tk()
loadApplication.title("Stock Analytics")
loadApplication.geometry("1080x720")
label1 = tkinter.Label(loadApplication, text = "Ticker")
input1 = tkinter.Entry(loadApplication)
loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run(input1))
centerWindow.center(loadApplication)
loadAnalytics.pack()
label1.pack()
input1.pack()
loadApplication.mainloop()
</code></pre>
<p>runAnalytics.py</p>
<pre><code>from yahoo_finance import Share
import tkinter
import os
import centerWindow
def run(input1):
ticker = Share(input1)
loadAnalytics = tkinter.Tk()
loadAnalytics.title("$" + ticker + " Data")
loadAnalytics.geometry("1080x720")
print ("Price per share: " + ticker.get_price())
ticker.refresh()
print ("Price per share: " + ticker.get_price())
print("The dividend yield is: " + ticker.get_dividend_yield())
print("The 52 week low is: " + ticker.get_year_low())
print("The 52 week high is: " + ticker.get_year_high())
print("The volume is: " + ticker.get_volume())
print("The previous close was: " + ticker.get_prev_close())
print("The previous open was: " + ticker.get_open())
loadAnalytics.mainloop()
</code></pre>
<p>Error message:</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:\Users\MyName\Documents\Python Projects\MarketData\main.py", line 13, in
loadAnalytics = tkinter.Button(loadApplication, text = "Load Analytics", command = runAnalytics.run(input1))
File "C:\Users\MyName\Documents\Python Projects\MarketData\runAnalytics.py", line 12, in run
print ("Price per share: " + ticker.get_price())
AttributeError: 'str' object has no attribute 'get_price'</p>
</blockquote>
| 0 | 2016-08-19T20:17:18Z | 39,047,247 | <p>Your assumption that <code>runAnalytics</code> is running is correct since the function is executed when binding it to the button the way you did.</p>
<p>According to the <a href="http://effbot.org/zone/tkinter-callbacks.htm" rel="nofollow">effbot docs</a> you need to use a <code>lambda</code> function in order to bind a function with passed arguments to a button like this:</p>
<pre><code>import tkinter
def test_func(val):
print(type(val))
print(val)
share_id = val.get()
print(share_id)
loadApplication = tkinter.Tk()
loadApplication.title("Stock Analytics")
loadApplication.geometry("1080x720")
label1 = tkinter.Label(loadApplication, text = "Ticker")
input1 = tkinter.Entry(loadApplication)
loadAnalytics = tkinter.Button(loadApplication, text="Load Analytics", command=lambda: test_func(input1))
loadAnalytics.pack()
label1.pack()
input1.pack()
loadApplication.mainloop()
</code></pre>
<p>However, there is a second thing to keep in mind:</p>
<pre><code>input1 = tkinter.Entry(loadApplication)
</code></pre>
<p>creates an <code>Entry</code> widget called <code>input1</code> which is then passed to the function. The thing is that <code>input1</code> does not contain the string you typed into the entry widget but a reference to the widget (widget ID). In order to get the widget's content you need to call its <code>.get()</code> method as shown in my code snippet.</p>
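<p>The immediate-call-versus-deferred-call distinction can be seen without any GUI at all; in this sketch the names are stand-ins for the tkinter pieces:</p>

```python
calls = []

def get_text(widget):
    # Stand-in for App.get_text; `widget` plays the role of the
    # Entry reference passed in from the GUI.
    calls.append(widget)

# command=get_text("entry") would CALL the function right here, at
# button-creation time, and bind its return value. Wrapping it in a
# lambda defers the call until the lambda itself is invoked:
command = lambda: get_text("entry")

assert calls == []   # nothing has run yet
command()            # simulates pressing the button
print(calls)
```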
| 2 | 2016-08-19T20:33:42Z | [
"python",
"string",
"tkinter",
"python-3.5",
"attributeerror"
] |
Reorder Python argparse argument groups | 39,047,075 | <p>I'm using <code>argparse</code> and I have a custom argument group <code>required arguments</code>. Is there any way to change the order of the argument groups in the help message? I think it is more logical to have the required arguments before optional arguments, but haven't found any documentation or questions to help.</p>
<p>For example, changing this: </p>
<pre><code>usage: foo.py [-h] -i INPUT [-o OUTPUT]
Foo
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name
required arguments:
-i INPUT, --input INPUT
Input file name
</code></pre>
<p>to this: </p>
<pre><code>usage: foo.py [-h] -i INPUT [-o OUTPUT]
Foo
required arguments:
-i INPUT, --input INPUT
Input file name
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name
</code></pre>
<p>(example taken from <a href="http://stackoverflow.com/questions/24180527/argparse-required-arguments-listed-under-optional-arguments">this question</a>)</p>
| 2 | 2016-08-19T20:19:00Z | 39,047,348 | <p>You might consider adding an explicit optional arguments group:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser(description='Foo', add_help=False)
required = parser.add_argument_group('required arguments')
required.add_argument('-i', '--input', help='Input file name', required=True)
optional = parser.add_argument_group('optional arguments')
optional.add_argument("-h", "--help", action="help", help="show this help message and exit")
optional.add_argument('-o', '--output', help='Output file name', default='stdout')
parser.parse_args(['-h'])
</code></pre>
<p>You can move the help action to your optional group as
described here:
<a href="http://stackoverflow.com/questions/13075241/move-help-to-a-different-argument-group-in-python-argparse">Move "help" to a different Argument Group in python argparse</a></p>
<p>As you can see, the code produces the required output:</p>
<pre><code>usage: code.py -i INPUT [-h] [-o OUTPUT]
Foo
required arguments:
-i INPUT, --input INPUT
Input file name
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name
</code></pre>
| 3 | 2016-08-19T20:39:55Z | [
"python",
"argparse"
] |
Reorder Python argparse argument groups | 39,047,075 | <p>I'm using <code>argparse</code> and I have a custom argument group <code>required arguments</code>. Is there any way to change the order of the argument groups in the help message? I think it is more logical to have the required arguments before optional arguments, but haven't found any documentation or questions to help.</p>
<p>For example, changing this: </p>
<pre><code>usage: foo.py [-h] -i INPUT [-o OUTPUT]
Foo
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name
required arguments:
-i INPUT, --input INPUT
Input file name
</code></pre>
<p>to this: </p>
<pre><code>usage: foo.py [-h] -i INPUT [-o OUTPUT]
Foo
required arguments:
-i INPUT, --input INPUT
Input file name
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name
</code></pre>
<p>(example taken from <a href="http://stackoverflow.com/questions/24180527/argparse-required-arguments-listed-under-optional-arguments">this question</a>)</p>
| 2 | 2016-08-19T20:19:00Z | 39,048,205 | <p>The parser starts out with 2 argument groups, the usual <code>positional</code> and <code>optionals</code>. The <code>-h</code> help is added to <code>optionals</code>. When you do <code>add_argument_group</code>, a group is created (and returned to you). It is also appended to the <code>parser._action_groups</code> list.</p>
<p>When you ask for help (<code>-h</code>) <code>parser.format_help()</code> is called (you can do that as well in testing). Look for that method in <code>argparse.py</code>. That sets up the help message, and one step is:</p>
<pre><code> # positionals, optionals and user-defined groups
for action_group in self._action_groups:
formatter.start_section(action_group.title)
formatter.add_text(action_group.description)
formatter.add_arguments(action_group._group_actions)
formatter.end_section()
</code></pre>
<p>So if we reorder the items in the <code>parser._action_groups</code> list, we will reorder the groups in the display. Since this is the only use of <code>_action_groups</code>, it should be safe and easy. But some people aren't allowed to peek under the covers (look at or change <code>._</code> attributes).</p>
<p>The proposed solution(s) is to make your own groups in the order you want to see them, and make sure that the default groups are empty (the <code>add_help=False</code> parameter). That's the only way to do this if you stick with the public API. </p>
<p>Demo:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument('foo')
g1 = parser.add_argument_group('REQUIRED')
g1.add_argument('--bar', required=True)
g1.add_argument('baz', nargs=2)
print(parser._action_groups)
print([group.title for group in parser._action_groups])
print(parser.format_help())
parser._action_groups.reverse() # easy inplace change
parser.print_help()
</code></pre>
<p>Run result:</p>
<pre><code>1504:~/mypy$ python stack39047075.py
</code></pre>
<p><code>_actions_group</code> list and titles:</p>
<pre><code>[<argparse._ArgumentGroup object at 0xb7247fac>,
<argparse._ArgumentGroup object at 0xb7247f6c>,
<argparse._ArgumentGroup object at 0xb721de0c>]
['positional arguments', 'optional arguments', 'REQUIRED']
</code></pre>
<p>default help:</p>
<pre><code>usage: stack39047075.py [-h] --bar BAR foo baz baz
positional arguments:
foo
optional arguments:
-h, --help show this help message and exit
REQUIRED:
--bar BAR
baz
</code></pre>
<p>after reverse:</p>
<pre><code>usage: stack39047075.py [-h] --bar BAR foo baz baz
REQUIRED:
--bar BAR
baz
optional arguments:
-h, --help show this help message and exit
positional arguments:
foo
1504:~/mypy$
</code></pre>
<p>Another way to implement this is to define an <code>ArgumentParser</code> subclass with a new <code>format_help</code> method. In that method, reorder the list used in that <code>for action_group...</code> loop.</p>
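<p>A sketch of that idea (it leans on the same private <code>_action_groups</code> attribute, so it is just as implementation-dependent as the in-place <code>reverse()</code>): move the user-defined groups ahead of the two default groups before delegating to the stock formatter.</p>

```python
import argparse

class ReorderedParser(argparse.ArgumentParser):
    def format_help(self):
        groups = self._action_groups
        # Default positionals/optionals are the first two entries;
        # put the user-defined groups in front of them.
        self._action_groups = groups[2:] + groups[:2]
        try:
            return super().format_help()
        finally:
            self._action_groups = groups  # restore the original order

parser = ReorderedParser(prog='demo')
required = parser.add_argument_group('REQUIRED')
required.add_argument('--bar', required=True)

help_text = parser.format_help()
print(help_text)
```

The REQUIRED section is then rendered before the default optionals section that holds <code>-h, --help</code>.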
| 1 | 2016-08-19T21:56:39Z | [
"python",
"argparse"
] |
Reorder Python argparse argument groups | 39,047,075 | <p>I'm using <code>argparse</code> and I have a custom argument group <code>required arguments</code>. Is there any way to change the order of the argument groups in the help message? I think it is more logical to have the required arguments before optional arguments, but haven't found any documentation or questions to help.</p>
<p>For example, changing this: </p>
<pre><code>usage: foo.py [-h] -i INPUT [-o OUTPUT]
Foo
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name
required arguments:
-i INPUT, --input INPUT
Input file name
</code></pre>
<p>to this: </p>
<pre><code>usage: foo.py [-h] -i INPUT [-o OUTPUT]
Foo
required arguments:
-i INPUT, --input INPUT
Input file name
optional arguments:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name
</code></pre>
<p>(example taken from <a href="http://stackoverflow.com/questions/24180527/argparse-required-arguments-listed-under-optional-arguments">this question</a>)</p>
| 2 | 2016-08-19T20:19:00Z | 39,867,709 | <p>This is admittedly a hack, and is reliant on the changeable internal implementation, but after adding the arguments, you can simply do:</p>
<pre><code>parser._action_groups.reverse()
</code></pre>
<p>This will effectively make the required arguments group display above the optional arguments group. Note that this answer is only meant to be descriptive, not prescriptive.</p>
<hr>
<p>Credit: <a href="http://stackoverflow.com/a/39048205/832230">answer by hpaulj</a></p>
| 0 | 2016-10-05T07:19:38Z | [
"python",
"argparse"
] |