title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
python pandas dtypes detection from sql | 39,298,989 | <p>I am quite troubled by the dtype detection behaviour of the Pandas DataFrame.</p>
<p>I use 'read_sql_query' to retrieve data from a database to build a DataFrame, and then dump it into a csv file.</p>
<p>I don't need any transformation. Just dump it into a csv file and change date fields in the form : <strong>'%d/%m/%Y'</strong></p>
<p>However :</p>
<pre><code>self.dataframe.to_csv(self.fic,
                      index=False,
                      header=False,
                      sep='|',
                      mode='a',
                      encoding='utf-8',
                      line_terminator='\n',
                      date_format='%d/%m/%Y'
                      )
</code></pre>
<p>This fails to transform/format some date fields...</p>
<p>I tried to do it another way :</p>
<pre><code>l = list(self.dataframe.select_dtypes(include=['datetime64']).columns)
for i in l:
    self.dataframe[i] = self.dataframe[i].dt.strftime('%d/%m/%Y')
</code></pre>
<p>I was about to be satisfied, but some more tests showed a weird behaviour :</p>
<p><strong>if my sql request only selects two rows :</strong></p>
<pre><code>requete = 'select * from DOMMAGE_INTERET where doi_id in (176433, 181564)'
</code></pre>
<p>Everything works, whatever the formatting in the csv or in the DataFrame.</p>
<p>It detects date fields properly :</p>
<pre><code>df.dtypes
doi_id int64
aff_id int64
pdo_id int64
doi_date_decision datetime64[ns]
doi_date_mod datetime64[ns]
doi_montant float64
doi_reste_a_payer object
doi_appliquer_taux int64
doi_date_update datetime64[ns]
afg_id int64
dtype: object
</code></pre>
<p><strong>But when using a different selection :</strong></p>
<pre><code>requete = 'select * from DOMMAGE_INTERET where rownum < 100'
</code></pre>
<p>It fails again. And actually, field types are detected differently:</p>
<pre><code>doi_id int64
aff_id int64
pdo_id int64
doi_date_decision object
doi_date_mod datetime64[ns]
doi_montant float64
doi_reste_a_payer object
doi_appliquer_taux int64
doi_date_update datetime64[ns]
afg_id int64
dtype: object
</code></pre>
<p>As you can see : <strong>'doi_date_decision' type does change depending on the request selection</strong> but, of course, this is the same set of data !!!</p>
<p>Isn't it weird?</p>
<p>Do you have an explanation for this behaviour?</p>
| 0 | 2016-09-02T18:50:06Z | 39,353,121 | <p>Thanks to Boud and Parfait. Their answers are right : </p>
<p>All my tests show that missing date fields can make dtype detection fail.</p>
<p>read_sql_query() has a parameter to declare which fields have a date type, presumably to cure this problem.</p>
<p>Sadly, until now I have been using a completely generic treatment to process a hundred tables.
Using the 'read_sql_query' parameter 'parse_dates' would imply prior metadata-definition work (like a JSON file describing each table).</p>
<p>Actually, I also found out that integers are changed to floats when there is a NaN in the column...</p>
<p>If I were reading flat CSV files, I could understand that data types can be hard to detect... but from a database (read_sql_query)! Pandas has SQLAlchemy as a dependency. And SQLAlchemy (and even any lower-level Python database driver (cx_Oracle, DB API)) has a reflection mechanism to detect data types. Pandas could have used that metadata to preserve data type integrity.</p>
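For completeness, a minimal runnable sketch of the <code>parse_dates</code> idea (not from the original answer; the table and column names are made up): a column containing NULLs keeps a datetime dtype when it is declared explicitly:

```python
import sqlite3
import pandas as pd

# Hypothetical table: one date value and one NULL in the same column
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER, d TEXT)')
conn.executemany('INSERT INTO t VALUES (?, ?)',
                 [(1, '2016-09-02'), (2, None)])

# parse_dates forces the column to datetime64[ns]; the NULL becomes NaT
df = pd.read_sql_query('SELECT * FROM t', conn, parse_dates=['d'])
print(df.dtypes['d'])
```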
| 0 | 2016-09-06T15:51:17Z | [
"python",
"sql",
"csv",
"pandas",
"dataframe"
] |
scraping css values using scrapy framework | 39,299,005 | <p>Is there a way to scrape CSS values while scraping using the Python Scrapy framework or by using PHP scraping?
Any help will be appreciated</p>
| -2 | 2016-09-02T18:50:46Z | 39,299,392 | <p>scrapy.Selector allows you to use xpath to extract properties of HTML elements including CSS.</p>
<p>e.g. <a href="https://github.com/okfde/odm-datenerfassung/blob/master/crawl/dirbot/spiders/data.py#L83" rel="nofollow">https://github.com/okfde/odm-datenerfassung/blob/master/crawl/dirbot/spiders/data.py#L83</a></p>
<p>(look around that code for how it fits into an entire scrapy spider)</p>
<p>If you don't need web crawling and just html parsing, you can use xpath directly from lxml in python. Another example:</p>
<p><a href="https://github.com/codeformunich/feinstaubbot/blob/master/feinstaubbot.py" rel="nofollow">https://github.com/codeformunich/feinstaubbot/blob/master/feinstaubbot.py</a></p>
<p>Finally, to get at the css from xpath I only know how to do it via css=element.attrib['style'] - this gives you everything inside of the style attribute which you further split by e.g. css.split(';') and then each of those by ':'.</p>
<p>It wouldn't surprise me if someone has a better suggestion. A little knowledge is enough to do a lot of scraping and that's how I would approach it based on previous projects.</p>
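To illustrate the splitting described above, a small self-contained sketch (the <code>style</code> string here is a hypothetical value of <code>element.attrib['style']</code>; the element lookup itself would come from lxml or Scrapy):

```python
# Split a style attribute string into a property -> value dict
style = "color: red; font-size: 12px"
css = {}
for rule in style.split(';'):
    if ':' in rule:
        prop, value = rule.split(':', 1)
        css[prop.strip()] = value.strip()
print(css)  # {'color': 'red', 'font-size': '12px'}
```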
| 0 | 2016-09-02T19:18:22Z | [
"python",
"scrapy"
] |
scraping css values using scrapy framework | 39,299,005 | <p>Is there a way to scrape CSS values while scraping using the Python Scrapy framework or by using PHP scraping?
Any help will be appreciated</p>
| -2 | 2016-09-02T18:50:46Z | 39,299,996 | <p>Yes, please check the documentation for <a href="http://doc.scrapy.org/en/latest/topics/selectors.html" rel="nofollow">selectors</a>; basically you have two methods: <code>response.xpath()</code> for <a href="https://www.w3.org/TR/xpath" rel="nofollow">xpath</a> and <code>response.css()</code> for <a href="https://www.w3.org/TR/selectors" rel="nofollow">css</a> selectors. For example, to get a title's text you could do any of the following:</p>
<pre><code>response.xpath('//title/text()').extract_first()
response.css('title::text').extract_first()
</code></pre>
| 0 | 2016-09-02T20:06:55Z | [
"python",
"scrapy"
] |
SQLite multiple conditions in WHERE LIKE | 39,299,072 | <p>Is there any way to have multiple conditions in a WHERE x LIKE %x% statement without using OR?</p>
<p>Basically I want to be able to select something where column1 LIKE %something% AND column2 LIKE %check1% OR %check2% OR %check3%</p>
<p>However, using OR drops my previous check on column1, but I need that to stay in place</p>
<p>Here is what I'm currently using.. but I'm wondering if there is a better way of doing this so I don't have to keep repeating column1</p>
<pre><code>SELECT id FROM test WHERE
column1 LIKE '%bob%' AND column2 LIKE '%something%'
OR column1 LIKE '%bob%' AND column2 LIKE '%somethingdifferent%'
OR column1 LIKE '%bob%' AND column2 LIKE '%somethingdifferent2%'
</code></pre>
<p>Basically.. right now I keep having to repeat</p>
<pre><code>column1 LIKE %bob%' AND .........
</code></pre>
<p>Just wondering if there is a better way to achieve this?</p>
| 1 | 2016-09-02T18:55:31Z | 39,299,123 | <p>What about:</p>
<pre><code>SELECT id FROM test WHERE
column1 LIKE '%bob%' AND
(column2 LIKE '%something%' OR
column2 LIKE '%somethingdifferent%' OR
column2 LIKE '%somethingdifferent2%')
</code></pre>
<p>It's logically equivalent...</p>
<p>You can also use a RegEx: see <a href="http://stackoverflow.com/questions/5071601/how-do-i-use-regex-in-a-sqlite-query">How do I use regex in a SQLite query?</a></p>
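A self-contained sketch of the grouped query using Python's built-in sqlite3 module (the sample rows are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE test (id INTEGER, column1 TEXT, column2 TEXT)')
conn.executemany('INSERT INTO test VALUES (?, ?, ?)',
                 [(1, 'bob smith', 'something here'),
                  (2, 'bob smith', 'unrelated'),
                  (3, 'alice', 'something here')])

# Parentheses around the ORs keep the column1 condition in force for every branch
query = """SELECT id FROM test WHERE
           column1 LIKE '%bob%' AND
           (column2 LIKE '%something%' OR
            column2 LIKE '%somethingdifferent%')"""
ids = [row[0] for row in conn.execute(query)]
print(ids)  # [1]
```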
| 3 | 2016-09-02T18:59:42Z | [
"python",
"database",
"sqlite"
] |
Iterate and assign value - Python - Numpy | 39,299,187 | <p>I'm a newbie in Python.</p>
<p>I'm trying to do something like this. Iterate an array, compare the value with a constant and assign values to another array.</p>
<p><img src="http://i.stack.imgur.com/r0Sqk.jpg" alt="What I'm trying to do"></p>
<p>Thanks in advance!</p>
<p>Regards</p>
<p>Eduardo</p>
| 0 | 2016-09-02T19:03:52Z | 39,299,213 | <pre><code>a1 = numpy.array(range(10))
a2 = numpy.array(range(15,25))
print a2[a1==5]
print a2[a1 >= 8]
print a2[a1 < 5]
</code></pre>
<p>etc...</p>
| 0 | 2016-09-02T19:05:47Z | [
"python",
"arrays",
"loops",
"numpy"
] |
Iterate and assign value - Python - Numpy | 39,299,187 | <p>I'm a newbie in Python.</p>
<p>I'm trying to do something like this. Iterate an array, compare the value with a constant and assign values to another array.</p>
<p><img src="http://i.stack.imgur.com/r0Sqk.jpg" alt="What I'm trying to do"></p>
<p>Thanks in advance!</p>
<p>Regards</p>
<p>Eduardo</p>
| 0 | 2016-09-02T19:03:52Z | 39,299,308 | <pre><code>// A is the original list
// constant - f(y)
newList = list()
for i in A:
if i < constant: newList.append(i)
else: newList.append(constant)
</code></pre>
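A vectorized NumPy sketch of the same idea, assuming the goal is to cap values at the constant as in the loop above (the sample data is made up):

```python
import numpy as np

A = np.array([1, 7, 3, 9, 5])
constant = 5
# np.minimum caps every element at `constant`, matching the loop above
new_list = np.minimum(A, constant)
print(new_list)  # [1 5 3 5 5]
```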
| -1 | 2016-09-02T19:13:02Z | [
"python",
"arrays",
"loops",
"numpy"
] |
Iterate and assign value - Python - Numpy | 39,299,187 | <p>I'm a newbie in Python.</p>
<p>I'm trying to do something like this. Iterate an array, compare the value with a constant and assign values to another array.</p>
<p><img src="http://i.stack.imgur.com/r0Sqk.jpg" alt="What I'm trying to do"></p>
<p>Thanks in advance!</p>
<p>Regards</p>
<p>Eduardo</p>
| 0 | 2016-09-02T19:03:52Z | 39,299,545 | <p>The question needs to be more specific BTW look how the element of a numpy <code>array</code> are accessed and modified:</p>
<pre><code>>>> # generating a random numpy array
... np_array = numpy.random.randint(0,100,10)
>>> np_array
45: array([22, 71, 40, 83, 33, 52, 29, 31, 77, 87])
>>> # Replacing 26 with 30
... np_array[np_array == 26] = 30
>>> np_array
46: array([22, 71, 40, 83, 33, 52, 29, 31, 77, 87])
>>> # multiplying all the numbers less than 50 by 10
... np_array[np_array < 50] *= 10
>>> np_array
47: array([220, 71, 400, 83, 330, 52, 290, 310, 77, 87])
</code></pre>
<p><a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html" rel="nofollow">http://docs.scipy.org/doc/numpy/user/basics.indexing.html</a> </p>
| 0 | 2016-09-02T19:31:30Z | [
"python",
"arrays",
"loops",
"numpy"
] |
django:adding more than one value in model field | 39,299,190 | <p>I want to create an app for classrooms in which there is only one teacher and there can be more than one student. And a student can be in more than one class. I want to store the usernames of the students in one classroom. Is there any model field I can use to store the usernames of students?
My code so far:</p>
<p>models.py </p>
<pre><code>from django.db import models
from django.conf import settings

# Create your models here.
class classroom(models.Model):
    teacher = models.ForeignKey(settings.AUTH_USER_MODEL, default=1)
    students = models.CharField(max_length=120)
</code></pre>
| 0 | 2016-09-02T19:04:05Z | 39,299,242 | <p>Create a Student model and add a many-to-many key to a ClassRoom instance. In that way you are telling "<strong>A student may have several classrooms and a classroom may have many students</strong>"</p>
<pre><code>class ClassRoom(models.Model):
    teacher = models.ForeignKey(settings.AUTH_USER_MODEL, default=1)

class Student(models.Model):
    name = models.CharField(max_length=120)
    classroom = models.ManyToManyField(ClassRoom, related_name="students")
</code></pre>
<p>So, if you have a <code>classroom</code> instance and want to get all its students, notice the related_name attribute in the classroom field. </p>
<pre><code>classroom.students.all()
</code></pre>
<p>If you don't add <code>related_name</code> attribute, you will need to use <code>_set</code> notation</p>
<pre><code>classroom.student_set.all()
</code></pre>
<p>If you have a student instance and want to get his classrooms (a many-to-many field, so it returns a manager)</p>
<pre><code>student.classroom.all()
</code></pre>
<p>TIP: For class model naming, use CamelCase notation. </p>
| 1 | 2016-09-02T19:08:11Z | [
"python",
"django",
"django-models",
"django-forms",
"django-views"
] |
django:adding more than one value in model field | 39,299,190 | <p>I want to create an app for classrooms in which there is only one teacher and there can be more than one student. And a student can be in more than one class. I want to store the usernames of the students in one classroom. Is there any model field I can use to store the usernames of students?
My code so far:</p>
<p>models.py </p>
<pre><code>from django.db import models
from django.conf import settings

# Create your models here.
class classroom(models.Model):
    teacher = models.ForeignKey(settings.AUTH_USER_MODEL, default=1)
    students = models.CharField(max_length=120)
</code></pre>
| 0 | 2016-09-02T19:04:05Z | 39,299,267 | <p>You should create a separate model for storing students, but even more than that - you should really start with the Django tutorial on their official site. </p>
<p>But your problem is solved accordingly: </p>
<pre><code>from django.db import models
from django.conf import settings

class ClassRoom(models.Model):
    teacher = models.ForeignKey(settings.AUTH_USER_MODEL, default=1)

class Student(models.Model):
    name = models.CharField(max_length=128)
    class_room = models.ForeignKey(ClassRoom)
</code></pre>
<p>This way you can add as many students as you like. This is the standard practice, but if you really want to just save the names and NOTHING else (and are using PostgreSQL), you can also use the ArrayField for that. But check it out in the documentation yourself.</p>
| 1 | 2016-09-02T19:09:52Z | [
"python",
"django",
"django-models",
"django-forms",
"django-views"
] |
How to configure a package in PyPI to install only with pip3 | 39,299,320 | <p>I distributed my package written in Python 3 on PyPI. It can be installed by both <code>pip2</code> and <code>pip3</code>. How can I configure the package to only be available in Python 3; i.e. to install only with <code>pip3</code>?</p>
<p>I've already added these classifiers in <code>setup.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>classifiers=[
    ...
    # Supported Python versions.
    'Programming Language :: Python :: 3',
    'Programming Language :: Python :: 3.4',
    'Programming Language :: Python :: 3.5',
    ...
]
</code></pre>
<p>But it still can be installed by <code>pip2</code>.</p>
| 1 | 2016-09-02T19:13:47Z | 39,299,963 | <p>I'm not sure if such an option exists. What you could do though is manually enforce it by checking that the Python version the package is installed under meets the minimum you want to dictate:</p>
<pre><code>from sys import version_info

class NotSupportedException(BaseException): pass

if version_info.major < 3:
    raise NotSupportedException("Only Python 3.x Supported")
</code></pre>
<p>While this won't stop it being reached from <code>pip2</code>, it should stop any users trying to use an old version of Python </p>
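Worth adding, although it is not part of the original answer: newer setuptools (>= 24.2) and pip (>= 9.0) support declaring the supported interpreter versions in <code>setup.py</code>, so pip itself refuses to install under Python 2; a hypothetical fragment:

```python
# Hypothetical setup.py fragment; the package name is made up.
from setuptools import setup

setup(
    name='mypackage',
    version='1.0',
    python_requires='>=3.4',
)
```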
| 2 | 2016-09-02T20:04:09Z | [
"python",
"python-3.x",
"pypi"
] |
Is there a reliable way to get the path of the caller module from a Python function that is executed within a Sphinx conf.py? | 39,299,411 | <p>I'm running some custom Python code in Sphinx and <strong>need to get the path to the caller module</strong>. (Essentially this is the caller's <code>__file__</code> object; I need to interpret a filename relative to this location.)</p>
<p>I can get the filename from <code>inspect.stack()</code> as per <a href="http://stackoverflow.com/questions/3711184">How to use inspect to get the caller's info from callee in Python?</a>, but apparently I need to interpret this filename in the context of the Python startup directory. (Sometimes <code>inspect.stack()[k][1]</code> is an absolute path but sometimes it is a relative path like <code>conf.py</code>; the <a href="https://docs.python.org/2/library/inspect.html#the-interpreter-stack" rel="nofollow"><code>inspect.stack()</code></a> function doesn't seem to document this but <a href="http://stackoverflow.com/a/3711243/44330">unutbu claims in a comment</a> that it is relative to the Python startup directory. )</p>
<p>Sphinx does some unintentionally evil things like this comment:</p>
<pre><code># This file is execfile()d with the current directory set to its
# containing dir.
</code></pre>
<p>so <code>os.path.abspath(filename)</code> doesn't work, and</p>
<pre><code># If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('extensions'))
</code></pre>
<p>so <code>sys.path[0]</code> is corrupted by the time my code gets to it.</p>
<p>How do I find the startup directory in Python, if <code>sys.path</code> has been modified?</p>
<p>Or is there another way to get the path to the caller module?</p>
<hr>
<p>If I run <a href="http://stackoverflow.com/a/39299596/44330">Jean-François Fabre's answer</a></p>
<pre><code>for file,line,w1,w2 in traceback.extract_stack():
    sys.stdout.write(' File "{}", line {}, in {}\n'.format(file,line,w1))
</code></pre>
<p>I get this:</p>
<pre><code>File "c:\app\python\anaconda\1.6.0\Scripts\sphinx-build-script.py", line 5, in <module>
File "c:\app\python\anaconda\1.6.0\lib\site-packages\Sphinx-1.4.1-py2.7.egg\sphinx\__init__.py", line 51, in main
File "c:\app\python\anaconda\1.6.0\lib\site-packages\Sphinx-1.4.1-py2.7.egg\sphinx\__init__.py", line 92, in build_main
File "c:\app\python\anaconda\1.6.0\lib\site-packages\Sphinx-1.4.1-py2.7.egg\sphinx\cmdline.py", line 243, in main
File "c:\app\python\anaconda\1.6.0\lib\site-packages\Sphinx-1.4.1-py2.7.egg\sphinx\application.py", line 155, in __init__
File "conf.py", line 512, in setup
[more lines elided, the conf.py is the one that matters]
</code></pre>
<p>so the problem is that I need to find the path to <code>conf.py</code> but the current directory has been changed by Sphinx so I can't just do <code>os.path.abspath(caller_filename)</code></p>
| -1 | 2016-09-02T19:19:46Z | 39,299,596 | <p>You can get what you want using the <code>traceback</code> module. I've written this sample code in PyScripter:</p>
<pre><code>import traceback,sys

def demo():
    for file,line,w1,w2 in traceback.extract_stack():
        sys.stdout.write(' File "{}", line {}, in {}\n'.format(file,line,w1))

def foo():
    demo()

foo()
</code></pre>
<p>which gives on my Windows PC running PyScripter:</p>
<pre><code> File "C:\Users\dartypc\AppData\Roaming\PyScripter\remserver.py", line 63, in <module>
File "C:\Users\dartypc\AppData\Roaming\PyScripter\remserver.py", line 60, in main
File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\utils\server.py", line 227, in start
File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\utils\server.py", line 139, in accept
File "C:\Users\dartypc\AppData\Roaming\PyScripter\remserver.py", line 14, in _accept_method
File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\utils\server.py", line 191, in _serve_client
File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 391, in serve_all
File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 382, in serve
File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 350, in _dispatch
File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 298, in _dispatch_request
File "C:\Program Files\PyScripter\Lib\rpyc.zip\rpyc\core\protocol.py", line 528, in _handle_call
File "<string>", line 420, in run_nodebug
File "C:\DATA\jff\data\python\stackoverflow\simple_traceback.py", line 10, in <module>
File "C:\DATA\jff\data\python\stackoverflow\simple_traceback.py", line 8, in foo
File "C:\DATA\jff\data\python\stackoverflow\simple_traceback.py", line 4, in demo
</code></pre>
| 3 | 2016-09-02T19:35:52Z | [
"python",
"python-sphinx"
] |
Is there a reliable way to get the path of the caller module from a Python function that is executed within a Sphinx conf.py? | 39,299,411 | <p>I'm running some custom Python code in Sphinx and <strong>need to get the path to the caller module</strong>. (Essentially this is the caller's <code>__file__</code> object; I need to interpret a filename relative to this location.)</p>
<p>I can get the filename from <code>inspect.stack()</code> as per <a href="http://stackoverflow.com/questions/3711184">How to use inspect to get the caller's info from callee in Python?</a>, but apparently I need to interpret this filename in the context of the Python startup directory. (Sometimes <code>inspect.stack()[k][1]</code> is an absolute path but sometimes it is a relative path like <code>conf.py</code>; the <a href="https://docs.python.org/2/library/inspect.html#the-interpreter-stack" rel="nofollow"><code>inspect.stack()</code></a> function doesn't seem to document this but <a href="http://stackoverflow.com/a/3711243/44330">unutbu claims in a comment</a> that it is relative to the Python startup directory. )</p>
<p>Sphinx does some unintentionally evil things like this comment:</p>
<pre><code># This file is execfile()d with the current directory set to its
# containing dir.
</code></pre>
<p>so <code>os.path.abspath(filename)</code> doesn't work, and</p>
<pre><code># If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('extensions'))
</code></pre>
<p>so <code>sys.path[0]</code> is corrupted by the time my code gets to it.</p>
<p>How do I find the startup directory in Python, if <code>sys.path</code> has been modified?</p>
<p>Or is there another way to get the path to the caller module?</p>
<hr>
<p>If I run <a href="http://stackoverflow.com/a/39299596/44330">Jean-François Fabre's answer</a></p>
<pre><code>for file,line,w1,w2 in traceback.extract_stack():
    sys.stdout.write(' File "{}", line {}, in {}\n'.format(file,line,w1))
</code></pre>
<p>I get this:</p>
<pre><code>File "c:\app\python\anaconda\1.6.0\Scripts\sphinx-build-script.py", line 5, in <module>
File "c:\app\python\anaconda\1.6.0\lib\site-packages\Sphinx-1.4.1-py2.7.egg\sphinx\__init__.py", line 51, in main
File "c:\app\python\anaconda\1.6.0\lib\site-packages\Sphinx-1.4.1-py2.7.egg\sphinx\__init__.py", line 92, in build_main
File "c:\app\python\anaconda\1.6.0\lib\site-packages\Sphinx-1.4.1-py2.7.egg\sphinx\cmdline.py", line 243, in main
File "c:\app\python\anaconda\1.6.0\lib\site-packages\Sphinx-1.4.1-py2.7.egg\sphinx\application.py", line 155, in __init__
File "conf.py", line 512, in setup
[more lines elided, the conf.py is the one that matters]
</code></pre>
<p>so the problem is that I need to find the path to <code>conf.py</code> but the current directory has been changed by Sphinx so I can't just do <code>os.path.abspath(caller_filename)</code></p>
| -1 | 2016-09-02T19:19:46Z | 39,299,892 | <p>Bah, I'm just going to get around the issue by allowing callers to pass in their <code>__file__</code> value :-(</p>
<p>my function:</p>
<pre><code>def do_something(app, filename, relroot=None):
    if relroot is None:
        relroot = '.'
    else:
        relroot = os.path.dirname(relroot)
    path = os.path.join(relroot, filename)
    ...
</code></pre>
<p>in conf.py:</p>
<pre><code>def setup(app):
    mymodule.do_something(app, 'path/to/file', relroot=__file__)
</code></pre>
| 0 | 2016-09-02T19:58:33Z | [
"python",
"python-sphinx"
] |
coloring cells in excel with pandas | 39,299,509 | <p>I need some help here. So I have something like this </p>
<pre><code>import pandas as pd
path = '/Users/arronteb/Desktop/excel/ejemplo.xlsx'
xlsx = pd.ExcelFile(path)
df = pd.read_excel(xlsx,'Sheet1')
df['is_duplicated'] = df.duplicated('#CSR')
df_nodup = df.loc[df['is_duplicated'] == False]
df_nodup.to_excel('ejemplo.xlsx', encoding='utf-8')
</code></pre>
<p>So basically this program loads the <code>ejemplo.xlsx</code> (ejemplo is example in Spanish, just the name of the file) into <code>df</code> (a <code>DataFrame</code>), then checks for duplicate values in a specific column. It deletes the duplicates and saves the file again. That part works correctly. The problem is that instead of removing duplicates, I need to highlight the cells containing them with a different color, like yellow.</p>
| 1 | 2016-09-02T19:28:08Z | 39,299,584 | <p>You can create a function to do the highlighting...</p>
<pre><code>def highlight_cells(s):
    # return one style string per cell in the column; here, highlight
    # the cells that were flagged as duplicates
    return ['background-color: yellow' if v else '' for v in s]
</code></pre>
<p>And then apply your highlighting function to your dataframe...</p>
<pre><code>df.style.apply(highlight_cells, subset=['is_duplicated'])
</code></pre>
| 2 | 2016-09-02T19:34:35Z | [
"python",
"excel",
"pandas",
"duplicates",
"highlight"
] |
coloring cells in excel with pandas | 39,299,509 | <p>I need some help here. So I have something like this </p>
<pre><code>import pandas as pd
path = '/Users/arronteb/Desktop/excel/ejemplo.xlsx'
xlsx = pd.ExcelFile(path)
df = pd.read_excel(xlsx,'Sheet1')
df['is_duplicated'] = df.duplicated('#CSR')
df_nodup = df.loc[df['is_duplicated'] == False]
df_nodup.to_excel('ejemplo.xlsx', encoding='utf-8')
</code></pre>
<p>So basically this program loads the <code>ejemplo.xlsx</code> (ejemplo is example in Spanish, just the name of the file) into <code>df</code> (a <code>DataFrame</code>), then checks for duplicate values in a specific column. It deletes the duplicates and saves the file again. That part works correctly. The problem is that instead of removing duplicates, I need to highlight the cells containing them with a different color, like yellow.</p>
| 1 | 2016-09-02T19:28:08Z | 39,299,648 | <p>I just had this same problem and solved it this week. My problem was getting the imports right so that the code I found online worked properly.</p>
<p>I am going to assume you mean changing the background color, not the font color. If I am wrong, clarify your request.</p>
<p>My solution is tied to a particular library. openpyxl</p>
<pre><code>#### This import section is where my mistake was at
#### This works for me
import openpyxl ### Excel files
from openpyxl.styles import PatternFill, Border, Side, Alignment, Protection, Font
from openpyxl.styles import Fill, Color
from openpyxl.styles import Style
from openpyxl.styles.colors import RED
from openpyxl.styles.colors import GREEN
str_xls_PathFileCurrent = str_xls_FileName
### Opens Excel Document
var_xls_FileOpen = openpyxl.load_workbook(str_xls_PathFileCurrent)
### Opens up the Excel worksheet
var_xls_TabName = var_xls_FileOpen.worksheets[0]
### Put the spreadsheet tab names into an array
ary_xls_SheetNames = var_xls_FileOpen.get_sheet_names()
### Open the sheet in the file you working on
var_xls_TabSheet = var_xls_FileOpen.get_sheet_by_name(ary_xls_SheetNames[0])
xls_cell = var_xls_TabSheet['d10']
#### Changes the cell background color
xls_cell.style = Style(fill=PatternFill(patternType='solid',
                                        fgColor=Color('C4C4C4'))) ### Changes background color
#### Changes the fonts (does not use style)
xls_cell.font = xls_cell.font.copy(color = 'FFFF0000') ### Works (Changes to red font text)
xls_cell.font = xls_cell.font.copy(bold = True) ### Works (Changes to bold font)
xls_cell.font = xls_cell.font.copy(italic= True) ### Works (Changes to Italic Text)
xls_cell.font = xls_cell.font.copy(size = 34) ### Works (Changes Size)
</code></pre>
| 1 | 2016-09-02T19:39:55Z | [
"python",
"excel",
"pandas",
"duplicates",
"highlight"
] |
Can't access a file in another directory using python object | 39,299,593 | <p>I wanted to implement a classification application using Python 2, and before classification is done the text should be preprocessed. The classifier and preprocessor are in different packages. So I created an object of the <code>preprocessing</code> class in a class in the classification package.</p>
<p>here is my project explorer</p>
<p><a href="http://i.stack.imgur.com/sWSOE.png" rel="nofollow"><img src="http://i.stack.imgur.com/sWSOE.png" alt="enter image description here"></a></p>
<pre><code>preprocessing class
</code></pre>
<blockquote>
<p>class preprocessing:</p>
<pre><code>def preprocess(self, file):
    inputFile = "text"
    outputFile = "plainText.txt"
    # infile = io.open(inputFile, "r", encoding='utf-8').read()
    outfile = io.open(outputFile, "w", encoding='utf-8')
    text = unnecessaryCharsObj.removeChars(file)
    text = stopWrdsObj.removeStopwords(text)
    text = text.lower()
    plain = text.split()
    stemmObj.stemminig(plain)
    for x in plain:
        outfile.write(x)
        outfile.write(u'\u0020')
    # plaintext = " ".join(str(x) for x in plain)
    # outfile.write(plaintext)
    return outfile
    outfile.close()
</code></pre>
</blockquote>
<p>The preprocessing class object created in the classifier package:</p>
<blockquote>
<pre><code>def classify(self):
    dao = DAO();
    procc = preprocessing();
    # Get IDs of uncatergerized news to uncatNewsList
    uncatNewsList = dao.selectUncategerizedNews();
    for news in uncatNewsList:
        description = dao.getDescriptionById(news[0])
        wf = io.open('news.txt', 'w', encoding='utf-8')
        x = description[0][0]
        wf.write(x)
        rf = io.open('news.txt', 'r', encoding='utf-8').read()
        txt = procc.preprocess(rf)
        category = MultinomialNBClassifier().classifier(txt)
        dao.updateNews(news[0],category[0])
</code></pre>
</blockquote>
<p>But the preprocessing class uses a text file in the same preprocess package, so I can't do the job as I wished since it returns the error <code>"No such file or directory: 'stopWordList.txt'"</code></p>
<p>What can I do to solve this?</p>
| 0 | 2016-09-02T19:35:40Z | 39,299,816 | <p>Please check that the file is in the same directory as the currently executed file.
Here is an idea:
<a href="http://stackoverflow.com/questions/2632199/how-do-i-get-the-path-of-the-current-executed-file-in-python">How do I get the path of the current executed file in python?</a></p>
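A sketch of that idea applied to the question's file name — resolve <code>stopWordList.txt</code> relative to the module that uses it rather than the process working directory:

```python
import os

# Build an absolute path to 'stopWordList.txt' next to this module,
# so it is found no matter what the current working directory is.
base_dir = os.path.dirname(os.path.abspath(__file__))
stopword_path = os.path.join(base_dir, 'stopWordList.txt')
print(stopword_path)
```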
| 0 | 2016-09-02T19:52:58Z | [
"python",
"utf-8"
] |
Trying to output the x most common words in a text file | 39,299,600 | <p>I'm trying to write a program that will read in a text file and output a list of most common words (30 as the code is written now) along with their counts. so something like:</p>
<pre><code>word1 count1
word2 count2
word3 count3
... ...
... ...
wordn countn
</code></pre>
<p>in order of count1 > count2 > count3 >... >countn. This is what I have so far but I cannot get the sorted function to perform what I want. The error I get now is:</p>
<pre><code>TypeError: list indices must be integers, not tuple
</code></pre>
<p>I'm new to python. Any help would be appreciated. Thank you.</p>
<pre><code>def count_func(dictionary_list):
    return dictionary_list[1]

def print_top(filename):
    word_list = {}
    with open(filename, 'r') as input_file:
        count = 0
        #best
        for line in input_file:
            for word in line.split():
                word = word.lower()
                if word not in word_list:
                    word_list[word] = 1
                else:
                    word_list[word] += 1
    #sorted_x = sorted(word_list.items(), key=operator.itemgetter(1))
    # items = sorted(word_count.items(), key=get_count, reverse=True)
    word_list = sorted(word_list.items(), key=lambda x: x[1])
    for word in word_list:
        if (count > 30):#19
            break
        print "%s: %s" % (word, word_list[word])
        count += 1

# This basic command line argument parsing code is provided and
# calls the print_words() and print_top() functions which you must define.
def main():
    if len(sys.argv) != 3:
        print 'usage: ./wordcount.py {--count | --topcount} file'
        sys.exit(1)

    option = sys.argv[1]
    filename = sys.argv[2]
    if option == '--count':
        print_words(filename)
    elif option == '--topcount':
        print_top(filename)
    else:
        print 'unknown option: ' + option
        sys.exit(1)

if __name__ == '__main__':
    main()
</code></pre>
| 0 | 2016-09-02T19:36:39Z | 39,299,630 | <p>Use the <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter</code></a> class.</p>
<pre><code>from collections import Counter
for word, count in Counter(words).most_common(30):
print(word, count)
</code></pre>
<p>Some unsolicited advice: Don't make so many functions until everything is working as one big block of code. Refactor into functions <em>after</em> it works. You don't even need a main section for a script this small.</p>
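A self-contained sketch of the same approach, with an inline string standing in for the file contents:

```python
import re
from collections import Counter

# Inline text standing in for the file contents
text = "the cat and the dog and the bird"
words = re.findall(r'\w+', text.lower())
for word, count in Counter(words).most_common(2):
    print(word, count)
```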
| 2 | 2016-09-02T19:38:42Z | [
"python",
"sorting",
"dictionary",
"tuples",
"sorted"
] |
Trying to output the x most common words in a text file | 39,299,600 | <p>I'm trying to write a program that will read in a text file and output a list of most common words (30 as the code is written now) along with their counts. so something like:</p>
<pre><code>word1 count1
word2 count2
word3 count3
... ...
... ...
wordn countn
</code></pre>
<p>in order of count1 > count2 > count3 >... >countn. This is what I have so far but I cannot get the sorted function to perform what I want. The error I get now is:</p>
<pre><code>TypeError: list indices must be integers, not tuple
</code></pre>
<p>I'm new to python. Any help would be appreciated. Thank you.</p>
<pre><code> def count_func(dictionary_list):
return dictionary_list[1]
def print_top(filename):
word_list = {}
with open(filename, 'r') as input_file:
count = 0
#best
for line in input_file:
for word in line.split():
word = word.lower()
if word not in word_list:
word_list[word] = 1
else:
word_list[word] += 1
#sorted_x = sorted(word_list.items(), key=operator.itemgetter(1))
# items = sorted(word_count.items(), key=get_count, reverse=True)
word_list = sorted(word_list.items(), key=lambda x: x[1])
for word in word_list:
if (count > 30):#19
break
print "%s: %s" % (word, word_list[word])
count += 1
# This basic command line argument parsing code is provided and
# calls the print_words() and print_top() functions which you must define.
def main():
if len(sys.argv) != 3:
print 'usage: ./wordcount.py {--count | --topcount} file'
sys.exit(1)
option = sys.argv[1]
filename = sys.argv[2]
if option == '--count':
print_words(filename)
elif option == '--topcount':
print_top(filename)
else:
print 'unknown option: ' + option
sys.exit(1)
if __name__ == '__main__':
main()
</code></pre>
 | 0 | 2016-09-02T19:36:39Z | 39,299,860 | <p>The first method, suggested by others, i.e. using <code>most_common(...)</code>, doesn't match your needs because it returns the <em>n most common words</em> and not the words whose count is less than or equal to <code>n</code>:</p>
<p>Here's using <code>most_common(...)</code>; note that it just prints the first n most common words:</p>
<pre><code>>>> import re
... from collections import Counter
... def print_top(filename, max_count):
... words = re.findall(r'\w+', open(filename).read().lower())
... for word, count in Counter(words).most_common(max_count):
... print word, count
... print_top('n.sh', 1)
force 1
</code></pre>
<p>The correct way would be as follows; note that it prints all the words whose count is less than or equal to <code>max_count</code>:</p>
<pre><code>>>> import re
... from collections import Counter
... def print_top(filename, max_count):
... words = re.findall(r'\w+', open(filename).read().lower())
... for word, count in filter(lambda x: x[1]<=max_count, sorted(Counter(words).items(), key=lambda x: x[1], reverse=True)):
... print word, count
... print_top('n.sh', 1)
force 1
in 1
done 1
mysql 1
yes 1
egrep 1
for 1
1 1
print 1
bin 1
do 1
awk 1
reinstall 1
bash 1
mythtv 1
selections 1
install 1
v 1
y 1
</code></pre>
| 0 | 2016-09-02T19:56:31Z | [
"python",
"sorting",
"dictionary",
"tuples",
"sorted"
] |
Trying to output the x most common words in a text file | 39,299,600 | <p>I'm trying to write a program that will read in a text file and output a list of most common words (30 as the code is written now) along with their counts. so something like:</p>
<pre><code>word1 count1
word2 count2
word3 count3
... ...
... ...
wordn countn
</code></pre>
<p>in order of count1 > count2 > count3 >... >countn. This is what I have so far but I cannot get the sorted function to perform what I want. The error I get now is:</p>
<pre><code>TypeError: list indices must be integers, not tuple
</code></pre>
<p>I'm new to python. Any help would be appreciated. Thank you.</p>
<pre><code> def count_func(dictionary_list):
return dictionary_list[1]
def print_top(filename):
word_list = {}
with open(filename, 'r') as input_file:
count = 0
#best
for line in input_file:
for word in line.split():
word = word.lower()
if word not in word_list:
word_list[word] = 1
else:
word_list[word] += 1
#sorted_x = sorted(word_list.items(), key=operator.itemgetter(1))
# items = sorted(word_count.items(), key=get_count, reverse=True)
word_list = sorted(word_list.items(), key=lambda x: x[1])
for word in word_list:
if (count > 30):#19
break
print "%s: %s" % (word, word_list[word])
count += 1
# This basic command line argument parsing code is provided and
# calls the print_words() and print_top() functions which you must define.
def main():
if len(sys.argv) != 3:
print 'usage: ./wordcount.py {--count | --topcount} file'
sys.exit(1)
option = sys.argv[1]
filename = sys.argv[2]
if option == '--count':
print_words(filename)
elif option == '--topcount':
print_top(filename)
else:
print 'unknown option: ' + option
sys.exit(1)
if __name__ == '__main__':
main()
</code></pre>
| 0 | 2016-09-02T19:36:39Z | 39,300,055 | <p>Using <code>itertools</code>' <code>groupby</code>:</p>
<pre class="lang-py prettyprint-override"><code>from itertools import groupby
words = sorted([w.lower() for w in open("/path/to/file").read().split()])
count = [[item[0], len(list(item[1]))] for item in groupby(words)]
count.sort(key=lambda x: x[1], reverse = True)
for item in count[:5]:
print(*item)
</code></pre>
<ul>
<li><p>This will list the file's words, sort them, and list the unique words with their occurrence counts. Subsequently, the resulting list is sorted <em>by</em> occurrence:</p>
<pre><code>count.sort(key=lambda x: x[1], reverse = True)
</code></pre></li>
<li><p>The <code>reverse = True</code> is to list the most common words first.</p></li>
<li><p>In the line:</p>
<pre><code>for item in count[:5]:
</code></pre>
<p><code>[:5]</code> defines the number of most occurring words to show.</p></li>
</ul>
| 1 | 2016-09-02T20:11:03Z | [
"python",
"sorting",
"dictionary",
"tuples",
"sorted"
] |
How to scrape data from multiple wikipedia pages with python? | 39,299,658 | <p>I want grab the age, place of birth and previous occupation of senators.
Information for each individual senator is available on Wikipedia, on their respective pages, and there is another page with a table that lists all senators by the name.
How can I go through that list, follow links to the respective pages of each senator, and grab the information I want?</p>
<p>Here is what I've done so far.</p>
<p>1 . (no python) Found out that DBpedia exists and wrote a query to search for senators. Unfortunately DBpedia hasn't categorized most (if any) of them:</p>
<blockquote>
<pre><code> SELECT ?senator, ?country WHERE {
?senator rdf:type <http://dbpedia.org/ontology/Senator> .
?senator <http://dbpedia.org/ontology/nationality> ?country
}
</code></pre>
</blockquote>
<p>Query <a href="http://dbpedia.org/snorql/?query=SELECT%20%3Fsenator%2C%20%3Fcountry%20WHERE%20%7B%0D%0A%20%20%3Fsenator%20rdf%3Atype%20%3Chttp%3A%2F%2Fdbpedia.org%2Fontology%2FSenator%3E%20.%0D%0A%20%20%3Fsenator%20%3Chttp%3A%2F%2Fdbpedia.org%2Fontology%2Fnationality%3E%20%3Fcountry%0D%0A%7D" rel="nofollow">results</a> are unsatisfactory.</p>
<p>2 . Found out that there is a python module called <code>wikipedia</code> that allows me to search and retrieve information from individual wiki pages. Used it to get a list of senator names from the table by looking at the hyperlinks.</p>
<pre><code>import wikipedia as w
w.set_lang('pt')
# Grab page with table of senator names.
s = w.page(w.search('Lista de Senadores do Brasil da 55 legislatura')[0])
# Get links to senator names by removing links of no interest
# For each link in the page, check if it's a link to a senator page.
senators = [name for name in s.links if not
# Senator names don't contain digits nor ,
(any(char.isdigit() or char == ',' for char in name) or
# And full names always contain spaces.
' ' not in name)]
</code></pre>
<p>At this point I'm a bit lost. Here the list <code>senators</code> contains all senator names, but also other names, e.g., party names. The <code>wikipedia</code> module (at least from what I could find in the API documentation) also doesn't implement functionality to follow links or search through tables.</p>
<p>I've seen two related entries here on StackOverflow that seem helpful, but they both (<a href="http://stackoverflow.com/questions/12415214/scrape-data-from-wikipedia">here</a> and <a href="http://stackoverflow.com/questions/27643738/wikipedia-data-scraping-with-python">here</a>) extract information from a single page. </p>
<p>Can anyone point me towards a solution?</p>
<p>Thanks!</p>
| 3 | 2016-09-02T19:40:27Z | 39,320,281 | <p>Ok, so I figured it out (thanks to a comment pointing me to BeautifulSoup).</p>
<p>There is actually no big secret to achieving what I wanted. I just had to go through the list with BeautifulSoup and store all the links, then open each stored link with <code>urllib2</code>, call BeautifulSoup on the response, and... done. Here is the solution:</p>
<pre><code>import urllib2 as url
import wikipedia as w
from bs4 import BeautifulSoup as bs
import re
# A dictionary to store the data we'll retrieve.
d = {}
# 1. Grab the list from wikipedia.
w.set_lang('pt')
s = w.page(w.search('Lista de Senadores do Brasil da 55 legislatura')[0])
html = url.urlopen(s.url).read()
soup = bs(html, 'html.parser')
# 2. Names and links are on the second column of the second table.
table2 = soup.findAll('table')[1]
for row in table2.findAll('tr'):
for colnum, col in enumerate(row.find_all('td')):
if (colnum+1) % 5 == 2:
a = col.find('a')
link = 'https://pt.wikipedia.org' + a.get('href')
d[a.get('title')] = {}
d[a.get('title')]['link'] = link
# 3. Now that we have the links, we can iterate through them,
# and grab the info from the table.
for senator, data in d.iteritems():
page = bs(url.urlopen(data['link']).read(), 'html.parser')
# (flatten list trick: [a for b in nested for a in b])
rows = [item for table in
[item.find_all('td') for item in page.find_all('table')[0:3]]
for item in table]
for rownumber, row in enumerate(rows):
if row.get_text() == 'Nascimento':
birthinfo = rows[rownumber+1].getText().split('\n')
try:
d[senator]['birthplace'] = birthinfo[1]
except IndexError:
d[senator]['birthplace'] = ''
birth = re.search('(.*\d{4}).*\((\d{2}).*\)', birthinfo[0])
d[senator]['birthdate'] = birth.group(1)
d[senator]['age'] = birth.group(2)
if row.get_text() == 'Partido':
d[senator]['party'] = rows[rownumber + 1].getText()
if 'Profiss' in row.get_text():
d[senator]['profession'] = rows[rownumber + 1].getText()
</code></pre>
<p>Pretty simple. BeautifulSoup works wonders =)</p>
| 0 | 2016-09-04T18:39:43Z | [
"python",
"wikipedia"
] |
How to check if character exists in DataFrame cell | 39,299,703 | <p>After creating the three-rows DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': ['1-2', '3-4', '5-6']})
</code></pre>
<p>I check if there is any cell equal to '3-4':</p>
<pre><code>df['a']=='3-4'
</code></pre>
<p><a href="http://i.stack.imgur.com/7UYJY.png" rel="nofollow"><img src="http://i.stack.imgur.com/7UYJY.png" alt="enter image description here"></a></p>
<p>Since the <code>df['a']=='3-4'</code> command results in a <code>pandas.core.series.Series</code> object, I can use it to create a "filtered" version of the original DataFrame like so:</p>
<pre><code>filtered = df[ df['a']=='3-4' ]
</code></pre>
<p><a href="http://i.stack.imgur.com/AGU3O.png" rel="nofollow"><img src="http://i.stack.imgur.com/AGU3O.png" alt="enter image description here"></a></p>
<p>In Python I can check for the occurrence of the string character in another string using:</p>
<pre><code>string_value = '3-4'
print('-' in string_value)
</code></pre>
<p>What would be a way to accomplish the same while working with DataFrames?</p>
<p>So, I could create the filtered version of the original DataFrame by checking if the '-' character is in every row's cell, like:</p>
<pre><code>filtered = df['-' in df['a']]
</code></pre>
<p>But this syntax above is invalid and throws a <code>KeyError: False</code> error message. </p>
| 3 | 2016-09-02T19:44:11Z | 39,299,761 | <p>Use <code>str</code> and <code>contains</code>:</p>
<pre><code>In [5]: df['a'].str.contains('-')
Out[5]:
0 True
1 True
2 True
Name: a, dtype: bool
</code></pre>
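<p>To get the filtered frame the question asks for, the boolean Series can be used as a mask directly. A short sketch with toy data (values are illustrative, not from the question):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': ['1-2', '34', '5-6']})
# Boolean mask: True wherever the cell contains a '-' character.
mask = df['a'].str.contains('-')
filtered = df[mask]
```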
| 4 | 2016-09-02T19:48:55Z | [
"python",
"pandas",
"dataframe"
] |
Python: Update serialized object | 39,299,705 | <p>I am trying to perform a simple task:<br>
1. De-serialize a previously serialized object<br>
2. Updating this object<br>
3. Serializing it back for later use </p>
<p>I tried to do it with <code>pickle</code> with no luck.<br>
I start by doing this: </p>
<pre><code>empty_list = []
f = open('backup.p', 'wb')
pickle.dump(empty_list, f)
f.close()
</code></pre>
<p>and later: </p>
<pre><code>f = open('backup.p', 'rb+')
l = pickle.load(f)
l.append('string')
pickle.dump(l, f)
f.close()
</code></pre>
<p>But when I try to load again the supposedly updated list: </p>
<pre><code>f = open('backup.p', 'rb')
updated_list = pickle.load(f)
print(updated_list) # prints [] instead of ['string']
f.close()
</code></pre>
<p>Why doesn't the second call to <code>dump()</code> overwrite the content of <code>backup.p</code> with the new list <code>['string']</code>? Do I have to remove <code>backup.p</code> in order to get the desired behavior? </p>
| 0 | 2016-09-02T19:44:17Z | 39,299,824 | <p>After this:</p>
<pre><code>f = open('backup.p', 'rb+')
l = pickle.load(f)
</code></pre>
<p>you've positioned the file object <code>f</code> at a point in the file after the pickle of <code>empty_list</code>. That means when you dump another object to the file:</p>
<pre><code>pickle.dump(l, f)
</code></pre>
<p>the new pickle gets written after the first pickle. You need to avoid that, either by clearing the file before dumping the new pickle:</p>
<pre><code>f.seek(0)
f.truncate()
</code></pre>
<p>or by dumping to a new file and then replacing the original file with the new one. (You could also seek, dump, then truncate at the end to clear any trailing garbage instead of going seek, truncate, dump.)</p>
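<p>Putting the seek/truncate fix together, a minimal load&ndash;update&ndash;dump helper (the function name is illustrative, not from the question):</p>

```python
import pickle

def append_and_save(path, item):
    # Load the existing pickle, update it, then rewind and
    # truncate so the new pickle overwrites the old bytes.
    with open(path, 'rb+') as f:
        data = pickle.load(f)
        data.append(item)
        f.seek(0)
        f.truncate()
        pickle.dump(data, f)
```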
| 3 | 2016-09-02T19:53:37Z | [
"python",
"serialization",
"deserialization",
"pickle"
] |
Count most frequent word in row by R | 39,299,811 | <p>There is a table shown below </p>
<pre><code> Name Mon Tue Wed Thu Fri Sat Sun
1 John Apple Orange Apple Banana Apple Apple Orange
2 Ricky Banana Apple Banana Banana Banana Banana Apple
3 Alex Apple Orange Orange Apple Apple Orange Orange
4 Robbin Apple Apple Apple Apple Apple Banana Banana
5 Sunny Banana Banana Apple Apple Apple Banana Banana
</code></pre>
<p>So, I want to count the most frequent fruit for each person and add those values in a new column. </p>
<p>For example.</p>
<pre><code> Name Mon Tue Wed Thu Fri Sat Sun Max_Acc Count
1 John Apple Orange Apple Banana Apple Apple Orange Apple 4
2 Ricky Banana Apple Banana Banana Banana Banana Apple Banana 5
3 Alex Apple Orange Orange Apple Apple Orange Orange Orange 4
4 Robbin Apple Apple Apple Apple Apple Banana Banana Apple 5
5 Sunny Banana Banana Apple Apple Apple Banana Banana Banana 4
</code></pre>
<p>I am facing problem in finding rows. I can find Frequency in column by using <code>table()</code> function.</p>
<pre><code>>table(df$Mon)
Apple Banana
3 2
</code></pre>
<p>But here I want the name of the most frequent fruit in a new column. </p>
| 1 | 2016-09-02T19:52:39Z | 39,303,113 | <p>If we need the "Count" and "Names" corresponding to the <code>max</code> "Count", we loop through the rows of the dataset (using <code>apply</code> with <code>MARGIN = 1</code>), use <code>table</code> to get the frequency, extract the maximum value from it and the <code>names</code> corresponding to the maximum value, <code>rbind</code> it and <code>cbind</code> with the original dataset.</p>
<pre><code>cbind(df1, do.call(rbind, apply(df1[-1], 1, function(x) {
x1 <- table(x)
data.frame(Count = max(x1), Names=names(x1)[which.max(x1)])})))
# Name Mon Tue Wed Thu Fri Sat Sun Count Names
#1 John Apple Orange Apple Banana Apple Apple Orange 4 Apple
#2 Ricky Banana Apple Banana Banana Banana Banana Apple 5 Banana
#3 Alex Apple Orange Orange Apple Apple Orange Orange 4 Orange
#4 Robbin Apple Apple Apple Apple Apple Banana Banana 5 Apple
#5 Sunny Banana Banana Apple Apple Apple Banana Banana 4 Banana
</code></pre>
<hr>
<p>Or we can use <code>data.table</code></p>
<pre><code>library(data.table)
setDT(df1)[, c("Names", "Count") := {tbl <- table(unlist(.SD))
.(names(tbl)[which.max(tbl)], max(tbl))}, by = Name]
</code></pre>
| 3 | 2016-09-03T04:03:54Z | [
"python",
"pyspark",
"rpy2",
"word-frequency"
] |
Count most frequent word in row by R | 39,299,811 | <p>There is a table shown below </p>
<pre><code> Name Mon Tue Wed Thu Fri Sat Sun
1 John Apple Orange Apple Banana Apple Apple Orange
2 Ricky Banana Apple Banana Banana Banana Banana Apple
3 Alex Apple Orange Orange Apple Apple Orange Orange
4 Robbin Apple Apple Apple Apple Apple Banana Banana
5 Sunny Banana Banana Apple Apple Apple Banana Banana
</code></pre>
<p>So, I want to count the most frequent fruit for each person and add those values in a new column. </p>
<p>For example.</p>
<pre><code> Name Mon Tue Wed Thu Fri Sat Sun Max_Acc Count
1 John Apple Orange Apple Banana Apple Apple Orange Apple 4
2 Ricky Banana Apple Banana Banana Banana Banana Apple Banana 5
3 Alex Apple Orange Orange Apple Apple Orange Orange Orange 4
4 Robbin Apple Apple Apple Apple Apple Banana Banana Apple 5
5 Sunny Banana Banana Apple Apple Apple Banana Banana Banana 4
</code></pre>
<p>I am facing problem in finding rows. I can find Frequency in column by using <code>table()</code> function.</p>
<pre><code>>table(df$Mon)
Apple Banana
3 2
</code></pre>
<p>But here I want the name of the most frequent fruit in a new column. </p>
| 1 | 2016-09-02T19:52:39Z | 39,304,465 | <p>Another approach would be to loop over all unique fruits as follows</p>
<pre><code>fruits_unique <- unique(unlist(dat[-1]))
occurence <- sapply(fruits_unique, function(x) rowSums(dat[,-1] == x))
# Using this data to create the resulting columns
ind <- apply(occurence,1,which.max)
dat$Names <- fruits_unique[ind]
dat$count <- occurence[cbind(seq_along(ind), ind)]
</code></pre>
<p>Result:</p>
<pre><code> Name Mon Tue Wed Thu Fri Sat Sun Names Count
1 John Apple Orange Apple Banana Apple Apple Orange Apple 4
2 Ricky Banana Apple Banana Banana Banana Banana Apple Banana 5
3 Alex Apple Orange Orange Apple Apple Orange Orange Orange 4
4 Robbin Apple Apple Apple Apple Apple Banana Banana Apple 5
5 Sunny Banana Banana Apple Apple Apple Banana Banana Banana 4
</code></pre>
| 2 | 2016-09-03T07:35:43Z | [
"python",
"pyspark",
"rpy2",
"word-frequency"
] |
Scapy not picking up a single ARP request | 39,299,825 | <p>I have the following running (finally after installing libnet etc) on my Mac trying to listen for a Dash button's MAC address:</p>
<pre><code>from scapy.all import *
def arp_display(pkt):
if pkt[ARP].op == 1: #who-has (request)
if pkt[ARP].psrc == '0.0.0.0': # ARP Probe
print ("ARP Probe from: " + pkt[ARP].hwsrc)
print (sniff(prn=arp_display, filter="arp", store=0, count=300))
</code></pre>
<p>However, this just runs indefinitely and nothing is picked up even after numerous presses on the Dash and many other devices connecting and disconnecting.</p>
<p>I tried the following too</p>
<pre><code>from scapy.all import *
print (sniff(filter="arp",count=10).summary())
</code></pre>
<p>Which also yields no results. Nothing I find online tells me what might be causing this.</p>
<p>Any ideas? Or even how I could debug?</p>
| 0 | 2016-09-02T19:53:37Z | 39,787,889 | <p>The new buttons don't put out the same ARP request as the old ones. Remove this line and it should work.</p>
<pre><code>if pkt[ARP].psrc == '0.0.0.0': # ARP Probe
</code></pre>
| 0 | 2016-09-30T09:14:07Z | [
"python",
"networking",
"iot",
"scapy",
"hacking"
] |
How do I import module in jupyter notebook directory into notebooks in lower directories? | 39,299,838 | <p>I have used jupyter notebook for data analysis for quite some time. I would like to develop a module in my jupyter notebook directory and be able to import that new module into notebooks. My jupyter notebook file directory can be represented as follows:</p>
<pre><code>Jupyter notebooks\
notebook1.ipynb
new_module\
__init__.py
newfunction.py
currentnotebooks\
notebook2.ipynb
</code></pre>
<p>When I use <code>import new_module</code> in notebook1.ipynb it works; however, when I try the same command in notebook2.ipynb I get the following: <code>ImportError: No module named 'new_module'</code>. The two obvious solutions are A) move new_module into the currentnotebooks directory or B) move notebook2.ipynb up to the same level as new_module. I don't want to mess around with the file structure at all. Is this possible?</p>
| 1 | 2016-09-02T19:54:43Z | 39,311,677 | <p>You need to make sure that the parent directory of <code>new_module</code> is on your python path. For a notebook that is one level below <code>new_module</code>, this code will do the trick:</p>
<pre><code>import os
import sys
nb_dir = os.path.split(os.getcwd())[0]
if nb_dir not in sys.path:
sys.path.append(nb_dir)
</code></pre>
<p>If you're further down in the directory hierarchy, you will need to adjust the way <code>nb_dir</code> is set, but that's all. You should <em>not</em> run this code for a notebook in <code>Jupyter notebooks</code>, since it would add the parent of that directory to the python path, which is probably undesirable.</p>
<p>The reason the import works for <code>notebook1</code> is that sys.path contains <code>''</code> (the empty string), which is always the current directory of the running interpreter (kernel, in this case). A google search for <code>explain python path</code> turns up several good explanations of how Python uses <code>PYTHONPATH</code> (aka <code>sys.path</code>).</p>
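<p>The same adjustment for deeper directories can be sketched as a small helper (the function name and <code>levels</code> parameter are illustrative):</p>

```python
import os
import sys

def add_ancestor_to_path(levels=1):
    # Walk `levels` directories up from the notebook's working
    # directory and add the result to sys.path if it is missing.
    path = os.getcwd()
    for _ in range(levels):
        path = os.path.split(path)[0]
    if path not in sys.path:
        sys.path.append(path)
    return path
```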
| 2 | 2016-09-03T21:55:59Z | [
"python",
"python-3.x",
"ipython",
"python-import",
"jupyter-notebook"
] |
django equivalent to sqlalchemy's sessionmaker | 39,299,852 | <p>I'm new to Django 1.9 (fairly new to flask, as well), and am trying to populate my models.py much in the same way that SQLAlchemy's sessionmaker does it. </p>
<p>this is a snip of my models.py:</p>
<pre><code>@python_2_unicode_compatible
class Location(models.Model):
def __str__(self):
return self.location_text
location_text = models.CharField(max_length=200)
travel_date = models.DateTimeField('date traveled')
@python_2_unicode_compatible
class Countries(models.Model):
def __str__(self):
return self.capitals
location = models.ForeignKey(Location, on_delete=models.CASCADE)
capitals = models.CharField(max_length=200)
</code></pre>
<p>With flask and sqlalchemy I'd create a new file and be able to do something like this:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from this_app.models import Location, Countries
engine = create_engine("the db")
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)
session = DBSession()
loc1 = Location(location_text = "Indonesia")
session.add(loc1)
session.commit()
cap = Countries(
capitals="Jakarta",
location=loc1
)
session.add(cap)
session.commit()
</code></pre>
<p>How is this done with django's orm?</p>
 | 1 | 2016-09-02T19:55:52Z | 39,300,619 | <p>If I understand you correctly, you want to save objects in one query.</p>
<p>This could be done with <a href="https://docs.djangoproject.com/en/1.9/ref/models/querysets/#django.db.models.query.QuerySet.bulk_create" rel="nofollow"><code>bulk_create</code></a></p>
<blockquote>
<p>This method inserts the provided list of objects into the database in an efficient manner (generally only 1 query, no matter how many objects there are)</p>
</blockquote>
| 0 | 2016-09-02T20:58:20Z | [
"python",
"django",
"sqlalchemy"
] |
django equivalent to sqlalchemy's sessionmaker | 39,299,852 | <p>I'm new to Django 1.9 (fairly new to flask, as well), and am trying to populate my models.py much in the same way that SQLAlchemy's sessionmaker does it. </p>
<p>this is a snip of my models.py:</p>
<pre><code>@python_2_unicode_compatible
class Location(models.Model):
def __str__(self):
return self.location_text
location_text = models.CharField(max_length=200)
travel_date = models.DateTimeField('date traveled')
@python_2_unicode_compatible
class Countries(models.Model):
def __str__(self):
return self.capitals
location = models.ForeignKey(Location, on_delete=models.CASCADE)
capitals = models.CharField(max_length=200)
</code></pre>
<p>With flask and sqlalchemy I'd create a new file and be able to do something like this:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from this_app.models import Location, Countries
engine = create_engine("the db")
Base.metadata.bind = engine
DBSession = sessionmaker(bind=engine)
session = DBSession()
loc1 = Location(location_text = "Indonesia")
session.add(loc1)
session.commit()
cap = Countries(
capitals="Jakarta",
location=loc1
)
session.add(cap)
session.commit()
</code></pre>
<p>How is this done with django's orm?</p>
| 1 | 2016-09-02T19:55:52Z | 39,302,902 | <pre><code>loc1 = Location.objects.create(location_text="Indonesia")
cap = Countries.objects.create(capitals="Jakarta", location=loc1)
</code></pre>
<p>That's all.</p>
| 0 | 2016-09-03T03:16:02Z | [
"python",
"django",
"sqlalchemy"
] |
wxPython - if condition = False then skip some lines or exit without closing Main Frame | 39,299,853 | <p>In my <code>wxApp</code> that is currently under development, I have bound a button to call a new frame. However, I want to put a condition in the def that actually calls the new frame, and if that check fails the def should simply exit but not close the main Frame. So basically something like <code>Exit Sub</code> in VBA. Below is my code:</p>
<pre><code>self.btn_CreateItem.Bind(wx.EVT_BUTTON, self.CreateBtnClicked)
def CreateBtnClicked(self, event):
if self.rgnCombo.GetValue() == '':
ctypes.windll.user32.MessageBoxA(0, "Can't create item without selecting Region!!!", '', 1)
exit()
call_CreateFrame = CreateItemFrame(None, 'Create work item(s)!!!')
</code></pre>
<p>So, in place of <code>exit()</code> in the code above (because it is closing the whole main frame), I want something equivalent to VBA's <code>Exit Sub</code>.</p>
<p>Also, is there a way to skip some lines and continue from a certain point, like the <code>GoTo</code> statement in VBA?</p>
| 0 | 2016-09-02T19:55:53Z | 39,302,701 | <p>Replace</p>
<pre><code>exit()
</code></pre>
<p>with</p>
<pre><code>return
</code></pre>
<p><a href="http://stackoverflow.com/questions/18863309/the-equivalent-of-a-goto-in-python">The equivalent of a GOTO in python</a></p>
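<p>The guard-clause pattern, stripped of the wx specifics, looks like this (names and return values are illustrative):</p>

```python
def create_btn_clicked(region):
    # Guard clause: leave only this handler early -- the
    # Python equivalent of VBA's Exit Sub. The rest of the
    # application keeps running.
    if region == '':
        return None
    return 'CreateItemFrame for %s' % region
```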
| 1 | 2016-09-03T02:27:06Z | [
"python",
"if-statement",
"wxpython",
"goto"
] |
Finding particular column value via regex | 39,299,879 | <p>I have a txt file containing multiple rows as below. </p>
<pre><code>56.0000 3 1
62.0000 3 1
74.0000 3 1
78.0000 3 1
82.0000 3 1
86.0000 3 1
90.0000 3 1
94.0000 3 1
98.0000 3 1
102.0000 3 1
106.0000 3 1
110.0000 3 0
116.0000 3 1
120.0000 3 1
</code></pre>
<p>Now I am looking for the row which has '0' in the third column.</p>
<p>I am using the Python regex (<code>re</code>) package. What I have tried is <code>re.match("(.*)\s+(0-9)\s+(1)",line)</code>, but to no avail.</p>
<p>What should be the regular expression pattern I should be looking for?</p>
| 0 | 2016-09-02T19:57:58Z | 39,299,939 | <p>You probably don't need a regex for this. You can <em>strip</em> trailing whitespaces from the right side of the line and then check the last character:</p>
<pre><code>if line.rstrip()[-1] == "0": # since your last column only contains 0 or 1
...
</code></pre>
| 2 | 2016-09-02T20:02:29Z | [
"python",
"regex"
] |
Finding particular column value via regex | 39,299,879 | <p>I have a txt file containing multiple rows as below. </p>
<pre><code>56.0000 3 1
62.0000 3 1
74.0000 3 1
78.0000 3 1
82.0000 3 1
86.0000 3 1
90.0000 3 1
94.0000 3 1
98.0000 3 1
102.0000 3 1
106.0000 3 1
110.0000 3 0
116.0000 3 1
120.0000 3 1
</code></pre>
<p>Now I am looking for the row which has '0' in the third column.</p>
<p>I am using the Python regex (<code>re</code>) package. What I have tried is <code>re.match("(.*)\s+(0-9)\s+(1)",line)</code>, but to no avail.</p>
<p>What should be the regular expression pattern I should be looking for?</p>
 | 0 | 2016-09-02T19:57:58Z | 39,300,024 | <p>Just split the line and read the value from the list.</p>
<pre><code>>>> line = "56.0000 3 1"
>>> a=line.split()
>>> a
['56.0000', '3', '1']
>>> print a[2]
1
>>>
</code></pre>
<p><strong>Summary:</strong></p>
<pre><code>f = open("sample.txt",'r')
for line in f:
tmp_list = line.split()
if int(tmp_list[2]) == 0:
print "Line has 0"
print line
f.close()
</code></pre>
<p>Output:</p>
<pre><code>C:\Users\dinesh_pundkar\Desktop>python c.py
Line has 0
110.0000 3 0
</code></pre>
| 1 | 2016-09-02T20:08:49Z | [
"python",
"regex"
] |
Converting from Excel to HDF5 using Pandas | 39,299,911 | <p>I want to extract the content of an Excel document into a pandas dataframe and then write that dataframe into an HDF5 file. To do so, I've done this:</p>
<pre><code>xls_df = pd.read_excel(fn_xls)
xls_df.to_hdf(fn_h5, 'table', format='table', mode='w')
</code></pre>
<p>This results in the following error: </p>
<blockquote>
<p>TypeError: Cannot serialize the column [Col1] because
its data contents are [unicode] object dtype</p>
</blockquote>
<p>I tried using <code>convert_objects()</code> on the dataframe from the Excel file, but this doesn't work (and <code>convert_objects()</code> is deprecated). Are there any suggestions on how to go about this?</p>
<p>Here is a little information on the Excel file:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 101 entries, 0 to 100
Data columns (total 5 columns):
Col1 101 non-null object
Col2 101 non-null object
Col3 94 non-null float64
Col4 98 non-null object
Col5 93 non-null float64
dtypes: float64(2), object(3)
</code></pre>
<p>The first and second columns are strings, the fourth column has 1 string but is mostly integers and the third and fifth columns are integers.</p>
 | 0 | 2016-09-02T19:59:51Z | 39,314,740 | <p>The mixed string and integer data types in column "Col4" cause an error when converting to HDF5 in "table" format.</p>
<p>To save in hdf5 "tables" format you need to convert the numbers in Col4 to floats (and strings to NaN):</p>
<p><code>df["Col4"] = pd.to_numeric(df["Col4"], errors="coerce")</code></p>
<p>Or convert everything in the column to strings: </p>
<p><code>df["Col4"] = df["Col4"].astype(str)</code></p>
<p>Or use the "fixed" hdf5 format which allows columns to have mixed datatypes. This saves the mixed datatype column in in the python pickle format, and currently gives a PerformanceWarning.</p>
<p><code>df.to_hdf(outpath, 'yourkey', format='fixed', mode='w')</code></p>
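<p>A short sketch of the coercion approach on a toy frame (the column values are illustrative, not from the question's data):</p>

```python
import pandas as pd

df = pd.DataFrame({"Col4": ["1", "2", "abc", "4"]})
# Coerce: unparseable strings become NaN, so the whole column
# ends up as a single float64 dtype that the HDF5 "table"
# format can serialize.
df["Col4"] = pd.to_numeric(df["Col4"], errors="coerce")
```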
| 0 | 2016-09-04T07:54:26Z | [
"python",
"excel",
"pandas",
"dataframe",
"hdf5"
] |
Checking if the value of a string is true - python | 39,299,917 | <p>I am trying to see if a string is true or false but I'm stuck... In the following code I have made a list of statements that I want to evaluate. I have put them in strings because the variables need to be declared after the list is. (The reason is that the list of statements to be evaluated is being passed as a parameter to a function, and the variables are declared in that function.) I want to run through all of the items in the list and, if an item is a true statement, do something... Here is my code (there is no output generated):</p>
<pre><code>a = ['b > c', 'b = c', 'b < c']
b = 5
c = 3
for item in a:
if exec(item):
print(item)
</code></pre>
| -2 | 2016-09-02T20:00:24Z | 39,299,958 | <p><code>exec</code> is a function in python3.x<sup>1</sup> that returns <code>None</code> so you'll always have a falsy result. You probably want <code>eval</code>.</p>
<p>Also be careful here. Do not use this unless you completely trust the input strings as it will allow execution of arbitrary code otherwise.</p>
<hr>
<p>Note that this is a <em>very</em> strange code design and there is <em>probably</em> a better way to accomplish what you want... For example:</p>
<p><em>Why</em> do you want to define the strings <em>before</em> the variables are defined? Coupling strings to names in your code in this way is likely to lead to a painful code maintenance experience.</p>
<p><sup><sup>1</sup>In python2.x, this would fail with a <code>SyntaxError</code> since <code>exec</code> was a <em>statement</em> prior to python3.x</sup></p>
<hr>
<p>Having tried to understand your use-case a little more, I would propose that you create an API where you pass functions.</p>
<pre><code>def f1(a, b, **kwargs):
    return a > b

def f2(a, b, **kwargs):
    return a == b

def f3(a, c, **kwargs):
    return a <= c

funcs = [f1, f2, f3]
</code></pre>
<p>Now you can define a function that will pass the parameters. You'll need to define <em>which</em> parameters it intends to pass -- but it will always pass them all:</p>
<pre><code>def func_caller(funcs):
    param_map = {
        'a': get_a_somehow(),
        'b': get_b_somehow(),
        'c': get_c_somehow(),
        ...
    }
    for func in funcs:
        if func(**param_map):
            print("Hello World!")
</code></pre>
<p>There are other way to make the "contract" between <code>func_caller</code> and the functions that it is calling even more binding (e.g. pass the params as a more structured object like a <code>namedtuple</code>).</p>
<pre><code>from collections import namedtuple

FuncCallerParams = namedtuple('FuncCallerParams', 'a,b,c')

def f1(func_caller_params):
    return func_caller_params.a > func_caller_params.b
...

funcs = [f1, f2, ...]

def func_caller(funcs):
    a = ...
    b = ...
    c = ...
    fcp = FuncCallerParams(a, b, c)
    for func in funcs:
        if func(fcp):
            ...
</code></pre>
| 4 | 2016-09-02T20:03:48Z | [
"python",
"string",
"list",
"loops",
"if-statement"
] |
Accessing columns with MultiIndex after using pandas groupby | 39,300,049 | <p>I am using the df.groupby() method: </p>
<pre><code>g1 = df[['md', 'agd', 'hgd']].groupby(['md']).agg(['mean', 'count', 'std'])
</code></pre>
<p>It produces exactly what I want!</p>
<pre><code>          agd                      hgd
         mean count       std     mean count       std
md
-4   1.398350     2  0.456494 -0.418442     2  0.774611
-3  -0.281814    10  1.314223 -0.317675    10  1.161368
-2  -0.341940    38  0.882749  0.136395    38  1.240308
-1  -0.137268   125  1.162081 -0.103710   125  1.208362
 0  -0.018731   603  1.108109 -0.059108   603  1.252989
 1  -0.034113   178  1.128363 -0.042781   178  1.197477
 2   0.118068    43  1.107974  0.383795    43  1.225388
 3   0.452802    18  0.805491 -0.335087    18  1.120520
 4   0.304824     1       NaN -1.052011     1       NaN
</code></pre>
<p>However, I now want to access the groupby object columns like a "normal" dataframe. </p>
<p>I will then be able to:
1) calculate the errors on the agd and hgd means
2) make scatter plots on md (x axis) vs agd mean (hgd mean) with appropriate error bars added. </p>
<p>Is this possible? Perhaps by playing with the indexing?</p>
<p>Thanks in advance!</p>
| 3 | 2016-09-02T20:10:10Z | 39,300,083 | <p>1) You can rename the columns and proceed as normal (will get rid of the multi-indexing)</p>
<pre><code>g1.columns = ['agd_mean', 'agd_count', 'agd_std', 'hgd_mean', 'hgd_count', 'hgd_std']
</code></pre>
<p>2) You can keep multi-indexing and use both levels in turn (<a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#basic-indexing-on-axis-with-multiindex" rel="nofollow">docs</a>)</p>
<pre><code>g1['agd'][['mean', 'count']]
</code></pre>
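<p>A small sketch of both options on a toy frame shaped like the question's (column names assumed):</p>

```python
import pandas as pd

df = pd.DataFrame({'md': [0, 0, 1, 1],
                   'agd': [1.0, 2.0, 3.0, 4.0],
                   'hgd': [5.0, 6.0, 7.0, 8.0]})
g1 = df[['md', 'agd', 'hgd']].groupby('md').agg(['mean', 'count', 'std'])

# Option 1: flatten the MultiIndex so the frame behaves like a "normal" one
flat = g1.copy()
flat.columns = ['_'.join(col) for col in flat.columns]
print(flat['agd_mean'].tolist())

# Option 2: keep the MultiIndex and select with a (level0, level1) tuple
print(g1[('agd', 'mean')].tolist())
```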
| 1 | 2016-09-02T20:13:27Z | [
"python",
"pandas",
"group-by"
] |
Accessing columns with MultiIndex after using pandas groupby | 39,300,049 | <p>I am using the df.groupby() method: </p>
<pre><code>g1 = df[['md', 'agd', 'hgd']].groupby(['md']).agg(['mean', 'count', 'std'])
</code></pre>
<p>It produces exactly what I want!</p>
<pre><code>          agd                      hgd
         mean count       std     mean count       std
md
-4   1.398350     2  0.456494 -0.418442     2  0.774611
-3  -0.281814    10  1.314223 -0.317675    10  1.161368
-2  -0.341940    38  0.882749  0.136395    38  1.240308
-1  -0.137268   125  1.162081 -0.103710   125  1.208362
 0  -0.018731   603  1.108109 -0.059108   603  1.252989
 1  -0.034113   178  1.128363 -0.042781   178  1.197477
 2   0.118068    43  1.107974  0.383795    43  1.225388
 3   0.452802    18  0.805491 -0.335087    18  1.120520
 4   0.304824     1       NaN -1.052011     1       NaN
</code></pre>
<p>However, I now want to access the groupby object columns like a "normal" dataframe. </p>
<p>I will then be able to:
1) calculate the errors on the agd and hgd means
2) make scatter plots on md (x axis) vs agd mean (hgd mean) with appropriate error bars added. </p>
<p>Is this possible? Perhaps by playing with the indexing?</p>
<p>Thanks in advance!</p>
| 3 | 2016-09-02T20:10:10Z | 39,301,351 | <p>It is possible to do what you are searching for and it is called <code>transform</code>. You will find an example that does exactly what you are searching for in the pandas documentation <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation" rel="nofollow">here</a>.</p>
| 0 | 2016-09-02T22:15:48Z | [
"python",
"pandas",
"group-by"
] |
paho MQTT on_message returning a funny message - python | 39,300,102 | <p>some help please :)
I just started to play with MQTT in python.
When I run the following program:</p>
<pre><code>import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code "+str(rc))
    client.subscribe("watchdog/#")

def on_message(client, userdata, msg):
    message = str(msg.payload)
    print(msg.topic+" "+message)

client = mqtt.Client()
client.username_pw_set('XXXX', password='XXXXXXX')
client.on_connect = on_connect
client.on_message = on_message
client.connect("XXXX", XXXXX, 60)
client.loop_forever()
</code></pre>
<p>the payload always have the following text:</p>
<p>b'XXX'</p>
<p>XXX is the message, but the b' ' part ALWAYS appears.
Once I open the same message in an off-the-shelf client, the message is fine... so I assume the problem is in the code, but I cannot find where.</p>
<p>any help or directions?</p>
<p>thanks!</p>
| 1 | 2016-09-02T20:15:09Z | 39,355,723 | <p>As Moses Koledoye says, b is for bytes - this means that what you are printing is the string version of a set of bytes. If you changed the str(msg.payload) to simply msg.payload, you will get a different output.</p>
<p>But you haven't talked about what the message payload is, so you may still get gibberish out printing the msg.payload. For example, if the message being sent is actually a string of bytes...</p>
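<p>Assuming the publisher sent UTF-8 text, decoding the bytes is enough to drop the <code>b'...'</code> wrapper; a sketch:</p>

```python
payload = b'23.5'                  # stands in for msg.payload
message = payload.decode('utf-8')  # bytes -> str, assuming UTF-8 text
print(message)                     # no b'...' wrapper any more
```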
| 1 | 2016-09-06T18:36:39Z | [
"python",
"mqtt"
] |
ImportError: No module named qgis.core ubuntu 16.04 python 2.7 qgis 2.16.2 | 39,300,116 | <p>I keep getting this error in python from trying to call qgis from a script.</p>
<p>The code is:</p>
<pre><code>from qgis.core import *
from qgis.analysis import *
</code></pre>
<p>I have read every posting on SO about this; wiped QGIS and reinstalled. Reset my PYTHON_PATH and QGIS_PREFIX variables to the correct directory. I've also checked the dependencies via <code>dpkg -l | grep qgis</code>and all of my dependencies are the xenial version.</p>
<p>Any other suggestions?</p>
| 0 | 2016-09-02T20:16:17Z | 39,330,862 | <p>I had the same problem but it was with Windows 7. Following the last point called Running Custom Applications in <a href="http://docs.qgis.org/2.8/en/docs/pyqgis_developer_cookbook/intro.html" rel="nofollow">http://docs.qgis.org/2.8/en/docs/pyqgis_developer_cookbook/intro.html</a> I solved it. </p>
<p>You will need to tell your system where to search for QGIS libraries and appropriate Python modules if they are not in a well-known location; otherwise Python will complain:</p>
<pre><code>>>> import qgis.core
ImportError: No module named qgis.core
</code></pre>
<p>This can be fixed by setting the PYTHONPATH environment variable. In the following commands, qgispath should be replaced with your actual QGIS installation path:</p>
<p><b>on Linux: export </b>PYTHONPATH=/qgispath/share/qgis/python<br>
<b>on Windows: set</b> PYTHONPATH=c:\qgispath\python</p>
<p>The path to the PyQGIS modules is now known, however they depend on qgis_core and qgis_gui libraries (the Python modules serve only as wrappers). Path to these libraries is typically unknown for the operating system, so you get an import error again (the message might vary depending on the system):</p>
<pre><code>>>> import qgis.core
ImportError: libqgis_core.so.1.5.0: cannot open shared object file: No such file or directory
</code></pre>
<p>Fix this by adding the directories where the QGIS libraries reside to search path of the dynamic linker:</p>
<p><b>on Linux: export </b>LD_LIBRARY_PATH=/qgispath/lib<br>
<b>on Windows: set </b>PATH=C:\qgispath;%PATH%<br></p>
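<p>Collected in one place, a Linux bootstrap might look like this (a sketch; <code>/qgispath</code> and <code>my_qgis_app.py</code> are placeholders):</p>

```shell
#!/bin/sh
# hypothetical bootstrap: replace /qgispath with the actual QGIS install prefix
export PYTHONPATH="/qgispath/share/qgis/python:$PYTHONPATH"
export LD_LIBRARY_PATH="/qgispath/lib:$LD_LIBRARY_PATH"
# then launch the custom application, e.g.:
#   python my_qgis_app.py
```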
<p>These commands can be put into a bootstrap script that will take care of the startup. When deploying custom applications using PyQGIS, there are usually two possibilities:</p>
<ul>
<li>require user to install QGIS on his platform prior to installing your application. The application installer should look for default locations of QGIS libraries and allow user to set the path if not found. This approach has the advantage of being simpler, however it requires user to do more steps.</li>
<li>package QGIS together with your application. Releasing the application may be more challenging and the package will be larger, but the user will be saved from the burden of downloading and installing additional pieces of software.</li>
</ul>
<p>The two deployment models can be mixed - deploy standalone application on Windows and Mac OS X, for Linux leave the installation of QGIS up to user and his package manager.</p>
| 0 | 2016-09-05T12:41:31Z | [
"python",
"ubuntu",
"qgis"
] |
ImportError: No module named qgis.core ubuntu 16.04 python 2.7 qgis 2.16.2 | 39,300,116 | <p>I keep getting this error in python from trying to call qgis from a script.</p>
<p>The code is:</p>
<pre><code>from qgis.core import *
from qgis.analysis import *
</code></pre>
<p>I have read every posting on SO about this; wiped QGIS and reinstalled. Reset my PYTHON_PATH and QGIS_PREFIX variables to the correct directory. I've also checked the dependencies via <code>dpkg -l | grep qgis</code>and all of my dependencies are the xenial version.</p>
<p>Any other suggestions?</p>
| 0 | 2016-09-02T20:16:17Z | 39,341,243 | <p>Finally got it working. Had to completely wipe and reinstall QGIS twice and separately remove python-qgis. Also had to uninstall anaconda. After the second fresh install of QGIS I've gotten it working.</p>
<p>No other changes to my configuration.</p>
| 0 | 2016-09-06T05:30:15Z | [
"python",
"ubuntu",
"qgis"
] |
Visual Studio Code - python console | 39,300,131 | <p>I'm using visual studio code with standard python extension, my issue is that when I run the code the python interpreter instantly closes right after and I only see the output which means that if I create some data structure I have to create it every single time. Is it possible to leave the console open after running the code and maybe running multiple files in the same python interpreter instance?</p>
| -1 | 2016-09-02T20:17:48Z | 39,311,974 | <p>When you run a program, it runs until it ends. Then it closes. If you want it to stay live longer, you can make a program which does not stop until told so, e.g.</p>
<pre><code>while True:
    something = raw_input('Write something: ')
    print('You wrote: %s' % something)
    if something == 'bye':
        print 'bye.'
        break
</code></pre>
<p>This will run until user writes "bye".</p>
| 1 | 2016-09-03T22:48:54Z | [
"python",
"vscode"
] |
Pandas - Concat strings after groupby in column, ignore NaN, ignore duplicates | 39,300,163 | <p>Depending on the query, my DF can have a column with strings or a column with NaN.</p>
<p>Ex:</p>
<pre><code> ID grams Projects
0 891 4.0 NaN
1 725 9.0 NaN
</code></pre>
<p>or</p>
<pre><code> ID grams Projects
0 890 1.0 P1, P2
1 724 1.0 P1
2 880 1.0 P1, P2
3 943 1.0 P1
4 071 1.0 P1
</code></pre>
<p>I can handle one or the other, but when I try to make a function that is generic I'm failing miserably. I need to ignore the NaN at the end, because I'm sending this DF as JSON response and NaN gives me an invalid format.</p>
<p>The way I'm doing right now is:</p>
<pre><code>#When Projects is a string
df['Projects'] = _df.groupby("ID")['External_Id'].apply(lambda x: ",".join(x))
#When Projects is NaN
df['Projects'] = _df.groupby("ID")['External_Id'].apply(lambda x: "")
</code></pre>
<p>I tried to use <code>fillna()</code> and also to check the dtype of 'x' but it always returns as <strong>object</strong>, so I can't check whether it is a <strong>str</strong> or <strong>NaN</strong></p>
<p>Also, the result of the 'Projects' column should not allow duplicates. Some rows when grouped by ID have important information which will be summed ('grams'), but the 'External_Id' should not appear more than once.
Ex:</p>
<pre><code> ID grams External_Id
0 890 1.0 P1
1 890 1.0 P2
2 890 1.0 P2
3 724 1.0 P1
4 724 1.0 P1
</code></pre>
<p>Result should be</p>
<pre><code> ID grams Projects
0 890 3.0 P1, P2
1 724 2.0 P1
</code></pre>
<p>And not</p>
<pre><code> ID grams Projects
0 890 1.0 P1, P2, P2
1 724 1.0 P1, P1
</code></pre>
| 2 | 2016-09-02T20:19:44Z | 39,300,358 | <p>I think this should help: </p>
<pre><code>import numpy
df_new = df.replace(numpy.nan,' ', regex=True)
</code></pre>
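<p>For instance (a tiny sketch; an empty string instead of a space is usually friendlier for a JSON response):</p>

```python
import pandas as pd
import numpy

df = pd.DataFrame({'ID': [891, 725], 'Projects': [numpy.nan, 'P1']})
df_new = df.replace(numpy.nan, '')  # NaN -> empty string, valid in JSON
print(df_new['Projects'].tolist())
```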
<p>EDIT:</p>
<p>I think this <a href="http://stackoverflow.com/questions/25401295/using-python-to-remove-duplicated-contents-in-cells-in-excel">solution</a> could work for you (just as an alternative to the answer by @Ami).</p>
| 1 | 2016-09-02T20:36:34Z | [
"python",
"python-3.x",
"pandas",
null,
"missing-data"
] |
Pandas - Concat strings after groupby in column, ignore NaN, ignore duplicates | 39,300,163 | <p>Depending on the query, my DF can have a column with strings or a column with NaN.</p>
<p>Ex:</p>
<pre><code> ID grams Projects
0 891 4.0 NaN
1 725 9.0 NaN
</code></pre>
<p>or</p>
<pre><code> ID grams Projects
0 890 1.0 P1, P2
1 724 1.0 P1
2 880 1.0 P1, P2
3 943 1.0 P1
4 071 1.0 P1
</code></pre>
<p>I can handle one or the other, but when I try to make a function that is generic I'm failing miserably. I need to ignore the NaN at the end, because I'm sending this DF as JSON response and NaN gives me an invalid format.</p>
<p>The way I'm doing right now is:</p>
<pre><code>#When Projects is a string
df['Projects'] = _df.groupby("ID")['External_Id'].apply(lambda x: ",".join(x))
#When Projects is NaN
df['Projects'] = _df.groupby("ID")['External_Id'].apply(lambda x: "")
</code></pre>
<p>I tried to use <code>fillna()</code> and also to check the dtype of 'x' but it always returns as <strong>object</strong>, so I can't check whether it is a <strong>str</strong> or <strong>NaN</strong></p>
<p>Also, the result of the 'Projects' column should not allow duplicates. Some rows when grouped by ID have important information which will be summed ('grams'), but the 'External_Id' should not appear more than once.
Ex:</p>
<pre><code> ID grams External_Id
0 890 1.0 P1
1 890 1.0 P2
2 890 1.0 P2
3 724 1.0 P1
4 724 1.0 P1
</code></pre>
<p>Result should be</p>
<pre><code> ID grams Projects
0 890 3.0 P1, P2
1 724 2.0 P1
</code></pre>
<p>And not</p>
<pre><code> ID grams Projects
0 890 1.0 P1, P2, P2
1 724 1.0 P1, P1
</code></pre>
| 2 | 2016-09-02T20:19:44Z | 39,300,572 | <p>Say you start with</p>
<pre><code>In [37]: df = pd.DataFrame({'a': [1, 1, 2, 2], 'b': [1, None, 2, 4], 'c': ['foo', 'sho', 'sha', 'bar']})
In [43]: df
Out[43]:
a b c
0 1 1.0 foo
1 1 NaN foo
2 2 2.0 sha
3 2 4.0 bar
</code></pre>
<p>Then you can apply the same function to either <code>b</code> or <code>c</code>, taking care of the NaNs and duplicates:</p>
<pre><code>In [44]: df.b.groupby(df.a).apply(lambda x: '' if x.isnull().any() else ','.join(set(x.astype(str).values)))
Out[44]:
a
1
2 2.0,4.0
dtype: object
In [45]: df.c.groupby(df.a).apply(lambda x: '' if x.isnull().any() else ','.join(set(x.astype(str).values)))
Out[45]:
a
1 foo
2 sha,bar
dtype: object
</code></pre>
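<p>One caveat: <code>set()</code> does not preserve order, so the joined string can come out shuffled between runs. A sketch of an order-preserving variant (toy data; <code>pd.unique</code> keeps first-seen order):</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2, 2], 'c': ['foo', 'foo', 'sha', 'bar']})

def uniq_join(x):
    # pd.unique keeps first-seen order, unlike set()
    return '' if x.isnull().any() else ','.join(pd.unique(x.astype(str)))

result = df.c.groupby(df.a).apply(uniq_join).tolist()
print(result)
```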
| 2 | 2016-09-02T20:53:56Z | [
"python",
"python-3.x",
"pandas",
null,
"missing-data"
] |
Extracting IP addresses from a file | 39,300,166 | <p>I'm trying to extract IP addresses from an <code>asp</code> file in Python, the file looks something like this:</p>
<pre><code>onInternalNet = (
isInNet(hostDNS, "147.163.1.0", "255.255.0.0") ||
isInNet(hostDNS, "123.264.0.0", "255.255.0.0") ||
isInNet(hostDNS, "137.5.0.0", "255.0.0.0") ||
isInNet(hostDNS, "100.01.02.0", "255.0.0.0") ||
isInNet(hostDNS, "172.146.30.0", "255.240.0.0") ||
isInNet(hostDNS, "112.268.0.0", "255.255.0.0") ||
</code></pre>
<p>How I'm attempting to extract them is with a regex:</p>
<pre><code>if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
</code></pre>
<p>However I'm getting an error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "pull_proxy.py", line 27, in <module>
    write_to_file(extract_proxies(in_file), out_file)
  File "pull_proxy.py", line 8, in extract_proxies
    if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
  File "C:\Python27\lib\re.py", line 194, in compile
    return _compile(pattern, flags)
  File "C:\Python27\lib\re.py", line 233, in _compile
    bypass_cache = flags & DEBUG
TypeError: unsupported operand type(s) for &: 'str' and 'int'
</code></pre>
<p>I don't understand why I'm getting that error, what can I do to this code to make it extract the information like I want it to?</p>
<pre><code>import re

def extract_proxies(in_file):
    buffer = []
    for line in in_file:
        if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
            print "{} appened to buffer.".format(line)
            buffer.append(line)
        else:
            pass
    return buffer

def write_to_file(buffer, out_file):
    for proxy in buffer:
        with open(out_file, "a+") as res:
            res.write(proxy)

if __name__ == '__main__':
    print "Running...."
    in_file = "C:/Users/thomas_j_perkins/Downloads/test.asp"
    out_file = "c:/users/thomas_j_perkins/Downloads/results.txt"
    write_to_file(extract_proxies(in_file), out_file)
</code></pre>
<p><strong>EDIT</strong></p>
<p>Realized I hadn't opened the file:</p>
<pre><code>import re

def extract_proxies(in_file):
    buffer = []
    for line in in_file:
        if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
            print "{} appened to buffer.".format(line)
            buffer.append(line)
        else:
            pass
    in_file.close()
    return buffer

def write_to_file(buffer, out_file):
    for proxy in buffer:
        with open(out_file, "a+") as res:
            res.write(proxy)

if __name__ == '__main__':
    print "Running...."
    in_file = "C:/Users/thomas_j_perkins/Downloads/PAC-Global-Vista.asp"
    out_file = "c:/users/thomas_j_perkins/Downloads/results.txt"
    write_to_file(extract_proxies(open(in_file, "r+")), out_file)
</code></pre>
<p>Still getting the same error:</p>
<pre class="lang-none prettyprint-override"><code>Running....
Traceback (most recent call last):
  File "pull_proxy.py", line 28, in <module>
    write_to_file(extract_proxies(open(in_file)), out_file)
  File "pull_proxy.py", line 8, in extract_proxies
    if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
  File "C:\Python27\lib\re.py", line 194, in compile
    return _compile(pattern, flags)
  File "C:\Python27\lib\re.py", line 233, in _compile
    bypass_cache = flags & DEBUG
TypeError: unsupported operand type(s) for &: 'str' and 'int'
</code></pre>
| 0 | 2016-09-02T20:20:11Z | 39,300,245 | <p><code>re.compile</code> was expecting an appropriate <code>flags</code> parameter (an integer) of which <code>line</code> (a string) is not. </p>
<p>You should be doing <code>re.match</code> not <code>re.compile</code>:</p>
<blockquote>
<p><a href="https://docs.python.org/2/library/re.html#re.compile" rel="nofollow"><code>re.compile</code></a></p>
<p>Compile a regular expression pattern into a regular expression object,
which can be used for matching using its <code>match()</code> and <code>search()</code>
methods...</p>
</blockquote>
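<p>For instance (a sketch; the <code>^...$</code> anchors are dropped and <code>findall</code> is used, since in the sample data the IPs sit in the middle of each <code>isInNet(...)</code> line):</p>

```python
import re

# compile once, reuse for every line
pattern = re.compile(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}")

line = 'isInNet(hostDNS, "147.163.1.0", "255.255.0.0") ||'
print(pattern.findall(line))
```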
| 2 | 2016-09-02T20:27:26Z | [
"python",
"python-2.7",
"extract",
"ipv4"
] |
Extracting IP addresses from a file | 39,300,166 | <p>I'm trying to extract IP addresses from an <code>asp</code> file in Python, the file looks something like this:</p>
<pre><code>onInternalNet = (
isInNet(hostDNS, "147.163.1.0", "255.255.0.0") ||
isInNet(hostDNS, "123.264.0.0", "255.255.0.0") ||
isInNet(hostDNS, "137.5.0.0", "255.0.0.0") ||
isInNet(hostDNS, "100.01.02.0", "255.0.0.0") ||
isInNet(hostDNS, "172.146.30.0", "255.240.0.0") ||
isInNet(hostDNS, "112.268.0.0", "255.255.0.0") ||
</code></pre>
<p>How I'm attempting to extract them is with a regex:</p>
<pre><code>if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
</code></pre>
<p>However I'm getting an error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "pull_proxy.py", line 27, in <module>
    write_to_file(extract_proxies(in_file), out_file)
  File "pull_proxy.py", line 8, in extract_proxies
    if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
  File "C:\Python27\lib\re.py", line 194, in compile
    return _compile(pattern, flags)
  File "C:\Python27\lib\re.py", line 233, in _compile
    bypass_cache = flags & DEBUG
TypeError: unsupported operand type(s) for &: 'str' and 'int'
</code></pre>
<p>I don't understand why I'm getting that error, what can I do to this code to make it extract the information like I want it to?</p>
<pre><code>import re

def extract_proxies(in_file):
    buffer = []
    for line in in_file:
        if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
            print "{} appened to buffer.".format(line)
            buffer.append(line)
        else:
            pass
    return buffer

def write_to_file(buffer, out_file):
    for proxy in buffer:
        with open(out_file, "a+") as res:
            res.write(proxy)

if __name__ == '__main__':
    print "Running...."
    in_file = "C:/Users/thomas_j_perkins/Downloads/test.asp"
    out_file = "c:/users/thomas_j_perkins/Downloads/results.txt"
    write_to_file(extract_proxies(in_file), out_file)
</code></pre>
<p><strong>EDIT</strong></p>
<p>Realized I hadn't opened the file:</p>
<pre><code>import re

def extract_proxies(in_file):
    buffer = []
    for line in in_file:
        if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
            print "{} appened to buffer.".format(line)
            buffer.append(line)
        else:
            pass
    in_file.close()
    return buffer

def write_to_file(buffer, out_file):
    for proxy in buffer:
        with open(out_file, "a+") as res:
            res.write(proxy)

if __name__ == '__main__':
    print "Running...."
    in_file = "C:/Users/thomas_j_perkins/Downloads/PAC-Global-Vista.asp"
    out_file = "c:/users/thomas_j_perkins/Downloads/results.txt"
    write_to_file(extract_proxies(open(in_file, "r+")), out_file)
</code></pre>
<p>Still getting the same error:</p>
<pre class="lang-none prettyprint-override"><code>Running....
Traceback (most recent call last):
  File "pull_proxy.py", line 28, in <module>
    write_to_file(extract_proxies(open(in_file)), out_file)
  File "pull_proxy.py", line 8, in extract_proxies
    if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
  File "C:\Python27\lib\re.py", line 194, in compile
    return _compile(pattern, flags)
  File "C:\Python27\lib\re.py", line 233, in _compile
    bypass_cache = flags & DEBUG
TypeError: unsupported operand type(s) for &: 'str' and 'int'
</code></pre>
| 0 | 2016-09-02T20:20:11Z | 39,300,328 | <p>Your initial error </p>
<pre><code>TypeError: unsupported operand type(s) for &: 'str' and 'int'
</code></pre>
<p>is caused by exactly what @Moses said in his answer. flags are supposed to be int values, not strings.</p>
<hr>
<p>You should compile your regex once. Also, you need to use an open file handle when you iterate over the lines.</p>
<pre><code>import re

IP_MATCHER = re.compile(r"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})")

def extract_proxies(fh):
    for line in fh:
        line = line.strip()
        match = IP_MATCHER.findall(line)
        if match:
            print "{} appened to buffer.".format(line)
            print match
        else:
            pass

def write_to_file(buffer, out_file):
    for proxy in buffer:
        with open(out_file, "a+") as res:
            res.write(proxy)

if __name__ == '__main__':
    print "Running...."
    in_file = "in.txt"
    with open(in_file) as fh:
        extract_proxies(fh)
<p>This will find all matches, if you only want the first, then use <code>IP_MATCHER.search</code> and <code>match.groups()</code>. This is of course assuming you actually want to extract the IP addresses.</p>
<p>For instance:</p>
<pre><code>def extract_proxies(fh):
    for line in fh:
        line = line.strip()
        match = IP_MATCHER.findall(line)
        if len(match) == 2:
            print "{} appened to buffer.".format(line)
            ip, mask = match
            print "IP: %s => Mask: %s" % (ip, mask)
        else:
            pass
</code></pre>
| 1 | 2016-09-02T20:34:07Z | [
"python",
"python-2.7",
"extract",
"ipv4"
] |
Extracting IP addresses from a file | 39,300,166 | <p>I'm trying to extract IP addresses from an <code>asp</code> file in Python, the file looks something like this:</p>
<pre><code>onInternalNet = (
isInNet(hostDNS, "147.163.1.0", "255.255.0.0") ||
isInNet(hostDNS, "123.264.0.0", "255.255.0.0") ||
isInNet(hostDNS, "137.5.0.0", "255.0.0.0") ||
isInNet(hostDNS, "100.01.02.0", "255.0.0.0") ||
isInNet(hostDNS, "172.146.30.0", "255.240.0.0") ||
isInNet(hostDNS, "112.268.0.0", "255.255.0.0") ||
</code></pre>
<p>How I'm attempting to extract them is with a regex:</p>
<pre><code>if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
</code></pre>
<p>However I'm getting an error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "pull_proxy.py", line 27, in <module>
    write_to_file(extract_proxies(in_file), out_file)
  File "pull_proxy.py", line 8, in extract_proxies
    if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
  File "C:\Python27\lib\re.py", line 194, in compile
    return _compile(pattern, flags)
  File "C:\Python27\lib\re.py", line 233, in _compile
    bypass_cache = flags & DEBUG
TypeError: unsupported operand type(s) for &: 'str' and 'int'
</code></pre>
<p>I don't understand why I'm getting that error, what can I do to this code to make it extract the information like I want it to?</p>
<pre><code>import re

def extract_proxies(in_file):
    buffer = []
    for line in in_file:
        if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
            print "{} appened to buffer.".format(line)
            buffer.append(line)
        else:
            pass
    return buffer

def write_to_file(buffer, out_file):
    for proxy in buffer:
        with open(out_file, "a+") as res:
            res.write(proxy)

if __name__ == '__main__':
    print "Running...."
    in_file = "C:/Users/thomas_j_perkins/Downloads/test.asp"
    out_file = "c:/users/thomas_j_perkins/Downloads/results.txt"
    write_to_file(extract_proxies(in_file), out_file)
</code></pre>
<p><strong>EDIT</strong></p>
<p>Realized I hadn't opened the file:</p>
<pre><code>import re

def extract_proxies(in_file):
    buffer = []
    for line in in_file:
        if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
            print "{} appened to buffer.".format(line)
            buffer.append(line)
        else:
            pass
    in_file.close()
    return buffer

def write_to_file(buffer, out_file):
    for proxy in buffer:
        with open(out_file, "a+") as res:
            res.write(proxy)

if __name__ == '__main__':
    print "Running...."
    in_file = "C:/Users/thomas_j_perkins/Downloads/PAC-Global-Vista.asp"
    out_file = "c:/users/thomas_j_perkins/Downloads/results.txt"
    write_to_file(extract_proxies(open(in_file, "r+")), out_file)
</code></pre>
<p>Still getting the same error:</p>
<pre class="lang-none prettyprint-override"><code>Running....
Traceback (most recent call last):
  File "pull_proxy.py", line 28, in <module>
    write_to_file(extract_proxies(open(in_file)), out_file)
  File "pull_proxy.py", line 8, in extract_proxies
    if re.compile(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", line):
  File "C:\Python27\lib\re.py", line 194, in compile
    return _compile(pattern, flags)
  File "C:\Python27\lib\re.py", line 233, in _compile
    bypass_cache = flags & DEBUG
TypeError: unsupported operand type(s) for &: 'str' and 'int'
</code></pre>
| 0 | 2016-09-02T20:20:11Z | 39,300,353 | <p>Please check the below code:</p>
<p>I made a couple of changes:</p>
<ol>
<li>re.compile - the regex should be compiled first; the compiled pattern object can then be used with 'match/search/findall'.</li>
<li>The regex was not right. <code>match</code> anchors at the start of the line, so the pattern has to account for the text preceding the IP address; a regex passed to <code>match</code> does not find words in the middle of a line directly.</li>
</ol>
<p></p>
<pre><code>import re

def extract_proxies(in_file):
    buffer1 = []
    #Regex compiled here
    m = re.compile(r'\s*\w+\(\w+,\s+\"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\"')
    for line in in_file:
        #Used here to match
        r = m.match(line)
        if r is not None:
            print "{} appened to buffer.".format(line)
            buffer1.append(r.group(1))
        else:
            pass
    in_file.close()
    return buffer1

def write_to_file(buffer1, out_file):
    for proxy in buffer1:
        with open(out_file, "a+") as res:
            res.write(proxy+'\n')

if __name__ == '__main__':
    print "Running...."
    in_file = "sample.txt"
    out_file = "results.txt"
    write_to_file(extract_proxies(open(in_file)), out_file)
</code></pre>
<p>Output:</p>
<pre><code>C:\Users\dinesh_pundkar\Desktop>python c.py
Running....
isInNet(hostDNS, "147.163.1.0", "255.255.0.0") ||
appened to buffer.
isInNet(hostDNS, "123.264.0.0", "255.255.0.0") ||
appened to buffer.
isInNet(hostDNS, "137.5.0.0", "255.0.0.0") ||
appened to buffer.
isInNet(hostDNS, "100.01.02.0", "255.0.0.0") ||
appened to buffer.
isInNet(hostDNS, "172.146.30.0", "255.240.0.0") ||
appened to buffer.
isInNet(hostDNS, "112.268.0.0", "255.255.0.0") || appened to buffer.
C:\Users\dinesh_pundkar\Desktop>python c.py
</code></pre>
| 1 | 2016-09-02T20:36:06Z | [
"python",
"python-2.7",
"extract",
"ipv4"
] |
python dataframe convert column to row | 39,300,265 | <p>Attempting to convert a single dataframe column into a row. I've seen multiple responses to similar questions, but most of the questions pertain to multiple columns and rows. I can't find the simple solution to converting the following:</p>
<pre><code>   value
0      A
1      B
2      C
3      D
4      E
5      F
6      G
7      H
8      I
9      J
10     K
11     L
</code></pre>
<p>to</p>
<p>A B C D E F G H I J K L</p>
| 0 | 2016-09-02T20:28:53Z | 39,300,527 | <p>To transpose the <code>value</code> dataframe</p>
<p><code>value1 = value.transpose()</code></p>
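<p>A quick sketch of the effect (a toy frame with the question's column):</p>

```python
import pandas as pd

value = pd.DataFrame({'value': list('ABCDEFGHIJKL')})
value1 = value.transpose()       # shape (12, 1) -> (1, 12): the column is now a row
print(' '.join(value1.iloc[0]))  # A B C D E F G H I J K L
```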
| 0 | 2016-09-02T20:49:54Z | [
"python",
"dataframe"
] |
How many session objects are needed for synchronous in-graph replication? | 39,300,401 | <p>When using synchronous in-graph replication I only call <code>tf.Session.run()</code> once.</p>
<p><strong>Question 1:</strong> Do I still have to create a new session object for each worker and have to pass the URL of the master server (the one that calls <code>tf.Session.run()</code>) as the session target?</p>
<p><strong>Question 2:</strong> Can I get the session target for each server by using <code>server.target</code> or do I have to specify the URL of the master server specifically?</p>
| 0 | 2016-09-02T20:39:44Z | 39,300,842 | <p>If you are using "in-graph replication", the graph contains multiple copies of the computational nodes, typically with one copy per device (i.e. one per worker task if you're doing distributed CPU training, or one per GPU if you're doing distributed or local multi-GPU training). Since all of the replicas are in the same graph, you only need one <code>tf.Session</code> to control the entire training process. You <strong>don't</strong> need to create <code>tf.Session</code> objects in the workers that don't call <code>Session.run()</code>. </p>
<p>For in-graph training, it's typical to have a single master that is separate from the worker tasks (for performance isolation), but you could colocate it with your client program. In that case, you could simply create a single-task job called <code>"client"</code> and in that task create a session using <code>server.target</code>. The following example shows how you could write a single script for your <code>"client"</code>, <code>"worker"</code>, and <code>"ps"</code> jobs:</p>
<pre><code>server = tf.train.Server({"client": ["client_host:2222"],
                          "worker": ["worker_host0:2222", ...],
                          "ps": ["ps_host0:2222", ...]})

if job_name == "ps" or job_name == "worker":
  server.join()
elif job_name == "client":
  # Build a replicated graph.
  # ...
  sess = tf.Session(server.target)
  # Insert training loop here.
  # ...
</code></pre>
| 2 | 2016-09-02T21:18:34Z | [
"python",
"tensorflow"
] |
Saving tweets to different files using Python | 39,300,456 | <p>I am relatively new to python and trying to download tweets and save them to different text files. I want the file name to be dynamic and hence tried to modify code according to my requirement. Below, is the code that I am trying to modify:-</p>
<pre><code>class StdOutListener(StreamListener):
    def on_data(self, data):
        i=1
        try:
            if os.path.isfile('filename'+str(i)+'.txt'):
                if os.stat('filename'+str(i)+'.txt').st_size > 5000000:
                    i=i+1
                    # print data
                    savefile=open('filename'+str(i)+'.txt','a')
                    savefile.write(data)
                    savefile.write('\n')
                    savefile.close()
                    return True
                else:
                    savefile=open('filename'+str(i)+'.txt','a')
                    savefile.write(data)
                    savefile.write('\n')
                    savefile.close()
                    return True
            else:
                savefile=open('filename'+str(i)+'.txt','a')
                savefile.write(data)
                savefile.write('\n')
                savefile.close()
        except BaseException, e:
            print 'failed_ondata,',str(e)
            time.sleep(5)

    def on_error(self, status):
        print status
</code></pre>
<p>Something is off in the code above as it doesn't seem to work. I am still learning and It could be the most obvious thing but I would really appreciate if someone can help me work the code above. </p>
| 0 | 2016-09-02T20:44:24Z | 39,300,616 | <p>Your code is missing a loop to increment <code>i</code> while a file already exists: as written, the mechanism only works for the file of index 1; it creates a file of index 2, then stops there.</p>
<p>Fix: I have added a loop which breaks as soon as a "free" filename is found. Note the cleaner way to get the file size, and the code is much more compact. Tested and works as designed: it creates files, incrementing the index each time a file grows too big.</p>
<p>Here's my proposal:</p>
<pre><code>import os
class StdOutListener(StreamListener):
def get_filename(self,i):
return 'filename'+str(i)+'.txt'
def on_data(self, data):
i=1
try:
# compute first free file
while True:
f = self.get_filename(i)
if os.path.isfile(f):
if os.path.getsize(f) > 5000000:
i+=1 # next file index
else:
break # file exists but size small enough
else:
break # ok file does not exist
savefile=open(self.get_filename(i),'a')
savefile.write(data)
savefile.write('\n')
savefile.close()
return True # done!
except BaseException as e:
print('failed_ondata,',str(e))
time.sleep(5)
return False
def on_error(self, status):
print(status)
</code></pre>
| 0 | 2016-09-02T20:58:12Z | [
"python",
"python-2.7",
"twitter",
"tweepy"
] |
How to generate mazes with fixed entry and exit points? | 39,300,491 | <p>I have read about the Depth-First search algorithm to create and solve mazes. However, I have not found anything on creating mazes with fixed entry and exit. On each maze the entry would always be at (0, 1) and the exit at the opposite side on both axis.</p>
<p>During the generation of the maze, every cell should be visited (to generate as many dead-ends as possible), but the exit should always be at the same point.</p>
<p>The solution path to the resulting mazes would look like this:</p>
<p><a href="http://i.stack.imgur.com/nBnI1.png" rel="nofollow"><img src="http://i.stack.imgur.com/nBnI1.png" alt="enter image description here"></a></p>
<p>Or this:</p>
<p><a href="http://i.stack.imgur.com/1AeKX.png" rel="nofollow"><img src="http://i.stack.imgur.com/1AeKX.png" alt="enter image description here"></a></p>
| 0 | 2016-09-02T20:47:11Z | 39,300,710 | <p>I've made grid-based mazes before using a breadth-first search, but a similar algorithm can be devised from a depth first one. </p>
<p>First I'd create a <a href="https://en.wikipedia.org/wiki/Graph_theory" rel="nofollow">graph</a>, where each node in your coordinate graph above links to the node up, down, left, and right of it. For example, node (1, 1) has an edge to (0, 1), (1, 0), (2, 1), and (1, 2). Edge nodes will only have 3 edges, and corner ones will only have 2, since the neighbors in the appropriate directions don't exist. While you are generating these edges, assign each one a random weight. When I implemented it I found that a range of [0, 100) worked well, but you can tweak that. Then finally you can do a Depth First Search from your desired start node to the desired end one, and just trace out the path as you go. If you recurse down an edge that would connect you to a node you've already visited, don't draw the edge there. That'll get you something that looks maze-like.</p>
<p>When I did it, I actually set up the graph in the same way, but instead of DFS I used <a href="https://en.wikipedia.org/wiki/Prim%27s_algorithm" rel="nofollow">Prim's Algorithm</a> to calculate the minimum spanning tree of that graph. This gave me something that looked maze-like, touched every node at least once, and contained no cycles. Then I could assign any point I wanted to be the start and end point, and the maze would contain no cycles, and exactly 1 shortest path between any 2 points. I added tools on top of that for editing the maze, removing dead ends, rendering it in 3D, etc, but those are beyond the scope of your question.</p>
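<p>To make the graph-based description concrete, here is a minimal Python sketch of the same idea (my own illustration, not the project's C++ code): build a grid graph with random edge weights and carve passages along its minimum spanning tree using Prim's algorithm.</p>

```python
import heapq
import random

def prim_maze(width, height, seed=None):
    """Carve a maze on a width x height grid: return the set of open passages
    (each passage is a frozenset of two adjacent cells) forming an MST."""
    rng = random.Random(seed)

    def neighbors(cell):
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height:
                yield (nx, ny)

    start = (0, 0)
    visited = {start}
    # Frontier heap of (random weight, from_cell, to_cell) edges.
    frontier = [(rng.random(), start, n) for n in neighbors(start)]
    heapq.heapify(frontier)
    passages = set()
    while frontier:
        _, a, b = heapq.heappop(frontier)
        if b in visited:
            continue                       # would create a cycle, skip
        visited.add(b)
        passages.add(frozenset((a, b)))    # knock down the wall between a and b
        for n in neighbors(b):
            if n not in visited:
                heapq.heappush(frontier, (rng.random(), b, n))
    return passages
```

<p>Because the passages form a spanning tree, every cell is reachable and any two cells, such as a fixed entry at (0, 1) and an exit on the opposite side, are joined by exactly one path, which is the property described above.</p>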
<p>If you want to see how I did it, <a href="https://github.com/CodyIsAwesome/Minotaur" rel="nofollow">check out my project</a> on GitHub. The "Minotaur" folder contains the executable (a Frankenstein's monster of Python, C++, and C#), and the source is in there too. The <a href="https://github.com/CodyIsAwesome/Minotaur/blob/master/Source/CPP/PrimMazeGenerator/PrimMazeGenerator/Graph.cpp" rel="nofollow">maze generation part is in this file</a>. </p>
<p>I know you asked for this in Python, but I'm too busy to reverse engineer my C++ code right now; I hope you still find this answer helpful.</p>
<p><em><strong>Edit:</strong></em> I nearly forgot I made a fancy video showing it off, so if you want to see it in action but don't want to compile my source or don't trust my executable, <a href="https://codyachilders.com/portfolio/procedural-maze-generator-2015/" rel="nofollow">you can view the project on my portfolio</a>.</p>
| 1 | 2016-09-02T21:07:54Z | [
"python",
"depth-first-search",
"maze"
] |
Python Webdriver not loading page on Windows | 39,300,553 | <p>I am using Python 3.5 on a Windows computer. When I run this code on my Mac it works perfect, no issues what so ever. But when I bring the code to my Windows computer it doesn't work.</p>
<p>Basically the web browser will open but I will just get a blank page. Nothing will load, not even the home page. I don't get any error messages.</p>
<pre><code>from selenium import webdriver
driver = webdriver.Firefox()
driver.get('https://www.google.com')
cookies = driver.get_cookies()
print(cookies)
</code></pre>
<p>Once I close the web browser I get this message in the shell:</p>
<p><code>"The browser appears to have exited "
selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details.</code></p>
<p>From what I've been able to find online (most is for Java) it looks like I may need to setup a profile? Is this correct and could anyone help with this?</p>
| 1 | 2016-09-02T20:52:08Z | 39,301,141 | <p>It looks like your client doesn't have the fix for the new switch to launch the gecko driver:</p>
<p><a href="https://github.com/SeleniumHQ/selenium/commit/c76917839c868603c9ab494d8aa0e9d600515371" rel="nofollow">https://github.com/SeleniumHQ/selenium/commit/c76917839c868603c9ab494d8aa0e9d600515371</a></p>
<p>Make sure that you have the latest beta version installed (selenium-3.0.0b2) if you wish to use geckodriver v0.10.0 and above:</p>
<pre><code>pip install -U selenium --pre
</code></pre>
<p>Note that you need the <code>--pre</code> flag to install the beta version.</p>
| 1 | 2016-09-02T21:48:44Z | [
"python",
"selenium",
"firefox",
"webdriver"
] |
python need faster response with getch | 39,300,669 | <p>I'm trying to create a setup to blink a led and be able to control the frequency.
Right now I'm just printing 10s as placeholders for testing.
Everything runs and does what should, but getch is throwing me off.</p>
<pre><code>freq = 1
while freq > 0:
time.sleep(.5/freq) #half dutycycle / Hz
print("1")
time.sleep(.5/freq) #half dutycycle / Hz
print("0")
def kbfunc():
return ord(msvcrt.getch()) if msvcrt.kbhit() else 0
#print(kbfunc())
if kbfunc() == 27: #ESC
break
if kbfunc() == 49: #one
freq = freq + 10
if kbfunc() == 48: #zero
freq = freq - 10
</code></pre>
<p>Now when it starts up, the freq change portion seems buggy like it's not reading all the time or I have to time the press just right. The break line has no problem whenever pressed though.</p>
| 0 | 2016-09-02T21:02:59Z | 39,300,707 | <p>There should be only one <code>kbfunc()</code> call. Store the result in a variable.</p>
<p>E.g.: in your code, if the key isn't <code>Esc</code>, you'll read the keyboard again, and the first <code>kbfunc()</code> call has already consumed the keypress.</p>
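<p>As an illustration (my own sketch, not the original code): read the key code once per iteration, store it, and dispatch on the stored value, so that a press of <code>1</code> or <code>0</code> is never swallowed by an earlier check.</p>

```python
def handle_key(key, freq):
    """Dispatch one key code that was read exactly once.
    Returns the new frequency and whether the loop should keep running."""
    if key == 27:            # ESC
        return freq, False
    if key == 49:            # '1' -> faster
        return freq + 10, True
    if key == 48:            # '0' -> slower
        return freq - 10, True
    return freq, True        # no key (0) or anything else

# Inside the blink loop:
#     key = kbfunc()                      # single read per iteration
#     freq, running = handle_key(key, freq)
#     if not running:
#         break
```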
| 2 | 2016-09-02T21:07:48Z | [
"python",
"msvcrt",
"getch"
] |
python need faster response with getch | 39,300,669 | <p>I'm trying to create a setup to blink a led and be able to control the frequency.
Right now I'm just printing 10s as placeholders for testing.
Everything runs and does what should, but getch is throwing me off.</p>
<pre><code>freq = 1
while freq > 0:
time.sleep(.5/freq) #half dutycycle / Hz
print("1")
time.sleep(.5/freq) #half dutycycle / Hz
print("0")
def kbfunc():
return ord(msvcrt.getch()) if msvcrt.kbhit() else 0
#print(kbfunc())
if kbfunc() == 27: #ESC
break
if kbfunc() == 49: #one
freq = freq + 10
if kbfunc() == 48: #zero
freq = freq - 10
</code></pre>
<p>Now when it starts up, the freq change portion seems buggy like it's not reading all the time or I have to time the press just right. The break line has no problem whenever pressed though.</p>
| 0 | 2016-09-02T21:02:59Z | 39,301,126 | <pre><code>from msvcrt import getch,kbhit
import time
def read_kb():
return ord(getch()) if kbhit() else 0
def next_state(state):
return (state + 1)%2 # 1 -> 0, 0 -> 1
freq = 1.0 # in blinks per second
state = 0
while freq > 0:
print(state)
state = next_state(state)
key = read_kb()
if key == 27: #ESC
break
if key == 49: #one
freq = freq + 1.0
if key == 48: #zero
freq = max(freq - 1.0, 1.0)
time.sleep(0.5/freq)
</code></pre>
| 0 | 2016-09-02T21:47:04Z | [
"python",
"msvcrt",
"getch"
] |
for loop to extract header for a dataframe in pandas | 39,300,691 | <p>I am a newbie in python. I have a data frame that looks like this:</p>
<pre><code> A B C D E
0 1 0 1 0 1
1 0 1 0 0 1
2 0 1 1 1 0
3 1 0 0 1 0
4 1 0 0 1 1
</code></pre>
<p>How can I write a for loop to gather the column names for each row. I expect my result set looks like that:</p>
<pre><code> A B C D E Result
0 1 0 1 0 1 ACE
1 0 1 0 0 1 BE
2 0 1 1 1 0 BCD
3 1 0 0 1 0 AD
4 1 0 0 1 1 ADE
</code></pre>
<p>Anyone can help me with that? Thank you!</p>
| 4 | 2016-09-02T21:05:30Z | 39,300,749 | <p>The <code>dot</code> function is made for exactly this purpose: you want the matrix dot product between your matrix and the vector of column names:</p>
<pre><code>df.dot(df.columns)
Out[5]:
0 ACE
1 BE
2 BCD
3 AD
4 ADE
</code></pre>
<p>If your dataframe is numeric, then obtain the boolean matrix first by testing your <code>df</code> against 0:</p>
<pre><code>(df!=0).dot(df.columns)
</code></pre>
<p>PS: Just assign the result to the new column</p>
<pre><code>df['Result'] = df.dot(df.columns)
df
Out[7]:
A B C D E Result
0 1 0 1 0 1 ACE
1 0 1 0 0 1 BE
2 0 1 1 1 0 BCD
3 1 0 0 1 0 AD
4 1 0 0 1 1 ADE
</code></pre>
| 9 | 2016-09-02T21:11:10Z | [
"python",
"pandas",
"for-loop"
] |
having Key Error while printing json data | 39,300,705 | <p><strong>python 3.5.1</strong></p>
<p>hi i have following json and python code and i want to print json data but it has an error that says:</p>
<blockquote>
<blockquote>
<p>Key Error : 'A'</p>
</blockquote>
</blockquote>
<p><strong>python</strong></p>
<pre><code>data = json.load(...)
for item in data['x']
print (item['A'])
</code></pre>
<p><strong>json</strong></p>
<pre><code>{"x":[
{"A":"B"},
{"C":"D"}
]}
</code></pre>
<p>whats my problem?</p>
| 0 | 2016-09-02T21:07:42Z | 39,300,800 | <p>As @elethan pointed out, the second item does not have the key <code>'A'</code>.</p>
<p>You can do the following</p>
<pre><code>data = json.load(...)
for item in data['x']:
print(item.get('A'))
</code></pre>
<p>That will not raise an error for your specific json input; it prints <code>None</code> whenever an element does not contain the <code>'A'</code> key.</p>
<p>You can also supply default value to <code>.get()</code>, e.g. <code>item.get('A', 'default')</code>.</p>
<p>Thanks @elethan</p>
| 0 | 2016-09-02T21:15:46Z | [
"python",
"json"
] |
having Key Error while printing json data | 39,300,705 | <p><strong>python 3.5.1</strong></p>
<p>hi i have following json and python code and i want to print json data but it has an error that says:</p>
<blockquote>
<blockquote>
<p>Key Error : 'A'</p>
</blockquote>
</blockquote>
<p><strong>python</strong></p>
<pre><code>data = json.load(...)
for item in data['x']
print (item['A'])
</code></pre>
<p><strong>json</strong></p>
<pre><code>{"x":[
{"A":"B"},
{"C":"D"}
]}
</code></pre>
<p>whats my problem?</p>
| 0 | 2016-09-02T21:07:42Z | 39,300,833 | <p>To print the values in each dictionary (with <em>mismatched</em> keys), use the <code>values</code> method of the dictionary:</p>
<pre><code>data = json.load(...)
for item in data['x']:
print(item.values())
</code></pre>
| 1 | 2016-09-02T21:17:55Z | [
"python",
"json"
] |
having Key Error while printing json data | 39,300,705 | <p><strong>python 3.5.1</strong></p>
<p>hi i have following json and python code and i want to print json data but it has an error that says:</p>
<blockquote>
<blockquote>
<p>Key Error : 'A'</p>
</blockquote>
</blockquote>
<p><strong>python</strong></p>
<pre><code>data = json.load(...)
for item in data['x']
print (item['A'])
</code></pre>
<p><strong>json</strong></p>
<pre><code>{"x":[
{"A":"B"},
{"C":"D"}
]}
</code></pre>
<p>whats my problem?</p>
| 0 | 2016-09-02T21:07:42Z | 39,300,928 | <p>The problem is that your code assumes that every item in <code>data['x']</code> will have a key <code>'A'</code>, but as soon as you iterate to a <code>dict</code> that does not have such a key you will get a <code>KeyError</code>.</p>
<p>Try using <code>item.get('A')</code> which will return <code>None</code> (or a default you provide) if there is no key <code>'A'</code> in your dictionary. It seems like you want to do something like this:</p>
<pre><code>data = json.load(...)
for item in data['x']:
value = item.get('A')
if value:
print(value)
else:
continue
</code></pre>
<p>This will print the value associated with the key <code>'A'</code> if it exists, otherwise it will move on to the next dictionary in the list.</p>
| 0 | 2016-09-02T21:27:26Z | [
"python",
"json"
] |
Numpy Int Array to HEX and then to String Conversion PYTHON | 39,300,846 | <p>I have couple of questions </p>
<p>Say I have a numpy array </p>
<pre><code>a = np.array([0,1,2,3,4,31])
a0 = a[0]
a1 = a[1]
a2 = a[2]
a3 = a[3]
a4 = a[4]
a5 = a[5]
print hex(a4), hex(a5)
</code></pre>
<p>gives me </p>
<pre><code> 0x4L 0x1F
</code></pre>
<p>same for a0, a1, a2, a3,a5. I know the L is because of the numpy array. </p>
<p>Now how would I get 0x04 and not 0x4. </p>
<p>My required outcome is </p>
<pre><code>'0x1F0403020100'
</code></pre>
<p>My required answer should start with 0x -- the hex values of a5, a4, a3, a2, a1, a0 - without the OX. The required output is a string. I can do the bit manipulation, if I have the zero. But not without it. </p>
| 0 | 2016-09-02T21:19:19Z | 39,300,989 | <p>What you really want to do is store your array in a single number by shifting each element of the array left by a multiple of 8 bits:</p>
<pre><code>>>> a = np.array([0,1,2,3,4,31])
>>> hex(sum([ai*256**i for i,ai in enumerate(a)]))
'0x1f0403020100'
</code></pre>
<p>But for this to work, you need to be sure that your array elements are at most 255. That's entirely up to you to keep/check. You should consider using an <code>ndarray</code> of <code>dtype</code> <code>np.uint8</code>, that way there's no chance for you to mangle up the data in your array (since you can't have overflow in your array).</p>
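<p>A sketch of that suggestion: with <code>dtype=np.uint8</code> each element is guaranteed to fit in a single byte, so the shift-and-sum packing cannot be corrupted by oversized values (out-of-range inputs wrap or raise at array-creation time, depending on your NumPy version).</p>

```python
import numpy as np

def pack_hex(a):
    """Pack a uint8 array into one hex string; the last element becomes the
    most significant byte, matching the required '0x1f0403020100' layout."""
    assert a.dtype == np.uint8
    total = 0
    for i, ai in enumerate(a):
        total += int(ai) << (8 * i)   # each element occupies its own byte
    return hex(total)

a = np.array([0, 1, 2, 3, 4, 31], dtype=np.uint8)
print(pack_hex(a))   # 0x1f0403020100
```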
| 2 | 2016-09-02T21:34:01Z | [
"python",
"arrays",
"string",
"numpy"
] |
Numpy Int Array to HEX and then to String Conversion PYTHON | 39,300,846 | <p>I have couple of questions </p>
<p>Say I have a numpy array </p>
<pre><code>a = np.array([0,1,2,3,4,31])
a0 = a[0]
a1 = a[1]
a2 = a[2]
a3 = a[3]
a4 = a[4]
a5 = a[5]
print hex(a4), hex(a5)
</code></pre>
<p>gives me </p>
<pre><code> 0x4L 0x1F
</code></pre>
<p>same for a0, a1, a2, a3,a5. I know the L is because of the numpy array. </p>
<p>Now how would I get 0x04 and not 0x4. </p>
<p>My required outcome is </p>
<pre><code>'0x1F0403020100'
</code></pre>
<p>My required answer should start with 0x -- the hex values of a5, a4, a3, a2, a1, a0 - without the OX. The required output is a string. I can do the bit manipulation, if I have the zero. But not without it. </p>
| 0 | 2016-09-02T21:19:19Z | 39,301,070 | <p>You can try this workaround. An element wise <code>hex</code> conversion and a later <code>join</code>. <code>'0x'</code> is added to the start of the string:</p>
<pre><code>>>> a = np.array([0,1,2,3,4,31])
>>> '0x' + ''.join('{:02X}'.format(i) for i in reversed(a))
'0x1F0403020100'
</code></pre>
| 2 | 2016-09-02T21:41:38Z | [
"python",
"arrays",
"string",
"numpy"
] |
Numpy Int Array to HEX and then to String Conversion PYTHON | 39,300,846 | <p>I have couple of questions </p>
<p>Say I have a numpy array </p>
<pre><code>a = np.array([0,1,2,3,4,31])
a0 = a[0]
a1 = a[1]
a2 = a[2]
a3 = a[3]
a4 = a[4]
a5 = a[5]
print hex(a4), hex(a5)
</code></pre>
<p>gives me </p>
<pre><code> 0x4L 0x1F
</code></pre>
<p>same for a0, a1, a2, a3,a5. I know the L is because of the numpy array. </p>
<p>Now how would I get 0x04 and not 0x4. </p>
<p>My required outcome is </p>
<pre><code>'0x1F0403020100'
</code></pre>
<p>My required answer should start with 0x -- the hex values of a5, a4, a3, a2, a1, a0 - without the OX. The required output is a string. I can do the bit manipulation, if I have the zero. But not without it. </p>
| 0 | 2016-09-02T21:19:19Z | 39,301,254 | <p><strong>tl;dr</strong></p>
<pre><code>("0x" + ("{:0>2x}" * len(a))).format(*tuple(a[::-1]))
</code></pre>
<p><hr>
<strong>Explanation:</strong></p>
<ul>
<li><p>Multiply string <code>"{:0>2x}"</code> a number of times equal to <code>len(a)</code>, i.e. do <code>"{:0>2x}" * len(a)</code>. This will create the following string:</p>
<pre><code>'{:0>2x}{:0>2x}{:0>2x}{:0>2x}{:0>2x}{:0>2x}'
</code></pre>
<p><code>{:0>2x}</code> used inside a string can later be formated using the <code>.format</code> method, resulting in a translation of an int into a hexadecimal string, of width 2, where any padding is done with <code>0</code>.<br>
Multiplying by the length of the array means you can create that many hex-formatted arguments.</p></li>
<li>Concatenate / prefix this string by <code>"0x"</code></li>
<li>Reverse your array, since you want it reversed, by doing <code>a[::-1]</code></li>
<li>Put the reversed array into a tuple, i.e. <code>tuple(a[::-1])</code>.</li>
<li>Expand this tuple using * syntax to make it into method arguments, i.e. <code>*tuple(a[::-1])</code></li>
<li>Now you can use the expanded tuple as an argument to the <code>.format</code> method, on the concatenated string you created containing the custom formatting the correct number of times.</li>
</ul>
<p>Result:</p>
<pre><code>>>> ("0x" + ("{:0>2x}" * len(a))).format(*tuple(a[::-1]))
'0x1f0403020100'
</code></pre>
<p>PS. If you prefer capital hex strings, replace <code>x</code> with <code>X</code>, i.e.:</p>
<pre><code>>>> ("0x" + ("{:0>2X}" * len(a))).format(*tuple(a[::-1]))
'0x1F0403020100'
</code></pre>
| 2 | 2016-09-02T22:03:04Z | [
"python",
"arrays",
"string",
"numpy"
] |
Python OpenCV Sliding Window Object Detection | 39,300,872 | <p>I'm implementing a sliding window in python 2.7, openCV version 3, using sklearn, skimage to apply a HOG detector to localise an object.</p>
<p>The HOG set-up works fine. If I do not apply a sliding window everything works okay.</p>
<p>The problem is the sliding window has a size of 128x128, giving a feature vector length of 15876. Whereas the training set has a size of 579474, as it was trained on 800x600 images.</p>
<p>I haven't seen any question that directly addresses this in a clear way, but it really does have me baffled. I also don't see many papers addressing this issue.</p>
<p>My code is this: </p>
<pre><code>clf = joblib.load(model_path)
# load the image and define the window width and height
image = imread(args["image"], flatten=True)
(winW, winH) = (128, 128)
# loop over the image pyramid
for resized in pyramid(image, scale=1.5):
# loop over the sliding window for each layer of the pyramid
for (x, y, window) in sliding_window(resized, stepSize=32, windowSize=(winW, winH)):
# if the window does not meet our desired window size, ignore it
if window.shape[0] != winH or window.shape[1] != winW:
continue
fd = hog(window, orientations, pixels_per_cell, cells_per_block, visualize, normalize)
pred = clf.predict(fd)
if pred == 1:
print("found, found, found, found, found")
</code></pre>
<p>The sliding window visualises fine if I draw it, it's just the prediction function. How to compare window features to the training features vectors of larger lengths?</p>
<p>Thanks a lot for your time!</p>
<p>Kind regards,</p>
<p>Fred</p>
| 0 | 2016-09-02T21:21:08Z | 39,310,061 | <p>I think I have the answer to this:</p>
<p>Simply train on images of the same dimensions as the window size. It might seem like you are losing data, but you then test on a larger image. For this to work well, the target object should fit inside the window.</p>
<p>So I train on 270x200 images, then scan a 270x200 window over, say, a 2700x2000 test image (same aspect ratio).</p>
<p>It works like this, for anyone else who is confused :)</p>
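<p>For completeness, here is a minimal sketch of the <code>sliding_window</code> helper the question relies on (an assumption on my part, since the original helper is not shown):</p>

```python
import numpy as np

def sliding_window(image, step_size, window_size):
    """Yield (x, y, crop) windows of window_size=(width, height),
    sliding step_size pixels at a time over a 2-D image array."""
    win_w, win_h = window_size
    for y in range(0, image.shape[0] - win_h + 1, step_size):
        for x in range(0, image.shape[1] - win_w + 1, step_size):
            yield x, y, image[y:y + win_h, x:x + win_w]
```

<p>Training at 270x200 and scanning a 270x200 window keeps the HOG feature vector the same length at training and prediction time, which is exactly what resolves the dimension mismatch above.</p>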
<p>Fred</p>
| 0 | 2016-09-03T18:20:30Z | [
"python",
"opencv",
"histogram",
"object-detection",
"sliding-window"
] |
How to find wrong prediction cases in test set (CNNs using Keras) | 39,300,880 | <p>I'm using the MNIST example with 60000 training images and 10000 testing images. How do I find which of the 10000 testing images have an incorrect classification/prediction?</p>
| 0 | 2016-09-02T21:22:13Z | 39,303,937 | <p>Simply use <code>model.predict_classes()</code> and compare the output with the true labels, i.e.:</p>
<pre><code>incorrects = np.nonzero(model.predict_classes(X_test) != y_test)
</code></pre>
<p>to get indices of incorrect predictions</p>
| 1 | 2016-09-03T06:29:18Z | [
"python",
"machine-learning",
"theano",
"convolution",
"keras"
] |
Is there any reason for giving self a default value? | 39,300,924 | <p>I was browsing through some code, and I noticed a line that caught my attention. The code is similar to the example below</p>
<pre><code>class MyClass:
def __init__(self):
pass
def call_me(self=''):
print(self)
</code></pre>
<p>This looks like any other class that I have seen, however a <code>str</code> is being passed in as default value for <code>self</code>. </p>
<p>If I print out <code>self</code>, it behaves as normal</p>
<pre><code>>>> MyClass().call_me()
<__main__.MyClass object at 0x000002A12E7CA908>
</code></pre>
<p>This has been bugging me and I cannot figure out why this would be used. Is there any reason to why a <code>str</code> instance would be passed in as a default value for <code>self?</code></p>
| 31 | 2016-09-02T21:27:11Z | 39,300,946 | <p>The short answer is yes. That way, you can call the function as: </p>
<pre><code>MyClass.call_me()
</code></pre>
<p>without instantiating <code>MyClass</code>, that will print an empty string.</p>
<p>To give a longer answer, we need to look at what is going on behind the scenes.</p>
<p>When you create an instance of the class, <code>__new__</code> and <code>__init__</code> are called to create it, that is:</p>
<pre><code>a = MyClass()
</code></pre>
<p>is roughly equivalent to:</p>
<pre><code>a = MyClass.__new__(MyClass)
MyClass.__init__(a)
</code></pre>
<p>Whenever you use some method on a created instance <code>a</code>:</p>
<pre><code>a.call_me()
</code></pre>
<p>It is "replaced" with <code>MyClass.call_me(a)</code>.</p>
<p>So, having a default parameter for <code>call_me</code> allows you to call this function not only as a method of an instance, in which case <code>self</code> is an instance itself, but also as a static class method.</p>
<p>That way, instead of <code>MyClass.call_me(a)</code>, just <code>MyClass.call_me()</code> is called. Because the argument list is empty, the default argument is assigned to <code>self</code> and the desired result (empty string) is printed.</p>
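<p>A minimal demonstration of both call styles (my own sketch; in Python 3, looking up a method on the class gives a plain function, so the class-level call below works without an instance):</p>

```python
class MyClass:
    def call_me(self=''):
        return self

# Called through the class: nothing is passed, so self falls back to ''.
print(MyClass.call_me() == '')   # True

# Called on an instance: that instance is passed implicitly as self.
obj = MyClass()
print(obj.call_me() is obj)      # True
```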
| 13 | 2016-09-02T21:29:36Z | [
"python",
"class",
"python-3.x"
] |
Is there any reason for giving self a default value? | 39,300,924 | <p>I was browsing through some code, and I noticed a line that caught my attention. The code is similar to the example below</p>
<pre><code>class MyClass:
def __init__(self):
pass
def call_me(self=''):
print(self)
</code></pre>
<p>This looks like any other class that I have seen, however a <code>str</code> is being passed in as default value for <code>self</code>. </p>
<p>If I print out <code>self</code>, it behaves as normal</p>
<pre><code>>>> MyClass().call_me()
<__main__.MyClass object at 0x000002A12E7CA908>
</code></pre>
<p>This has been bugging me and I cannot figure out why this would be used. Is there any reason to why a <code>str</code> instance would be passed in as a default value for <code>self?</code></p>
| 31 | 2016-09-02T21:27:11Z | 39,300,948 | <p><em>Not really</em>, it's just an <em>odd</em> way of making it not raise an error when called via the class:</p>
<pre><code>MyClass.call_me()
</code></pre>
<p>works fine since, even though nothing is implicitly passed as with instances, the default value for that argument is provided. If no default was provided, when called, this would of course raise the <code>TypeError</code> for args we all love. As to why he chose an empty string as the value, <em>only he knows</em>.</p>
<p>Bottom line, this is more <em>confusing</em> than it is practical. If you need to do something similar I'd advice a simple <a href="https://docs.python.org/3/library/functions.html#staticmethod" rel="nofollow"><code>staticmethod</code></a> with a default argument to achieve a similar effect. </p>
<p>That way <em>you don't stump anyone reading your code</em> (like the developer who wrote this did with you ;-):</p>
<pre><code>@staticmethod
def call_me(a=''):
print(a)
</code></pre>
<p>If instead you need access to class attributes you could always opt for the <a href="https://docs.python.org/3/library/functions.html#classmethod" rel="nofollow"><code>classmethod</code></a> decorator. Both these (<code>class</code> and <code>static</code> decorators) also serve a secondary purpose of making your intent crystal clear to others reading your code.</p>
| 28 | 2016-09-02T21:29:41Z | [
"python",
"class",
"python-3.x"
] |
Checking decimal point python | 39,300,980 | <p>I'm trying to create a script that counts to 3 (step size 0.1) using while, and I'm trying to make it not display .0 for numbers without decimal number (1.0 should be displayed as 1, 2.0 should be 2...)
What I tried to do is convert the float to int and then check if they equal. the problem is that it works only with the first number (0) but it doesn't work when it gets to 1.0 and 2.0..</p>
<p>this is my code:</p>
<pre><code>i = 0
while i < 3.1:
if int(i) == i:
print int(i)
else:
print i
i = i + 0.1
</code></pre>
<p>that's the output I get:</p>
<pre><code>0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.0
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3.0
</code></pre>
<p>the output I should get:</p>
<pre><code>0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3
</code></pre>
<p>thank you for your time.</p>
| 1 | 2016-09-02T21:33:11Z | 39,301,036 | <p>Because of accumulated floating-point rounding error, repeatedly adding 0.1 does not land exactly on the integers (after ten additions <code>i</code> is 0.9999999999999999, not 1.0), so <code>int(i) == i</code> fails. Therefore, you want to make sure the difference is smaller than some small <code>epsilon</code>.</p>
<pre><code>epsilon = 1e-10
i = 0
while i < 3.1:
if abs(round(i) - i) < epsilon:
print round(i)
else:
print i
i = i + 0.1
</code></pre>
| 3 | 2016-09-02T21:38:13Z | [
"python"
] |
Checking decimal point python | 39,300,980 | <p>I'm trying to create a script that counts to 3 (step size 0.1) using while, and I'm trying to make it not display .0 for numbers without decimal number (1.0 should be displayed as 1, 2.0 should be 2...)
What I tried to do is convert the float to int and then check if they equal. the problem is that it works only with the first number (0) but it doesn't work when it gets to 1.0 and 2.0..</p>
<p>this is my code:</p>
<pre><code>i = 0
while i < 3.1:
if int(i) == i:
print int(i)
else:
print i
i = i + 0.1
</code></pre>
<p>that's the output I get:</p>
<pre><code>0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.0
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3.0
</code></pre>
<p>the output I should get:</p>
<pre><code>0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3
</code></pre>
<p>thank you for your time.</p>
| 1 | 2016-09-02T21:33:11Z | 39,301,090 | <p>You can remove trailing zeros with '{0:g}'.format(1.00).</p>
<pre><code>i = 0
while i < 3.1:
if int(i) == i:
print int(i)
else:
print '{0:g}'.format(i)
i = i + 0.1
</code></pre>
<p>See: <a href="https://docs.python.org/3/library/string.html#format-specification-mini-language" rel="nofollow">https://docs.python.org/3/library/string.html#format-specification-mini-language</a></p>
<p>Update: Too lazy while copy/pasting. Thanks @aganders3</p>
<pre><code>i = 0
while i < 3.1:
print '{0:g}'.format(i)
i = i + 0.1
</code></pre>
| 2 | 2016-09-02T21:43:39Z | [
"python"
] |
Checking decimal point python | 39,300,980 | <p>I'm trying to create a script that counts to 3 (step size 0.1) using while, and I'm trying to make it not display .0 for numbers without decimal number (1.0 should be displayed as 1, 2.0 should be 2...)
What I tried to do is convert the float to int and then check if they equal. the problem is that it works only with the first number (0) but it doesn't work when it gets to 1.0 and 2.0..</p>
<p>this is my code:</p>
<pre><code>i = 0
while i < 3.1:
if int(i) == i:
print int(i)
else:
print i
i = i + 0.1
</code></pre>
<p>that's the output I get:</p>
<pre><code>0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2.0
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3.0
</code></pre>
<p>the output I should get:</p>
<pre><code>0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
1.1
1.2
1.3
1.4
1.5
1.6
1.7
1.8
1.9
2
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
3
</code></pre>
<p>thank you for your time.</p>
| 1 | 2016-09-02T21:33:11Z | 39,301,140 | <pre><code>i = 0
while i < 3.1:
    if round(i*10) % 10 == 0:
        print int(round(i))
else:
print i
i = i + 0.1
</code></pre>
| 1 | 2016-09-02T21:48:30Z | [
"python"
] |
PyQt QLineEdit get value from separate .py file | 39,301,017 | <p>I have the following code in one python file (sales.py) and want to display the results of the script's calculation in a QLineEdit of a separate file (control.py).</p>
<p>All line_edit.setText(def), line_edit.dispayText(def), line_edit.setText(subtotal) are not working. Any ideas of how I could go about doing this? </p>
<p>Thanks in advance for any suggestions. </p>
<pre><code>#sales py
def main () :
    total()

def total () :
    totals = { "quantity" : 4 , "price" : 1.5}
    total_quant = totals [ "quantity" ]
    total_price = totals [ "price" ]
    subtotal = str(total_quant * total_price)
    return subtotal

main()
--------------
#the below is not working
#controls.py
from sales import *
import sys
from PyQt4 import QtGui, QtCore

class Example(QtGui.QWidget):

    def __init__(self):
        super(Example, self).__init__()
        self.initUI()

    def initUI(self):
        q_le = QtGui.QLineEdit(self)
        q_le.move (50,50)
        q_le.setText(total())
        self.setGeometry(300, 300, 250, 150)
        self.setWindowTitle('Line Edit')
        self.show()

def main():
    app = QtGui.QApplication(sys.argv)
    ex = Example()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
</code></pre>
| -1 | 2016-09-02T21:36:26Z | 39,431,477 | <p>At the beginning I had my doubts that what I had in mind - get a script result from one py file and display it in a QLineEdit of another py file - would ever work. With the help of more experienced developers the solution turned out to be actually quite easy. </p>
<p>In fact, my inquiries are part of a personal learning project, that is, subdivide a large script into smaller portions, saved in separate files. </p>
<p>I have updated the code already, in case others are facing similar issues. </p>
| 0 | 2016-09-10T23:25:44Z | [
"python",
"pyqt4",
"qlineedit"
] |
How to perform a loop action at least once and also when a condition is true | 39,301,041 | <p>I am looking for a Python logical equivalent of a <code>do while</code> loop in other languages. I have page results I am iterating through. The results structure:</p>
<pre><code>1, 2, 3, 4 , ... NEXT
</code></pre>
<p>Each element there is a link. Only the last page has no <code>NEXT</code> element, so I have identified <code>NEXT</code> as the condition that I need to check for when iterating. </p>
<p>I have identified it using :</p>
<p><code>next_link = driver.find_element_by_id('anch_25')</code></p>
<p>So I have a function <code>my_function()</code> that I want to run on each page where <code>next_link</code> exists, then click the next_link using its <code>click()</code> method. If the element does not exist, it means either there is only 1 page of results or I am on the last page of results. Either way, I still want <code>my_function</code> to run in either case.</p>
<p>So I have:</p>
<pre><code>def my_function():
    print "Another result page"

###This is where I am trying to loop through the results pages
next_link = driver.find_element_by_id('anch_25')
if next_link:
    my_function()
    next_link.click()
else:
    my_function()
</code></pre>
<p>Unfortunately, this is only working for the first page and does not iterate over the other pages.</p>
<p>I have also tried this, </p>
<pre><code>while next_link:
    my_function()
    next_link.click()
my_function()
</code></pre>
<p>It doesn't seem to work either. Any suggestions?</p>
| 1 | 2016-09-02T21:38:45Z | 39,301,085 | <p>Check for the next link <strong>after</strong> calling your function. Then use <code>break</code> to break out of the loop instead of using <code>next_link</code> as the <code>while</code> condition.</p>
<pre><code>while True:
    my_function()
    next_link = driver.find_element_by_id('anch_25')
    if not next_link:
        break
    next_link.click()
</code></pre>
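<p>The do-while shape of that loop can be checked without Selenium; below is a minimal sketch in which <code>pages</code> and an index test are hypothetical stand-ins for <code>driver.find_element_by_id</code> and the NEXT link:</p>

```python
# do-while emulation: the body always runs at least once, and the exit
# test sits after the body, mirroring the while True / break structure.
pages = ["page-1", "page-2", "page-3"]
visited = []
index = 0
while True:
    visited.append(pages[index])     # my_function() stand-in
    if index + 1 >= len(pages):      # no NEXT link on the last page
        break
    index += 1                       # next_link.click() stand-in
print(visited)
```

<p>Every page, including the last one without a NEXT link, is processed exactly once.</p>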
| 1 | 2016-09-02T21:42:57Z | [
"python",
"for-loop",
"while-loop",
"pagination",
"do-while"
] |
How to perform a loop action at least once and also when a condition is true | 39,301,041 | <p>I am looking for a Python logical equivalent of a <code>do while</code> loop in other languages. I have page results I am iterating through. The results structure:</p>
<pre><code>1, 2, 3, 4 , ... NEXT
</code></pre>
<p>Each element there is a link. Only the last page has no <code>NEXT</code> element, so I have identified <code>NEXT</code> as the condition that I need to check for when iterating. </p>
<p>I have identified it using :</p>
<p><code>next_link = driver.find_element_by_id('anch_25')</code></p>
<p>So I have a function <code>my_function()</code> that I want to run on each page where <code>next_link</code> exists, then click the next_link using its <code>click()</code> method. If the element does not exist, it means either there is only 1 page of results or I am on the last page of results. Either way, I still want <code>my_function</code> to run in either case.</p>
<p>So I have:</p>
<pre><code>def my_function():
    print "Another result page"

###This is where I am trying to loop through the results pages
next_link = driver.find_element_by_id('anch_25')
if next_link:
    my_function()
    next_link.click()
else:
    my_function()
</code></pre>
<p>Unfortunately, this is only working for the first page and does not iterate over the other pages.</p>
<p>I have also tried this, </p>
<pre><code>while next_link:
    my_function()
    next_link.click()
my_function()
</code></pre>
<p>It doesn't seem to work either. Any suggestions?</p>
| 1 | 2016-09-02T21:38:45Z | 39,301,096 | <p>You may use a <code>while</code> loop with a flag variable that is set to <code>True</code> by default; based on your condition, you then set it to <code>True</code> or <code>False</code>. For example:</p>
<pre><code>is_continue = True
while is_continue:
    ...  # Your Logic
    if my_condition:
        is_continue = True
    else:
        is_continue = False
</code></pre>
<p>PS: I am only giving you a sample and leaving the rest for you to implement, so that you can work it out on your own and get an idea of how things work.</p>
| 1 | 2016-09-02T21:44:06Z | [
"python",
"for-loop",
"while-loop",
"pagination",
"do-while"
] |
Strange nested loop behavior while handling an exception | 39,301,069 | <p>Goal: if count is larger than actual line count, in <code>except</code> block: tell user and have them press enter. set <code>count</code> equal to total number of lines in file and retry the loop. </p>
<pre><code>count = 10000
with open('mobydick_ch1.txt') as f:
    while 1:
        lines = []
        try:
            for i in range(count):
                lines.append(next(f))  # iterate through file and append each line in range
            break
        except StopIteration:
            if not input("File does not contain that many lines, press enter to continue printing maximum lines:"):
                for i, k in enumerate(f, 1):
                    count = i
f.close()  # close file
# format output. enumerate lines, start at 1
# http://stackoverflow.com/questions/4440516/in-python-is-there-an-elegant-
# way-to-print-a-list-in-a-custom-format-without-ex
print(''.join('Line {0}: {1}'.format(*k) for k in enumerate(lines, 1)))
</code></pre>
<p>I am currently getting:</p>
<blockquote>
<p>File does not contain that many lines, press enter to continue printing maximum lines:</p>
</blockquote>
<p>every time I press enter. What is causing this unwanted behavior?</p>
| 2 | 2016-09-02T21:41:36Z | 39,301,095 | <p>You already exhausted the file, you can't then read from the file <em>again</em> without seeking back to 0. As a result your <code>for i, k in enumerate(f, 1):</code> loop exits immediately. The same then applies to every future iteration of your <code>while 1:</code> loop; the file is still at the end and all access with <code>next()</code> will raise a <code>StopIteration</code> immediately.</p>
<p>You already know how many lines you have read, just set <code>count = len(lines)</code>. There is no need to read the whole file <em>again</em> just to set <code>count</code>.</p>
<p>It'd be better if you used <a href="https://docs.python.org/2/library/itertools.html#itertools.islice" rel="nofollow"><code>itertools.islice()</code></a> to get your 1000 lines:</p>
<pre><code>from itertools import islice
count = 10000
with open('mobydick_ch1.txt') as f:
    lines = list(islice(f, count))  # list of up to count lines

if len(lines) < count:
    input("File does not contain that many lines, press enter to continue printing maximum lines:")
    count = len(lines)  # set count to actual number of lines
</code></pre>
<p>If you are trying to <em>wait</em> until a file contains at least <code>count</code> lines, you'll have to re-open the file each time and seek to the last recorded location:</p>
<pre><code>lines = []
pos = 0
while len(lines) < count:
    with open('mobydick_ch1.txt') as f:
        f.seek(pos)
        lines.extend(islice(f, count - len(lines)))
        pos = f.tell()
    if len(lines) < count:
        input("File does not contain that many lines, press enter to continue printing maximum lines:")
</code></pre>
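<p>A quick self-contained check of the <code>islice</code> approach, with <code>io.StringIO</code> standing in for the file on disk:</p>

```python
import io
from itertools import islice

count = 5
f = io.StringIO("line1\nline2\nline3\n")   # a "file" with only 3 lines
lines = list(islice(f, count))             # never raises StopIteration
if len(lines) < count:
    # this is where you would prompt the user
    count = len(lines)
print(count, lines)
```

<p>Unlike repeated <code>next(f)</code> calls, <code>islice</code> simply stops when the file runs out, so no exception handling is needed.</p>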
| 2 | 2016-09-02T21:44:00Z | [
"python",
"file"
] |
ASCII text executable, with CRLF line terminators | 39,301,086 | <p>Good evening,</p>
<p>I'm currently enrolled in an introduction-course to python and have come across an issue that I haven't been able to solve. I'm sure it's a simple error somewhere in my code, but I haven't been able to find any questions on SO that solved my issue.</p>
<p><em>Strangely enough it compiles and runs fine when executing it from cygwin...</em></p>
<p><strong>I'm getting this error while validating through 3rd party tests (that I don't have access to):</strong></p>
<blockquote>
<p>Python script, ASCII text executable, with CRLF line terminators</p>
</blockquote>
<p>This is my code: </p>
<pre><code> height = float(input("What is the plane's elevation in metres? \r\n"))
height = format(height * 3.28084, '.2f')
speed = float(input("What is the plane's speed in km/h? \r\n"))
speed = format(speed * 0.62137, '.2f')
temperature = float(input("Finally, what is the temperature (in celsius) outside? \r\n"))
temperature = format(temperature * (9/5) + 32, '.2f')
print("""\n########### OUTPUT ###########\n\nThe elevation is {feet} above the sea level, \n
you are going {miles} miles/h, \n
finally the temperature outside is {temp} degrees fahrenheit \n
########### OUTPUT ###########""".format(feet=height, miles=speed, temp=temperature))
</code></pre>
<p>And this is a cgi based on it (both are throwing the same error):</p>
<pre><code> #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# To write pagecontent to sys.stdout as bytes instead of string
import sys
import codecs
#Enable debugging of cgi-.scripts
import cgitb
cgitb.enable()
# Send the HTTP header
#print("Content-Type: text/plain;charset=utf-8")
print("Content-Type: text/html;charset=utf-8")
print("")
height = format(1100 * 3.28084, '.2f')
speed = format(1000 * 0.62137, '.2f')
temperature = format(-50 * (9/5) + 32, '.2f')
toPrint = """\n########### OUTPUT ###########\n\nThe elevation is {feet} above the sea level, \n
you are going {miles} miles/h, \n
finally the temperature outside is {temp} degrees fahrenheit \n
########### OUTPUT ###########"""
toPrint.format(feet=height, miles=speed, temp=temperature)
# Here comes the content of the webpage
content = """<!doctype html>
<meta charset="utf-8">
<title>Min redovisnings-sida</title>
<pre>
<h1>Min Redovisnings-sida </h1>
</pre>
<body>
{printer}
</body>
"""
# Write page content
sys.stdout = codecs.getwriter("utf-8")(sys.stdout.detach())
sys.stdout.write(content.format(printer=toPrint))
</code></pre>
| 1 | 2016-09-02T21:43:03Z | 39,301,195 | <p>You need to convert CRLF to LF, to do so you can run this command:</p>
<pre><code>dos2unix your_file
</code></pre>
<p>If you need to apply that to a specific folder content, use the below command inside your folder:</p>
<pre><code>find . -type f -exec dos2unix {} \;
</code></pre>
<p>You need to install <strong>dos2unix</strong> package first:</p>
<pre><code>sudo apt-get install dos2unix
</code></pre>
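<p>If installing dos2unix is not an option, a pure-Python equivalent for a single file is only a few lines; this is a sketch that reads the whole file into memory, so it suits scripts rather than huge files:</p>

```python
def dos2unix(path):
    # read bytes, normalise CRLF to LF, and write back in place
    with open(path, 'rb') as f:
        data = f.read()
    with open(path, 'wb') as f:
        f.write(data.replace(b'\r\n', b'\n'))
```

<p>Working on bytes (not text mode) avoids Python's own newline translation getting in the way.</p>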
| 1 | 2016-09-02T21:54:38Z | [
"python",
"python-3.x"
] |
How do I speed up a piece of python code which has a numpy function embedded in it? | 39,301,122 | <p>Here is the rate limiting function in my code </p>
<pre><code>def timepropagate(wv1, ham11,
                  ham12, ham22, scalararray, nt):
    wv2 = np.zeros((nx, ny), 'c16')
    fw1 = np.zeros((nx, ny), 'c16')
    fw2 = np.zeros((nx, ny), 'c16')
    for t in range(0, nt, 1):
        wv1, wv2 = scalararray*wv1, scalararray*wv2
        fw1, fw2 = (np.fft.fft2(wv1), np.fft.fft2(wv2))
        fw1 = ham11*fw1+ham12*fw2
        fw2 = ham12*fw1+ham22*fw2
        wv1, wv2 = (np.fft.ifft2(fw1), np.fft.ifft2(fw2))
        wv1, wv2 = scalararray*wv1, scalararray*wv2
    del(fw1)
    del(fw2)
    return np.array([wv1, wv2])
</code></pre>
<p>What I need is a reasonably fast implementation that would let me run at least twice as fast, preferably the fastest possible.</p>
<p>The more general question I'm interested in is: how can I speed this piece up with as few transitions back into Python as possible? I assume that even if I speed up specific segments of the code, say the scalar-array multiplications, I would still return to Python at the Fourier transforms, which would take time. Are there ways, using say numba or Cython, to avoid this "coming back" to Python in the middle of the loops?
On a personal note, I'd prefer something fast on a single thread, considering that I'd be using my other threads already.</p>
<p>Edit: here are results of profiling, the 1st one for 4096x4096 arrays for 10 time steps, I need to scale it up for nt = 8000. </p>
<pre><code>ncalls tottime percall cumtime percall filename:lineno(function)
1 0.099 0.099 432.556 432.556 <string>:1(<module>)
40 0.031 0.001 28.792 0.720 fftpack.py:100(fft)
40 45.867 1.147 68.055 1.701 fftpack.py:195(ifft)
80 0.236 0.003 47.647 0.596 fftpack.py:46(_raw_fft)
40 0.102 0.003 1.260 0.032 fftpack.py:598(_cook_nd_args)
40 1.615 0.040 99.774 2.494 fftpack.py:617(_raw_fftnd)
20 0.225 0.011 29.739 1.487 fftpack.py:819(fft2)
20 2.252 0.113 72.512 3.626 fftpack.py:908(ifft2)
80 0.000 0.000 0.000 0.000 fftpack.py:93(_unitary)
40 0.631 0.016 0.820 0.021 fromnumeric.py:43(_wrapit)
80 0.009 0.000 0.009 0.000 fromnumeric.py:457(swapaxes)
40 0.338 0.008 1.158 0.029 fromnumeric.py:56(take)
200 0.064 0.000 0.219 0.001 numeric.py:414(asarray)
1 329.728 329.728 432.458 432.458 profiling.py:86(timepropagate)
1 0.036 0.036 432.592 432.592 {built-in method builtins.exec}
40 0.001 0.000 0.001 0.000 {built-in method builtins.getattr}
120 0.000 0.000 0.000 0.000 {built-in method builtins.len}
241 3.930 0.016 3.930 0.016 {built-in method numpy.core.multiarray.array}
3 0.000 0.000 0.000 0.000 {built-in method numpy.core.multiarray.zeros}
40 18.861 0.472 18.861 0.472 {built-in method numpy.fft.fftpack_lite.cfftb}
40 28.539 0.713 28.539 0.713 {built-in method numpy.fft.fftpack_lite.cfftf}
1 0.000 0.000 0.000 0.000 {built-in method numpy.fft.fftpack_lite.cffti}
80 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
40 0.006 0.000 0.006 0.000 {method 'astype' of 'numpy.ndarray' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
80 0.000 0.000 0.000 0.000 {method 'pop' of 'list' objects}
40 0.000 0.000 0.000 0.000 {method 'reverse' of 'list' objects}
80 0.000 0.000 0.000 0.000 {method 'setdefault' of 'dict' objects}
80 0.001 0.000 0.001 0.000 {method 'swapaxes' of 'numpy.ndarray' objects}
40 0.022 0.001 0.022 0.001 {method 'take' of 'numpy.ndarray' objects}
</code></pre>
<p>I think I've done it wrong the first time, using time.time() to calculate time differences for small arrays and extrapolating the conclusions for larger ones.</p>
| 0 | 2016-09-02T21:46:23Z | 39,316,416 | <p>If most of the time is spent in the Hamiltonian multiplication, you may want to apply numba to that part. The biggest benefit comes from removing all the temporary arrays that would be needed when evaluating the expressions within NumPy.</p>
<p>Bear also in mind that the (4096, 4096, c16) arrays are big enough not to fit comfortably in the processor caches. A single matrix takes 256 MiB. So performance is unlikely to be bound by the arithmetic itself, but rather by memory bandwidth. Implement those operations so that you make only one pass over the input operands. This is trivial to do in <em>numba</em>. Note: you only need to implement the Hamiltonian expressions in <em>numba</em>.</p>
<p>I also want to point out that the "preallocations" using np.zeros seem to signal that your code is not doing what you intend, as:</p>
<pre><code> fw1 = ham11*fw1+ham12*fw2
fw2 = ham12*fw1+ham22*fw2
</code></pre>
<p>will actually create new arrays for fw1, fw2. If your intent was to reuse the buffers, you may want to use "fw1[:,:] = ...". Otherwise the np.zeros calls do nothing but waste time and memory. Note also that the second line reads the fw1 that was just overwritten by the first line, which is probably not the product you intend.</p>
<p>You may want to consider joining (wv1, wv2) into a (2, 4096, 4096, c16) array. The same with (fw1, fw2). That way the code will be simpler, as you can rely on broadcasting to handle the "scalararray" product. fft2 and ifft2 will actually do the right thing (AFAIK).</p>
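<p>A small sketch of that packing, with sizes shrunk from 4096 to 8 for illustration: broadcasting handles the scalararray product over the leading axis, and fft2/ifft2 transform the last two axes of each slice.</p>

```python
import numpy as np

nx = ny = 8
wv = np.ones((2, nx, ny), dtype='c16')          # wv1 and wv2 stacked
scalararray = np.linspace(0.5, 1.5, nx * ny).reshape(nx, ny)

wv = wv * scalararray          # (nx, ny) broadcasts over the leading axis
fw = np.fft.fft2(wv)           # fft2 acts on the last two axes by default
back = np.fft.ifft2(fw)
print(fw.shape, np.allclose(back, wv))
```

<p>One transform call per pair of wavefunctions replaces the two separate calls in the original loop.</p>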
| 0 | 2016-09-04T11:27:37Z | [
"python",
"performance",
"numpy",
"cython",
"numba"
] |
Wait until all threads are started before run them | 39,301,149 | <p>I'd like to do the same thing as this part of the script, but possibly in a different way. Is it possible? This script waits until all threads are ready and started, and only then runs the <code>while True</code> urllib.request loop... Since I don't want to copy-paste this method, is there a different way to do it?</p>
<pre><code>import threading, urllib.request
class request(threading.Thread):
    def run(self):
        while nload:
            time.sleep(1)
        while True:
            urllib.request.urlopen("www.google.it")

nload = 1
for x in range(800):
    request().start()
    time.sleep(0.003)
    print "Thread " + str(x) + " started!"
print "The cycle is running..."
nload = 0
while not nload:
    time.sleep(1)
</code></pre>
| 0 | 2016-09-02T21:49:32Z | 39,301,531 | <p>A better way is to use a <a href="https://docs.python.org/2/library/threading.html#event-objects" rel="nofollow">threading.Event</a> object.</p>
<pre><code>import threading, urllib.request
go = threading.Event()
class request(threading.Thread):
    def run(self):
        go.wait()
        while True:
            urllib.request.urlopen("www.google.it")

for x in range(800):
    request().start()
    time.sleep(0.003)
    print "Thread " + str(x) + " started!"
print "The cycle is running..."
go.set()
</code></pre>
<p>This guarantees that no thread will advance past the <code>go.wait()</code> line until <code>go.set()</code> is called. It can't ensure that the 800 threads wake up at the exact same moment, since your number of CPU cores, your operating system's scheduler, python's GIL, and probably a lot of other things I don't know about prevent you from having that degree of control.</p>
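<p>A miniature demonstration of the gate, with the network calls removed; <code>results</code> records when the thread bodies actually ran:</p>

```python
import threading

go = threading.Event()
results = []

def worker(n):
    go.wait()                # park here until the main thread fires the gate
    results.append(n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
# all four threads are started but blocked on go.wait(): nothing ran yet
assert results == []
go.set()                     # release every waiter at once
for t in threads:
    t.join()
print(sorted(results))
```

<p>Before <code>set()</code> the list stays empty even though every thread has started, which is exactly the behaviour the polling loop in the question was approximating.</p>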
| 1 | 2016-09-02T22:40:23Z | [
"python",
"multithreading",
"python-3.x"
] |
Excessive memory usage for very large Python lists loaded from 90GB of JSON for word2vec | 39,301,199 | <p>I'm working using a cluster to generate word2vec models using gensim from sentences from medical journals that are stored in JSON files and I'm having trouble with memory usage being too large.</p>
<p>The task is to keep a cumulative list of all sentences up to a particular year, and then generate a word2vec model for that year. Then, add the sentences for the next year to the cumulative list and generate and save another model for that year based on all the sentences.</p>
<p>The I/O on this particular cluster is slow enough and the data large enough (reading 2/3 into memory takes about 3 days) that streaming each JSON from disk for each year's model would have taken forever, so the solution was to load all 90GB of JSON into memory in a python list. I have permission to use up to 256GB of memory for this, but could get more if necessary.</p>
<p>The trouble I'm having is that I'm running out of memory. I have read some other posts about the way Python implements free lists not returning memory to the OS and I think that may be part of the problem, but I am not sure.</p>
<p>Thinking that the free list might be the problem and that maybe numpy would have a better implementation for a large number of elements, I changed from the cumulative list of sentences to a cumulative array of sentences (gensim requires that sentences be lists of words/strings). But I ran this on a small subset of the sentences and it used slightly more memory, so I am unsure of how to proceed. </p>
<p>If anyone has any experience with this I would be very happy to have your help. Also, if there is anything else that could be changed I would appreciate you telling me as well. The full code is below:</p>
<pre><code>import ujson as json
import os
import sys
import logging
from gensim.models import word2vec
import numpy as np

PARAMETERS = {
    'numfeatures': 250,
    'minwordcount': 10,
    'context': 7,
    'downsampling': 0.0001,
    'workers': 32
}

logger = logging.getLogger()
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def generate_and_save_model(cumulative_sentences, models_dir, year):
    """
    Generates and saves the word2vec model for the given year
    :param cumulative_sentences: The list of all sentences up to the current year
    :param models_dir: The directory to save the models to
    :param year: The current year of interest
    :return: Nothing, only saves the model to disk
    """
    cumulative_model = word2vec.Word2Vec(
        sentences=cumulative_sentences,
        workers=PARAMETERS['workers'],
        size=PARAMETERS['numfeatures'],
        min_count=PARAMETERS['minwordcount'],
        window=PARAMETERS['context'],
        sample=PARAMETERS['downsampling']
    )
    cumulative_model.init_sims(replace=True)
    cumulative_model.save(models_dir + 'medline_abstract_word2vec_' + year)

def save_year_models(json_list, json_dir, models_dir, min_year, max_year):
    """
    :param json_list: The list of json year_sentences file names
    :param json_dir: The directory holding the sentences json files
    :param models_dir: The directory to serialize the models to
    :param min_year: The minimum value of a year to generate a model for
    :param max_year: The maximum value of a year to generate a model for

    Goes year by year through each json of sentences, saving a cumulative word2vec
    model for each year
    """
    cumulative_sentences = np.array([])
    for json_file in json_list:
        year = json_file[16:20]
        # If this year is greater than the maximum, we're done creating models
        if int(year) > max_year:
            break
        with open(json_dir + json_file, 'rb') as current_year_file:
            cumulative_sentences = np.concatenate(
                (np.array(json.load(current_year_file)['sentences']),
                 cumulative_sentences)
            )
        logger.info('COMPLETE: ' + year + ' sentences loaded')
        logger.info('Cumulative length: ' + str(len(cumulative_sentences)) + ' sentences loaded')
        sys.stdout.flush()
        # If this year is less than our minimum, add its sentences to the list and continue
        if int(year) < min_year:
            continue
        generate_and_save_model(cumulative_sentences, models_dir, year)
        logger.info('COMPLETE: ' + year + ' model saved')
        sys.stdout.flush()

def main():
    json_dir = '/projects/chemotext/sentences_by_year/'
    models_dir = '/projects/chemotext/medline_year_models/'
    # By default, generate models for all years we have sentences for
    minimum_year = 0
    maximum_year = 9999
    # If one command line argument is used
    if len(sys.argv) == 2:
        # Generate the model for only that year
        minimum_year = int(sys.argv[1])
        maximum_year = int(sys.argv[1])
    # If two CL arguments are used
    if len(sys.argv) == 3:
        # Generate all models between the two year arguments, inclusive
        minimum_year = int(sys.argv[1])
        maximum_year = int(sys.argv[2])
    # Sorting the list of files so that earlier years are first in the list
    json_list = sorted(os.listdir(json_dir))
    save_year_models(json_list, json_dir, models_dir, minimum_year, maximum_year)

if __name__ == '__main__':
    main()
</code></pre>
| 2 | 2016-09-02T21:54:52Z | 39,375,500 | <p>I think you should be able to reduce the memory footprint of the corpus significantly by only explicitly storing the first occurrence of every word. All occurrences after that only need to store a reference to the first. This way you don't spend memory on duplicate strings, at the cost of some overhead. In code it could look something like this:</p>
<pre><code>class WordCache(object):
    def __init__(self):
        self.cache = {}

    def filter(self, words):
        for i, word in enumerate(words):
            try:
                words[i] = self.cache[word]
            except KeyError:
                self.cache[word] = word
        return words

cache = WordCache()
...
for sentence in json.load(current_year_file)['sentences']:
    cumulative_sentences.append(cache.filter(sentence))
</code></pre>
<p>Another thing you might try is moving to Python 3.3 or above. It has a more memory efficient representation of Unicode strings, see <a href="https://www.python.org/dev/peps/pep-0393/" rel="nofollow">PEP 393</a>.</p>
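<p>As a side note not in the original answer: CPython's built-in <code>sys.intern</code> gives the same single-copy guarantee for strings without a hand-rolled cache:</p>

```python
import sys

words = [sys.intern(w) for w in "the cat sat on the mat".split()]
# both occurrences of "the" now reference one shared string object
print(words[0] is words[4])
```

<p>Interned tokens also make equality checks effectively pointer comparisons, which can help in dictionary-heavy code like word counting.</p>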
| 0 | 2016-09-07T16:53:46Z | [
"python",
"numpy",
"word2vec"
] |
Inserting an array of arrays as the last column | 39,301,247 | <p>I have an array A:</p>
<pre><code>array([[1, 2, 3],
[1, 1, 1],
[2, 2, 2]])
</code></pre>
<p>and an array B:</p>
<pre><code>array([[1, 0],
[1, 0],
[0, 1]])
</code></pre>
<p>I want to make array B as the last column of array A, so I want the result array (let's call it C) to look like this:</p>
<pre><code>array([[1, 2, 3, [1, 0]],
[1, 1, 1, [1, 0]],
[2, 2, 2, [0, 1]]])
</code></pre>
<p>I tried: <code>np.insert(a,-1,b,axis=1)</code> , but this gave me an error:</p>
<pre><code>ValueError: could not broadcast input array from shape (2,3) into shape (3,3)
</code></pre>
| -1 | 2016-09-02T22:02:36Z | 39,301,311 | <p>Maybe that's what you're looking for:</p>
<pre><code>import numpy as np
a = np.array([[1, 2, 3],
[1, 1, 1],
[2, 2, 2]])
b = np.array([[1, 0],
[1, 0],
[0, 1]])
np.hstack([a,b])
</code></pre>
<p>Which results in: </p>
<pre><code>array([[1, 2, 3, 1, 0],
[1, 1, 1, 1, 0],
[2, 2, 2, 0, 1]])
</code></pre>
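<p>If the literal list-in-the-last-column layout from the question is really wanted, an object-dtype array can hold Python lists as cells; this is a sketch, and for numeric work the hstack result above is usually the better shape:</p>

```python
import numpy as np

a = np.array([[1, 2, 3], [1, 1, 1], [2, 2, 2]])
b = np.array([[1, 0], [1, 0], [0, 1]])

c = np.empty((a.shape[0], a.shape[1] + 1), dtype=object)
c[:, :3] = a                       # plain cells for the numeric columns
for i, row in enumerate(b):
    c[i, 3] = row.tolist()         # each last-column cell holds a list
print(c[0])
```

<p>The price is that object arrays lose vectorised arithmetic, so keep this only for display or export purposes.</p>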
| 1 | 2016-09-02T22:10:42Z | [
"python",
"numpy"
] |
Inserting an array of arrays as the last column | 39,301,247 | <p>I have an array A:</p>
<pre><code>array([[1, 2, 3],
[1, 1, 1],
[2, 2, 2]])
</code></pre>
<p>and an array B:</p>
<pre><code>array([[1, 0],
[1, 0],
[0, 1]])
</code></pre>
<p>I want to make array B as the last column of array A, so I want the result array (let's call it C) to look like this:</p>
<pre><code>array([[1, 2, 3, [1, 0]],
[1, 1, 1, [1, 0]],
[2, 2, 2, [0, 1]]])
</code></pre>
<p>I tried: <code>np.insert(a,-1,b,axis=1)</code> , but this gave me an error:</p>
<pre><code>ValueError: could not broadcast input array from shape (2,3) into shape (3,3)
</code></pre>
| -1 | 2016-09-02T22:02:36Z | 39,301,340 | <pre><code>print zip(*zip(*a)+[b.tolist(),])
</code></pre>
<p>although it won't be a numpy array afterwards </p>
<pre><code>>>> a
array([[1, 2, 3],
[1, 1, 1],
[2, 2, 2]])
>>> b
array([[1, 0],
[1, 0],
[0, 1]])
>>> zip(*zip(*a)+[b.tolist(),])
[(1, 2, 3, [1, 0]), (1, 1, 1, [1, 0]), (2, 2, 2, [0, 1])]
</code></pre>
| 1 | 2016-09-02T22:14:16Z | [
"python",
"numpy"
] |
The formula for "Two-for" price is 20% off the total price of two items. Prompt the user for each of the values and calculate the result | 39,301,263 | <p>The following is what the output should look like:</p>
<p>Enter price 1: 10.0</p>
<p>Enter price 2: 20.0</p>
<p>The 'two-for' price is $24.0</p>
<p>The code I entered is:</p>
<pre><code>price_one = float(input('Enter price 1: '))
print(price_one)
price_two = float(input('Enter price 2: '))
print(price_two)
two_for_price = (price_one + price_two)-((price_one + price_two)*(20/100))
print("The 'two-for' price is $",two_for_price)
</code></pre>
<p>(The inputs are 10.0 and 20.0 respectively.)</p>
<p>But the output I am getting is:</p>
<pre><code>Enter price 1: 10.0
Enter price 2: 20.0
The 'two-for' price is $ 24.0
</code></pre>
<p>In the last line I need:</p>
<pre><code>The 'two-for' price is $24.0
</code></pre>
<p>Please help me out!!</p>
| 2 | 2016-09-02T22:04:37Z | 39,301,294 | <p>If I'm reading this correctly, you just need to remove a space from your output.</p>
<p>Change your last line to this:</p>
<p><code>print("The 'two-for' price is ${0}".format(two_for_price))</code></p>
| 3 | 2016-09-02T22:08:32Z | [
"python"
] |
The formula for "Two-for" price is 20% off the total price of two items. Prompt the user for each of the values and calculate the result | 39,301,263 | <p>The following is what the output should look like:</p>
<p>Enter price 1: 10.0</p>
<p>Enter price 2: 20.0</p>
<p>The 'two-for' price is $24.0</p>
<p>The code I entered is:</p>
<pre><code>price_one = float(input('Enter price 1: '))
print(price_one)
price_two = float(input('Enter price 2: '))
print(price_two)
two_for_price = (price_one + price_two)-((price_one + price_two)*(20/100))
print("The 'two-for' price is $",two_for_price)
</code></pre>
<p>(The inputs are 10.0 and 20.0 respectively.)</p>
<p>But the output I am getting is:</p>
<pre><code>Enter price 1: 10.0
Enter price 2: 20.0
The 'two-for' price is $ 24.0
</code></pre>
<p>In the last line I need:</p>
<pre><code>The 'two-for' price is $24.0
</code></pre>
<p>Please help me out!!</p>
| 2 | 2016-09-02T22:04:37Z | 39,301,530 | <p>Your underlying problem is that the <code>print</code> function behavior, given a list of items, is to print each item, separated by a space. This is often convenient for quick-and-dirty print-outs, but you want something more refined.</p>
<p>What you need to do is create a string with the proper spacing and then print that string out.</p>
<p>So you could do this:</p>
<pre><code>print("The 'two-for' price is $" + str(two_for_price) + ".")
</code></pre>
<p>The problems are (a) that's kind of clumsy and unreadable, and (b) it does not format properly: with these inputs you get "$24.0" instead of "$24.00".</p>
<p>You can use either of two formatting mechanisms offered by Python, either explicit, like this:</p>
<pre><code>print("The 'two-for' price is ${0}".format(two_for_price))
</code></pre>
<p>or implicit, like this</p>
<pre><code>print("The 'two-for' price is $%f" % two_for_price)
</code></pre>
<p>Both of them look a little better, but the formatting errors are the same and worse ("$24.000000"!) respectively. Fortunately, both offer nice customizable formatting:</p>
<pre><code>print("The 'two-for' price is ${0:.2f}".format(two_for_price))
</code></pre>
<p>and</p>
<pre><code>print("The 'two-for' price is $%0.2f" % two_for_price)
</code></pre>
<p>Both of which look reasonably clean and display perfectly.</p>
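<p>The difference in numbers, using the question's inputs (10.0 and 20.0):</p>

```python
price_one, price_two = 10.0, 20.0
two_for_price = (price_one + price_two) - (price_one + price_two) * (20 / 100)

print(str(two_for_price))                                   # plain str drops the cents
print("The 'two-for' price is ${0:.2f}".format(two_for_price))
print("The 'two-for' price is $%0.2f" % two_for_price)
```

<p>Plain <code>str()</code> yields "24.0", while both <code>.2f</code> format specs yield "24.00".</p>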
| 1 | 2016-09-02T22:39:55Z | [
"python"
] |
Parsing with regex | 39,301,366 | <p>I'm trying to count the number of lines contained by a file that looks like this:</p>
<pre><code>-StartACheck
---Lines--
-EndACheck
-StartBCheck
---Lines--
-EndBCheck
</code></pre>
<p>with this:</p>
<pre><code>count=0
z={}
for line in file:
    s=re.search(r'\-+Start([A-Za-z0-9]+)Check',line)
    if s:
        e=s.group(1)
        for line in file:
            z.setdefault(e,[]).append(count)
            q=re.search(r'\-+End',line)
            if q:
                count=0
                break
for a,b in z.items():
    print(a,len(b))
</code></pre>
<p>I want to basically store the number of lines present inside ACheck, BCheck, etc. in a dictionary, but I keep getting the wrong output</p>
<p>Something like this</p>
<pre><code>A,15
B,9
</code></pre>
<p>etc</p>
| -2 | 2016-09-02T22:18:05Z | 39,301,539 | <p>You could consider using something like:</p>
<pre><code>import re
from collections import defaultdict
counts = defaultdict(int)  # zero if the key doesn't exist yet
curr_key = None
for line in file:
    start = re.fullmatch(r'-+Start([A-Za-z0-9]+)Check\n?', line)
    end = re.fullmatch(r'-+End([A-Za-z0-9]+)Check\n?', line)
    if start:
        curr_key = start.group(1)
    elif end:
        assert curr_key == end.group(1), "ending line {} doesn't match the opening line for {}".format(line, curr_key)
        curr_key = None
    else:  # it's a normal line (curr_key is None outside any block)
        counts[curr_key] += 1
</code></pre>
<p>Bonus point: detect non-matching start-end lines + count lines outside start-end lines.</p>
<h1>Without defaultdict</h1>
<p>Replace <code>else</code> clause by:</p>
<pre><code> else: # it's a normal line
    if curr_key in counts:
        counts[curr_key] += 1
    else:
        counts[curr_key] = 1
</code></pre>
<p>And define <code>counts</code> as a regular dict:</p>
<pre><code>counts = {}
</code></pre>
<h1>Fixing the given code</h1>
<p>The given code seems to work:</p>
<p>Here is a (apparently valid) file definition:</p>
<pre><code>FILE = iter(( # generator of lines
'-StartACheck',
'a',
'b',
'c',
'-EndACheck',
'-StartBCheck',
'a',
'b',
'-EndBCheck',
))
</code></pre>
<p>Here are the missing definitions:</p>
<pre><code>import re
z = {}
</code></pre>
<p>And the provided code:</p>
<pre><code>count=0
for line in FILE:
    s=re.search(r'\-+Start([A-Za-z0-9]+)Check',line)
    if s:
        e=s.group(1)
        for line in FILE:
            z.setdefault(e,[]).append(count)
            q=re.search(r'\-+End',line)
            if q:
                count=0
                break
for a,b in z.items():
    print(a,len(b))
</code></pre>
<p>Output is:</p>
<pre><code>A 4
B 3
</code></pre>
<p>Which is accurate, as the first line (<code>StartACheck</code>) is counted:</p>
<pre><code> if s:
    e=s.group(1)
    for line in FILE:
        z.setdefault(e,[]).append(count) # first called with the Start line
</code></pre>
<hr>
<p>The error could be around the file lines extraction: if the file is read as:</p>
<pre><code>file = tuple(open('filename.ext'))
</code></pre>
<p>Then the double for-loop of the source code iterates over each line of the file for each line of the file. Example:</p>
<pre><code>filelines = (1, 2, 3, 4)
for line in filelines:
    for line in filelines:
        print(line)
</code></pre>
<p>And the (valid in this case) almost identical:</p>
<pre><code>filelines = iter((1, 2, 3, 4))
for line in filelines:
    for line in filelines:
        print(line)
</code></pre>
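<p>The difference made concrete: a shared iterator is consumed by the inner loop, while a tuple restarts from the top on every inner pass.</p>

```python
shared = iter((1, 2, 3, 4))
consumed = []
for line in shared:              # outer loop takes 1
    for line in shared:          # inner loop eats 2, 3, 4, then both stop
        consumed.append(line)
print(consumed)                  # the outer loop never sees 2, 3, 4 itself

restartable = (1, 2, 3, 4)
passes = sum(1 for _ in restartable for _ in restartable)
print(passes)                    # 4 inner passes of 4 lines each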
| 0 | 2016-09-02T22:42:19Z | [
"python",
"regex"
] |
Parsing with regex | 39,301,366 | <p>I'm trying to count the number of lines contained by a file that looks like this:</p>
<pre><code>-StartACheck
---Lines--
-EndACheck
-StartBCheck
---Lines--
-EndBCheck
</code></pre>
<p>with this:</p>
<pre><code>count=0
z={}
for line in file:
s=re.search(r'\-+Start([A-Za-z0-9]+)Check',line)
if s:
e=s.group(1)
for line in file:
z.setdefault(e,[]).append(count)
q=re.search(r'\-+End',line)
if q:
count=0
break
for a,b in z.items():
print(a,len(b))
</code></pre>
<p>I basically want to store the number of lines present inside ACheck, BCheck, etc. in a dictionary, but I keep getting the wrong output.</p>
<p>Something like this</p>
<pre><code>A,15
B,9
</code></pre>
<p>etc</p>
| -2 | 2016-09-02T22:18:05Z | 39,301,580 | <p>If the file isn't too big (say, smaller than 1GB or so), I'd just read the whole thing and call <code>re.findall()</code>:</p>
<pre><code>import re
result = {
    name: lines.count('\n') - 1  # the captured group starts with the Start line's trailing newline
    for name, lines in
    re.findall(r'^-Start([A-Za-z0-9]+)Check$(.*?)^-End\1Check$',  # non-greedy, so blocks don't merge
               open('x.in').read(),
               re.DOTALL | re.MULTILINE)
}
print(result)
</code></pre>
| 0 | 2016-09-02T22:47:53Z | [
"python",
"regex"
] |
Parsing with regex | 39,301,366 | <p>I'm trying to count the number of lines contained by a file that looks like this:</p>
<pre><code>-StartACheck
---Lines--
-EndACheck
-StartBCheck
---Lines--
-EndBCheck
</code></pre>
<p>with this:</p>
<pre><code>count=0
z={}
for line in file:
s=re.search(r'\-+Start([A-Za-z0-9]+)Check',line)
if s:
e=s.group(1)
for line in file:
z.setdefault(e,[]).append(count)
q=re.search(r'\-+End',line)
if q:
count=0
break
for a,b in z.items():
print(a,len(b))
</code></pre>
<p>I basically want to store the number of lines present inside ACheck, BCheck, etc. in a dictionary, but I keep getting the wrong output.</p>
<p>Something like this</p>
<pre><code>A,15
B,9
</code></pre>
<p>etc</p>
| -2 | 2016-09-02T22:18:05Z | 39,302,099 | <p>Given input following this pattern:</p>
<pre><code>-StartACheck
--- Line 1
-EndACheck
-StartBCheck
---Line 1
-EndBCheck
-StartACheck
---Line 1
---Line 2
---Line 3
-EndACheck
</code></pre>
<p>You can use a multi-line regex to capture the blocks that start with <code>-Start[pattern]Check</code> and end with <code>-End[pattern]Check</code> with something like this:</p>
<pre><code>^-Start([A-Za-z0-9]+)Check$(.*?)^-End(?:\1)Check$
</code></pre>
<p><a href="https://regex101.com/r/oD8aW5/3" rel="nofollow">Demo</a></p>
<p>In Python, you can combine that with <code>re.finditer</code> and a <code>Counter</code> like so:</p>
<pre><code>import re
from collections import Counter
pattern=r'^-Start([A-Za-z0-9]+)Check$(.*?)^-End(?:\1)Check$'
c=Counter()
with open(fn, "r") as f:
for m in re.finditer(pattern, f.read(), re.S | re.M):
c+=Counter({m.group(1): len(m.group(2).splitlines())-1})
</code></pre>
<p>Prints:</p>
<pre><code>Counter({'A': 4, 'B': 1})
</code></pre>
<p>If you are concerned about reading the entire file into memory, use a <code>mmap</code> of the file like so:</p>
<pre><code>import re
from collections import Counter
import mmap
pattern=r'^-Start([A-Za-z0-9]+)Check$(.*?)^-End(?:\1)Check$'
c=Counter()
with open(fn, "r+") as f:
mm=mmap.mmap(f.fileno(), 0)
for m in re.finditer(pattern, mm, re.S | re.M):
c+=Counter({m.group(1): len(m.group(2).splitlines())-1})
</code></pre>
<p>Then the OS will manage reading the file in appropriate chunks to match the regex. </p>
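<p>One caveat (not in the original answer): on Python 3 the pattern must be <code>bytes</code>, because <code>re</code> matches over the mmap's bytes-like buffer rather than over <code>str</code>. A self-contained sketch, using a temporary file as a stand-in for <code>fn</code>:</p>

```python
import mmap
import os
import re
import tempfile
from collections import Counter

# On Python 3 the pattern must be bytes to match over an mmap buffer.
pattern = rb'^-Start([A-Za-z0-9]+)Check$(.*?)^-End(?:\1)Check$'

# Stand-in input file for the demonstration.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'-StartACheck\nline1\nline2\n-EndACheck\n')
    path = f.name

c = Counter()
with open(path, 'rb') as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    for m in re.finditer(pattern, mm, re.S | re.M):
        # group(2) starts with the newline that ends the Start line, hence the -1
        c += Counter({m.group(1).decode(): len(m.group(2).splitlines()) - 1})
    mm.close()
os.unlink(path)
```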
| 0 | 2016-09-03T00:12:44Z | [
"python",
"regex"
] |
Counting the number of set bits in a number | 39,301,375 | <p>The problem statement is: </p>
<p>Write an efficient program to count number of 1s in binary representation of an integer.</p>
<p>I found a post on this problem <a href="http://www.geeksforgeeks.org/count-set-bits-in-an-integer/" rel="nofollow">here</a> which outlines multiple solutions which run in log(n) time including Brian Kernigan's algorithm and the gcc <code>__builtin_popcount()</code> method.</p>
<p>One solution that wasn't mentioned was the python method: <code>bin(n).count("1")</code>
which also achieves the same effect. Does this method also run in log n time?</p>
| -1 | 2016-09-02T22:18:55Z | 39,301,404 | <p>You are converting the integer to a string, which means it'll have to produce N <code>'0'</code> and <code>'1'</code> characters. You then use <code>str.count()</code> which must visit <em>every character</em> in the string to count the <code>'1'</code> characters.</p>
<p>All in all you have an O(N) algorithm, with a relatively high constant cost.</p>
<p>Note that this is the same complexity as the code you linked to; the integer <code>n</code> has log(n) bits, but the algorithm still has to make N = log(n) steps to calculate the number of bits. The <code>bin(n).count('1')</code> algorithm is thus equivalent, but <em>slow</em> as there is a high cost to produce the string in the first place.</p>
<p>At the cost of a table, you could move to processing integers <em>per byte</em>:</p>
<pre><code>table = [0]
while len(table) < 256:
table += [t + 1 for t in table]
length = sum(map(table.__getitem__, n.to_bytes(n.bit_length() // 8 + 1, 'little')))
</code></pre>
<p>However, because Python needs to produce a series of new objects (a <code>bytes</code> object and several integers) this method never quite is fast enough to beat the <code>bin(n).count('1')</code> method:</p>
<pre><code>>>> from random import choice
>>> import timeit
>>> table = [0]
>>> while len(table) < 256:
... table += [t + 1 for t in table]
...
>>> def perbyte(n): return sum(map(table.__getitem__, n.to_bytes(n.bit_length() // 8 + 1, 'little')))
...
>>> def strcount(n): return bin(n).count('1')
...
>>> n = int(''.join([choice('01') for _ in range(2 ** 16)]))
>>> for f in (strcount, perbyte):
... print(f.__name__, timeit.timeit('f(n)', 'from __main__ import f, n', number=1000))
...
strcount 1.11822146497434
perbyte 1.4401431040023454
</code></pre>
<p>No matter the bit-length of the test number, <code>perbyte</code> is always a percentage slower.</p>
| 5 | 2016-09-02T22:22:45Z | [
"python",
"algorithm",
"bit-manipulation"
] |
Counting the number of set bits in a number | 39,301,375 | <p>The problem statement is: </p>
<p>Write an efficient program to count number of 1s in binary representation of an integer.</p>
<p>I found a post on this problem <a href="http://www.geeksforgeeks.org/count-set-bits-in-an-integer/" rel="nofollow">here</a> which outlines multiple solutions which run in log(n) time including Brian Kernigan's algorithm and the gcc <code>__builtin_popcount()</code> method.</p>
<p>One solution that wasn't mentioned was the python method: <code>bin(n).count("1")</code>
which also achieves the same effect. Does this method also run in log n time?</p>
| -1 | 2016-09-02T22:18:55Z | 39,301,432 | <p>Let's say you are trying to count the number of set bits of <code>n</code>. On typical <em>Python</em> implementations, <code>bin</code> will compute the binary representation in <code>O(log n)</code> time and <code>count</code> will go through the string, therefore resulting in an overall <code>O(log n)</code> complexity.</p>
<p>However, note that usually, the input parameter of algorithms is the "size" of the input. When you work with integers, this corresponds to their logarithm. That's why the current algorithm is said to have a <em>linear</em> complexity (the variable is <code>m = log n</code>, and the complexity <code>O(m)</code>).</p>
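<p>For comparison, the Brian Kernighan algorithm referenced in the question reaches the same linear-in-size bound without building a string; a minimal sketch:</p>

```python
def kernighan_popcount(n):
    """Count set bits by repeatedly clearing the lowest set bit.

    Each iteration removes exactly one set bit, so the loop runs k times
    for k set bits -- at most m times for an m-bit integer, i.e. the same
    linear-in-size bound as bin(n).count('1').
    """
    count = 0
    while n:
        n &= n - 1  # clears the lowest set bit
        count += 1
    return count
```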
| 4 | 2016-09-02T22:25:45Z | [
"python",
"algorithm",
"bit-manipulation"
] |
Append a new column based on existing columns | 39,301,422 | <p>Pandas newbie here. </p>
<p>I'm trying to create a new column in my data frame that will serve as a training label when I feed this into a classifier.</p>
<p>The value of the label column is 1.0 if a given Id has (Value1 > 0) or (Value2 > 0) for Apples or Pears, and 0.0 otherwise.</p>
<p>My dataframe is row indexed by Id and looks like this:</p>
<pre><code>Out[30]:
Value1 Value2 \
ProductName 7Up Apple Cheetos Onion Pear PopTart 7Up
ProductType Drinks Groceries Snacks Groceries Groceries Snacks Drinks
Id
100 0.0 1.0 2.0 4.0 0.0 0.0 0.0
101 3.0 0.0 0.0 0.0 3.0 0.0 4.0
102 0.0 0.0 0.0 0.0 0.0 2.0 0.0
ProductName Apple Cheetos Onion Pear PopTart
ProductType Groceries Snacks Groceries Groceries Snacks
Id
100 1.0 3.0 3.0 0.0 0.0
101 0.0 0.0 0.0 2.0 0.0
102 0.0 0.0 0.0 0.0 1.0
</code></pre>
<p>If the pandas wizards could give me a hand with the syntax for this operation - my mind is struggling to put it all together.</p>
<p>Thanks!</p>
| 0 | 2016-09-02T22:24:30Z | 39,301,486 | <p>Define your function: </p>
<pre><code>def new_column(x):
    if x['Value1'] > 0:
        return 1.0
    if x['Value2'] > 0:
        return 1.0
    return 0.0
</code></pre>
<p>Apply it on your data:</p>
<pre><code>df['label'] = df.apply(new_column, axis=1)
</code></pre>
| 2 | 2016-09-02T22:32:02Z | [
"python",
"pandas",
"numpy",
"sklearn-pandas"
] |
Append a new column based on existing columns | 39,301,422 | <p>Pandas newbie here. </p>
<p>I'm trying to create a new column in my data frame that will serve as a training label when I feed this into a classifier.</p>
<p>The value of the label column is 1.0 if a given Id has (Value1 > 0) or (Value2 > 0) for Apples or Pears, and 0.0 otherwise.</p>
<p>My dataframe is row indexed by Id and looks like this:</p>
<pre><code>Out[30]:
Value1 Value2 \
ProductName 7Up Apple Cheetos Onion Pear PopTart 7Up
ProductType Drinks Groceries Snacks Groceries Groceries Snacks Drinks
Id
100 0.0 1.0 2.0 4.0 0.0 0.0 0.0
101 3.0 0.0 0.0 0.0 3.0 0.0 4.0
102 0.0 0.0 0.0 0.0 0.0 2.0 0.0
ProductName Apple Cheetos Onion Pear PopTart
ProductType Groceries Snacks Groceries Groceries Snacks
Id
100 1.0 3.0 3.0 0.0 0.0
101 0.0 0.0 0.0 2.0 0.0
102 0.0 0.0 0.0 0.0 1.0
</code></pre>
<p>If the pandas wizards could give me a hand with the syntax for this operation - my mind is struggling to put it all together.</p>
<p>Thanks!</p>
| 0 | 2016-09-02T22:24:30Z | 39,305,094 | <p>The answer provided by @vlad.rad works, but it is not very efficient: pandas has to loop over all rows in Python and cannot take advantage of numpy's vectorized speedups. The following vectorized solution should be more efficient:</p>
<pre><code>condition = (df['Value1'] > 0) | (df['Value2'] > 0)
df.loc[condition, 'label'] = 1.
df.loc[~condition, 'label'] = 0.
</code></pre>
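<p>As a side note (not part of the original answer), the boolean mask can also be cast to float directly, which produces the 1.0/0.0 labels in one assignment; a sketch with made-up data:</p>

```python
import pandas as pd

df = pd.DataFrame({'Value1': [0.0, 3.0, 0.0], 'Value2': [1.0, 0.0, 0.0]})
# Casting the boolean mask to float yields the 1.0 / 0.0 labels directly.
df['label'] = ((df['Value1'] > 0) | (df['Value2'] > 0)).astype(float)
```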
| 1 | 2016-09-03T08:54:00Z | [
"python",
"pandas",
"numpy",
"sklearn-pandas"
] |
Appending one data frame into another | 39,301,465 | <p>I want to bring three data frames into a single one. All data frames have a single column.</p>
<pre><code>org_city_id=p.DataFrame(training_data['origcity_id'])
pol_city_id=p.DataFrame(training_data['pol_city_id'])
pod_city_id=p.DataFrame(training_data['pod_city_id'])
</code></pre>
<p>Each has 100 records, so my goal is to bring them into a single data frame which will then contain 300 records. My code below is not working:</p>
<pre><code>org_city_id.append([pol_city_id,pod_city_id])
</code></pre>
<p>The total number of records in org_city_id is still 100.</p>
<p>Can someone please suggest a fix?</p>
| 2 | 2016-09-02T22:29:30Z | 39,301,490 | <p>Get a single column name for all your dataframes:</p>
<pre><code>org_city_id.columns = pol_city_id.columns = pod_city_id.columns = ['Final Name']  # must be list-like, even for a single column
</code></pre>
<p>Then concat them:</p>
<pre><code>pd.concat([org_city_id,pol_city_id,pod_city_id])
</code></pre>
| 1 | 2016-09-02T22:32:17Z | [
"python",
"pandas",
"dataframe"
] |
Appending one data frame into another | 39,301,465 | <p>I want to bring three data frames into a single one. All data frames have a single column.</p>
<pre><code>org_city_id=p.DataFrame(training_data['origcity_id'])
pol_city_id=p.DataFrame(training_data['pol_city_id'])
pod_city_id=p.DataFrame(training_data['pod_city_id'])
</code></pre>
<p>Each has 100 records, so my goal is to bring them into a single data frame which will then contain 300 records. My code below is not working:</p>
<pre><code>org_city_id.append([pol_city_id,pod_city_id])
</code></pre>
<p>The total number of records in org_city_id is still 100.</p>
<p>Can someone please suggest a fix?</p>
| 2 | 2016-09-02T22:29:30Z | 39,301,496 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow">concat</a> from pandas; see the linked documentation.</p>
<p>Here is what you'll do:</p>
<p><code>pd.concat([org_city_id, pol_city_id, pod_city_id])</code></p>
| 0 | 2016-09-02T22:33:08Z | [
"python",
"pandas",
"dataframe"
] |
Appending one data frame into another | 39,301,465 | <p>I want to bring three data frames into a single one. All data frames have a single column.</p>
<pre><code>org_city_id=p.DataFrame(training_data['origcity_id'])
pol_city_id=p.DataFrame(training_data['pol_city_id'])
pod_city_id=p.DataFrame(training_data['pod_city_id'])
</code></pre>
<p>Each has 100 records, so my goal is to bring them into a single data frame which will then contain 300 records. My code below is not working:</p>
<pre><code>org_city_id.append([pol_city_id,pod_city_id])
</code></pre>
<p>The total number of records in org_city_id is still 100.</p>
<p>Can someone please suggest a fix?</p>
| 2 | 2016-09-02T22:29:30Z | 39,302,322 | <pre><code>dfs = [org_city_id, pol_city_id, pod_city_id]
# squeeze() turns each single-column DataFrame into a Series;
# ignore_index=True renumbers the combined result from 0
pd.concat([df.squeeze() for df in dfs], ignore_index=True)
</code></pre>
| 3 | 2016-09-03T00:55:55Z | [
"python",
"pandas",
"dataframe"
] |
Dynamically call a var inside string in function | 39,301,564 | <p>I'm new to Python. I have a var (string) which is an <em>XPath</em> query. I want to pass the variable <em><code>i</code></em> into the <em>XPath</em> query. A simple example is below:</p>
<pre><code>i = 0
self.var = 'li['+i+']'
def test(self):
while(i<10):
print self.var # 'li[0]', 'li[1]' ...
i += 1
</code></pre>
| -2 | 2016-09-02T22:45:59Z | 39,301,596 | <p>You would need to call <em><code>str</code></em> on the variable <em><code>i</code></em>; you cannot concatenate an int and a str:</p>
<pre><code> 'li['+str(i)+']'
</code></pre>
<p>Or just use <em>str.format</em>; you can also use <em>range</em>, and XPath indexing is 1-based, so you would start at 1:</p>
<pre><code> self.var = "li[{}]"
def test(self):
for i in range(1, 11):
print self.var.format(i)
</code></pre>
<p>Or if using <em>lxml</em> for your Xpath queries you can use an <a href="http://lxml.de/xpathxslt.html" rel="nofollow">Xpath variable</a> like below:</p>
<pre><code> .xpath("li[$i]", i=i)
</code></pre>
| 1 | 2016-09-02T22:49:48Z | [
"python"
] |
Dynamically call a var inside string in function | 39,301,564 | <p>I'm new to Python. I have a var (string) which is an <em>XPath</em> query. I want to pass the variable <em><code>i</code></em> into the <em>XPath</em> query. A simple example is below:</p>
<pre><code>i = 0
self.var = 'li['+i+']'
def test(self):
while(i<10):
print self.var # 'li[0]', 'li[1]' ...
i += 1
</code></pre>
| -2 | 2016-09-02T22:45:59Z | 39,301,647 | <p>You can create a list with the first string, the integer, and the second string. </p>
<pre><code>i = 0
var = ['li[',i,']']
def test(var):
while(var[1]<10):
print var[0]+str(var[1])+var[2]
var[1] += 1
test(var)
</code></pre>
| 0 | 2016-09-02T22:56:59Z | [
"python"
] |
How to log to different files based on from which python process logging is called? | 39,301,593 | <p>I am working on a test framework. Each test is launched as a new python multiprocessing process.
There is one master log file and individual log files corresponding to each test.
There is a master logger created at the launch of the framework code and a new logger created in each test process. Test loggers log to both: their own log file and the master log file.</p>
<p>There are multiple libraries that can be used by any of the tests.<br>
Currently there is no logging done in the library functions. In order to add logging, a logger object would need to be passed as a parameter to each library function. To achieve this, every function signature in the library modules, and every call site, would have to be modified, which is not practical.</p>
<p>As I understand, I cannot have module level logger because module level logger will log to different file for each module and not for each test process. </p>
<p>Can you suggest a solution where I don't have to pass log objects around function and log statements would log to the right file based on which process is calling the function?</p>
| 0 | 2016-09-02T22:49:42Z | 39,301,704 | <p>The <a href="https://docs.python.org/3/library/threading.html" rel="nofollow">threading</a> module has a <code>get_ident</code> member which could be used to index some logger dictionary, something like;</p>
<pre><code>from threading import get_ident
loggers[get_ident()].error('blah blah blah')
</code></pre>
<p>However, once you have all of this test logging throughout your libraries, how will that impact your production performance?</p>
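<p>For the multiprocessing setup in the question, the analogous idea is to key the logger name on the current <em>process</em> rather than a thread id, so library code can look its logger up instead of receiving it as a parameter; a sketch (the <code>framework.*</code> logger names are hypothetical):</p>

```python
import logging
import multiprocessing

def get_test_logger():
    # Library code calls this instead of receiving a logger parameter.
    # The current process name selects the per-test logger, and therefore
    # whichever file handlers the framework attached to it.
    name = multiprocessing.current_process().name  # e.g. 'MainProcess' or a test name
    return logging.getLogger('framework.%s' % name)

def some_library_function():
    # No logger argument needed anywhere in the library layer.
    get_test_logger().info('doing library work')
```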
| 0 | 2016-09-02T23:05:42Z | [
"python",
"logging",
"python-multiprocessing"
] |
Quickly convert numpy arrays with index to dict of numpy arrays keyed on that index | 39,301,666 | <p>I have a set of numpy arrays. One of these is a list of "keys", and I'd like to rearrange the arrays into a dict of arrays keyed on that key. My current code is:</p>
<pre><code>for key, val1, val2 in itertools.izip(keys, vals1, vals2):
dict1[key].append(val1)
dict2[key].append(val2)
</code></pre>
<p>This is pretty slow, since the arrays involved are millions of entries long, and this happens many times. Is it possible to rewrite this in vectorized form? The set of possible keys is known ahead of time, and there are ~10 distinct keys.</p>
<p><strong>Edit:</strong> If there are k distinct keys and the list is n long, the current answers are O(nk) (iterate once for each key) and O(n log n) (sort first). I'm still looking for an O(n) vectorized solution, though. This is hopefully possible; after all, the easiest possible nonvectorized thing (i.e. what I already have) is O(n).</p>
| 2 | 2016-09-02T22:59:49Z | 39,301,716 | <p>Let's import numpy and create some sample data:</p>
<pre><code>>>> import numpy as np
>>> keys = np.array(('key1', 'key2', 'key3', 'key1', 'key2', 'key1'))
>>> vals1 = np.arange(6)
>>> vals2 = np.arange(10, 16)
</code></pre>
<p>Now, let's create the dictionary:</p>
<pre><code>>>> d1 = {}; d2 = {}
>>> for k in set(keys):
... d1[k] = vals1[k==keys]
... d2[k] = vals2[k==keys]
...
>>> d1
{'key3': array([2]), 'key2': array([1, 4]), 'key1': array([0, 3, 5])}
>>> d2
{'key3': array([12]), 'key2': array([11, 14]), 'key1': array([10, 13, 15])}
</code></pre>
<p>The idea behind <code>numpy</code> is that C code is much faster than python and numpy provides many common operations coded at the C level.
As you mentioned that there were only "~10 distinct keys," that means that the python loop is done only 10 or so times. The rest is C.</p>
| 2 | 2016-09-02T23:07:14Z | [
"python",
"numpy",
"vectorization"
] |
Quickly convert numpy arrays with index to dict of numpy arrays keyed on that index | 39,301,666 | <p>I have a set of numpy arrays. One of these is a list of "keys", and I'd like to rearrange the arrays into a dict of arrays keyed on that key. My current code is:</p>
<pre><code>for key, val1, val2 in itertools.izip(keys, vals1, vals2):
dict1[key].append(val1)
dict2[key].append(val2)
</code></pre>
<p>This is pretty slow, since the arrays involved are millions of entries long, and this happens many times. Is it possible to rewrite this in vectorized form? The set of possible keys is known ahead of time, and there are ~10 distinct keys.</p>
<p><strong>Edit:</strong> If there are k distinct keys and the list is n long, the current answers are O(nk) (iterate once for each key) and O(n log n) (sort first). I'm still looking for an O(n) vectorized solution, though. This is hopefully possible; after all, the easiest possible nonvectorized thing (i.e. what I already have) is O(n).</p>
| 2 | 2016-09-02T22:59:49Z | 39,301,900 | <p>The vectorized way to do this is probably going to require you to sort your keys. The basic idea is that you sort the keys and vals to match. Then you can split the val array every time there is a new key in the sorted keys array. The vectorized code looks something like this:</p>
<pre><code>import numpy as np
keys = np.random.randint(0, 10, size=20)
vals1 = np.random.random(keys.shape)
vals2 = np.random.random(keys.shape)
order = keys.argsort()
keys_sorted = keys[order]
# Find uniq keys and key changes
diff = np.ones(keys_sorted.shape, dtype=bool)
diff[1:] = keys_sorted[1:] != keys_sorted[:-1]
key_change = diff.nonzero()[0]
uniq_keys = keys_sorted[key_change]
vals1_split = np.split(vals1[order], key_change[1:])
dict1 = dict(zip(uniq_keys, vals1_split))
vals2_split = np.split(vals2[order], key_change[1:])
dict2 = dict(zip(uniq_keys, vals2_split))
</code></pre>
<p>This method has complexity O(n * log(n)) because of the argsort step. In practice, argsort is very fast unless n is very large. You're likely going to run into memory issues with this method before argsort gets noticeably slow.</p>
| 2 | 2016-09-02T23:37:21Z | [
"python",
"numpy",
"vectorization"
] |
Quickly convert numpy arrays with index to dict of numpy arrays keyed on that index | 39,301,666 | <p>I have a set of numpy arrays. One of these is a list of "keys", and I'd like to rearrange the arrays into a dict of arrays keyed on that key. My current code is:</p>
<pre><code>for key, val1, val2 in itertools.izip(keys, vals1, vals2):
dict1[key].append(val1)
dict2[key].append(val2)
</code></pre>
<p>This is pretty slow, since the arrays involved are millions of entries long, and this happens many times. Is it possible to rewrite this in vectorized form? The set of possible keys is known ahead of time, and there are ~10 distinct keys.</p>
<p><strong>Edit:</strong> If there are k distinct keys and the list is n long, the current answers are O(nk) (iterate once for each key) and O(n log n) (sort first). I'm still looking for an O(n) vectorized solution, though. This is hopefully possible; after all, the easiest possible nonvectorized thing (i.e. what I already have) is O(n).</p>
| 2 | 2016-09-02T22:59:49Z | 39,302,125 | <p><code>defaultdict</code> is intended for building dictionaries like this. In particular it streamlines the step of creating a new dictionary entry for a new key.</p>
<pre><code>In [19]: keys = np.random.choice(np.arange(10),100)
In [20]: vals=np.arange(100)
In [21]: from collections import defaultdict
In [22]: dd = defaultdict(list)
In [23]: for k,v in zip(keys, vals):
...: dd[k].append(v)
...:
In [24]: dd
Out[24]:
defaultdict(list,
{0: [4, 39, 47, 84, 87],
1: [0, 25, 41, 46, 55, 58, 74, 77, 89, 92, 95],
2: [3, 9, 15, 24, 44, 54, 63, 66, 71, 80, 81],
3: [1, 13, 16, 37, 57, 76, 91, 93],
...
8: [51, 52, 56, 60, 68, 82, 88, 97, 99],
9: [21, 29, 30, 34, 35, 59, 73, 86]})
</code></pre>
<p>But with a small known set of keys you don't need this specialized dictionary, since you can easily create the dictionary key entries ahead of time </p>
<pre><code>dd = {k:[] for k in np.unique(keys)}
</code></pre>
<p>But since you are starting with arrays, array operations to sort and collect like values might well be worth it.</p>
| 0 | 2016-09-03T00:16:59Z | [
"python",
"numpy",
"vectorization"
] |
Quickly convert numpy arrays with index to dict of numpy arrays keyed on that index | 39,301,666 | <p>I have a set of numpy arrays. One of these is a list of "keys", and I'd like to rearrange the arrays into a dict of arrays keyed on that key. My current code is:</p>
<pre><code>for key, val1, val2 in itertools.izip(keys, vals1, vals2):
dict1[key].append(val1)
dict2[key].append(val2)
</code></pre>
<p>This is pretty slow, since the arrays involved are millions of entries long, and this happens many times. Is it possible to rewrite this in vectorized form? The set of possible keys is known ahead of time, and there are ~10 distinct keys.</p>
<p><strong>Edit:</strong> If there are k distinct keys and the list is n long, the current answers are O(nk) (iterate once for each key) and O(n log n) (sort first). I'm still looking for an O(n) vectorized solution, though. This is hopefully possible; after all, the easiest possible nonvectorized thing (i.e. what I already have) is O(n).</p>
| 2 | 2016-09-02T22:59:49Z | 39,302,214 | <p>Some timings:</p>
<pre><code>import numpy as np
import itertools
def john1024(keys, v1, v2):
d1 = {}; d2 = {};
for k in set(keys):
d1[k] = v1[k==keys]
d2[k] = v2[k==keys]
return d1,d2
def birico(keys, v1, v2):
order = keys.argsort()
keys_sorted = keys[order]
diff = np.ones(keys_sorted.shape, dtype=bool)
diff[1:] = keys_sorted[1:] != keys_sorted[:-1]
key_change = diff.nonzero()[0]
uniq_keys = keys_sorted[key_change]
v1_split = np.split(v1[order], key_change[1:])
d1 = dict(zip(uniq_keys, v1_split))
v2_split = np.split(v2[order], key_change[1:])
d2 = dict(zip(uniq_keys, v2_split))
return d1,d2
def knzhou(keys, v1, v2):
d1 = {k:[] for k in np.unique(keys)}
d2 = {k:[] for k in np.unique(keys)}
for key, val1, val2 in itertools.izip(keys, v1, v2):
d1[key].append(val1)
d2[key].append(val2)
return d1,d2
</code></pre>
<p>I used 10 keys, 20 million entries:</p>
<pre><code>import timeit
keys = np.random.randint(0, 10, size=20000000) #10 keys, 20M entries
vals1 = np.random.random(keys.shape)
vals2 = np.random.random(keys.shape)
timeit.timeit("john1024(keys, vals1, vals2)", "from __main__ import john1024, keys, vals1, vals2", number=3)
11.121668815612793
timeit.timeit("birico(keys, vals1, vals2)", "from __main__ import birico, keys, vals1, vals2", number=3)
8.107877969741821
timeit.timeit("knzhou(keys, vals1, vals2)", "from __main__ import knzhou, keys, vals1, vals2", number=3)
51.76217794418335
</code></pre>
<p>So, we see that the sorting technique is a bit faster than letting Numpy find the indices corresponding to each key, but of course both are much, much faster than looping in Python. Vectorization is great!</p>
<p>This is on Python 2.7.12, Numpy 1.9.2</p>
| 2 | 2016-09-03T00:34:03Z | [
"python",
"numpy",
"vectorization"
] |
Identifying Consecutive Sequences of Data and Counting their Lengths | 39,301,691 | <p>I am working with a DataFrame where each row observation has an ordinal datetime object attached to it. I've written a function that I believe looks through my DataFrame and identifies consecutively occurring days and the length of run of these consecutively occurring days with the following code: </p>
<pre><code> def consecutiveCount(df):
df= df.copy()
cond1 = df['DATE_INT'].shift(-1) - df['DATE_INT'] == 1
cond2 = df['DATE_INT'].shift(1) - df['DATE_INT'] == -1
cond3 = df['DATE_INT'].shift(-2) - df['DATE_INT'] == 2
cond4 = df['DATE_INT'].shift(2) - df['DATE_INT'] == -2
</code></pre>
<p>Now I continue making these conditions in the same fashion until the point:</p>
<pre><code> cond55 = df['DATE_INT'].shift(-28) - df['DATE_INT'] == 28
cond56 = df['DATE_INT'].shift(28) - df['DATE_INT'] == -28
cond57 = df['DATE_INT'].shift(-29) - df['DATE_INT'] == 29
cond58 = df['DATE_INT'].shift(29) - df['DATE_INT'] == -29
</code></pre>
<p>I then write the length of the 'run' of days in a column variable with the following code:</p>
<pre><code> df.loc[cond1 | cond2, 'CONSECUTIVE_COUNT'] = 2
df.loc[cond3 | cond4, 'CONSECUTIVE_COUNT'] = 3
</code></pre>
<p>again I continue until I reach 'runs' of days of length 30.</p>
<pre><code> df.loc[cond55 | cond56, 'CONSECUTIVE_COUNT'] = 29
df.loc[cond57 | cond58, 'CONSECUTIVE_COUNT'] = 30
</code></pre>
<p>Finally I apply the function to particular groups of my DataFrame as follows:</p>
<pre><code> df1 = df.groupby(['COUNTY_GEOID_YEAR','TEMPBIN']).apply(consecutiveCount)
</code></pre>
<p>I am sure there are much more efficient ways to write this code. By printing various strings throughout my script, I've identified that the bottleneck is applying the function. </p>
<p>Any help on writing the function in a more efficient manner, or on speeding up applying the function, would be great! Please let me know if I can provide any more info.</p>
<p>EDIT: As @DSM pointed out, my code was not counting the length of consecutive runs of days correctly. His solution did so accurately for me! </p>
| 1 | 2016-09-02T23:03:27Z | 39,301,862 | <p>IIUC, you can use the shift-compare-cumsum pattern after applying your groupby, and then do a transform.</p>
<p>Assuming that your data looks something like this (simplifying a bit)</p>
<pre><code>df = pd.DataFrame({"GEOID_YEAR": [2000]*10 + [2001]*4, "TEMPBIN": [1]*14,
"DATE_INT": [1,2,3,4,6,7,9,10,11,14] + list(range(14,18)),
"OTHER_COL": [2]*14})
</code></pre>
<p>or</p>
<pre><code> DATE_INT GEOID_YEAR OTHER_COL TEMPBIN
0 1 2000 2 1
1 2 2000 2 1
2 3 2000 2 1
3 4 2000 2 1
4 6 2000 2 1
5 7 2000 2 1
6 9 2000 2 1
7 10 2000 2 1
8 11 2000 2 1
9 14 2000 2 1
10 14 2001 2 1
11 15 2001 2 1
12 16 2001 2 1
13 17 2001 2 1
</code></pre>
<p>then</p>
<pre><code>df["cons_id"] = df.groupby(["GEOID_YEAR", "TEMPBIN"])["DATE_INT"].apply(
lambda x: (x != x.shift() + 1).cumsum())
df["cons_count"] = (df.groupby(["GEOID_YEAR", "TEMPBIN", "cons_id"])
["cons_id"].transform("count"))
</code></pre>
<p>gives us</p>
<pre><code>In [78]: df
Out[78]:
DATE_INT GEOID_YEAR OTHER_COL TEMPBIN cons_id cons_count
0 1 2000 2 1 1 4
1 2 2000 2 1 1 4
2 3 2000 2 1 1 4
3 4 2000 2 1 1 4
4 6 2000 2 1 2 2
5 7 2000 2 1 2 2
6 9 2000 2 1 3 3
7 10 2000 2 1 3 3
8 11 2000 2 1 3 3
9 14 2000 2 1 4 1
10 14 2001 2 1 1 4
11 15 2001 2 1 1 4
12 16 2001 2 1 1 4
13 17 2001 2 1 1 4
</code></pre>
| 1 | 2016-09-02T23:30:12Z | [
"python",
"function",
"datetime",
"pandas",
"dataframe"
] |
Python: delete all variables except one for loops without contaminations | 39,301,752 | <pre><code>%reset
%reset -f
</code></pre>
<p>and </p>
<pre><code>%reset_selective a
%reset_selective -f a
</code></pre>
<p>are useful Python alternatives to the Matlab command "clear all", in which "-f" means "force without asking for confirmation" and "_selective" can be used in conjunction with </p>
<pre><code>who_ls
</code></pre>
<p>to selectively delete variables in workspace as clearly shown here <a href="https://ipython.org/ipython-doc/3/interactive/magics.html" rel="nofollow">https://ipython.org/ipython-doc/3/interactive/magics.html</a> .</p>
<p>Now I am managing loops in which I am going to define a large number of variables, for example</p>
<pre><code>for j in range(1000):
a = crazy_function1()
b = crazy_function2()
...
m = crazy_function18()
n = crazy_function19()
...
z = crazy_functionN()
</code></pre>
<p>and at the end of each cycle I want to delete ALL variables EXCEPT the standard variables of the Python workspace and some of the variables I introduced (in this example only m and n). This would avoid contamination and memory burdening, hence making the code more efficient and safer.</p>
<p>I saw that the "who_ls" result looks like a list, hence I thought of a loop that deletes all variables that are not equal to m or n:</p>
<pre><code>for j in range(1000):
a = crazy_function1()
b = crazy_function2()
...
m = crazy_function18()
n = crazy_function19()
...
z = crazy_functionN()
if who_ls[j] != m or who_ls[j] != n:
%reset_selective -f who_ls[j]
</code></pre>
<p>but it doesn't work: who_ls looks like a list, yet it doesn't behave like one here. How would you modify the last lines of code?
Is there anything like</p>
<pre><code> %reset_selective -f, except variables(m, n)
</code></pre>
<p>?</p>
| -1 | 2016-09-02T23:12:35Z | 39,302,526 | <p>The normal approach to limiting the scope of variables is to use them in a function. When the function is done, its <code>locals</code> disappear.</p>
<pre><code>In [71]: def foo():
...: a=1
...: b=2
...: c=[1,2,3]
...: d=np.arange(12)
...: print(locals())
...: del(a,b,c)
...: print(locals())
...:
In [72]: foo()
{'d': array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]), 'c': [1, 2, 3], 'a': 1, 'b': 2}
{'d': array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])}
</code></pre>
<p>==================</p>
<p><code>%who_ls</code> returns a list, and can be used on the RHS, as in</p>
<pre><code>xx = %who_ls
</code></pre>
<p>and then that list can be iterated. But note that this is a list of variable names, not the variables themselves.</p>
<pre><code>for x in xx:
    if len(x)==1:
        print(x)
        # del(x) does not work
</code></pre>
<p>shows all names of length 1. </p>
<p>======================</p>
<p>A simple way to use <code>%reset_selective</code> is to give the temporary variables a distinctive name, such as a prefix that regex can easily find. For example</p>
<pre><code>In [198]: temp_a, temp_b, temp_c, x, y = 1,'one string',np.arange(10), 10, [1,23]
In [199]: who_ls
Out[199]: ['np', 'temp_a', 'temp_b', 'temp_c', 'x', 'y']
In [200]: %reset_selective -f temp
In [201]: who_ls
Out[201]: ['np', 'x', 'y']
</code></pre>
<p>====================</p>
<p>Here's an example of doing this deletion from a list of names. Keep in mind that there is a difference between the actual variable that we are trying to delete, and its name.</p>
<p>Make some variables, and list of names to delete</p>
<pre><code>In [221]: temp_a, temp_b, temp_c, x, y = 1,'one string',np.arange(10), 10, [1,23]
In [222]: dellist=['temp_a', 'temp_c','x']
</code></pre>
<p>Get the shell, and the <code>user_ns</code>. <code>who_ls</code> uses the keys from <code>self.shell.user_ns</code>.</p>
<pre><code>In [223]: ip=get_ipython()
In [224]: user_ns=ip.user_ns
In [225]: %who_ls
Out[225]: ['dellist', 'i', 'ip', 'np', 'temp_a', 'temp_b', 'temp_c', 'user_ns', 'x', 'y']
In [226]: for i in dellist:
...: del(user_ns[i])
...:
In [227]: %who_ls
Out[227]: ['dellist', 'i', 'ip', 'np', 'temp_b', 'user_ns', 'y']
</code></pre>
<p>So we have to look up the names in the <code>user_ns</code> dictionary in order to delete them. Note that this deletion code creates some variables, <code>dellist</code>, <code>i</code>, <code>ip</code>, <code>user_ns</code>.</p>
<p>==============</p>
<p>How many variables are you worried about? How big are they? Scalars, lists, numpy arrays. A dozen or so scalars that can be named with letters don't take up much memory. And if there's any pattern in the generation of the variables, it may make more sense to collect them in a list or dictionary, rather than trying to give each a unique name.</p>
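<p>As a sketch of that last suggestion (an illustration only, not IPython-specific code; the <code>crazy_function</code> helper is hypothetical), collecting the per-cycle results in a dictionary lets you keep just the names you care about without any <code>%reset</code> magic:</p>

```python
# Hypothetical stand-in for crazy_function1 ... crazy_functionN
def crazy_function(name, j):
    return len(name) * j

keep = {'m', 'n'}          # the variables that should survive each cycle
results = {}
for j in range(1000):
    # all intermediates live in one throwaway dict ...
    temp = {name: crazy_function(name, j) for name in 'abmnz'}
    # ... and only the wanted entries are retained; the rest are
    # garbage-collected when `temp` is rebound on the next iteration
    results = {name: temp[name] for name in keep}

print(sorted(results))  # ['m', 'n']
```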
<p>In general it is better to use functions to limit the scope of variables, rather than use <code>del()</code> or <code>%reset</code>. Occasionally if dealing with very large arrays, the kind that take a meg of memory and can create memory errors, I may use <code>del</code> or just <code>a=None</code> to remove them. But ordinary variables don't need special attention (not even in an <code>ipython</code> session that hangs around for several days).</p>
| 2 | 2016-09-03T01:48:09Z | [
"python",
"ipython"
] |
Tensor too big for graph distribution - "InternalError: Message length was negative" | 39,301,773 | <p><strong>I tried to run the following graph:</strong>
<a href="http://i.stack.imgur.com/y5d5x.png" rel="nofollow"><img src="http://i.stack.imgur.com/y5d5x.png" alt="Graph that causes the error."></a></p>
<p><strong>Unfortunately, I receive the following error message:</strong></p>
<pre><code>tensorflow.python.framework.errors.InternalError: Message length was negative
[[Node: random_uniform_1_S1 = _Recv[client_terminated=false,
recv_device= "/job:worker/replica:0/task:1/cpu:0",
send_device="/job:worker/replica:0/task:0/cpu:0",
send_device_incarnation=3959744268201087672,
tensor_name="edge_18_random_uniform_1",
tensor_type=DT_DOUBLE,
_device="/job:worker/replica:0/task:1/cpu:0"]()]]
</code></pre>
<p>I noticed that this error message does not occur if the size of <code>random_uniform_1</code> is 800MB, but it does occur if the size is 8GB.</p>
<p>(Notice that <code>random_uniform_1</code> has to be transferred from one device to another device.)</p>
<p><strong>Question:</strong> Is there a limit on how big a tensor can be, if that tensor has to be transferred between devices?</p>
| 1 | 2016-09-02T23:15:16Z | 39,301,914 | <p>Yes, currently there is a <strong>2GB limit</strong> on an individual tensor when sending it between processes. This limit is imposed by the protocol buffer representation (more precisely, by the auto-generated C++ wrappers produced by the <code>protoc</code> compiler) that is used in TensorFlow's communication layer.</p>
<p>We are investigating ways to lift this restriction. In the meantime, you can work around it by manually adding <code>tf.split()</code> or <code>tf.slice()</code>, and <code>tf.concat()</code> operations to partition the tensor for transfer. If you have very large <code>tf.Variable</code> objects, you can use <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/state_ops.html#variable-partitioners-for-sharding" rel="nofollow">variable partitioners</a> to perform this transformation automatically. Note that in your program you have multiple 8GB tensors in memory at once, so the peak memory utilization will be at least 16GB.</p>
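<p>The partition-and-reassemble idea can be sketched as follows. This uses NumPy rather than the TensorFlow ops (the actual graph wiring with <code>tf.split()</code>/<code>tf.concat()</code> depends on your cluster setup), and the chunk-size arithmetic is an illustrative assumption:</p>

```python
import numpy as np

def partition_rows(tensor, max_bytes):
    """Split a 2-D array into row blocks, each at most max_bytes."""
    rows_per_chunk = max(1, max_bytes // tensor[0].nbytes)
    return [tensor[i:i + rows_per_chunk]
            for i in range(0, len(tensor), rows_per_chunk)]

big = np.zeros((1000, 10))                    # stand-in for the 8GB tensor
chunks = partition_rows(big, max_bytes=8000)  # tiny limit for the demo
# each chunk would be sent as its own (sub-2GB) message, then rejoined:
restored = np.concatenate(chunks, axis=0)
assert restored.shape == big.shape
```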
| 2 | 2016-09-02T23:39:33Z | [
"python",
"tensorflow",
"distribution"
] |
Django 1.10: extend/override admin css | 39,301,791 | <p>I want to override the css definition of a select-multiple widget by decreasing the min-height attribute from <code>contrib/admin/static/admin/css/base.css</code>. However, my css is not loaded I think. I use a custom admin form and have defined the Media subclass within this form as well as within the corresponding ModelAdmin:</p>
<pre><code>class Media:
    css = {
        'all': ('/css/admin widgets.css',)
    }
</code></pre>
<p>The file itself is located in <code>myapp/static/css/admin widgets.css</code> and contains:</p>
<pre><code>select[multiple] {
    min-height: 65px;
}
</code></pre>
<p>In <code>settings.py</code>, I have defined</p>
<pre><code> BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
STATICFILES_DIRS = [os.path.join(BASE_DIR, "myapp", "static")]
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATIC_URL = '/static'
</code></pre>
<p>I've run <code>manage.py collectstatic</code>, which collected all the admin files and my own CSS file into the project's <code>static</code> subdirectory. I've also extended <code>urls.py</code> as described <a href="https://docs.djangoproject.com/en/1.10/howto/static-files/#serving-static-files-during-development" rel="nofollow">here</a>, but the widget is still rendered with the 150px min-height defined in <code>base.css</code>. Any help is appreciated.</p>
<p><strong>EDIT:</strong> I actually use the form within a <code>django.contrib.admin.TabularInline</code>. I just also tried to add the Media-subclass to this inline - to no avail.</p>
| 0 | 2016-09-02T23:18:06Z | 39,355,192 | <p>I've found the solution myself. It is indeed only one character, but not the whitespace in the filename that Jonas pointed me to (I'm on a Windows filesystem, maybe that's different on linux). Instead, it's the leading slash of the URL in the Media class. Instead of:</p>
<pre><code>class Media:
    css = {
        'all': ('/css/admin widgets.css',)
    }
</code></pre>
<p>it has to be</p>
<pre><code>class Media:
    css = {
        'all': ('css/admin widgets.css',)
    }
</code></pre>
<p>otherwise Django looks directly in the project's root directory for <code>css/admin widgets.css</code> instead of in the static subdirectory, <code>static/css/admin widgets.css</code>.</p>
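<p>The rule mirrors ordinary URL resolution: a path with a leading slash is absolute and discards any prefix, while a relative path is joined onto <code>STATIC_URL</code>. A quick illustration with <code>urllib.parse.urljoin</code> (plain Python, not Django itself):</p>

```python
from urllib.parse import urljoin

# relative path: resolved under the static prefix
print(urljoin('/static/', 'css/admin widgets.css'))
# -> /static/css/admin widgets.css

# absolute path (leading slash): the prefix is ignored
print(urljoin('/static/', '/css/admin widgets.css'))
# -> /css/admin widgets.css
```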
<p>Furthermore, it seems irrelevant if the Media subclass resides in the Admin, the Inline or the Form directly. It works in any case.</p>
<p>And one more hint: <em>Always</em> clear the browser cache before testing when working with css or js. How many times have I already fallen into the same pit? -.-</p>
| 0 | 2016-09-06T18:01:57Z | [
"python",
"css",
"django",
"admin"
] |
How to determine whether a Python string contains an emoji? | 39,301,796 | <p>I've seen previous responses to this question, but none of them are recent and none of them are working for me in Python 3. I have a list of strings, and I simply want to identify which ones contain emoji. What's the fastest way to do this?</p>
<p>To be more specific, I have a lengthy list of email subject lines from AB tests, and I'm trying to determine which subject lines contained emoji.</p>
| 0 | 2016-09-02T23:18:46Z | 39,303,198 | <p>You could do something like this</p>
<pre><code>text = input('What is your text? ')
if text != 'a':  # or 'b', 'c' ... 'z', then all the other plain characters (+, -, etc.)
    ...
</code></pre>
<p>However, this would be a long way of doing it.</p>
| 0 | 2016-09-03T04:20:48Z | [
"python",
"emoji"
] |
How to determine whether a Python string contains an emoji? | 39,301,796 | <p>I've seen previous responses to this question, but none of them are recent and none of them are working for me in Python 3. I have a list of strings, and I simply want to identify which ones contain emoji. What's the fastest way to do this?</p>
<p>To be more specific, I have a lengthy list of email subject lines from AB tests, and I'm trying to determine which subject lines contained emoji.</p>
| 0 | 2016-09-02T23:18:46Z | 39,303,814 | <p><a href="http://stackoverflow.com/questions/28290240/python-2-7-detect-emoji-from-text/28327594#28327594">this link</a> and <a href="http://stackoverflow.com/questions/38730560/is-there-a-specific-range-of-unicode-code-points-which-can-be-checked-for-emojis">this link</a> both count © and other common characters as emoji. Also the former has minor mistakes and the latter still doesn't appear to work.</p>
<p>Here's an implementation that errs on the conservative side using <a href="ftp://ftp.unicode.org/Public/emoji/3.0/emoji-data.txt" rel="nofollow">this newer data</a> and <a href="http://www.unicode.org/reports/tr51/" rel="nofollow">this documentation</a>. It only considers code points that are marked with the unicode property <code>Emoji_Presentation</code> (which means it's definitely an emoji), or code points marked only with the property <code>Emoji</code> (which means it defaults to text but it <em>could</em> be an emoji), that are followed by a special variation selector code point <code>fe0f</code> that says to default to an emoji instead. The reason I say this is conservative is because certain systems aren't as picky about the <code>fe0f</code> and will treat characters as emoji wherever they can (read more about this <a href="http://www.unicode.org/reports/tr51/#Presentation_Style" rel="nofollow">here</a>).</p>
<pre><code>import re
from collections import defaultdict

def parse_line(line):
    """Return a pair (property, codepoints) where property is a string and
    codepoints is a set of int unicode code points"""
    pat = r'([0-9A-Z]+)(\.\.[0-9A-Z]+)? + ; +(\w+) + #.*'
    match = re.match(pat, line)
    assert match
    codepoints = set()
    start = int(match.group(1), 16)
    if match.group(2):
        trimmed = match.group(2)[2:]
        end = int(trimmed, 16) + 1
    else:
        end = start + 1
    for cp in range(start, end):
        codepoints.add(cp)
    return (match.group(3), codepoints)

def parse_emoji_data():
    """Return a dictionary mapping properties to code points"""
    result = defaultdict(set)
    with open('emoji-data.txt', mode='r', encoding='utf-8') as f:
        for line in f:
            if '#' != line[0] and len(line.strip()) > 0:
                property, cp = parse_line(line)
                result[property] |= cp
    return result

def test_parse_emoji_data():
    sets = parse_emoji_data()
    sizes = {
        'Emoji': 1123,
        'Emoji_Presentation': 910,
        'Emoji_Modifier': 5,
        'Emoji_Modifier_Base': 83,
    }
    for k, v in sizes.items():
        assert len(sets[k]) == v

def contains_emoji(text):
    """
    Return true if the string contains either a code point with the
    `Emoji_Presentation` property, or a code point with the `Emoji`
    property that is followed by \uFE0F
    """
    sets = parse_emoji_data()
    for i, ch in enumerate(text):
        if ord(ch) in sets['Emoji_Presentation']:
            return True
        elif ord(ch) in sets['Emoji']:
            if len(text) > i+1 and text[i+1] == '\ufe0f':
                return True
    return False

test_parse_emoji_data()
assert not contains_emoji('hello')
assert not contains_emoji('hello :) :D 125% #%&*(@#%&!@(^*(')
assert contains_emoji('here is a smiley \U0001F601 !!!')
</code></pre>
<p>To run this you need <a href="ftp://ftp.unicode.org/Public/emoji/3.0/emoji-data.txt" rel="nofollow">ftp://ftp.unicode.org/Public/emoji/3.0/emoji-data.txt</a> in the working directory.</p>
<p>Once the regex module supports emoji properties it will be easier to use that instead.</p>
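<p>If downloading the data file is too heavy for a quick scan of subject lines, a coarser check against a few well-known emoji code-point ranges may be enough. This is a sketch: the ranges below are my own assumption, will miss newer emoji, and may flag some non-emoji symbols:</p>

```python
import re

# rough, hand-picked ranges -- NOT equivalent to the Emoji_Presentation check
EMOJI_RE = re.compile(
    '[\U0001F300-\U0001F5FF'   # symbols & pictographs
    '\U0001F600-\U0001F64F'    # emoticons
    '\U0001F680-\U0001F6FF'    # transport & map symbols
    '\u2600-\u27BF]'           # misc symbols and dingbats
)

def maybe_has_emoji(text):
    return bool(EMOJI_RE.search(text))

assert maybe_has_emoji('here is a smiley \U0001F601 !!!')
assert not maybe_has_emoji('plain subject line, 50% off!')
```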
| 1 | 2016-09-03T06:11:43Z | [
"python",
"emoji"
] |
Divide date as day, month and year | 39,301,871 | <p>I have a 41-year dataset and I would like to do some calculations with it, but for this I need to split each date into day, month and year.</p>
<p>Example dataset (CSV file data):</p>
<pre><code> date stations pcp
1.01.1979 6 1.071
2.01.1979 6 5.909
3.01.1979 6 9.134
1.01.1979 5 1.229
2.01.1979 5 0.014
3.01.1979 5 3.241
</code></pre>
<p>I need to convert the data into this:</p>
<pre><code>day month year stations pcp
1 01 1979 6 1.071
2 01 1979 6 5.909
3 01 1979 6 9.134
</code></pre>
<p>When I run the code, it stops and I have to close it, with no error message. How can I correct this? I am a new user and there are probably many mistakes; I hope I can learn from my mistakes here. I made two attempts and I am not sure which one is correct.
Here is my code:</p>
<pre><code>from datetime import date, datetime, timedelta,time
import csv
import numpy as np
a=[]
dd=[]
mm=[]
yy=[]
with open('p2.csv') as csvfile:
    x=[]
    reader = csv.DictReader(csvfile,fieldnames=("date","stations","pcp"),delimiter=';', quotechar='|')
    for row in reader:
        x.append(row["date"])

#try1
for i in range(len (x)):
    day,month,year=a.split(x[i])
    d=int(day)
    m=int(month)
    y=int(year)
    dd.append(d)
    mm.append(m)
    yy.append(y)

#try2
"""for i in range(len(x)):
    an=x[i]
    y=datetime.datetime.strftime(an, '%Y')
    d=datetime.datetime.strftime(an, '%d')
    m=datetime.datetime.strftime(an, '%m')
    dd.append(d)
    mm.append(m)
    yy.append(y)"""
</code></pre>
| 0 | 2016-09-02T23:31:49Z | 39,301,965 | <p>There are a few problems:</p>
<ol>
<li>Your delimiter should be a comma instead of a semicolon if it's a CSV.</li>
<li>You are splitting on a date string instead of a delimiter when you call <code>a.split(x[i])</code>. You probably want to split on a <code>.</code>, since that's what's separating the date fields.</li>
</ol>
<p>Without changing too much code, the following code works for me. It wasn't clear from your question what you want to actually do with the data, but I tried to demonstrate how you would get it.</p>
<pre><code>import csv

with open('p2.csv') as csvfile:
    reader = csv.DictReader(
        csvfile, fieldnames=('date', 'stations', 'pcp'), delimiter=',', quotechar='|')
    next(reader)  # skip header row
    x = [row['date'] for row in reader]

for date_str in x:
    day, month, year = date_str.split('.')
    print(day, month, year)
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>1 01 1979
2 01 1979
3 01 1979
1 01 1979
2 01 1979
3 01 1979
</code></pre>
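<p>As a variation on the same idea (not required for the fix above), <code>datetime.strptime</code> both splits the field and validates that it really is a date, and it gives you integers directly, so the <code>int()</code> conversions from the question become unnecessary:</p>

```python
from datetime import datetime

# parse one of the sample values; %d and %m accept non-zero-padded numbers
d = datetime.strptime('1.01.1979', '%d.%m.%Y')
print(d.day, d.month, d.year)  # 1 1 1979
```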
| 3 | 2016-09-02T23:49:49Z | [
"python",
"date",
"csv"
] |