Flask, MySQL and SQLAlchemy querying ID error
39,186,521
<p>I get a weird error when I try to query an item by id. I have tried all the suggestions I have found, and only a raw query gives a proper result.</p> <p>Part of the traceback:</p> <pre><code>  File "C:\Users\pgsid\Envs\xo\lib\site-packages\flask\app.py", line 1988, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\flask\app.py", line 1641, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\flask\app.py", line 1544, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\flask\app.py", line 1639, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\flask\app.py", line 1625, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "xow.py", line 24, in get_user
    a = User.get_item(user_id)
  File "C:\Users\pgsid\xo\xow\models\users.py", line 57, in get_item
    result = User.query.filter_by(id=idd).first()
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\sqlalchemy\orm\query.py", line 2659, in first
    ret = list(self[0:1])
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\sqlalchemy\orm\query.py", line 2457, in __getitem__
    return list(res)
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\sqlalchemy\orm\loading.py", line 86, in instances
    util.raise_from_cause(err)
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\sqlalchemy\util\compat.py", line 202, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\sqlalchemy\orm\loading.py", line 71, in instances
    rows = [proc(row) for row in fetch]
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\sqlalchemy\orm\loading.py", line 428, in _instance
    loaded_instance, populate_existing, populators)
  File "C:\Users\pgsid\Envs\xo\lib\site-packages\sqlalchemy\orm\loading.py", line 486, in _populate_full
    dict_[key] = getter(row)
TypeError: an integer is required
</code></pre> <p>The query is <code>result = User.query.filter_by(id=idd).first()</code> with <code>idd</code> of type <code>int</code>.</p> <p>The type of the ID field in the MySQL db is <code>INT</code>, and the model is like this:</p> <pre><code>class User(db.Model):
    __tablename__ = 'user'
    id = db.Column('id', db.INT, primary_key=True)
    name = db.Column(db.VARCHAR, index=True)
    post = db.Column(db.VARCHAR, nullable=True)
    type = db.Column(db.VARCHAR)
    url = db.Column(db.VARCHAR, nullable=True)
    subtype = db.Column(db.VARCHAR, nullable=True)
    tel = db.Column(db.VARCHAR, nullable=True)
    address = db.Column(db.VARCHAR, nullable=True)
    latitude = db.Column(db.FLOAT)
    longitude = db.Column(db.FLOAT)
    deleted = db.Column(db.Boolean, default=False, index=True)
    children = db.relationship("Children")
</code></pre> <p>The database is initialized as such:</p> <pre><code>app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://root:test@localhost/mydb'
db = SQLAlchemy(app)
</code></pre> <p>What could be wrong? Any suggestions?</p>
0
2016-08-27T22:55:47Z
39,214,729
<p>Change your id type as such:</p> <pre><code>id = db.Column('id', db.Integer(), primary_key=True) </code></pre>
0
2016-08-29T20:32:58Z
[ "python", "mysql", "flask", "sqlalchemy" ]
Read data from text file, into numpy array in python
39,186,610
<p>I want to read a file of the format below into a numpy array in Python.</p> <pre><code>ADIDGoogle#8a65c466-****-4a7e-****-0836c8884dae  2016-06-01  17:55:53
ADIDGoogle#8a65c466-****-4a7e-****-0836c8884dae  2016-06-01  17:55:53
ADIDGoogle#8a65c466-****-4a7e-****-0836c8884dae  2016-06-01  17:55:53
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:20:02
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:35:48
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:26:20
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:31:12
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:19:17
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:20:02
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:36:39
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:31:12
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:35:48
</code></pre> <p>It has three columns separated by '\t'. I want to read this into a numpy array with two columns, where the date and time go into one column and the id into another.</p> <p>I tried using</p> <pre><code>Data = np.loadtxt(filename,dtype='string',usecols=(1,2),delimiter="\t")
</code></pre> <p>but it gives me this error:</p> <pre><code>IndexError: list index out of range
</code></pre>
2
2016-08-27T23:09:40Z
39,186,697
<p>First of all, the <code>#</code> character in your file will make <code>numpy</code> think everything after "ADIDGoogle" in each line is a comment. It appears you can change the comment character using the <code>comments</code> kwarg in <code>np.loadtxt</code>. This will solve the <code>IndexError</code>, leaving the delimiter problem.</p>
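Not part of the original answer, but a small sketch of what that fix might look like (the inline data is invented to mimic the question's file): passing `comments=None` disables comment handling entirely, so the `#` inside the IDs is treated as ordinary data.

```python
import io
import numpy as np

# Inline stand-in for the tab-separated file from the question.
text = ("ADIDGoogle#8a65c466-xxxx\t2016-06-01\t17:55:53\n"
        "ADIDGoogle#8a664a70-xxxx\t2016-06-01\t13:20:02\n")

# comments=None switches comment stripping off entirely, so '#' is
# ordinary data and all three tab-separated fields survive the split.
data = np.loadtxt(io.StringIO(text), dtype=str, usecols=(1, 2),
                  delimiter="\t", comments=None)
print(data.shape)  # (2, 2)
```

With comment handling disabled, `usecols=(1, 2)` can then pick out the date and time columns without the `IndexError`.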
2
2016-08-27T23:28:11Z
[ "python", "arrays", "numpy" ]
Read data from text file, into numpy array in python
39,186,610
<p>I want to read a file of the format below into a numpy array in Python.</p> <pre><code>ADIDGoogle#8a65c466-****-4a7e-****-0836c8884dae  2016-06-01  17:55:53
ADIDGoogle#8a65c466-****-4a7e-****-0836c8884dae  2016-06-01  17:55:53
ADIDGoogle#8a65c466-****-4a7e-****-0836c8884dae  2016-06-01  17:55:53
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:20:02
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:35:48
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:26:20
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:31:12
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:19:17
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:20:02
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:36:39
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:31:12
ADIDGoogle#8a664a70-****-4103-****-4f7e6cb9a33a  2016-06-01  13:35:48
</code></pre> <p>It has three columns separated by '\t'. I want to read this into a numpy array with two columns, where the date and time go into one column and the id into another.</p> <p>I tried using</p> <pre><code>Data = np.loadtxt(filename,dtype='string',usecols=(1,2),delimiter="\t")
</code></pre> <p>but it gives me this error:</p> <pre><code>IndexError: list index out of range
</code></pre>
2
2016-08-27T23:09:40Z
39,186,880
<p>You can read it via <code>genfromtxt</code>, setting <code>comments</code> to a character that does not occur in the data so the <code>#</code> is not treated as a comment:</p> <pre><code>import numpy as np

fname = "./data.txt"
with open(fname, 'r') as f:
    data = np.genfromtxt(f, comments="!", dtype="string", usecols=(1, 2))
print data
</code></pre>
1
2016-08-28T00:09:01Z
[ "python", "arrays", "numpy" ]
Set vertical size for the pop-up menu of a GtkComboBox
39,186,665
<p>I created a GtkComboBox with dozens of items. When I run it, I see that the pop-up menu containing the items is very large vertically. How do I set a maximum size? I checked the documentation and did not find a method to define it.</p> <p>Example:</p> <pre><code>#!/usr/bin/env python3
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk


class ComboBox(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self)
        self.set_title("ComboBox")
        self.set_default_size(150, -1)
        self.connect("destroy", Gtk.main_quit)

        slist = Gtk.ListStore(str, str)
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])
        slist.append(['01', 'Ferro'])
        slist.append(['07', 'Uranio'])
        slist.append(['08', 'Cobalto'])

        combobox = Gtk.ComboBox()
        combobox.set_model(slist)
        self.add(combobox)

        cell1 = Gtk.CellRendererText()
        cell2 = Gtk.CellRendererText()
        combobox.pack_start(cell1, True)
        combobox.pack_start(cell2, True)
        combobox.add_attribute(cell1, "text", 0)
        combobox.add_attribute(cell2, "text", 1)


window = ComboBox()
window.show_all()
Gtk.main()
</code></pre>
2
2016-08-27T23:23:04Z
39,257,099
<p>As far as I know (and can find), it is impossible to set the number of rows in a GtkComboBox drop down menu.</p> <p>If you insist on using <code>Gtk.ComboBox()</code>, you can however reduce the height in case of large amounts of entries by using:</p> <pre><code>combobox.set_wrap_width(5) </code></pre> <p>Which would show:</p> <p><a href="http://i.stack.imgur.com/KBOk9.png" rel="nofollow"><img src="http://i.stack.imgur.com/KBOk9.png" alt="enter image description here"></a></p>
1
2016-08-31T19:00:02Z
[ "python", "gtk", "gtk3" ]
How to make a proxy object behave like the integer it wraps?
39,186,692
<p>I want to create a proxy class that wraps an <code>int</code> for thread-safe access. In contrast to the built-in type, the proxy class is mutable, so that it can be incremented in-place. Now, I want to use that class just as a normal integer from the outside. Usually, Python's <code>__getattr__</code> makes it very easy to forward attribute access to the inner object:</p> <pre><code>class Proxy:
    def __init__(self, initial=0):
        self._lock = threading.Lock()
        self._value = initial

    def increment(self):
        with self._lock:
            self._value += 1

    def __getattr__(self, name):
        return getattr(self._value, name)
</code></pre> <p>However, <a href="http://stackoverflow.com/q/33824228/1079110"><code>__getattr__</code> does not get triggered for magic methods</a> like <code>__add__</code>, <code>__rtruediv__</code>, etc. that I need for the proxy to behave like an integer. Is there a way to generate those methods automatically, or otherwise forward them to the wrapped integer object?</p>
2
2016-08-27T23:27:34Z
39,190,081
<p>The blog post linked by <a href="http://stackoverflow.com/users/5378816/vpfb">@VPfB</a> in the <a href="http://stackoverflow.com/questions/39186692/how-to-make-a-proxy-object-behave-like-the-integer-it-wraps/39190081#comment65719596_39186692">comments</a> has a more generic and thorough solution to proxying dunder methods for builtin types, but here's a simplified and rather brutish example for the same. I hope it helps in understanding how to create such forwarding methods.</p> <pre><code>import threading
import numbers


def _proxy_slotted(name):
    def _proxy_method(self, *args, **kwgs):
        return getattr(self._value, name)(*args, **kwgs)
    # Not a proper qualname, but oh well
    _proxy_method.__name__ = _proxy_method.__qualname__ = name
    return _proxy_method


# The list of abstract methods of numbers.Integral
_integral_methods = """
    __abs__ __add__ __and__ __ceil__ __eq__ __floor__ __floordiv__
    __int__ __invert__ __le__ __lshift__ __lt__ __mod__ __mul__
    __neg__ __or__ __pos__ __pow__ __radd__ __rand__ __rfloordiv__
    __rlshift__ __rmod__ __rmul__ __ror__ __round__ __rpow__
    __rrshift__ __rshift__ __rtruediv__ __rxor__ __truediv__
    __trunc__ __xor__""".split()

# The dunder, aka magic methods
_Magic = type('_Magic', (), {name: _proxy_slotted(name)
                             for name in _integral_methods})


class IntProxy(_Magic, numbers.Integral):
    """
    &gt;&gt;&gt; res = IntProxy(1) + 1
    &gt;&gt;&gt; print(type(res), res)
    &lt;class 'int'&gt; 2
    &gt;&gt;&gt; print(IntProxy(2) / 3)
    0.6666666666666666
    """
    def __init__(self, initial=0, Lock=threading.Lock):
        self._lock = Lock()
        self._value = initial

    def increment(self):
        with self._lock:
            self._value += 1

    def __getattr__(self, name):
        return getattr(self._value, name)
</code></pre>
2
2016-08-28T09:58:25Z
[ "python", "python-3.x", "proxy" ]
Internal C++ object already deleted (pyside)
39,186,716
<p>The aim of this program is to show the tradeWindow as a QWidget and then show a QDialog each time doStuff is called (via a button) if there are results. The code works the first time, but the second time I get error messages:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "GUI.py", line 68, in doStuff
    popup = Dialog((Qt.WindowSystemMenuHint | Qt.WindowTitleHint), popLayout)
  File "GUI.py", line 47, in __init__
    self.setLayout(popLayout)
RuntimeError: Internal C++ object (PySide.QtGui.QHBoxLayout) already deleted.
</code></pre> <p>It seems my layout gets deleted when I close the QDialog the first time. Moving <code>popLayout = QHBoxLayout()</code> to the start of <code>doStuff</code>, which I thought would fix the problem, gives me this error instead:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "GUI.py", line 69, in doStuff
    popup = Dialog((Qt.WindowSystemMenuHint | Qt.WindowTitleHint), popLayout)
  File "GUI.py", line 47, in __init__
    self.setLayout(popLayout)
NameError: name 'popLayout' is not defined
</code></pre> <p>That doesn't make much sense to me at all, since it should always be getting defined before being referenced. I can't find the problem anyway. I'm sure a lot of my code could be improved as well, as I am very new to classes etc.</p> <p>If you have any tips on how to open the QDialog each time that are better than what I'm currently trying, or other helpful tips, please don't hesitate to mention them as well. (Try to ignore the crappy naming convention, I will fix that in the future.)</p> <p>Thank you for any help!</p> <p><a href="http://i.stack.imgur.com/fGDea.png" rel="nofollow"><img src="http://i.stack.imgur.com/fGDea.png" alt="GUI example"></a></p> <pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
from PySide.QtCore import *
from PySide.QtGui import *
import webbrowser


class Window(QWidget):
    def __init__(self, windowTitle, layout):
        super().__init__()
        self.resize(800, 500)
        self.setWindowTitle(windowTitle)
        self.setLayout(layout)


class TextField(QTextEdit):
    def __init__(self, tooltip, layout):
        super().__init__()
        self.setToolTip(tooltip)
        layout.addWidget(self)


class Button(QPushButton):
    def __init__(self, text, layout):
        super().__init__()
        self.setText(text)
        layout.addWidget(self)


class Label(QLabel):
    def __init__(self, text, layout):
        super().__init__()
        self.setText(text)
        layout.addWidget(self)


class Table(QTableWidget):
    def __init__(self, layout):
        super().__init__()
        self.cellDoubleClicked.connect(self.slotItemDoubleClicked)
        layout.addWidget(self)

    def slotItemDoubleClicked(self, row, col):
        if col == 0 or col == 1:
            webbrowser.open(self.item(row, 1).text())


class Dialog(QDialog):
    def __init__(self, flags, layout):
        super().__init__()
        self.setWindowFlags(flags)
        self.resize(800, 500)
        self.setLayout(popLayout)


# Layouts
mainLayout = QVBoxLayout()
subLayout = QHBoxLayout()
subLayout2 = QHBoxLayout()
mainLayout.addLayout(subLayout)
mainLayout.addLayout(subLayout2)
popLayout = QHBoxLayout()

# Main
tradeApp = QApplication(sys.argv)
textedit = TextField('bla', subLayout)
textedit2 = TextField('bla2', subLayout)
label = Label('Hover over input fields for instructions.', subLayout2)
button = Button('click me', subLayout2)
label2 = Label('Hover over input fields for instructions.', subLayout2)


def doStuff():
    gameResults = {'doom': '111232', 'quake': '355324'}
    if len(gameResults) &gt; 0:
        popup = Dialog((Qt.WindowSystemMenuHint | Qt.WindowTitleHint), popLayout)
        table = Table(popLayout)
        table.setRowCount(len(gameResults))
        table.setColumnCount(2)
        table.setHorizontalHeaderItem(0, QTableWidgetItem("Game"))
        table.setHorizontalHeaderItem(1, QTableWidgetItem("URL"))
        for index, game in enumerate(sorted(gameResults)):
            table.setItem(index, 0, QTableWidgetItem(game))
            table.item(index, 0).setFlags(Qt.ItemIsSelectable | Qt.ItemIsEnabled)
            table.setItem(index, 1, QTableWidgetItem('http://store.steampowered.com/app/' + gameResults[game] + '/'))
            table.item(index, 1).setFlags(Qt.ItemIsSelectable | Qt.ItemIsEnabled)
        table.resizeColumnsToContents()
        popup.exec_()
    else:
        msgBox = QMessageBox()
        msgBox.setText("No results.")
        msgBox.exec_()


button.clicked.connect(doStuff)
tradeWindow = Window('Tradefinder', mainLayout)
tradeWindow.show()
tradeApp.exec_()
</code></pre>
0
2016-08-27T23:31:01Z
39,190,134
<p>After your <code>Dialog</code> is closed, and the <code>popup</code> variable referencing it goes out of scope, Python will garbage-collect it. This causes the entire underlying C++ object, including all of its sub-widgets and layouts, to be deleted. However, you're keeping a reference to a layout used by the dialog, and hence the layout will have been deleted by the second time you try to open the dialog.</p> <p>I find it odd that you're doing all of the initialization of your <code>Dialog</code> class outside of the <code>Dialog</code> class. Instead, I would recommend moving the creation of <code>popLayout</code>, and all of the creation and setup of <code>table</code>, inside your <code>Dialog</code> class. This way the layout gets created each time the dialog is opened.</p> <p>You'll need to add <code>gameResults</code> as a parameter to the <code>__init__</code> method of <code>Dialog</code>, and you can also remove the <code>layout</code> parameter you have there at the moment because it isn't used.</p> <p>After doing this, your <code>Dialog</code> class should look like the following:</p> <pre><code>class Dialog(QDialog):
    def __init__(self, flags, gameResults):
        super().__init__()
        self.setWindowFlags(flags)
        self.resize(800, 500)
        popLayout = QHBoxLayout()
        self.setLayout(popLayout)
        table = Table(popLayout)
        table.setRowCount(len(gameResults))
        table.setColumnCount(2)
        table.setHorizontalHeaderItem(0, QTableWidgetItem("Game"))
        table.setHorizontalHeaderItem(1, QTableWidgetItem("URL"))
        for index, game in enumerate(sorted(gameResults)):
            table.setItem(index, 0, QTableWidgetItem(game))
            table.item(index, 0).setFlags(Qt.ItemIsSelectable | Qt.ItemIsEnabled)
            table.setItem(index, 1, QTableWidgetItem('http://store.steampowered.com/app/' + gameResults[game] + '/'))
            table.item(index, 1).setFlags(Qt.ItemIsSelectable | Qt.ItemIsEnabled)
        table.resizeColumnsToContents()
</code></pre> <p>and your <code>doStuff()</code> method should look like the following:</p> <pre><code>def doStuff():
    gameResults = {'doom': '111232', 'quake': '355324'}
    if len(gameResults) &gt; 0:
        popup = Dialog((Qt.WindowSystemMenuHint | Qt.WindowTitleHint), gameResults)
        popup.exec_()
    else:
        msgBox = QMessageBox()
        msgBox.setText("No results.")
        msgBox.exec_()
</code></pre> <p>I made these changes to your code and I was able to open the dialog multiple times.</p> <p>I'll leave it up to you to move your main window set-up code inside your <code>Window</code> class in the same way.</p> <p>Finally, please note that I have only tested this using PyQt. However, I would expect that my changes would also work for PySide.</p>
1
2016-08-28T10:04:30Z
[ "python", "qt", "pyside" ]
get both unique count and max in group-by of pandas dataframe
39,186,843
<p>Using the Pandas data frame group-by feature, I want to group by column <code>c_b</code> and (1) calculate the unique count for column <code>c_a</code> and column <code>c_c</code>, and (2) get the max value of column <code>c_d</code>. Is there any way to write one line of group-by code that achieves both goals? I tried the following line of code, but it seems not correct.</p> <pre><code>sampleGroup = sample.groupby('c_b')(['c_a', 'c_d'].agg(pd.Series.nunique), ['c_d'].agg(pd.Series.max))
</code></pre> <p>My expected results are,</p> <p><strong>Expected results</strong>,</p> <pre><code>c_b,c_a_unique_count,c_c_unique_count,c_d_max
python,2,2,1.0
c++,2,2,0.0
</code></pre> <p>Thanks.</p> <p><strong>Input file</strong>,</p> <pre><code>c_a,c_b,c_c,c_d
hello,python,numpy,0.0
hi,python,pandas,1.0
ho,c++,vector,0.0
ho,c++,std,0.0
go,c++,std,0.0
</code></pre> <p><strong>Source code</strong>,</p> <pre><code>sample = pd.read_csv('123.csv', header=None, skiprows=1, dtype={0: str, 1: str, 2: str, 3: float})
sample.columns = pd.Index(data=['c_a', 'c_b', 'c_c', 'c_d'])
sample['c_d'] = sample['c_d'].astype('int64')
sampleGroup = sample.groupby('c_b')(['c_a', 'c_d'].agg(pd.Series.nunique), ['c_d'].agg(pd.Series.max))
results.to_csv(sampleGroup, index=False)
</code></pre>
2
2016-08-28T00:01:23Z
39,187,330
<p>You can pass a dict to <code>agg()</code>:</p> <pre><code>df.groupby('c_b').agg({'c_a':'nunique', 'c_c':'nunique', 'c_d':'max'}) </code></pre> <p>If you don't want <code>c_b</code> as index, you can pass <code>as_index=False</code> to <code>groupby</code>:</p> <pre><code>df.groupby('c_b', as_index=False).agg({'c_a':'nunique', 'c_c':'nunique', 'c_d':'max'}) </code></pre>
3
2016-08-28T01:50:19Z
[ "python", "python-2.7", "pandas", "dataframe", "group-by" ]
import hooks (custom module loaders) for pypy do not work
39,186,850
<p>I'm successfully able to create import hooks to load files directly from memory in Python 2.7. The example I used was the accepted response to this question:</p> <p><a href="http://stackoverflow.com/questions/14191900/pythonimport-module-from-memory">python: Import module from memory</a></p> <p>However, when applying this code on PyPy, I get an import error. I have also tried other import hook examples that work with regular Python but not with PyPy, such as this:</p> <p><a href="http://stackoverflow.com/questions/39135750/python-load-zip-with-modules-from-memory/39136473#39136473">python load zip with modules from memory</a></p> <p>Does anyone know why import hooks do not work in PyPy? Is there something I am missing?</p>
0
2016-08-28T00:02:29Z
39,188,737
<p>The problem is that in both of the examples you point to, <code>load_module()</code> does not add the loaded module to <code>sys.modules</code>. Normally, it should do so (and then PyPy works like CPython).</p> <p>If <code>load_module()</code> does not add the module to <code>sys.modules</code>, then every single <code>import a</code> will call <code>load_module()</code> again and return a new copy of the module. For example, in the example from <a href="http://stackoverflow.com/questions/14191900/pythonimport-module-from-memory">python: Import module from memory</a>:</p> <pre><code>import a as a1
import a as a2
print a1 is a2   # False!
a1.foo = "foo"
print a2.foo     # AttributeError
</code></pre> <p>This is documented in <a href="https://www.python.org/dev/peps/pep-0302/#id27" rel="nofollow">https://www.python.org/dev/peps/pep-0302/#id27</a>. The <code>load_module()</code> method is responsible for doing more checks than these simple examples show. In particular, note this line (emphasis in the original):</p> <blockquote> <p>Note that the module object <em>must</em> be in sys.modules before the loader executes the module code.</p> </blockquote> <p>So, the fact that PyPy behaves differently than CPython in this case could be understood as a behavior difference that follows from code that fails to respect the docs.</p> <p>But anyway, my opinion is that it should be fixed. I've created an issue at <a href="https://bitbucket.org/pypy/pypy/issues/2382/sysmeta_path-not-working-like-cpythons" rel="nofollow">https://bitbucket.org/pypy/pypy/issues/2382/sysmeta_path-not-working-like-cpythons</a>.</p>
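Not from the original answer, but to make the <code>sys.modules</code> point concrete, here is a minimal sketch using the modern <code>importlib</code> interface rather than the legacy PEP 302 one (the module name and source are made up). Because the import machinery registers the module in <code>sys.modules</code> before <code>exec_module()</code> runs, repeated imports hand back the same object:

```python
import sys
import importlib.abc
import importlib.util


class MemoryFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Serves modules from an in-memory {name: source} dict."""

    def __init__(self, sources):
        self.sources = sources

    def find_spec(self, fullname, path=None, target=None):
        if fullname in self.sources:
            return importlib.util.spec_from_loader(fullname, self)
        return None

    def create_module(self, spec):
        return None  # defer to the default module creation

    def exec_module(self, module):
        # The import machinery has already registered `module` in
        # sys.modules before calling this, which is what guarantees
        # that repeated imports return the same object.
        exec(self.sources[module.__name__], module.__dict__)


sys.meta_path.insert(0, MemoryFinder({"memmod": "value = 42"}))

import memmod as m1
import memmod as m2
print(m1 is m2)  # True - one shared, cached module object
```

A hand-rolled PEP 302 `load_module()` has to do this registration itself; the modern protocol does it for you, which is exactly the behavior the broken examples were missing.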
1
2016-08-28T06:55:23Z
[ "python", "pypy" ]
Max and min value in list of dicts where each dict key has a value
39,186,852
<p>All the answers I found to this seem to make you have to <a href="http://stackoverflow.com/questions/5320871/in-list-of-dicts-find-min-value-of-a-common-dict-field">specify which key</a> you want to find the max/min for e.g. <a href="http://stackoverflow.com/questions/19759374/finding-minimum-values-in-a-list-of-dicts">Finding minimum values in a list of dicts</a>.</p> <p>I have a list of dicts like:</p> <pre><code> [ { 1:34, 2:12, 3:24 }, { 1:121, 2:1211, 3:1891 }, { 1:191, 2:8991, 3:3232 } ] </code></pre> <p>So here it would pull out 12 and 8991.</p> <p>And I want to find the max and min of all the values in this list of dicts.</p> <p>How do I do this without having to specify each key separately (e.g. 1,2,3) - in the real version I have 20+.</p>
0
2016-08-28T00:03:01Z
39,186,865
<p>You can flatten the values, then find the min and max:</p> <pre><code>from itertools import chain

flattened = list(chain(*(d.values() for d in list_of_dicts)))
print(min(flattened))
print(max(flattened))
</code></pre>
1
2016-08-28T00:07:03Z
[ "python", "list", "dictionary", "max", "min" ]
Max and min value in list of dicts where each dict key has a value
39,186,852
<p>All the answers I found to this seem to make you have to <a href="http://stackoverflow.com/questions/5320871/in-list-of-dicts-find-min-value-of-a-common-dict-field">specify which key</a> you want to find the max/min for e.g. <a href="http://stackoverflow.com/questions/19759374/finding-minimum-values-in-a-list-of-dicts">Finding minimum values in a list of dicts</a>.</p> <p>I have a list of dicts like:</p> <pre><code> [ { 1:34, 2:12, 3:24 }, { 1:121, 2:1211, 3:1891 }, { 1:191, 2:8991, 3:3232 } ] </code></pre> <p>So here it would pull out 12 and 8991.</p> <p>And I want to find the max and min of all the values in this list of dicts.</p> <p>How do I do this without having to specify each key separately (e.g. 1,2,3) - in the real version I have 20+.</p>
0
2016-08-28T00:03:01Z
39,186,878
<p>You can use a list comprehension or generator expression.</p> <pre><code>max([max(d.values()) for d in data]) </code></pre>
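As a sketch putting both extremes together (data copied from the question), a nested comprehension flattens the values once and then `min` and `max` come out of the same list:

```python
data = [{1: 34, 2: 12, 3: 24},
        {1: 121, 2: 1211, 3: 1891},
        {1: 191, 2: 8991, 3: 3232}]

# Flatten every dict's values into one list, then take both extremes.
values = [v for d in data for v in d.values()]
print(min(values), max(values))  # 12 8991
```

This avoids scanning the data twice with two nested `max`/`min` calls, and it never needs to name any of the keys.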
2
2016-08-28T00:08:44Z
[ "python", "list", "dictionary", "max", "min" ]
filling individual frames in tkinter notebook Tabs with widgets (after automatic tab generation)
39,186,863
<p>I found a way making it easy to automatically generate multiple tabs with the ttk notebook, as a "solution" to <a href="http://stackoverflow.com/questions/39175898/automatic-multiple-tab-generation-with-tkinter-notebook?noredirect=1#comment65695477_39175898">Automatic multiple Tab generation with tkinter notebook</a>. But now I have a problem filling the tabs with individual content, namely widgets. What do I need for "???" in the commented-out part? I would appreciate it very much if you show me how to solve this.</p> <pre><code>from tkinter import *
from tkinter import ttk


###
class MyTab(Frame):
    def __init__(self, root, name):
        Frame.__init__(self, root)
        self.root = root
        self.name = name


###
class Application():
    def __init__(self):
        self.tabs = {'ky': 1}
        print(self.tabs['ky'])
        self.root = Tk()
        self.root.minsize(300, 300)

        ### Tab generation
        self.notebook = ttk.Notebook(self.root, width=800, height=550)
        tab_names = ["info", "S00", "S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08", "help"]
        for i in range(0, len(tab_names)):
            tab = MyTab(self.notebook, tab_names[i])
            self.notebook.add(tab, text=tab_names[i])

        self.button = Button(self.root, text='next -&gt;', command=self.next_Tab).pack(side=BOTTOM)

        ### info Tab Widgets
        #self.btn1 = Button(???, text='Info Button', command=self.next_Tab).pack(side=RIGHT)

        ### S00 Tab Widgets
        #self.btn2 = Button(???, text="test_btn")
        #self.btn2.pack()

        ### S01 Tab Widgets and so on...

        self.notebook.pack(side=TOP)

    def next_Tab(self):
        print("next Tab -&gt; not yet defined")

    def run(self):
        self.root.mainloop()


###
Application().run()
</code></pre>
-3
2016-08-28T00:05:14Z
39,472,023
<p>The rule is quite simple: to place a widget in another widget, you need a reference to the other widget. In your case, the simple thing to do is create a dictionary to hold references to your frames:</p> <pre><code>tabs = {}
for i in range(0, len(tab_names)):
    tab = MyTab(self.notebook, tab_names[i])
    self.notebook.add(tab, text=tab_names[i])
    tabs[tab_names[i]] = tab
...
self.btn1 = Button(tabs["info"], ...)
</code></pre> <hr> <p>By the way, you can make your loop more readable (and more "pythonic") by directly iterating over the list of tab names rather than iterating over the index values:</p> <pre><code>tabs = {}
for tab_name in tab_names:
    tab = MyTab(self.notebook, tab_name)
    self.notebook.add(tab, text=tab_name)
    tabs[tab_name] = tab
</code></pre>
0
2016-09-13T14:02:37Z
[ "python", "tkinter", "tabs", "widget", "ttk" ]
Logging for multiple objects: IOError
39,186,906
<p>I have a class called <code>Job</code> which has a logger:</p> <pre><code>class MyFileHandler(logging.FileHandler):
    def __init__(self, filename):
        self.filename = filename
        super(MyFileHandler, self).__init__(filename)

    def emit(self, record):
        log_text = self.format(record)
        try:
            fh = open(self.filename, "a")
            fh.write("%s\n" % log_text)
            fh.close()
            return True
        except:
            return False

log_formatter = logging.Formatter('br: %(message)s')

class Job(object):
    def __init__(self, name):
        self.name = name
        self.logger = logging.getLogger(self.name)
        log_hdlr = MyFileHandler('/tmp/%s' % name)
        log_hdlr.setFormatter(log_formatter)
        self.logger.addHandler(log_hdlr)
        self.logger.setLevel(logging.INFO)

jobs = []
for i in range(100):
    j = Job(str(i))
    jobs.append(j)
</code></pre> <p>The jobs go off, do something, and log via <code>job.logger.info()</code>, but when I have multiple jobs, i.e. thousands, it throws this error:</p> <pre class="lang-none prettyprint-override"><code>IOError: [Errno 24] Too many open files: '/x/prototype_3885946_1608131132/cm/cm_conv/logs/20160827-195925.log'
</code></pre> <p>I thought every time I logged something, it would open then close the file, as I have overridden <code>emit()</code>.</p> <p>Is there a pattern/way to have thousands of loggers?</p>
0
2016-08-28T00:12:55Z
39,186,991
<p>My guess is that your operating system is running out of file handles. Even though your overridden <code>emit()</code> opens and closes the file itself, the <code>super(MyFileHandler, self).__init__(filename)</code> call still opens the file in the base <code>FileHandler</code> constructor and keeps it open, so every <code>Job</code> you create holds on to a file descriptor.</p>
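A related stdlib detail worth knowing here (a sketch, not a complete fix for thousands of jobs): `logging.FileHandler` accepts `delay=True`, which postpones opening the file until the first record is emitted, so merely constructing many handlers does not consume descriptors:

```python
import logging
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "job0.log")

# With delay=True the file is NOT opened in the constructor.
handler = logging.FileHandler(path, delay=True)
print(handler.stream is None)  # True - no descriptor in use yet

logger = logging.getLogger("job-demo")
logger.addHandler(handler)
logger.error("first record")   # the file is opened here, on first emit

handler.close()
with open(path) as f:
    print(f.read().strip())    # first record
```

Note the file stays open after the first emit, so with thousands of *active* loggers you would still need to close handlers when a job finishes.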
0
2016-08-28T00:30:42Z
[ "python", "python-2.7", "logging" ]
Logging for multiple objects: IOError
39,186,906
<p>I have a class called <code>Job</code> which has a logger</p> <pre><code>class MyFileHandler(logging.FileHandler): def __init__(self, filename): self.filename = filename super(MyFileHandler, self).__init__(filename) def emit(self, record): log_text = self.format(record) try: fh = open(self.filename, "a") fh.write("%s\n" % log_text) fh.close() return True except: return False log_formatter = logging.Formatter('br: %(message)s') class Job(object): def __init__(self, name): self.name = name self.logger = logging.getLogger(self.name) log_hdlr = MyFileHandler('/tmp/%s' % name) log_hdlr.setFormatter(log_formatter) self.logger.addHandler(log_hdlr) self.logger.setLevel(logging.INFO) jobs = [] for i in range(100): j = Job(str(i)) job.append(j) </code></pre> <p>and jobs go off do something and logs via <code>job.logger.info()</code></p> <p>but when i have multiple jobs i.e. thousands, it's throwing error</p> <pre class="lang-none prettyprint-override"><code>IOError: [Errno 24] Too many open files: '/x/prototype_3885946_1608131132/cm/cm_conv/logs/20160827-195925.log' </code></pre> <p>I thought every time I logged something, it would open then close the file as I have overwritten <code>emit()</code></p> <p>Is there a pattern/ways to have thousands of loggers?</p>
0
2016-08-28T00:12:55Z
39,187,435
<p>FYI, instead of executing <code>self.logger.info(msg)</code> directly, I just wrapped it in the following code, which opens a file handler and closes it each time I write to the log.</p> <p>Rewrite <code>self.logger.info(msg)</code> to <code>self.write_to_log(msg)</code>, where:</p> <pre><code>def write_to_log(self, msg):
    log_hdlr = MyFileHandler('/tmp/%s' % self.name)
    log_hdlr.setFormatter(log_formatter)
    self.logger.addHandler(log_hdlr)
    self.logger.setLevel(logging.INFO)

    self.logger.info(msg)  # &lt;----- actually calling .info() here

    # Iterate over a copy of the handler list: removing entries from
    # the list while iterating it directly would skip handlers.
    for handler in self.logger.handlers[:]:
        handler.close()
        self.logger.removeHandler(handler)
</code></pre>
0
2016-08-28T02:11:51Z
[ "python", "python-2.7", "logging" ]
Cygwin Setup does not install gdb.exe
39,186,984
<p>I'm trying to use NetBeans 8.1 with Cygwin to write, compile and debug a C program. I knew nothing about C when I started this, and somehow found my way to fixing all the compiler errors. But when it came time to debug there was no debugger! Long story short, there's no gdb.exe in the Cygwin/bin directory and even a fresh install of Cygwin didn't produce one.</p> <p>I tried another gcc compiler that did have gdb, but Netbeans won't use it.</p> <p>I really don't know anything about debugging C in Netbeans with Cygwin. All I wanted to do was just bash my way through this one C program because I need to access a C library.</p> <p>Alternatively, does anyone know if and how to run a C subroutine in python? (A vastly superior language to C/C++, in my opinion.)</p> <p>I would be delighted to get access to this C library from either NetBeans or rewrite my access code in python.</p>
0
2016-08-28T00:28:26Z
39,187,596
<p>(I wrote this before I saw matzeri's response. I still may try extending python to run code from the C library, but getting the C code I've already written to work would be vastly preferable.)</p> <p>I have no idea what's going on with Cygwin's gdb, but after getting over my frustration I answered my alternative question. Turns out you can indeed call a C function from python. <a href="https://docs.python.org/2/extending/extending.html" rel="nofollow">Extending Python with C or C++</a></p> <p>I'll try it tomorrow and I see caveats, but if python.org has a section on it, I'm betting it can be done.</p>
0
2016-08-28T02:51:32Z
[ "python", "c", "netbeans", "gdb", "cygwin" ]
Cygwin Setup does not install gdb.exe
39,186,984
<p>I'm trying to use NetBeans 8.1 with Cygwin to write, compile and debug a C program. I knew nothing about C when I started this, and somehow found my way to fixing all the compiler errors. But when it came time to debug there was no debugger! Long story short, there's no gdb.exe in the Cygwin/bin directory and even a fresh install of Cygwin didn't produce one.</p> <p>I tried another gcc compiler that did have gdb, but Netbeans won't use it.</p> <p>I really don't know anything about debugging C in Netbeans with Cygwin. All I wanted to do was just bash my way through this one C program because I need to access a C library.</p> <p>Alternatively, does anyone know if and how to run a C subroutine in python? (A vastly superior language to C/C++, in my opinion.)</p> <p>I would be delighted to get access to this C library from either NetBeans or rewrite my access code in python.</p>
0
2016-08-28T00:28:26Z
39,188,541
<p>Cygwin setup can install ~ 4000 packages. GDB is one of them.<br> Why do you think GDB should be installed by default?</p> <p>Please read:<br> <a href="https://cygwin.com/cygwin-ug-net/setup-net.html#setup-packages" rel="nofollow">https://cygwin.com/cygwin-ug-net/setup-net.html#setup-packages</a></p>
1
2016-08-28T06:20:53Z
[ "python", "c", "netbeans", "gdb", "cygwin" ]
TensorFlow: how to name operations for tf.get_variable
39,187,009
<p>My question is related to this <a href="http://stackoverflow.com/questions/36612512/tensorflow-how-to-get-a-tensor-by-name">Tensorflow: How to get a tensor by name?</a></p> <p>I can give names to operations. But actually they are named differently. For example:</p> <pre><code>In [11]: with tf.variable_scope('test_scope') as scope: ...: a = tf.get_variable('a',[1]) ...: b = tf.maximum(1,2, name='b') ...: print a.name ...: print b.name ...: ...: ...: test_scope/a:0 test_scope_1/b:0 In [12]: with tf.variable_scope('test_scope') as scope: ...: scope.reuse_variables() ...: a = tf.get_variable('a',[1]) ...: b = tf.maximum(1,2, name='b') ...: print a.name ...: print b.name ...: ...: ...: test_scope/a:0 test_scope_2/b:0 </code></pre> <p><code>tf.get_variable</code> creates a variable with exactly the same name as I ask for. Operations add prefixes to the scope. </p> <p>I want to name my operation so that I can get it. In my case I want to get <code>b</code> with <code>tf.get_variable('b')</code> in my scope.</p> <p>How can I do it? I can't do it with <code>tf.Variable</code> because of this issue <a href="https://github.com/tensorflow/tensorflow/issues/1325" rel="nofollow">https://github.com/tensorflow/tensorflow/issues/1325</a> Maybe I need to set additional parameters on the variable scope, or on the operation, or somehow use <code>tf.get_variable</code>?</p>
1
2016-08-28T00:35:16Z
39,189,462
<p><code>tf.get_variable()</code> won't work to get an operation. Therefore, I would define a new variable storing <code>tf.maximum(1,2)</code> to retrieve it later:</p> <pre><code>import tensorflow as tf with tf.variable_scope('test_scope') as scope: a1 = tf.get_variable('a', [1]) b1 = tf.get_variable('b', initializer=tf.maximum(1, 2)) with tf.variable_scope('test_scope') as scope: scope.reuse_variables() a2 = tf.get_variable('a', [1]) b2 = tf.get_variable('b', dtype=tf.int32) assert a1 == a2 assert b1 == b2 </code></pre> <p>Note that you need to define <code>b</code> using <code>tf.get_variable()</code> in order to retrieve it later.</p>
1
2016-08-28T08:43:08Z
[ "python", "tensorflow" ]
TensorFlow: how to name operations for tf.get_variable
39,187,009
<p>My question is related to this <a href="http://stackoverflow.com/questions/36612512/tensorflow-how-to-get-a-tensor-by-name">Tensorflow: How to get a tensor by name?</a></p> <p>I can give names to operations. But actually they are named differently. For example:</p> <pre><code>In [11]: with tf.variable_scope('test_scope') as scope: ...: a = tf.get_variable('a',[1]) ...: b = tf.maximum(1,2, name='b') ...: print a.name ...: print b.name ...: ...: ...: test_scope/a:0 test_scope_1/b:0 In [12]: with tf.variable_scope('test_scope') as scope: ...: scope.reuse_variables() ...: a = tf.get_variable('a',[1]) ...: b = tf.maximum(1,2, name='b') ...: print a.name ...: print b.name ...: ...: ...: test_scope/a:0 test_scope_2/b:0 </code></pre> <p><code>tf.get_variable</code> creates a variable with exactly the same name as I ask for. Operations add prefixes to the scope. </p> <p>I want to name my operation so that I can get it. In my case I want to get <code>b</code> with <code>tf.get_variable('b')</code> in my scope.</p> <p>How can I do it? I can't do it with <code>tf.Variable</code> because of this issue <a href="https://github.com/tensorflow/tensorflow/issues/1325" rel="nofollow">https://github.com/tensorflow/tensorflow/issues/1325</a> Maybe I need to set additional parameters on the variable scope, or on the operation, or somehow use <code>tf.get_variable</code>?</p>
1
2016-08-28T00:35:16Z
39,189,727
<p>I disagree with @rvinas answer, you don't need to create a Variable to hold the value of a tensor you want to retrieve. You can just use <code>graph.get_tensor_by_name</code> with the correct name to retrieve your tensor:</p> <pre class="lang-py prettyprint-override"><code>with tf.variable_scope('test_scope') as scope: a = tf.get_variable('a',[1]) b = tf.maximum(1,2, name='b') print a.name # should print 'test_scope/a:0' print b.name # should print 'test_scope/b:0' </code></pre> <hr> <p>Now you want to recreate the same scope and get back <code>a</code> and <code>b</code>.<br> For <code>b</code>, you don't even need to be in the scope, you just need the exact name of <code>b</code>.</p> <pre class="lang-py prettyprint-override"><code>with tf.variable_scope('test_scope') as scope: scope.reuse_variables() a2 = tf.get_variable('a', [1]) graph = tf.get_default_graph() b2 = graph.get_tensor_by_name('test_scope/b:0') assert a == a2 assert b == b2 </code></pre>
1
2016-08-28T09:18:11Z
[ "python", "tensorflow" ]
Python: Line break after every nth line split
39,187,012
<p>I want to force a line break after every 10 numbers, or 9 line splits, in a txt file in Python. How would I go about this? So say I have</p> <pre><code>Input would be something like: 40 20 30 50 40 40 40 40 40 40 20 40 20 30 50 40 40 40 20 40 20 30 50 40 40 40 40 20 20 20 20 20 20 </code></pre> <p>in a txt file, and the output should be</p> <pre><code>40 20 30 50 40 40 40 40 40 40 20 40 20 30 50 40 40 40 20 40 20 30 50 40 40 40 40 20 20 20 20 20 20 </code></pre> <p>so essentially break after every 10 numbers, or 9 line splits </p> <p>I have tried:</p> <pre><code>with open('practice.txt') as f: for line in f: int_list = [int(num) for num in line.split() ] if len(int_list) &lt; 20: print(int_list) else: new_list = int_list[20:] int_list = int_list[:20] print(int_list) print(new_list) </code></pre> <p>But this doesn't quite solve it. Also note that the line lengths can vary. So the first line could have 5 numbers and the second line have 9 and the third 10 and the fourth 10</p>
0
2016-08-28T00:36:04Z
39,187,045
<p>How about using <code>count()</code> to count the occurrences of an item in the list?</p> <pre><code>nums = [40, 20, 30, 50, 40, 40, 40, 40, 40, 40, 20] for i in nums: if nums.count(i) &gt; 10: pass # Do Stuff </code></pre>
0
2016-08-28T00:45:01Z
[ "python" ]
Python: Line break after every nth line split
39,187,012
<p>I want to force a line break after every 10 numbers, or 9 line splits, in a txt file in Python. How would I go about this? So say I have</p> <pre><code>Input would be something like: 40 20 30 50 40 40 40 40 40 40 20 40 20 30 50 40 40 40 20 40 20 30 50 40 40 40 40 20 20 20 20 20 20 </code></pre> <p>in a txt file, and the output should be</p> <pre><code>40 20 30 50 40 40 40 40 40 40 20 40 20 30 50 40 40 40 20 40 20 30 50 40 40 40 40 20 20 20 20 20 20 </code></pre> <p>so essentially break after every 10 numbers, or 9 line splits </p> <p>I have tried:</p> <pre><code>with open('practice.txt') as f: for line in f: int_list = [int(num) for num in line.split() ] if len(int_list) &lt; 20: print(int_list) else: new_list = int_list[20:] int_list = int_list[:20] print(int_list) print(new_list) </code></pre> <p>But this doesn't quite solve it. Also note that the line lengths can vary. So the first line could have 5 numbers and the second line have 9 and the third 10 and the fourth 10</p>
0
2016-08-28T00:36:04Z
39,187,233
<p>It looks like everything below the <code>with</code> statement should be indented one more level. Your method seems like a good start, but it will not work if there are more than 2 groups on one line. The following code takes care of that and simplifies things a bit:</p> <pre><code>with open('practice.txt') as f: values = [] for line in f: int_list = [int(num) for num in line.split()] # the next line splits int_list into groups of 10 items, # and appends all the groups to the values list values.extend(int_list[i:i+10] for i in range(0, len(int_list), 10)) print values # [ # [40, 20, 30, 50, 40, 40, 40, 40, 40, 40], # [20, 40, 20, 30, 50, 40, 40, 40], # [20, 40, 20, 30, 50, 40, 40, 40, 40, 20], # [20, 20], # [20, 20, 20] # ] </code></pre>
2
2016-08-28T01:24:02Z
[ "python" ]
Python list slicing
39,187,018
<p>I'm not able to understand what to do here. Can someone help?</p> <p>I've a few lists:</p> <pre><code>array = [7,8,2,3,4,10,5,6,7,10,8,9,10,4,5,12,13,14,1,2,15,16,17] slice = [2, 4, 6, 8, 10, 12, 15, 17, 20, 22] intervals = [12, 17, 22] output = [] intermediate = [] </code></pre> <p><code>slice</code> is a list of indices I need to get from slicing <code>array</code>. <code>intervals</code> is a list of indices used to stop the slicing when <code>slice[i] == intervals[j]</code>, where i and j are looping variables. I need to form a list of lists from <code>array</code> based on <code>slice</code> and <code>intervals</code>, with the condition that when <code>slice[i] != intervals[j]</code> </p> <pre><code>intermediate = intermediate + array[slice[i]:slice[i+1]+1] </code></pre> <p>here in my case:</p> <p>when <code>slice[i]</code> and <code>intervals[j]</code> are equal (at the value 12), I need to form a list of lists from <code>array</code> </p> <pre><code>intermediate = array[slice[0]:slice[0+1]+1] + array[slice[2]:slice[2+1]+1] + array[slice[4]:slice[4+1]+1] </code></pre> <p>which is</p> <pre><code>intermediate = array[2:(4+1)] + array[6:(8+1)] + array[10:(12+1)] </code></pre> <p>and when <code>slice[i] == intervals[j]</code>, <code>output = output + intermediate</code> and the slicing is continued.</p> <pre><code>output = output + [intermediate] </code></pre> <p>which is</p> <pre><code>output = output + [array[2:(4+1)] + array[6:(8+1)] + array[10:(12+1)]] </code></pre> <p>now the next value in <code>intervals</code> is 17, so until we have 17 in <code>slice</code> we form another list from <code>array[slice[6]:slice[6+1]+1]</code> and add this to the output. This continues.</p> <p>The final output should be:</p> <pre><code>output = [array[slice[0]:slice[0+1]+1] + array[slice[2]:slice[2+1]+1] + array[slice[4]:slice[4+1]+1] , array[slice[6]:slice[6+1]+1], array[slice[8]:slice[8+1]+1]] </code></pre> <p>which is</p> <pre><code>output = [[2, 3, 4, 5, 6, 7, 8, 9, 10], [12, 13, 14], [15, 16, 17]] </code></pre>
1
2016-08-28T00:36:49Z
39,187,334
<p>A straightforward solution:</p> <pre><code>array_ = [7,8,2,3,4,10,5,6,7,10,8,9,10,4,5,12,13,14,1,2,15,16,17] slice_ = [2, 4, 6, 8, 10, 12, 15, 17, 20, 22] intervals = [12, 17, 22] output = [] intermediate = [] for i in range(0, len(slice_), 2): intermediate.extend(array_[slice_[i]:slice_[i+1]+1]) if slice_[i+1] in intervals: output.append(intermediate) intermediate = [] print output # [[2, 3, 4, 5, 6, 7, 8, 9, 10], [12, 13, 14], [15, 16, 17]] </code></pre> <p>I have changed some variable names to avoid conflicts. On large data, you may convert <code>intervals</code> to a set.</p>
3
2016-08-28T01:50:53Z
[ "python", "list" ]
Python list slicing
39,187,018
<p>I'm not able to understand what to do here. Can someone help?</p> <p>I've a few lists:</p> <pre><code>array = [7,8,2,3,4,10,5,6,7,10,8,9,10,4,5,12,13,14,1,2,15,16,17] slice = [2, 4, 6, 8, 10, 12, 15, 17, 20, 22] intervals = [12, 17, 22] output = [] intermediate = [] </code></pre> <p><code>slice</code> is a list of indices I need to get from slicing <code>array</code>. <code>intervals</code> is a list of indices used to stop the slicing when <code>slice[i] == intervals[j]</code>, where i and j are looping variables. I need to form a list of lists from <code>array</code> based on <code>slice</code> and <code>intervals</code>, with the condition that when <code>slice[i] != intervals[j]</code> </p> <pre><code>intermediate = intermediate + array[slice[i]:slice[i+1]+1] </code></pre> <p>here in my case:</p> <p>when <code>slice[i]</code> and <code>intervals[j]</code> are equal (at the value 12), I need to form a list of lists from <code>array</code> </p> <pre><code>intermediate = array[slice[0]:slice[0+1]+1] + array[slice[2]:slice[2+1]+1] + array[slice[4]:slice[4+1]+1] </code></pre> <p>which is</p> <pre><code>intermediate = array[2:(4+1)] + array[6:(8+1)] + array[10:(12+1)] </code></pre> <p>and when <code>slice[i] == intervals[j]</code>, <code>output = output + intermediate</code> and the slicing is continued.</p> <pre><code>output = output + [intermediate] </code></pre> <p>which is</p> <pre><code>output = output + [array[2:(4+1)] + array[6:(8+1)] + array[10:(12+1)]] </code></pre> <p>now the next value in <code>intervals</code> is 17, so until we have 17 in <code>slice</code> we form another list from <code>array[slice[6]:slice[6+1]+1]</code> and add this to the output. This continues.</p> <p>The final output should be:</p> <pre><code>output = [array[slice[0]:slice[0+1]+1] + array[slice[2]:slice[2+1]+1] + array[slice[4]:slice[4+1]+1] , array[slice[6]:slice[6+1]+1], array[slice[8]:slice[8+1]+1]] </code></pre> <p>which is</p> <pre><code>output = [[2, 3, 4, 5, 6, 7, 8, 9, 10], [12, 13, 14], [15, 16, 17]] </code></pre>
1
2016-08-28T00:36:49Z
39,187,609
<p>Here is a recursive solution which goes through the index once, dynamically checks if the index is within the intervals, and appends the sliced results to a list accordingly:</p> <pre><code>def slicing(array, index, stops, sliced): # if the length of index is smaller than two, stop if len(index) &lt; 2: return # if the first element of the index is in the intervals, create a new list in the result # accordingly and move one index forward elif index[0] in stops: if len(index) &gt;= 3: sliced += [[]] slicing(array, index[1:], stops, sliced) # if the second element of the index is in the intervals, append the slice to the last # element of the list, create a new sublist and move two indexes forward accordingly elif index[1] in stops: sliced[-1] += array[index[0]:(index[1]+1)] if len(index) &gt;= 4: sliced += [[]] slicing(array, index[2:], stops, sliced) # append the new slice to the last element of the result list and move two indexes # forward if none of the above conditions are satisfied else: sliced[-1] += array[index[0]:(index[1]+1)] slicing(array, index[2:], stops, sliced) sliced = [[]] slicing(array, slice_, intervals, sliced) sliced # [[2, 3, 4, 5, 6, 7, 8, 9, 10], [12, 13, 14], [15, 16, 17]] </code></pre> <p><em>Data</em>:</p> <pre><code>array = [7,8,2,3,4,10,5,6,7,10,8,9,10,4,5,12,13,14,1,2,15,16,17] slice_ = [2, 4, 6, 8, 10, 12, 15, 17, 20, 22] intervals = [12, 17, 22] </code></pre>
2
2016-08-28T02:54:19Z
[ "python", "list" ]
What is the difference between table level operation and record-level operation?
39,187,032
<p>While going through the documentation of Django to muster detailed knowledge, I encountered the terms 'table level operation' and 'record level operation'. What is the difference between them? Could anyone please explain these 2 terms to me with examples? Do they have other names too?</p> <p>P.S. I am not asking for their difference just because I feel they are alike, but I feel it can be clearer to comprehend this way.</p>
0
2016-08-28T00:41:26Z
39,187,633
<p>I do not know specifically how Django people use the terms, but 'record-level operation' should mean an operation on 1 or more records while a 'table-level operation' should mean an operation on the table as a whole. I am not quite sure what an operation on all rows should be -- perhaps both, perhaps it depends on the result.</p> <p>In Python, the usual term for 'record-level' would be 'element-wise'. For Python builtins, bool operates on collections: <code>bool([0, 1, 0, 3])</code> = True. For numpy arrays, boolean conversion operates (at least usually) on elements: <code>np.array([0, 1, 0, 2]) != 0</code> = [False, True, False, True]. Also compare <code>[1,2,3]*2</code> = [1,2,3,1,2,3] (list) versus <code>np.array([1,2,3])*2</code> = [2,4,6] (array).</p> <p>I hope this helps. See if it makes sense in context.</p>
0
2016-08-28T02:58:44Z
[ "python", "django", "database", "django-models" ]
What is the difference between table level operation and record-level operation?
39,187,032
<p>While going through the documentation of django to muster the detailed knowledge, i endured the word 'table level operation' and 'record level operation'. What is the difference in between them? Could anyone please explain me this 2 word with example? Does they have other name too? </p> <p>P.S I am not asking their difference just because i feel they are alike but i feel it can be more clear to comprehend this way.</p>
0
2016-08-28T00:41:26Z
39,194,511
<p>In the context of Django, record level operations are those that act on a single record. An example is when you define custom methods in a model:</p> <pre><code>class Person(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) birth_date = models.DateField() def baby_boomer_status(self): "Returns the person's baby-boomer status." import datetime if self.birth_date &lt; datetime.date(1945, 8, 1): return "Pre-boomer" elif self.birth_date &lt; datetime.date(1965, 1, 1): return "Baby boomer" else: return "Post-boomer" </code></pre> <p>Table level operations are those that act on a set of records, and an example of these is when you define a Manager for a class:</p> <pre><code># First, define the Manager subclass. class DahlBookManager(models.Manager): def get_queryset(self): return super(DahlBookManager, self).get_queryset().filter(author='Roald Dahl') # Then hook it into the Book model explicitly. class Book(models.Model): title = models.CharField(max_length=100) author = models.CharField(max_length=50) objects = models.Manager() # The default manager. dahl_objects = DahlBookManager() # The Dahl-specific manager. </code></pre> <p>PS: I took these examples from the Django documentation.</p>
1
2016-08-28T18:31:24Z
[ "python", "django", "database", "django-models" ]
python multi inheritance with parent classes have different __init__()
39,187,040
<p>Here both <code>B</code> and <code>C</code> are derived from <code>A</code>, but with different <code>__init__()</code> parameters. My question is how to write the correct/elegant code here to initialize self.a,self.b,self.c1,self.c2 in the following example? Maybe another question is: is it good coding practice to do this variable setting in the <code>__init__()</code> function, or is it better to use a simpler <code>__init__()</code> function and a <code>set()</code> function for each class later, which seems not as simple as just doing it in <code>__init__()</code>?</p> <pre class="lang-py prettyprint-override"><code>class A(object): def __init__(self,a): self.a=a class B(A): def __init__(self,a,b): super(B,self).__init__(a) self.b=b class C(A): def __init__(self,a,c1,c2): super(C,self).__init__(a) self.c1=c1 self.c2=c2 class D(B,C): def __init__(self,a,b,c1,c2,d): #how to write the correct/elegant code here to initialize self.a,self.b,self.c1,self.c2? #can I use super(D,self) something? self.d=d self.dd=self.a+self.b+2*self.c1+5*self.c2+3*self.d d=D(1,2,3,4,5) </code></pre>
0
2016-08-28T00:44:02Z
39,188,237
<p>Multiple inheritance in Python requires that all the classes cooperate to make it work. In this case, you can make them cooperate by having the <code>__init__</code> method in each class accept arbitrary <code>**kwargs</code> and pass them on when they call <code>super().__init__</code>.</p> <p>For your example class hierarchy, you could do something like this:</p> <pre><code>class A(object): def __init__(self, a): # Don't accept **kwargs here! Any extra arguments are an error! self.a=a class B(A): def __init__(self, b, **kwargs): # only name the arg we care about (the rest go in **kwargs) super(B, self).__init__(**kwargs) # pass on the other keyword args self.b=b class C(A): def __init__(self, c1, c2, **kwargs): super(C,self).__init__(**kwargs) self.c1=c1 self.c2=c2 class D(B,C): def __init__(self, d, **kwargs): super(D,self).__init__(**kwargs) self.d=d self.dd=self.a+self.b+2*self.c1+5*self.c2+3*self.d </code></pre> <p>Note that if you wanted <code>D</code> to use the argument values directly (rather than using <code>self.a</code>, etc.), you could both take them as named arguments and still pass them on in the <code>super()</code> call:</p> <pre><code>class D(B,C): def __init__(self, a, b, c1, c2, d, **kwargs): # **kwargs in case there's further inheritance super(D,self).__init__(a=a, b=b, c1=c1, c2=c2, **kwargs) self.d = d self.dd = a + b + 2 * c1 + 5 * c2 + 3 * d # no `self` needed in this expression! </code></pre> <p>Accepting and passing on some args is important if some of the parent classes don't save the arguments (in their original form) as attributes, but you need those values. You can also use this style of code to pass on modified values for some of the arguments (e.g. with <code>super(D, self).__init__(a=a, b=b, c1=2*c1, c2=5*c2, **kwargs)</code>).</p> <p>This kind of collaborative multiple inheritance with varying arguments is almost impossible to make work using positional arguments. With keyword arguments though, the order of the names and values in a call doesn't matter, so it's easy to pass on named arguments and <code>**kwargs</code> at the same time without anything breaking. Using <code>*args</code> doesn't work as well (though recent versions of Python 3 are more flexible about how you can call functions with <code>*args</code>, such as allowing multiple unpackings in a single call: <code>f(*foo, bar, *baz)</code>).</p> <p>If you were using Python 3 (I'm assuming not, since you're explicitly passing arguments to <code>super</code>), you could make the arguments to your collaborative functions "keyword-only", which would prevent users from getting very mixed up and trying to call your methods with positional arguments. Just put a bare <code>*</code> in the argument list before the other named arguments: <code>def __init__(self, *, c1, c2, **kwargs):</code>.</p>
1
2016-08-28T05:15:20Z
[ "python", "multiple-inheritance" ]
NLTK CFG recursion depth error
39,187,042
<pre><code>import nltk from nltk.parse.generate import generate,demo_grammar from nltk import CFG grammar = CFG.fromstring(""" ROOT -&gt; S S -&gt; NP VP NP -&gt; NP PP NP -&gt; DT NN DT -&gt; 'The' NN -&gt; 'work' PP -&gt; IN NP IN -&gt; 'of' NP -&gt; DT NN DT -&gt; 'the' NN -&gt; 'painter' VP -&gt; VBZ ADJP VBZ -&gt; 'is' ADJP -&gt; JJ JJ -&gt; 'good' """) print(grammar) for sentence in generate(grammar, n=100): print(' '.join(sentence)) </code></pre> <p>Gives an error</p> <pre><code>RuntimeError: maximum recursion depth exceeded while calling a Python object </code></pre> <p>Tried changing covert function in functools.py, still the same issue.</p>
0
2016-08-28T00:44:11Z
39,191,332
<p>The function <code>generate</code>, as its docstring states, "Generates an iterator of all sentences from a CFG." Clearly it does so by choosing alternative expansions in the order they are listed in the grammar. So, the first time it sees an <code>NP</code>, it expands it with the rule <code>NP -&gt; NP PP</code>. It now has another <code>NP</code> to expand, which it also expands with the same rule... and so on ad infinitum, or rather until Python's limits are exceeded.</p> <p>To fix the problem with the grammar you provide, simply reorder your first two <code>NP</code> rules so that the recursive rule is not the first one encountered: </p> <pre><code>grammar = CFG.fromstring(""" ROOT -&gt; S S -&gt; NP VP NP -&gt; DT NN NP -&gt; NP PP DT -&gt; 'The' ... """) </code></pre> <p>Do it like this and the generator will produce lots of complete sentences for you to examine. Note that the corrected grammar is still recursive, hence infinite; if you generate a large enough number of sentences, you will eventually reach the same recursion depth limit.</p>
3
2016-08-28T12:34:44Z
[ "python", "nltk" ]
NLTK CFG recursion depth error
39,187,042
<pre><code>import nltk from nltk.parse.generate import generate,demo_grammar from nltk import CFG grammar = CFG.fromstring(""" ROOT -&gt; S S -&gt; NP VP NP -&gt; NP PP NP -&gt; DT NN DT -&gt; 'The' NN -&gt; 'work' PP -&gt; IN NP IN -&gt; 'of' NP -&gt; DT NN DT -&gt; 'the' NN -&gt; 'painter' VP -&gt; VBZ ADJP VBZ -&gt; 'is' ADJP -&gt; JJ JJ -&gt; 'good' """) print(grammar) for sentence in generate(grammar, n=100): print(' '.join(sentence)) </code></pre> <p>Gives an error</p> <pre><code>RuntimeError: maximum recursion depth exceeded while calling a Python object </code></pre> <p>Tried changing covert function in functools.py, still the same issue.</p>
0
2016-08-28T00:44:11Z
39,196,885
<p>I tried numbering the repeating occurrences of NP, NN, DT, etc. It seems to solve the problem due to the unique identification (I presume). What puzzles me is that it should have been like this in the first place, i.e. the tree production thrown out should have serialized the parts of speech.</p> <pre><code>import nltk from nltk.parse.generate import generate,demo_grammar from nltk import CFG grammar = CFG.fromstring(""" ROOT -&gt; S S -&gt; NP VP NP -&gt; NP1 PP NP1 -&gt; DT1 NN1 DT1 -&gt; 'The' NN1 -&gt; 'work' PP -&gt; IN NP2 IN -&gt; 'of' NP2 -&gt; DT2 NN2 DT2 -&gt; 'the' NN2 -&gt; 'painter' VP -&gt; VBZ ADJP VBZ -&gt; 'is' ADJP -&gt; JJ JJ -&gt; 'good' """) print(grammar) for sentence in generate(grammar, n=100): print(' '.join(sentence)) </code></pre>
-1
2016-08-28T23:57:48Z
[ "python", "nltk" ]
How to set environment variables for current Command Prompt session from Python
39,187,043
<p>I've been stumped for far too long. Hoping someone can assist.</p> <p>I am writing a Python CLI application that needs to set a temporary environment variable (PATH) for the current command prompt session (Windows). The application already sets the environment variables permanently for all future sessions, using the method seen <a href="https://gist.github.com/apetrone/5937002" rel="nofollow">here</a>.</p> <p>To attempt to set the temporary env vars for the current session, I attempted the following:</p> <ul> <li>using <code>os.environ</code> to set the environment variables</li> <li>using the answer <a href="http://stackoverflow.com/questions/3636055/how-to-modify-the-path-variable-definitely-through-the-command-line-in-windows">here</a> which makes use of a temporary file. Unfortunately this works when run directly from the Cmd Prompt, but not from Python.</li> <li>calling <code>SET</code> using <code>subprocess.call</code>, <code>subprocess.check_call</code></li> </ul> <p>The users of the tool will need this so they do not have to close the command prompt in order to leverage the environment variables I've set permanently.</p> <p>I've see other applications do this, but I'm not sure how to accomplish this with Python.</p>
1
2016-08-28T00:44:53Z
39,187,084
<p>Brute-force but straightforward is to emit your assignments as a batch script on stdout, and execute that script in the existing interpreter (akin to <code>source</code> in bash):</p> <pre><code>python myscript &gt;%TEMP%\myscript-vars.bat call %TEMP%\myscript-vars.bat del %TEMP%\myscript-vars.bat </code></pre>
2
2016-08-28T00:52:57Z
[ "python", "windows", "environment-variables" ]
Failure to reproduce GridSearch from sklearn in python
39,187,055
<p>I am trying to do something similar to GridSearch in sklearn: I want to get a list of three models, where all parameters are fixed except for C corresponding to the 1, 10, and 100 in each model. I have the following two functions. </p> <pre><code>def params_GridSearch(dic_params): keys, values = dic_params.keys(), dic_params.values() lst_params = [] for vs in itertools.product(*values): lst_params.append( {k:v for k,v in zip(keys,vs)} ) return lst_params def models_GridSearch(model, dic_params): models = [ model.set_params(**params) for params in params_GridSearch(dic_params) ] return models </code></pre> <p>I then build a model and specify a dictionary of parameters. </p> <pre><code>from sklearn.svm import SVC model = SVC() dic = {'C': [1,10,100]} </code></pre> <p>And generate the models using the functions I just defined. </p> <pre><code>models = models_GridSearch(model, dic) </code></pre> <p>However, the outcome is the same model (using the last parameter, i.e. 100) being repeated 3 times. It seems there is some aliasing going on. </p>
0
2016-08-28T00:47:20Z
39,188,223
<p><code>model</code> refers to the same object throughout each iteration of the list comprehension in <code>models_GridSearch</code>, so you're just assigning a <code>C</code> value 3 times to the same object. You can do a few different things to fix this: you could make a copy of the object using the <code>copy</code> module, or pass the class into the <code>models_GridSearch</code> function instead of an instance and instantiate an object on each iteration. You could also refactor your code in various ways to fix things. It all depends on your goals.</p> <p><b>Copy method:</b></p> <pre><code>import copy def models_GridSearch(model, dic_params): models = [ copy.deepcopy(model).set_params(**params) for params in params_GridSearch(dic_params) ] return models </code></pre> <p><b>Pass in class:</b></p> <pre><code>def models_GridSearch(Model, dic_params): models = [ Model().set_params(**params) for params in params_GridSearch(dic_params) ] return models from sklearn.svm import SVC Model = SVC dic = {'C': [1,10,100]} models = models_GridSearch(Model, dic) print models </code></pre>
2
2016-08-28T05:12:22Z
[ "python", "scikit-learn", "grid-search" ]
TDSVersion keeps defaulting to 7.1 pymssql
39,187,089
<p>IMPORTANT CHANGE:</p> <p>The following command also works and gets me the correct prompt. There must then be an issue with pymssql.</p> <pre><code>sudo TDSVER=7.1 tsql -H asdf.database.windows.net -p 1433 -U adf@ad -P adsf#adf -D adf </code></pre> <p>So I'm fighting with my pymssql and freetds drivers.</p> <p>Platform Versions Etc:</p> <p>Ubuntu 16.04</p> <p>FreeTDS v0.91 (used by working tsql)</p> <p>FreeTDS v0.95 (used by pymssql)</p> <p>pymssql v2.1.3</p> <p>Target Database: SQL Azure (latest)</p> <p>Instructions for install: <a href="https://azure.microsoft.com/en-us/documentation/articles/sql-database-develop-python-simple/" rel="nofollow">https://azure.microsoft.com/en-us/documentation/articles/sql-database-develop-python-simple/</a></p> <p>I've gone into every freetds.conf file I can find: /etc/freetds/freetds.conf ; /root/.freetds.conf</p> <p>I have set the global TDS version to 8.0. I have also overridden the version to 8.0 from the Python side in my pymssql.connect call</p> <pre><code>import os os.environ['TDSDUMP'] = 'stdout' import pymssql conn = pymssql.connect(server='adsf.database.windows.net', user='asdf@adfs', password='asdf#adfad', database='asdd', tds_version='8.0', ) </code></pre> <p>I run the diagnostic tools:</p> <p>tsql -C and get back 4.2 as the version</p> <p>I run the code dumping the logs to stdout, and notice the version is 7.1.</p> <p>net.c:202:Connecting to 191.238.6.43 port 1433 (TDS version 7.1)</p> <p>The following tsql command works for me...</p> <pre><code>sudo TDSVER=8.0 tsql -H asdf.database.windows.net -p 1433 -U adf@ad -P adsf#adf -D adf </code></pre> <p>Notice the version number. It's 8.0. 
I can validate I get data back and can do all I want with this.</p> <p>So there is an obvious issue here with how pymssql is hooking up with freetds.</p> <p>Here is all the output from the log dump in case somebody sees something I am failing to...</p> <pre><code>&gt;&gt;&gt; import os &gt;&gt;&gt; os.environ['TDSDUMP'] = 'stdout' &gt;&gt;&gt; import pymssql &gt;&gt;&gt; conn = pymssql.connect(server='xxx.database.windows.net', ... user='xxx@xxx', ... password='xxxx', ... database='xxx', ... tds_version='8.0', ... ) log.c:167:Starting log file for FreeTDS 0.95 on 2016-08-27 20:47:18 with debug flags 0x4fff. dblib.c:1160:tdsdbopen(0x15d1070, xxx.database.windows.net:1433, [microsoft]) dblib.c:1186:tdsdbopen: dbproc-&gt;dbopts = 0x15fa760 dblib.c:1193:tdsdbopen: tds_set_server(0x14eacf0, "xxx.database.windows.net:1433") dblib.c:258:dblib_get_tds_ctx(void) dblib.c:1210:tdsdbopen: About to call tds_read_config_info... config.c:168:Getting connection information for [xxx.database.windows.net:1433]. config.c:172:Attempting to read conf files. config.c:353:... $FREETDSCONF not set. Trying $FREETDS/etc. config.c:366:... $FREETDS not set. Trying $HOME. config.c:296:Found conf file '/root/.freetds.conf' (.freetds.conf). config.c:495:Looking for section global. config.c:554: Found section egserver50. config.c:554: Found section xxx.database.windows.net. config.c:568: Reached EOF config.c:495:Looking for section xxx.database.windows.net:1433. config.c:554: Found section egserver50. config.c:554: Found section xxx.database.windows.net. config.c:568: Reached EOF config.c:302:[xxx.database.windows.net:1433] not found. config.c:296:Found conf file '/etc/freetds/freetds.conf' (default). config.c:495:Looking for section global. config.c:554: Found section global. config.c:557:Got a match. config.c:580: text size = '64512' config.c:554: Found section egserver50. config.c:554: Found section egserver70. 
config.c:568: Reached EOF config.c:495:Looking for section xxx.database.windows.net:1433. config.c:554: Found section global. config.c:554: Found section egserver50. config.c:554: Found section egserver70. config.c:568: Reached EOF config.c:302:[xxx.database.windows.net:1433] not found. config.c:353:... $FREETDSCONF not set. Trying $FREETDS/etc. config.c:366:... $FREETDS not set. Trying $HOME. config.c:296:Found conf file '/root/.freetds.conf' (.freetds.conf). config.c:495:Looking for section global. config.c:554: Found section egserver50. config.c:554: Found section xxx.database.windows.net. config.c:568: Reached EOF config.c:495:Looking for section xxx.database.windows.net. config.c:554: Found section egserver50. config.c:554: Found section xxx.database.windows.net. config.c:557:Got a match. config.c:580: host = 'xxx.database.windows.net' config.c:617:Found host entry xxx.database.windows.net config.c:620:IP addr is 191.238.6.43. config.c:580: port = '1433' config.c:580: tds version = '8.0' config.c:886:Setting tds version to 8.0 (0x701). config.c:568: Reached EOF config.c:300:Success: [xxx.database.windows.net] defined in /root/.freetds.conf. config.c:765:Setting 'dump_file' to 'stdout' from $TDSDUMP. config.c:689:tds_config_login: client_charset is UTF-8. config.c:696:tds_config_login: database_name is xxx. config.c:765:Setting 'dump_file' to 'stdout' from $TDSDUMP. 
dblib.c:1237:tdsdbopen: Calling tds_connect_and_login(0x15fae30, 0x15fb4f0) iconv.c:328:tds_iconv_open(0x15fae30, UTF-8) iconv.c:187:local name for ISO-8859-1 is ISO-8859-1 iconv.c:187:local name for UTF-8 is UTF-8 iconv.c:187:local name for UCS-2LE is UCS-2LE iconv.c:187:local name for UCS-2BE is UCS-2BE iconv.c:346:setting up conversions for client charset "UTF-8" iconv.c:348:preparing iconv for "UTF-8" &lt;-&gt; "UCS-2LE" conversion iconv.c:395:preparing iconv for "ISO-8859-1" &lt;-&gt; "UCS-2LE" conversion iconv.c:400:tds_iconv_open: done net.c:202:Connecting to xxx.xxx.xxx.xxx port 1433 (TDS version 7.1) net.c:275:tds_open_socket: connect(2) returned "Operation now in progress" net.c:314:tds_open_socket() succeeded packet.c:740:Sending packet 0000 12 01 00 34 00 00 00 00-00 00 15 00 06 01 00 1b |...4.... ........| 0010 00 01 02 00 1c 00 0c 03-00 28 00 04 ff 08 00 01 |........ .(......| 0020 55 00 00 02 4d 53 53 51-4c 53 65 72 76 65 72 00 |U...MSSQ LServer.| 0030 08 4d 00 00 - |.M..| packet.c:639:Received packet 0000 04 01 00 25 00 00 01 00-00 00 15 00 06 01 00 1b |...%.... ........| 0010 00 01 02 00 1c 00 01 03-00 1d 00 00 ff 0c 00 03 |........ ........| 0020 2b 00 00 03 00 - |+....| login.c:1106:detected flag 3 login.c:472:login packet rejected query.c:3772:tds_disconnect() util.c:165:Changed query state from IDLE to DEAD util.c:322:tdserror(0x14aafa0, 0x15fae30, 20002, 0) dblib.c:7925:dbperror(0x15fa390, 20002, 0) dblib.c:7993:dbperror: Calling dblib_err_handler with msgno = 20002; msg-&gt;msgtext = "Adaptive Server connection failed (xxx.database.windows.net:1433)" dblib.c:8015:dbperror: dblib_err_handler for msgno = 20002; msg-&gt;msgtext = "Adaptive Server connection failed (xxx.database.windows.net:1433)" -- returns 2 (INT_CANCEL) util.c:352:tdserror: client library returned TDS_INT_CANCEL(2) util.c:375:tdserror: returning TDS_INT_CANCEL(2) dblib.c:1241:tdsdbopen: tds_connect_and_login failed for "xxx.database.windows.net:1433"! 
dblib.c:1463:dbclose(0x15fa390) dblib.c:243:dblib_del_connection(0x7f7f78fbd980, 0x15fae30) mem.c:648:tds_free_all_results() dblib.c:290:dblib_release_tds_ctx(1) dblib.c:5873:dbfreebuf(0x15fa390) dblib.c:743:dbloginfree(0x15d1070) Traceback (most recent call last): File "pymssql.pyx", line 635, in pymssql.connect (pymssql.c:10734) File "_mssql.pyx", line 1902, in _mssql.connect (_mssql.c:21821) File "_mssql.pyx", line 637, in _mssql.MSSQLConnection.__init__ (_mssql.c:6581) File "_mssql.pyx", line 1630, in _mssql.maybe_raise_MSSQLDatabaseException (_mssql.c:17524) _mssql.MSSQLDatabaseException: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (ea1eg7cgdn.database.windows.net:1433)\n') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "&lt;stdin&gt;", line 5, in &lt;module&gt; File "pymssql.pyx", line 641, in pymssql.connect (pymssql.c:10824) pymssql.OperationalError: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (xxxx.database.windows.net:1433)\n') </code></pre>
0
2016-08-28T00:53:43Z
39,190,882
<p>Unfortunately you don't tell us:</p> <ul> <li>Which TDS version you wish to actually use.</li> <li>Which pymssql version you are using and how you've installed it</li> </ul> <p>If you are trying to use TDS 8.0 and see pymssql+FreeTDS is using 7.1 then you </p> <p>a) Don't need to worry as they are the same thing. See <a href="http://www.freetds.org/userguide/choosingtdsprotocol.htm#AEN910" rel="nofollow">http://www.freetds.org/userguide/choosingtdsprotocol.htm#AEN910</a></p> <p>b) Actually use <code>"7.1"</code> as FreeTDS 1.0 deprecates usage of <code>"8.0"</code>. See <a href="https://github.com/FreeTDS/freetds/blob/1855d0f72aadd998ab133208fcd3f4d168074ab5/NEWS#L6-L7" rel="nofollow">https://github.com/FreeTDS/freetds/blob/1855d0f72aadd998ab133208fcd3f4d168074ab5/NEWS#L6-L7</a></p>
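<p>For completeness, a sketch of how the protocol version is typically pinned on the FreeTDS side; the section name and hostname below are placeholders, and per the NEWS link in this answer, newer FreeTDS releases deprecate the <code>8.0</code> spelling in favor of the equivalent <code>7.1</code>:</p>

```
[global]
        tds version = 7.1

[myazureserver]
        host = example.database.windows.net
        port = 1433
        tds version = 7.1
```

<p>With this in place, a client that reports "TDS version 7.1" in its dump log is already speaking the protocol that the <code>8.0</code> alias named.</p>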
2
2016-08-28T11:42:23Z
[ "python", "database", "sql-azure", "freetds", "pymssql" ]
TDSVersion keeps defaulting to 7.1 pymssql
39,187,089
<p>IMPORTANT CHANGE:</p> <p>The following command also works and gets me the correct prompt. There must then be an issue with pymssql.</p> <pre><code>sudo TDSVER=7.1 tsql -H asdf.database.windows.net -p 1433 -U adf@ad -P adsf#adf -D adf
</code></pre> <p>So I'm fighting with my pymssql and freetds drivers.</p> <p>Platform Versions Etc:</p> <p>Ubuntu 16.04</p> <p>FreeTDS v0.91 (used by working tsql)</p> <p>FreeTDS v0.95 (used by pymssql)</p> <p>pymssql v2.1.3</p> <p>Target Database: SQL Azure (latest)</p> <p>Instructions for install: <a href="https://azure.microsoft.com/en-us/documentation/articles/sql-database-develop-python-simple/" rel="nofollow">https://azure.microsoft.com/en-us/documentation/articles/sql-database-develop-python-simple/</a></p> <p>I've gone into every freetds.conf file I can find: /etc/freetds/freetds.conf ; /root/.freetds.conf</p> <p>I have set the global TDS version to 8.0, and I have also overridden the version to 8.0 from the Python side in my <code>pymssql.connect</code> call:</p> <pre><code>import os
os.environ['TDSDUMP'] = 'stdout'

import pymssql
conn = pymssql.connect(server='adsf.database.windows.net',
                       user='asdf@adfs',
                       password='asdf#adfad',
                       database='asdd',
                       tds_version='8.0',
                       )
</code></pre> <p>I run the diagnostic tool <code>tsql -C</code> and get back 4.2 as the version.</p> <p>I run the code dumping the logs to stdout, and notice the version is 7.1:</p> <p>net.c:202:Connecting to 191.238.6.43 port 1433 (TDS version 7.1)</p> <p>The following tsql command works for me...</p> <pre><code>sudo TDSVER=8.0 tsql -H asdf.database.windows.net -p 1433 -U adf@ad -P adsf#adf -D adf
</code></pre> <p>Notice the version number. It's 8.0. 
I can validate I get data back and can do all I want with this.</p> <p>So there is an obvious issue here with how pymssql is hooking up with freetds.</p> <p>Here is all the output from the log dump in case somebody sees something I am failing to...</p> <pre><code>&gt;&gt;&gt; import os &gt;&gt;&gt; os.environ['TDSDUMP'] = 'stdout' &gt;&gt;&gt; import pymssql &gt;&gt;&gt; conn = pymssql.connect(server='xxx.database.windows.net', ... user='xxx@xxx', ... password='xxxx', ... database='xxx', ... tds_version='8.0', ... ) log.c:167:Starting log file for FreeTDS 0.95 on 2016-08-27 20:47:18 with debug flags 0x4fff. dblib.c:1160:tdsdbopen(0x15d1070, xxx.database.windows.net:1433, [microsoft]) dblib.c:1186:tdsdbopen: dbproc-&gt;dbopts = 0x15fa760 dblib.c:1193:tdsdbopen: tds_set_server(0x14eacf0, "xxx.database.windows.net:1433") dblib.c:258:dblib_get_tds_ctx(void) dblib.c:1210:tdsdbopen: About to call tds_read_config_info... config.c:168:Getting connection information for [xxx.database.windows.net:1433]. config.c:172:Attempting to read conf files. config.c:353:... $FREETDSCONF not set. Trying $FREETDS/etc. config.c:366:... $FREETDS not set. Trying $HOME. config.c:296:Found conf file '/root/.freetds.conf' (.freetds.conf). config.c:495:Looking for section global. config.c:554: Found section egserver50. config.c:554: Found section xxx.database.windows.net. config.c:568: Reached EOF config.c:495:Looking for section xxx.database.windows.net:1433. config.c:554: Found section egserver50. config.c:554: Found section xxx.database.windows.net. config.c:568: Reached EOF config.c:302:[xxx.database.windows.net:1433] not found. config.c:296:Found conf file '/etc/freetds/freetds.conf' (default). config.c:495:Looking for section global. config.c:554: Found section global. config.c:557:Got a match. config.c:580: text size = '64512' config.c:554: Found section egserver50. config.c:554: Found section egserver70. 
config.c:568: Reached EOF config.c:495:Looking for section xxx.database.windows.net:1433. config.c:554: Found section global. config.c:554: Found section egserver50. config.c:554: Found section egserver70. config.c:568: Reached EOF config.c:302:[xxx.database.windows.net:1433] not found. config.c:353:... $FREETDSCONF not set. Trying $FREETDS/etc. config.c:366:... $FREETDS not set. Trying $HOME. config.c:296:Found conf file '/root/.freetds.conf' (.freetds.conf). config.c:495:Looking for section global. config.c:554: Found section egserver50. config.c:554: Found section xxx.database.windows.net. config.c:568: Reached EOF config.c:495:Looking for section xxx.database.windows.net. config.c:554: Found section egserver50. config.c:554: Found section xxx.database.windows.net. config.c:557:Got a match. config.c:580: host = 'xxx.database.windows.net' config.c:617:Found host entry xxx.database.windows.net config.c:620:IP addr is 191.238.6.43. config.c:580: port = '1433' config.c:580: tds version = '8.0' config.c:886:Setting tds version to 8.0 (0x701). config.c:568: Reached EOF config.c:300:Success: [xxx.database.windows.net] defined in /root/.freetds.conf. config.c:765:Setting 'dump_file' to 'stdout' from $TDSDUMP. config.c:689:tds_config_login: client_charset is UTF-8. config.c:696:tds_config_login: database_name is xxx. config.c:765:Setting 'dump_file' to 'stdout' from $TDSDUMP. 
dblib.c:1237:tdsdbopen: Calling tds_connect_and_login(0x15fae30, 0x15fb4f0) iconv.c:328:tds_iconv_open(0x15fae30, UTF-8) iconv.c:187:local name for ISO-8859-1 is ISO-8859-1 iconv.c:187:local name for UTF-8 is UTF-8 iconv.c:187:local name for UCS-2LE is UCS-2LE iconv.c:187:local name for UCS-2BE is UCS-2BE iconv.c:346:setting up conversions for client charset "UTF-8" iconv.c:348:preparing iconv for "UTF-8" &lt;-&gt; "UCS-2LE" conversion iconv.c:395:preparing iconv for "ISO-8859-1" &lt;-&gt; "UCS-2LE" conversion iconv.c:400:tds_iconv_open: done net.c:202:Connecting to xxx.xxx.xxx.xxx port 1433 (TDS version 7.1) net.c:275:tds_open_socket: connect(2) returned "Operation now in progress" net.c:314:tds_open_socket() succeeded packet.c:740:Sending packet 0000 12 01 00 34 00 00 00 00-00 00 15 00 06 01 00 1b |...4.... ........| 0010 00 01 02 00 1c 00 0c 03-00 28 00 04 ff 08 00 01 |........ .(......| 0020 55 00 00 02 4d 53 53 51-4c 53 65 72 76 65 72 00 |U...MSSQ LServer.| 0030 08 4d 00 00 - |.M..| packet.c:639:Received packet 0000 04 01 00 25 00 00 01 00-00 00 15 00 06 01 00 1b |...%.... ........| 0010 00 01 02 00 1c 00 01 03-00 1d 00 00 ff 0c 00 03 |........ ........| 0020 2b 00 00 03 00 - |+....| login.c:1106:detected flag 3 login.c:472:login packet rejected query.c:3772:tds_disconnect() util.c:165:Changed query state from IDLE to DEAD util.c:322:tdserror(0x14aafa0, 0x15fae30, 20002, 0) dblib.c:7925:dbperror(0x15fa390, 20002, 0) dblib.c:7993:dbperror: Calling dblib_err_handler with msgno = 20002; msg-&gt;msgtext = "Adaptive Server connection failed (xxx.database.windows.net:1433)" dblib.c:8015:dbperror: dblib_err_handler for msgno = 20002; msg-&gt;msgtext = "Adaptive Server connection failed (xxx.database.windows.net:1433)" -- returns 2 (INT_CANCEL) util.c:352:tdserror: client library returned TDS_INT_CANCEL(2) util.c:375:tdserror: returning TDS_INT_CANCEL(2) dblib.c:1241:tdsdbopen: tds_connect_and_login failed for "xxx.database.windows.net:1433"! 
dblib.c:1463:dbclose(0x15fa390) dblib.c:243:dblib_del_connection(0x7f7f78fbd980, 0x15fae30) mem.c:648:tds_free_all_results() dblib.c:290:dblib_release_tds_ctx(1) dblib.c:5873:dbfreebuf(0x15fa390) dblib.c:743:dbloginfree(0x15d1070) Traceback (most recent call last): File "pymssql.pyx", line 635, in pymssql.connect (pymssql.c:10734) File "_mssql.pyx", line 1902, in _mssql.connect (_mssql.c:21821) File "_mssql.pyx", line 637, in _mssql.MSSQLConnection.__init__ (_mssql.c:6581) File "_mssql.pyx", line 1630, in _mssql.maybe_raise_MSSQLDatabaseException (_mssql.c:17524) _mssql.MSSQLDatabaseException: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (ea1eg7cgdn.database.windows.net:1433)\n') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "&lt;stdin&gt;", line 5, in &lt;module&gt; File "pymssql.pyx", line 641, in pymssql.connect (pymssql.c:10824) pymssql.OperationalError: (20002, b'DB-Lib error message 20002, severity 9:\nAdaptive Server connection failed (xxxx.database.windows.net:1433)\n') </code></pre>
0
2016-08-28T00:53:43Z
39,193,758
<p>It appears that freetds v0.95 is not compatible with Ubuntu 16.04, and neither is the latest version of pymssql, since it ships with v0.95 and there isn't much you can do to swap out the version it uses.</p> <p>I was able to get this to work by doing the following:</p> <pre><code>sudo apt-get install freetds-dev freetds-bin
sudo pip3 install pymssql==2.1.1
</code></pre> <p>Also note that it will not work with the Anaconda interpreter. I have only tested the Anaconda interpreter and the standard CPython interpreter.</p>
0
2016-08-28T17:05:45Z
[ "python", "database", "sql-azure", "freetds", "pymssql" ]
Efficient ways to iterate over several numpy arrays and process current and previous elements?
39,187,149
<p>I've read a lot about different techniques for iterating over numpy arrays recently and it seems that consensus is not to iterate at all (for instance, see <a href="http://stackoverflow.com/questions/38207528/what-are-the-efficient-ways-to-loop-over-vectors-along-a-specified-axis-in-numpy">a comment here</a>). There are several similar questions on SO, but my case is a bit different as I have to combine "iterating" (or not iterating) and accessing previous values.</p> <p>Let's say there are N (N is small, usually 4, might be up to 7) 1-D numpy arrays of <code>float128</code> in a list <code>X</code>, all arrays are of the same size. To give you a little insight, these are data from PDE integration, each array stands for one function, and I would like to apply a Poincare section. Unfortunately, the algorithm should be both memory- and time-efficient since these arrays are sometimes ~1Gb each, and there are only 4Gb of RAM on board (I've just learnt about memmap'ing of numpy arrays and now consider using them instead of regular ones).</p> <p>One of these arrays is used for "filtering" the others, so I start with <code>secaxis = X.pop(idx)</code>. Now I have to locate pairs of indices where <code>(secaxis[i-1] &gt; 0 and secaxis[i] &lt; 0) or (secaxis[i-1] &lt; 0 and secaxis[i] &gt; 0)</code> and then apply simple algebraic transformations to remaining arrays, <code>X</code> (and save results). Worth mentioning, data shouldn't be wasted during this operation.</p> <p>There are multiple ways for doing that, but none of them seem efficient (and elegant enough) to me. 
One is a C-like approach, where you just iterate in a for-loop:</p> <pre><code>import array  # better than lists

res = [ array.array('d') for _ in X ]
for i in xrange(1, secaxis.size):
    if condition:  # see above
        co = -secaxis[i-1]/secaxis[i]
        for j in xrange(N):
            res[j].append( (X[j][i-1] + co*X[j][i])/(1+co) )
</code></pre> <p>This is clearly very inefficient and, besides, not a Pythonic way.</p> <p>Another way is to use numpy.nditer, but I haven't figured out yet how one accesses the previous value, though it allows iterating over several arrays at once:</p> <pre><code># without secaxis = X.pop(idx)
it = numpy.nditer(X)
for vec in it:
    # vec[idx] is current value, how do you get the previous (or next) one?
</code></pre> <p>A third possibility is to first find the sought indices with efficient numpy slices, and then use them for bulk multiplication/addition. I prefer this one for now:</p> <pre><code>res = []
inds, = numpy.where((secaxis[:-1] &lt; 0) * (secaxis[1:] &gt; 0) +
                    (secaxis[:-1] &gt; 0) * (secaxis[1:] &lt; 0))
coefs = -secaxis[inds] / secaxis[inds+1]  # array of coefficients
for f in X:  # loop is done only N-1 times, that is, 3 to 6
    res.append( (f[inds] + coefs*f[inds+1]) / (1+coefs) )
</code></pre> <p>But this is seemingly done in 7 + 2*(N - 1) passes; moreover, I'm not sure about the <code>secaxis[inds]</code> type of addressing (it is not slicing, and generally it has to find all elements by indices just like in the first method, doesn't it?).</p> <p>Finally, I've also tried using itertools, and it resulted in monstrous and obscure structures, which might stem from the fact that I'm not very familiar with functional programming:</p> <pre><code>def filt(x):
    return (x[0] &lt; 0 and x[1] &gt; 0) or (x[0] &gt; 0 and x[1] &lt; 0)

import array
from itertools import izip, tee, ifilter

res = [ array.array('d') for _ in X ]
iters = [iter(x) for x in X]            # N-1 iterators in a list
prev, curr = tee(izip(*iters))          # 2 similar iterators, each of which
                                        # consists of N-1 iterators
next(curr, None)                        # one of them is now for current value
seciter = tee(iter(secaxis))
next(seciter[1], None)
for x in ifilter(filt, izip(seciter[0], seciter[1], prev, curr)):
    co = - x[0]/x[1]
    for r, p, c in zip(res, x[2], x[3]):
        r.append( (p+co*c) / (1+co) )
</code></pre> <p>Not only does this look very ugly, it also takes an awful lot of time to complete.</p> <p>So, I have the following questions:</p> <ol> <li>Of all these methods, is the third one indeed the best? If so, what can be done to improve the last one?</li> <li>Are there any other, better ones yet? </li> <li>Out of sheer curiosity, is there a way to solve the problem using nditer?</li> <li>Finally, will I be better off using memmap versions of numpy arrays, or will it probably slow things down a lot? Maybe I should only load the <code>secaxis</code> array into RAM, keep the others on disk, and use the third method?</li> <li>(bonus question) The list of equal-length 1-D numpy arrays comes from loading N <code>.npy</code> files whose sizes aren't known beforehand (but N is). Would it be more efficient to read one array, then allocate memory for one 2-D numpy array (slight memory overhead here) and read the remaining ones into that 2-D array?</li> </ol>
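<p>A small self-contained check of the third approach (Python 3, made-up numbers): the index-array version really does produce the linearly interpolated values of <code>f</code> at the points where <code>secaxis</code> changes sign.</p>

```python
import numpy as np

secaxis = np.array([1.0, -1.0, -0.5, 2.0])
f = np.array([0.0, 2.0, 4.0, 10.0])   # one of the remaining arrays

# indices i such that secaxis strictly changes sign between i and i+1
inds, = np.where((secaxis[:-1] < 0) * (secaxis[1:] > 0) +
                 (secaxis[:-1] > 0) * (secaxis[1:] < 0))
coefs = -secaxis[inds] / secaxis[inds + 1]
crossings = (f[inds] + coefs * f[inds + 1]) / (1 + coefs)
# sign changes between indices 0-1 and 2-3; the interpolated
# f-values at the zero crossings are 1.0 and 5.2
print(crossings)
```

<p>Checking by hand: between indices 2 and 3, <code>secaxis</code> goes -0.5 → 2, so the zero sits 20% of the way along, and <code>f</code> interpolates to 4 + 0.2·(10−4) = 5.2, which matches.</p>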
2
2016-08-28T01:06:35Z
39,187,490
<p>The <code>numpy.where()</code> version is fast enough; you can speed it up a little with <code>method3()</code>. If the <code>&gt;</code> condition can change to <code>&gt;=</code>, you can also use <code>method4()</code>.</p> <pre><code>import numpy as np

a = np.random.randn(100000)

def method1(a):
    idx = []
    for i in range(1, len(a)):
        if (a[i-1] &gt; 0 and a[i] &lt; 0) or (a[i-1] &lt; 0 and a[i] &gt; 0):
            idx.append(i)
    return idx

def method2(a):
    inds, = np.where((a[:-1] &lt; 0) * (a[1:] &gt; 0) + (a[:-1] &gt; 0) * (a[1:] &lt; 0))
    return inds + 1

def method3(a):
    m = a &lt; 0
    p = a &gt; 0
    return np.where((m[:-1] &amp; p[1:]) | (p[:-1] &amp; m[1:]))[0] + 1

def method4(a):
    return np.where(np.diff(a &gt;= 0))[0] + 1

assert np.allclose(method1(a), method2(a))
assert np.allclose(method2(a), method3(a))
assert np.allclose(method3(a), method4(a))

%timeit method1(a)
%timeit method2(a)
%timeit method3(a)
%timeit method4(a)
</code></pre> <p>the <code>%timeit</code> result:</p> <pre><code>1 loop, best of 3: 294 ms per loop
1000 loops, best of 3: 1.52 ms per loop
1000 loops, best of 3: 1.38 ms per loop
1000 loops, best of 3: 1.39 ms per loop
</code></pre>
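<p>One more equivalent spelling (a sketch, not from the answer above): the strict two-sided condition is exactly "the product of the signs is negative", which, unlike the <code>&gt;=</code> variant, stays faithful to the original condition when zeros appear in the array:</p>

```python
import numpy as np

def method5(a):
    s = np.sign(a)
    # the product is negative only for a strict sign change;
    # pairs involving a zero never qualify
    return np.where(s[:-1] * s[1:] < 0)[0] + 1

a = np.array([1.0, -2.0, 0.0, 3.0, 4.0, -5.0])
print(method5(a))   # indices 1 and 5; the change through the zero is skipped
```

<p>This matches <code>method1</code> exactly on arrays containing zeros, whereas <code>method4</code> would also flag the zero itself.</p>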
3
2016-08-28T02:24:16Z
[ "python", "arrays", "performance", "numpy", "itertools" ]
Efficient ways to iterate over several numpy arrays and process current and previous elements?
39,187,149
<p>I've read a lot about different techniques for iterating over numpy arrays recently and it seems that consensus is not to iterate at all (for instance, see <a href="http://stackoverflow.com/questions/38207528/what-are-the-efficient-ways-to-loop-over-vectors-along-a-specified-axis-in-numpy">a comment here</a>). There are several similar questions on SO, but my case is a bit different as I have to combine "iterating" (or not iterating) and accessing previous values.</p> <p>Let's say there are N (N is small, usually 4, might be up to 7) 1-D numpy arrays of <code>float128</code> in a list <code>X</code>, all arrays are of the same size. To give you a little insight, these are data from PDE integration, each array stands for one function, and I would like to apply a Poincare section. Unfortunately, the algorithm should be both memory- and time-efficient since these arrays are sometimes ~1Gb each, and there are only 4Gb of RAM on board (I've just learnt about memmap'ing of numpy arrays and now consider using them instead of regular ones).</p> <p>One of these arrays is used for "filtering" the others, so I start with <code>secaxis = X.pop(idx)</code>. Now I have to locate pairs of indices where <code>(secaxis[i-1] &gt; 0 and secaxis[i] &lt; 0) or (secaxis[i-1] &lt; 0 and secaxis[i] &gt; 0)</code> and then apply simple algebraic transformations to remaining arrays, <code>X</code> (and save results). Worth mentioning, data shouldn't be wasted during this operation.</p> <p>There are multiple ways for doing that, but none of them seem efficient (and elegant enough) to me. 
One is a C-like approach, where you just iterate in a for-loop:</p> <pre><code>import array  # better than lists

res = [ array.array('d') for _ in X ]
for i in xrange(1, secaxis.size):
    if condition:  # see above
        co = -secaxis[i-1]/secaxis[i]
        for j in xrange(N):
            res[j].append( (X[j][i-1] + co*X[j][i])/(1+co) )
</code></pre> <p>This is clearly very inefficient and, besides, not a Pythonic way.</p> <p>Another way is to use numpy.nditer, but I haven't figured out yet how one accesses the previous value, though it allows iterating over several arrays at once:</p> <pre><code># without secaxis = X.pop(idx)
it = numpy.nditer(X)
for vec in it:
    # vec[idx] is current value, how do you get the previous (or next) one?
</code></pre> <p>A third possibility is to first find the sought indices with efficient numpy slices, and then use them for bulk multiplication/addition. I prefer this one for now:</p> <pre><code>res = []
inds, = numpy.where((secaxis[:-1] &lt; 0) * (secaxis[1:] &gt; 0) +
                    (secaxis[:-1] &gt; 0) * (secaxis[1:] &lt; 0))
coefs = -secaxis[inds] / secaxis[inds+1]  # array of coefficients
for f in X:  # loop is done only N-1 times, that is, 3 to 6
    res.append( (f[inds] + coefs*f[inds+1]) / (1+coefs) )
</code></pre> <p>But this is seemingly done in 7 + 2*(N - 1) passes; moreover, I'm not sure about the <code>secaxis[inds]</code> type of addressing (it is not slicing, and generally it has to find all elements by indices just like in the first method, doesn't it?).</p> <p>Finally, I've also tried using itertools, and it resulted in monstrous and obscure structures, which might stem from the fact that I'm not very familiar with functional programming:</p> <pre><code>def filt(x):
    return (x[0] &lt; 0 and x[1] &gt; 0) or (x[0] &gt; 0 and x[1] &lt; 0)

import array
from itertools import izip, tee, ifilter

res = [ array.array('d') for _ in X ]
iters = [iter(x) for x in X]            # N-1 iterators in a list
prev, curr = tee(izip(*iters))          # 2 similar iterators, each of which
                                        # consists of N-1 iterators
next(curr, None)                        # one of them is now for current value
seciter = tee(iter(secaxis))
next(seciter[1], None)
for x in ifilter(filt, izip(seciter[0], seciter[1], prev, curr)):
    co = - x[0]/x[1]
    for r, p, c in zip(res, x[2], x[3]):
        r.append( (p+co*c) / (1+co) )
</code></pre> <p>Not only does this look very ugly, it also takes an awful lot of time to complete.</p> <p>So, I have the following questions:</p> <ol> <li>Of all these methods, is the third one indeed the best? If so, what can be done to improve the last one?</li> <li>Are there any other, better ones yet? </li> <li>Out of sheer curiosity, is there a way to solve the problem using nditer?</li> <li>Finally, will I be better off using memmap versions of numpy arrays, or will it probably slow things down a lot? Maybe I should only load the <code>secaxis</code> array into RAM, keep the others on disk, and use the third method?</li> <li>(bonus question) The list of equal-length 1-D numpy arrays comes from loading N <code>.npy</code> files whose sizes aren't known beforehand (but N is). Would it be more efficient to read one array, then allocate memory for one 2-D numpy array (slight memory overhead here) and read the remaining ones into that 2-D array?</li> </ol>
2
2016-08-28T01:06:35Z
39,187,539
<p>I'll need to read your post in more detail, but will start with some general observations (from previous iteration questions).</p> <p>There isn't an efficient way of iterating over arrays in Python, though there are things that slow things down. I like to distinguish between the iteration mechanism (<code>nditer</code>, <code>for x in A:</code>) and the action (<code>alist.append(...)</code>, <code>x[i+1] += 1</code>). The big time consumer is usually the action, done many times, not the iteration mechanism itself.</p> <p>Letting <code>numpy</code> do the iteration in compiled code is the fastest.</p> <pre><code>xdiff = x[1:] - x[:-1]
</code></pre> <p>is much faster than</p> <pre><code>xdiff = np.zeros(x.shape[0]-1)
for i in range(x.shape[0]-1):
    xdiff[i] = x[i+1] - x[i]
</code></pre> <p>The <code>np.nditer</code> isn't any faster.</p> <p><code>nditer</code> is recommended as a general iteration tool in compiled code. But its main value lies in handling broadcasting and coordinating the iteration over several arrays (input/output). And you need to use buffering and <code>c</code>-like code to get the best speed from <code>nditer</code> (I'll look up a recent SO question).</p> <p><a href="http://stackoverflow.com/a/39058906/901925">http://stackoverflow.com/a/39058906/901925</a></p> <p>Don't use <code>nditer</code> without studying the relevant <code>iteration</code> tutorial page (the one that ends with a <code>cython</code> example).</p> <p>=========================</p> <p>Just judging from experience, this approach will be fastest. Yes, it's going to iterate over <code>secaxis</code> a number of times, but those are all done in compiled code, and will be much faster than any iteration in Python. 
And the <code>for f in X:</code> iteration is done just a few times.</p> <pre><code>res = []
inds, = numpy.where((secaxis[:-1] &lt; 0) * (secaxis[1:] &gt; 0) +
                    (secaxis[:-1] &gt; 0) * (secaxis[1:] &lt; 0))
coefs = -secaxis[inds] / secaxis[inds+1]  # array of coefficients
for f in X:
    res.append( (f[inds] + coefs*f[inds+1]) / (1+coefs) )
</code></pre> <p><code>@HYRY</code> has explored alternatives for making the <code>where</code> step faster. But as you can see, the differences aren't that big. Other possible tweaks:</p> <pre><code>inds1 = inds+1
coefs = -secaxis[inds] / secaxis[inds1]
coefs1 = coefs+1
for f in X:
    res.append(( f[inds] + coefs*f[inds1]) / coefs1)
</code></pre> <p>If <code>X</code> was an array, <code>res</code> could be an array as well.</p> <pre><code>res = (X[:,inds] + coefs*X[:,inds1])/coefs1
</code></pre> <p>But for small <code>N</code> I suspect the list <code>res</code> is just as good. Don't need to make the arrays any bigger than necessary. The tweaks are minor, just trying to avoid recalculating things.</p> <p>=================</p> <p>This use of <code>np.where</code> is just <code>np.nonzero</code>. That actually makes two passes of the array: once with <code>np.count_nonzero</code> to determine how many values it will return and create the return structure (a list of arrays of now-known length), and a second loop to fill in those indices. So multiple iterations are fine if the action is kept simple.</p>
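<p>A tiny sketch of the "X as a 2-D array" variant (made-up numbers): <code>coefs</code> has one entry per crossing and broadcasts across the rows, so all N functions are interpolated in one vectorized expression.</p>

```python
import numpy as np

secaxis = np.array([1.0, -1.0, -0.5, 2.0])
X = np.array([[0.0, 2.0, 4.0, 10.0],
              [1.0, 3.0, 5.0, 11.0]])   # N rows, one per function

inds, = np.where((secaxis[:-1] < 0) * (secaxis[1:] > 0) +
                 (secaxis[:-1] > 0) * (secaxis[1:] < 0))
inds1 = inds + 1
coefs = -secaxis[inds] / secaxis[inds1]
coefs1 = coefs + 1
# shape (N, n_crossings): row k holds the crossing values of function k
res = (X[:, inds] + coefs * X[:, inds1]) / coefs1
print(res)   # first row: 1.0 and 5.2; second row: 2.0 and 6.2
```

<p>The broadcasting works because <code>X[:, inds]</code> has shape (N, k) and <code>coefs</code> has shape (k,), so <code>coefs</code> is applied along each row.</p>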
3
2016-08-28T02:35:54Z
[ "python", "arrays", "performance", "numpy", "itertools" ]
Alphabetical pairing of CSV elements while keeping first element constant in python
39,187,192
<p>How is it possible to take a CSV file with rows of varying lengths (up to 12 in practice) as input and then output a new CSV file where new rows consist of the 0th element of each row + each unique pair of elements (>0th) in alphabetical order?</p> <p>The original row elements are not in alphabetical order.</p> <p>Please see image for example input and output. <a href="http://i.stack.imgur.com/tbQgo.jpg" rel="nofollow">enter image description here</a></p>
0
2016-08-28T01:16:31Z
39,187,305
<p>You can use <a href="https://docs.python.org/3/library/csv.html#module-csv" rel="nofollow"><code>csv</code></a> for reading &amp; writing and <a href="https://docs.python.org/3.5/library/itertools.html#itertools.combinations" rel="nofollow"><code>itertools.combinations</code></a> for generating the pairs (note the <code>newline=''</code>, which the <code>csv</code> module recommends when opening files):</p> <pre><code>import csv
from itertools import combinations

with open('input.csv', newline='') as in_f, \
     open('output.csv', 'w', newline='') as out_f:
    reader = csv.reader(in_f)
    writer = csv.writer(out_f)
    for row in reader:
        for combination in combinations(sorted(row[1:]), 2):
            writer.writerow(tuple(row[:1]) + combination)
</code></pre> <p>Output:</p> <pre><code>0,A,B
1,A,B
1,A,C
1,B,C
2,A,B
2,A,C
2,A,D
2,B,C
2,B,D
2,C,D
</code></pre>
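<p>For a single row, this is what the inner loop produces (made-up row; because the tail is sorted first, the original element order doesn't matter, and <code>combinations</code> emits pairs in sorted order):</p>

```python
from itertools import combinations

row = ['2', 'C', 'A', 'D', 'B']   # 0th element kept, tail paired up
pairs = [tuple(row[:1]) + c for c in combinations(sorted(row[1:]), 2)]
for p in pairs:
    print(','.join(p))
# 2,A,B  2,A,C  2,A,D  2,B,C  2,B,D  2,C,D  (one pair per line)
```

<p>A row with n tail elements yields n·(n−1)/2 output rows, so a 12-element row produces at most 55 pairs.</p>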
1
2016-08-28T01:43:12Z
[ "python", "python-3.x", "sorting", "csv", "itertools" ]
Brute-force search algorithm in arithmetic operations
39,187,275
<p>Since I'm a Python beginner, I'm trying to study some code from some websites. I found in GitHub an algorithm which does a <a href="https://en.wikipedia.org/wiki/Brute-force_search" rel="nofollow">Bruteforce search</a> for arithmetic expressions. The code is:</p> <pre><code>#!python
import operator
import itertools
from fractions import Fraction

operations = dict()
operations['+'] = operator.add
operations['-'] = operator.sub
operations['/'] = operator.truediv
operations['*'] = operator.mul

def solve(target, numbers):
    """List ways to make target from numbers."""
    numbers = [Fraction(x) for x in numbers]
    return solve_inner(target, numbers)

def solve_inner(target, numbers):
    if len(numbers) == 1:
        if numbers[0] == target:
            yield str(target)
        return

    # combine a pair of numbers with an operation, then recurse
    for a,b in itertools.permutations(numbers, 2):
        for symbol, operation in operations.items():
            try:
                product = operation(a,b)
            except ZeroDivisionError:
                continue
            subnumbers = list(numbers)
            subnumbers.remove(a)
            subnumbers.remove(b)
            subnumbers.append(product)
            for solution in solve_inner(target, subnumbers):
                # expand product (but only once)
                yield solution.replace(str(product), "({0}{1}{2})".format(a, symbol, b), 1)

if __name__ == "__main__":
    numbers = [1, 5, 6, 7]
    target = 5
    solutions = solve(target, numbers)
    for solution in solutions:
        print("{0}={1}".format(target, solution))
</code></pre> <p>It simply tries every arithmetic expression that can be built from my <code>numbers</code>, then prints the ones whose result equals the <code>target</code> (the result I have).</p> <p>I'm wondering, how can I make it also print the expressions the script tried whose result is not the <code>target</code> I've set?</p> <p>edit:</p> <p>This is the code I tried:</p> <pre><code>#!python
import operator
import itertools
from fractions import Fraction

operations = dict()
operations['+'] = operator.add
operations['-'] = operator.sub
operations['/'] = operator.truediv
operations['*'] = operator.mul

def solve(target, numbers):
    """List ways to make target from numbers."""
    numbers = [Fraction(x) for x in numbers]
    return solve_inner(target, numbers)

def solve_inner(target, numbers):
    if len(numbers) == 1:
        num = numbers[0]
        yield str(num), num == target
        return

    # combine a pair of numbers with an operation, then recurse
    for a,b in itertools.permutations(numbers, 2):
        for symbol, operation in operations.items():
            try:
                product = operation(a,b)
            except ZeroDivisionError:
                continue
            subnumbers = list(numbers)
            subnumbers.remove(a)
            subnumbers.remove(b)
            subnumbers.append(product)
            for solution, truth in solve_inner(target, subnumbers):
                yield solution.replace(str(product), "{0}=({1}{2}{3})".format(product, a, symbol, b), 1), truth

if __name__ == "__main__":
    numbers = [1, 5, 6, 7]
    target = 5
    solutions = solve(target, numbers)
    for solution, truth in solutions:
        print("{0}? {1}".format(solution, 'True' if truth else ''))
</code></pre> <p>I get the actual product as the result, but I also get the results of the intermediate operations inside the expressions:</p> <pre><code>42=(7*6)/5=(42/5)=(1*42/5)
</code></pre> <p>whereas I'm actually trying to get only <code>42</code> at the start of the string.</p>
2
2016-08-28T01:36:27Z
39,187,572
<p>The recursion terminates by yielding str(target) if numbers[0] equals the target, and nothing otherwise. If something is yielded, the string expression is built up on successive yields. To get all expressions, something must always be yielded. I chose to also yield whether the target was reached. Alternatively, the expression could be evaluated just before printing. </p> <pre><code>#!python import operator import itertools from fractions import Fraction operations = dict() operations['+'] = operator.add operations['-'] = operator.sub operations['/'] = operator.truediv operations['*'] = operator.mul def solve(target, numbers): """List ways to make target from numbers.""" numbers = [Fraction(x) for x in numbers] return solve_inner(target, numbers) def solve_inner(target, numbers): if len(numbers) == 1: num = numbers[0] yield str(num), num == target return # combine a pair of numbers with an operation, then recurse for a,b in itertools.permutations(numbers, 2): for symbol, operation in operations.items(): try: product = operation(a,b) except ZeroDivisionError: continue subnumbers = list(numbers) subnumbers.remove(a) subnumbers.remove(b) subnumbers.append(product) for solution, truth in solve_inner(target, subnumbers): # expand product (but only once) yield solution.replace(str(product), "({0}{1}{2})".format(a, symbol, b), 1), truth if __name__ == "__main__": numbers = [1, 5, 6, 7] target = 5 solutions = solve(target, numbers) for solution, truth in solutions: print("{0}={1}? {2}".format(target, solution, 'True' if truth else '')) </code></pre> <p>There is a glitch in the original. The product is appended to the end, but the first number matching the product from the front is replaced. I believe that the result could be omission of expressions, in which case the algorithm is not complete. Since replace cannot be done starting at the end, the product should be placed at the front (<code>subnumbers.insert(0, product)</code>) so that it is the product that gets replaced. 
I will let you experiment with what difference this makes. But I believe the code would have been slightly easier to understand if written correctly.</p>
2
2016-08-28T02:44:19Z
[ "python", "algorithm", "math" ]
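As a side note to the answer above: the fragile `str.replace` bookkeeping can be avoided entirely by carrying each partial expression string alongside its value through the recursion. A minimal sketch of that idea (not the original author's code; the function names are my own):

```python
import itertools
import operator
from fractions import Fraction

OPS = {'+': operator.add, '-': operator.sub,
       '/': operator.truediv, '*': operator.mul}

def all_expressions(numbers):
    """Yield (expression_string, value) for every way to combine numbers.

    Each working item is an (expr, value) pair, so the expression is
    built alongside its value and no string replacement is needed.
    """
    items = [(str(n), Fraction(n)) for n in numbers]
    yield from _expand(items)

def _expand(items):
    if len(items) == 1:
        yield items[0]
        return
    # combine a pair of items with an operation, then recurse
    for (ea, va), (eb, vb) in itertools.permutations(items, 2):
        for sym, op in OPS.items():
            try:
                value = op(va, vb)
            except ZeroDivisionError:
                continue
            rest = list(items)
            rest.remove((ea, va))
            rest.remove((eb, vb))
            rest.append(("({0}{1}{2})".format(ea, sym, eb), value))
            yield from _expand(rest)
```

Printing `"{0}={1}".format(value, expr)` then puts exactly one value at the front of each line, which is what the edited question was after.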
How to add a new line after each loop?
39,187,327
<p>Currently getting this output</p> <pre><code>[1, 1, 3, '\n'] [1, 1, 3, '\n', 7, 7, 7, '\n'] </code></pre> <p>looking for this output</p> <pre><code>[1, 1, 3] [7, 7, 7] </code></pre> <p>Code:</p> <pre><code> import random my_list=[] n = 3 m = 1 while m &lt; 10: for i in range(n): # repeats the following line(s) of code n times my_list.append(random.randrange(0,9)) my_list.append("\n") print(my_list) m = m+1 </code></pre>
1
2016-08-28T01:49:45Z
39,187,371
<pre><code>import random n = 3 m = 0 my_list = [] while m &lt; 10: my_list.append([]) for i in range(n): my_list[m].append(random.randrange(0, 9)) print(my_list[m]) m = m + 1 </code></pre> <p>This code creates sublists inside <code>my_list</code>, and prints out each sublist every iteration. So your output might be something like this:</p> <pre><code>[0, 4, 8] [3, 1, 5] ... [9, 2, 5] </code></pre> <p>but <code>my_list</code> would store all the sublists, like this: <code>[[0, 4, 8], [3, 1, 5], ..., [9, 2, 5]]</code></p>
1
2016-08-28T01:58:47Z
[ "python", "python-3.x", "while-loop" ]
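The nested-loop approach in the answer above can also be written as a single nested list comprehension; a sketch (the fixed seed is only there so repeated runs print the same rows):

```python
import random

random.seed(0)  # reproducible output for demonstration; drop this in real use

n, rows = 3, 10
my_list = [[random.randrange(0, 9) for _ in range(n)] for _ in range(rows)]

for row in my_list:
    print(row)
```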
How to add a new line after each loop?
39,187,327
<p>Currently getting this output</p> <pre><code>[1, 1, 3, '\n'] [1, 1, 3, '\n', 7, 7, 7, '\n'] </code></pre> <p>looking for this output</p> <pre><code>[1, 1, 3] [7, 7, 7] </code></pre> <p>Code:</p> <pre><code> import random my_list=[] n = 3 m = 1 while m &lt; 10: for i in range(n): # repeats the following line(s) of code n times my_list.append(random.randrange(0,9)) my_list.append("\n") print(my_list) m = m+1 </code></pre>
1
2016-08-28T01:49:45Z
39,187,412
<p>I agree with @Neil that it looks like you want a 2d array. Here's another way to do it (using Python 3's <code>range</code> and <code>print</code>, since the question is tagged python-3.x).</p> <pre><code>from random import randint my_list = [[randint(1, 11) for i in range(10)] for j in range(randint(5, 10))] </code></pre> <p>This will create 5 to 10 rows of ten random numbers between 1 and 11 (inclusive), e.g. [[2,3,4,5...], [1,2,2,3...]...]</p> <p>Then you just iterate over the object to print it out the way you want</p> <pre><code>for item in my_list: print(item) </code></pre> <p><strong>UPDATE</strong></p> <p>Given the comment you replied with, here is an example of a random number file generator.</p> <pre><code>from random import randint #number of lines to create in the file n = 3 with open('my_file.txt', 'w') as cout: for i in range(n): cout.write('{0}\n'.format(randint(1000, 10000))) #randint is inclusive of both ends; use randint(1000, 9999) if you need strictly 4-digit numbers </code></pre> <p>If you want more numbers on each line, format the text inside the write method, e.g.</p> <pre><code>cout.write('{0}, {1}\n'.format(randint(1000, 10000), randint(1000, 10000))) </code></pre>
1
2016-08-28T02:06:36Z
[ "python", "python-3.x", "while-loop" ]
How to add a new line after each loop?
39,187,327
<p>Currently getting this output</p> <pre><code>[1, 1, 3, '\n'] [1, 1, 3, '\n', 7, 7, 7, '\n'] </code></pre> <p>looking for this output</p> <pre><code>[1, 1, 3] [7, 7, 7] </code></pre> <p>Code:</p> <pre><code> import random my_list=[] n = 3 m = 1 while m &lt; 10: for i in range(n): # repeats the following line(s) of code n times my_list.append(random.randrange(0,9)) my_list.append("\n") print(my_list) m = m+1 </code></pre>
1
2016-08-28T01:49:45Z
39,187,415
<p>Put the my_list=[] inside the outer loop to reinitialize it each time. Also, if you don't want the newline, don't append it to my_list. If you want all the output on the same line use the optional arg <code>end</code> to the print function.</p> <pre><code>import random n = 3 m = 1 while m &lt; 10: my_list = [] for i in range(n): my_list.append(random.randrange(0,9)) print(my_list, end="") m += 1 </code></pre>
1
2016-08-28T02:07:43Z
[ "python", "python-3.x", "while-loop" ]
Porting, Python to C# - How can I iterate a tuple over a tuple?
39,187,360
<p>I'm port a Python project over to C#.</p> <p>So far I've run into this problem, is there any way I could port this to C#?</p> <pre><code>verts = (-1,-1,-1),(1,-1,-1),(1,1,-1),(-1,1,-1),(-1,-1,1),(1,-1,1),(1,1,1),(-1,1,1) edges = (0,1),(1,2),(2,3),(3,0),(4,5),(5,6),(6,7),(7,4),(0,4),(1,5),(2,6),(3,7) for edge in edges: for x,y,z in (verts[edge[0]],verts[edge[1]]): [...] </code></pre> <p>I've tried this,</p> <pre><code>verts.Add(new List&lt;string&gt; { "-1,-1,-1" }); verts.Add(new List&lt;string&gt; { "1,-1,-1" }); verts.Add(new List&lt;string&gt; { "1,1,-1" }); verts.Add(new List&lt;string&gt; { "-1,1,-1" }); verts.Add(new List&lt;string&gt; { "-1,-1,1" }); verts.Add(new List&lt;string&gt; { "1,-1,1" }); verts.Add(new List&lt;string&gt; { "1,1,1" }); verts.Add(new List&lt;string&gt; { "-1,1,1" }); edges.Add(new List&lt;string&gt; { "0,1" }); edges.Add(new List&lt;string&gt; { "1,2" }); edges.Add(new List&lt;string&gt; { "2,3" }); edges.Add(new List&lt;string&gt; { "3,0" }); edges.Add(new List&lt;string&gt; { "4,5" }); edges.Add(new List&lt;string&gt; { "5,6" }); edges.Add(new List&lt;string&gt; { "6,7" }); edges.Add(new List&lt;string&gt; { "7,4" }); edges.Add(new List&lt;string&gt; { "0,4" }); edges.Add(new List&lt;string&gt; { "1,5" }); edges.Add(new List&lt;string&gt; { "2,6" }); edges.Add(new List&lt;string&gt; { "3,7" }); foreach (List&lt;string&gt; vert in verts) { [...] } List&lt;string&gt; lines1 = new List&lt;string&gt;(); List&lt;string&gt; lines2 = new List&lt;string&gt;(); foreach (List&lt;string&gt; edge in edges) { int edge1 = int.Parse(edge[0].Split(',')[0]); int edge2 = int.Parse(edge[0].Split(',')[1]); int x; int y; int z; foreach (int vert in verts[edge1]) { [...] } } </code></pre> <p>So now I am getting very confused, lot's of bugs, here any there. 
It seems overcomplicated and impractical.</p> <p>I hope someone can help me :)</p> <p>If you need any more information, just leave a comment. If it's hard to read, again, just leave a comment.</p> <p>~Coolq</p>
0
2016-08-28T01:57:18Z
39,187,742
<p>This is one way you could go about doing it...</p> <pre><code> var verts = new[] { new Tuple&lt;int,int,int&gt; (-1,-1,-1 ), new Tuple&lt;int,int,int&gt; (1,-1,-1 ), new Tuple&lt;int,int,int&gt; (1,1,-1 ), new Tuple&lt;int,int,int&gt; (-1,1,-1 ), new Tuple&lt;int,int,int&gt; (-1,-1,1 ), new Tuple&lt;int,int,int&gt; (1,-1,1 ), new Tuple&lt;int,int,int&gt; (1,1,1 ), new Tuple&lt;int,int,int&gt; (-1,1,1 ) }; var edges = new[] { new Tuple&lt;int,int&gt;(0,1), new Tuple&lt;int,int&gt;(1,2), new Tuple&lt;int,int&gt;(2,3), new Tuple&lt;int,int&gt;(3,0), new Tuple&lt;int,int&gt;(4,5), new Tuple&lt;int,int&gt;(5,6), new Tuple&lt;int,int&gt;(6,7), new Tuple&lt;int,int&gt;(7,4), new Tuple&lt;int,int&gt;(0,4), new Tuple&lt;int,int&gt;(1,5), new Tuple&lt;int,int&gt;(2,6), new Tuple&lt;int,int&gt;(3,7) }; foreach(var edge in edges) { var edge1 = edge.Item1; var edge2 = edge.Item2; // the Python iterates over both endpoint vertices of the edge foreach(var vert in new[] { verts[edge1], verts[edge2] }) { var x = vert.Item1; var y = vert.Item2; var z = vert.Item3; //[...] } } </code></pre>
1
2016-08-28T03:22:38Z
[ "c#", "python", "3d", "port" ]
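For reference, here is what the Python loop being ported actually does, with the data as real numeric tuples rather than comma-joined strings — each edge contributes the (x, y, z) triples of both endpoint vertices, which is what the C# port has to mirror:

```python
# the original Python data, as real tuples rather than strings
verts = [(-1,-1,-1),(1,-1,-1),(1,1,-1),(-1,1,-1),
         (-1,-1,1),(1,-1,1),(1,1,1),(-1,1,1)]
edges = [(0,1),(1,2),(2,3),(3,0),(4,5),(5,6),(6,7),(7,4),
         (0,4),(1,5),(2,6),(3,7)]

endpoints = []
for a, b in edges:
    # each edge yields the (x, y, z) triples of BOTH endpoint vertices
    for x, y, z in (verts[a], verts[b]):
        endpoints.append((x, y, z))
```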
How to serialize 2 models with custom relationship in django-rest-framework?
39,187,365
<p>I have 2 existing tables in the database and I don't have permission to alter them. They are shown as the models below.</p> <pre><code>class Prenames(models.Model): typ = models.DecimalField(max_digits=1, decimal_places=0, db_column='tprpretyp') code = models.DecimalField(max_digits=3, decimal_places=0, db_column='tprprecod') name = models.CharField(max_length=20, db_column='tprprenam') class Profiles(models.Model): userid = models.CharField(max_length=6, db_column='rmsuserid') prename = models.CharField(max_length=4, db_column='rmsprenam', null=True) name = models.CharField(max_length=25, db_column='rmsname') surname = models.CharField(max_length=25, db_column='rmssurnam') </code></pre> <p>In SQL I would have to do </p> <pre><code>SELECT * FROM Profiles left join Prenames on tprpretyp = int(rmsprenam/1000) and tprprecod = mod(rmsprenam,1000) WHERE trim(rmsuserid) = ? </code></pre> <p>Things I've already tried:</p> <pre><code>from rest_framework import serializers from .models import * class PrenameSerializer(serializers.ModelSerializer): class Meta: model = Prenames fields = ('type', 'code', 'name') class ProfileSerializer(serializers.ModelSerializer): prenames = PrenameSerializer(read_only=True) class Meta: model = Profile fields = ('userid', 'name', 'surname', 'prenames') </code></pre> <p>Things I've got:</p> <pre><code>{ "userid": "560174", "name": "******", "surname": "******" } </code></pre> <p>Things I expected:</p> <pre><code>{ "userid": "560174", "name": "******", "surname": "******", "prenames":[ { "type": 10, "code": 01, "name": "Mr." } ] } </code></pre> <p>I'm using django-rest-framework 3.2.5 and Django 1.6. How do I serialize them?</p>
1
2016-08-28T01:57:59Z
39,189,721
<p>Try this:</p> <pre><code>prenames = PrenameSerializer(source="how_you_get_this_field", many=True, read_only=True) </code></pre>
1
2016-08-28T09:17:29Z
[ "python", "django", "django-rest-framework" ]
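Whichever serializer approach is used, note that the SQL join key in the question is just a base-1000 split of the 4-character `rmsprenam` field; in Python that is a one-line `divmod` (field names taken from the question):

```python
def split_prename(rmsprenam):
    """Split Profiles.prename ('rmsprenam') into the Prenames key (typ, code).

    Mirrors the SQL join condition from the question:
    tprpretyp = int(rmsprenam/1000) and tprprecod = mod(rmsprenam,1000).
    """
    return divmod(int(rmsprenam), 1000)
```

For example, `split_prename("1001")` returns `(1, 1)`, which is the (typ, code) pair to look up in `Prenames`.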
How to serialize 2 models with custom relationship in django-rest-framework?
39,187,365
<p>I have 2 existing tables in database and i haven't permission to alter them. Show as models below.</p> <pre><code>class Prenames(models.Model): typ = models.DecimalField(max_digits=1, decimal_places=0, db_column='tprpretyp') code = models.DecimalField(max_digits=3, decimal_places=0, db_column='tprprecod') name = models.CharField(max_length=20, db_column='tprprenam') class Profiles(models.Model): userid = models.CharField(max_length=6, db_column='rmsuserid') prename = models.CharField(max_length=4, db_column='rmsprenam', null=True) name = models.CharField(max_length=25, db_column='rmsname') surname = models.CharField(max_length=25, db_column='rmssurnam') </code></pre> <p>If sql i have to </p> <pre><code>SELECT * FROM Profiles left join Prenames on tprpretyp = int(rmsprenam/1000) and tprprecod = mod(rmsprenam,1000) WHERE trim(rmsuserid) = ? </code></pre> <p>Things I've already try:</p> <pre><code>from rest_framework import serializers from .models import * class PrenameSerializer(serializers.ModelSerializer): class Meta: model = Prenames fields = ('type', 'code', 'name') class ProfileSerializer(serializers.ModelSerializer): prenames = PrenameSerializer(read_only=True) class Meta: model = Profile fields = ('userid', 'name', 'surname', 'prenames') </code></pre> <p>Things I've got:</p> <pre><code>{ "userid": "560174", "name": "******", "surname": "******" } </code></pre> <p>Things I expected:</p> <pre><code>{ "userid": "560174", "name": "******", "surname": "******" "prenames":[ { "type:":10, "code": 01, "name": "Mr." } ] } </code></pre> <p>I'm using django-rest-framework 3.2.5 and django 1.6 how do i serialize them?</p>
1
2016-08-28T01:57:59Z
39,190,687
<p>Thanks @Aison for the inspiration. This solution is not exactly what I want, but it saves my life for now. Following <a href="http://www.django-rest-framework.org/api-guide/fields/#serializermethodfield" rel="nofollow">SerializerMethodField</a>, I decided to modify ProfileSerializer to</p> <pre><code>class PatientSerializer(serializers.ModelSerializer): prenameth = serializers.SerializerMethodField('getprenameth') prenameen = serializers.SerializerMethodField('getprenameen') def getprenameth(self, obj): return obj.prename.name def getprenameen(self, obj): return obj.prename.en_pre_name class Meta: model = Patients fields = ('userid', 'name', 'surname', 'prenames', 'prenameth', 'prenameen') </code></pre> <p>And this is what I got from it.</p> <pre><code>{ "userid": "560174", "name": "******", "surname": "******", "prenameth": "นาย", "prenameen": "Mr." } </code></pre> <p>This is enough for now.</p>
0
2016-08-28T11:14:53Z
[ "python", "django", "django-rest-framework" ]
How to correctly uninstall numpy on MacOSX?
39,187,374
<p>I'm on a Mac, and I installed <code>numpy</code> and <code>sklearn</code> in that order. Now, I'm faced with these errors that have already been mentioned on SO several times: </p> <p><a href="http://stackoverflow.com/questions/38197086/sklearn-numpy-dtype-has-the-wrong-size-try-recompiling-in-both-pycharm-and-te">sklearn &quot;numpy.dtype has the wrong size, try recompiling&quot; in both pycharm and terminal</a></p> <p><a href="http://stackoverflow.com/questions/17709641/valueerror-numpy-dtype-has-the-wrong-size-try-recompiling/18369312#18369312">ValueError: numpy.dtype has the wrong size, try recompiling</a></p> <p><a href="http://stackoverflow.com/questions/15274696/importerror-in-importing-from-sklearn-cannot-import-name-check-build">ImportError in importing from sklearn: cannot import name check_build</a></p> <p>So, I try to remediate this error by uninstalling <code>numpy</code> and reinstalling a previous version. </p> <p>1) <code>sudo pip install --upgrade numpy</code> gives a permission error:</p> <p><code>...OSError: [Errno 1] Operation not permitted: '/tmp/pip-OVY0Vq-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info'...</code></p> <p>2) I tried <code>brew uninstall numpy</code>, but <code>import numpy</code> still works even after a shell restart.</p> <p>The only thing left I can think of is to manually delete all of the <code>numpy</code> files, which, on a Mac, seem to be found under <code>sudo rm -rf /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy</code></p> <p>....but even that gives me a permission error. What gives? </p>
0
2016-08-28T01:59:05Z
39,188,275
<p>If you're using the brew version of python</p> <pre><code>brew uninstall numpy </code></pre> <p>If you're using the mac version of python:</p> <p>python 2.7</p> <pre><code>pip uninstall numpy </code></pre> <p>python 3</p> <pre><code>pip3 uninstall numpy </code></pre>
1
2016-08-28T05:24:34Z
[ "python", "osx", "python-2.7", "numpy" ]
How to correctly uninstall numpy on MacOSX?
39,187,374
<p>I'm on a Mac, and I installed <code>numpy</code> and <code>sklearn</code> in that order. Now, I'm faced with these errors that have already been mentioned on SO several times: </p> <p><a href="http://stackoverflow.com/questions/38197086/sklearn-numpy-dtype-has-the-wrong-size-try-recompiling-in-both-pycharm-and-te">sklearn &quot;numpy.dtype has the wrong size, try recompiling&quot; in both pycharm and terminal</a></p> <p><a href="http://stackoverflow.com/questions/17709641/valueerror-numpy-dtype-has-the-wrong-size-try-recompiling/18369312#18369312">ValueError: numpy.dtype has the wrong size, try recompiling</a></p> <p><a href="http://stackoverflow.com/questions/15274696/importerror-in-importing-from-sklearn-cannot-import-name-check-build">ImportError in importing from sklearn: cannot import name check_build</a></p> <p>So, I try to remediate this error by uninstalling <code>numpy</code>, and reinstalling a previous version. </p> <p>1) <code>sudo pip install --upgrade numpy</code>..gives permission error</p> <p><code>...OSError: [Errno 1] Operation not permitted: '/tmp/pip-OVY0Vq-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info'...</code></p> <p>2) I tried <code>brew uninstall numpy</code>, but <code>import numpy</code> still works even after a shell restart.</p> <p>The only thing left I can think of is to manually delete all of the <code>numpy</code> files, which, on a Mac seeem to be found under <code>sudo rm -rf /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy</code></p> <p>....but even that gives me a permission error. what gives? </p>
0
2016-08-28T01:59:05Z
39,323,756
<p>To solve this, I did the following (note that it is not entirely clear to me which of these solved the problem, since I didn't test thoroughly):</p> <p>1) Installed Python from Python.org instead of Mac's stupid version</p> <p>2) Re-installed all of the modules like <code>numpy</code>, <code>scipy</code>, <code>matplotlib</code>, <code>sklearn</code> and ran <code>hash -r python</code>, according to this source: <a href="http://stackoverflow.com/questions/34386527/symbol-not-found-pycodecinfo-getincrementaldecoder-trying-to-register-a-packa">Symbol not found: __PyCodecInfo_GetIncrementalDecoder trying to register a package on (Test) PyPi</a>, because it clears the shell's cached lookup for <code>python</code> so the new install is picked up.</p> <p>3) Then, I realized that I had this issue: <a href="https://github.com/scipy/scipy/issues/5093" rel="nofollow">https://github.com/scipy/scipy/issues/5093</a>. To solve it, I had to make sure I installed the <code>scipy</code> module using <code>python -m pip install scipy==0.15.0</code> instead of just <code>pip install scipy==0.15.0</code>, because this solved the issue based on this source: <a href="http://stackoverflow.com/questions/25276329/cant-load-python-modules-installed-via-pip-from-site-packages-directory">Can&#39;t load Python modules installed via pip from site-packages directory</a>. </p> <p>So, in conclusion, it turns out there really is a big difference between what is installed by <code>pip</code> and what is imported when <code>python</code> is executed from the terminal. To ensure that you are using pip to install modules into a particular Python, you can use <code>python -m pip install &lt;package name&gt;</code>.</p>
1
2016-09-05T03:59:50Z
[ "python", "osx", "python-2.7", "numpy" ]
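A quick way to see which copy of a package the interpreter would actually import — useful for diagnosing the "`brew uninstall numpy` ran but `import numpy` still works" situation above — is `importlib.util.find_spec` (demonstrated here with the stdlib `json` module so it runs anywhere):

```python
import importlib.util

def module_location(name):
    """Report the file a module would be imported from, without importing it."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# For the numpy situation above you would check module_location("numpy"):
# a path under /System/Library/... means the Apple-bundled copy still wins
# over anything pip or brew installed into site-packages.
print(module_location("json"))
```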
Installed app in Django not found when running tests
39,187,382
<p>I have a pretty simple Django app that I am trying to run unit tests on. In my tests.py file I am trying to import the parent app's views file. I tried 'from . import views' but got an error:</p> <pre><code>SystemError: Parent module '' not loaded, cannot perform relative import </code></pre> <p>I read that when a relative path does not work, you can try using an absolute path, so I tried 'from menu import views' but then got another error:</p> <pre><code>ImportError: No module named 'menu' </code></pre> <p>When I run a local server for the application it works just fine. It's only when I run 'coverage run menu/tests.py'. Since it runs fine and the module is in my settings' installed apps, I'm not entirely sure why this is happening.</p> <p>menu/tests.py</p> <pre><code>import unittest from menu import views class ModelTestCase(unittest.TestCase): def setUp(self): pass def test_menu(self): pass if __name__ == '__main__': unittest.main() </code></pre> <p>settings.py</p> <pre><code>INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'menu', 'django_nose' ) </code></pre> <p>Traceback</p> <pre><code>timothybaney$ coverage run menu/tests.py Traceback (most recent call last): File "menu/tests.py", line 3, in &lt;module&gt; from menu import views ImportError: No module named 'menu' </code></pre>
2
2016-08-28T02:01:24Z
39,188,059
<p>You didn't give us much information, but when I take a look at the traceback it says <code>File 'menu/tests.py'</code>. So if views.py is also inside the menu folder, you can just write:</p> <pre><code>import views </code></pre> <p>If views.py is in the main folder, you could write:</p> <pre><code>from ..main import views #replace 'main' with your folder name </code></pre>
2
2016-08-28T04:36:25Z
[ "python", "django", "testing", "module" ]
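The underlying issue is that `coverage run menu/tests.py` puts the `menu/` directory itself — not the project root — on `sys.path`, so the package `menu` is not importable. A self-contained sketch of that mechanism, using a throwaway package (the name `menu_demo` is made up here):

```python
import os
import sys
import tempfile
import importlib

# Build a throwaway package "menu_demo" (hypothetical name) and show that
# importing it only works once its parent directory is on sys.path -- the
# same reason `coverage run menu/tests.py` cannot see the `menu` package.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "menu_demo")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as fh:
    fh.write("views = 'stand-in for menu.views'\n")

try:
    importlib.import_module("menu_demo")
    importable_before = True
except ImportError:
    importable_before = False

sys.path.insert(0, tmp)           # what running from the project root does
menu_demo = importlib.import_module("menu_demo")
print(importable_before, menu_demo.views)
```

In practice the fix is to run the tests so the project root lands on `sys.path`, e.g. `coverage run manage.py test` or `coverage run -m unittest menu.tests`.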
how rsync_project to run on the remote server
39,187,394
<p>Here is my scenario:</p> <p><img src="https://cloud.githubusercontent.com/assets/759107/18025282/f5ba497a-6c5f-11e6-86be-b7324f109192.png" alt="fabric-question"> </p> <p>I want to execute a local fab command to rsync code from the bastion server to the web servers in parallel, but I find that <code>rsync_project</code> can only run as a <strong>local</strong> command, and it fails to find the code base path on my local machine. How do I solve this issue? Is there a way to set the <code>local</code> host string to the bastion server so that <code>rsync_project</code> runs on the bastion server properly?</p> <p>Thank you for your time.</p>
0
2016-08-28T02:03:31Z
39,252,618
<p>I think your best bet would be to install fabric and rsync_project on 'bastion' and use fabric's "run" to call rsync_project on bastion.</p> <p>Another option would be to use "get" to receive a copy from bastion and run rsync_project locally. But this approach will not match your picture.</p>
0
2016-08-31T14:40:10Z
[ "python", "fabric" ]
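If changing where fabric runs is not an option, one non-fabric alternative (an assumption on my part, not something the answer proposes) is to keep running rsync from the local machine and let ssh hop through the bastion with ProxyJump (OpenSSH >= 7.3). A sketch that just builds the command — all host names and the user are placeholders:

```python
def rsync_through_bastion(src, bastion, web_host, dest, user="deploy"):
    """Build an rsync command that copies src to web_host, hopping via bastion.

    Host names and the user are placeholders -- adjust to your setup.
    The returned list is suitable for subprocess.run(cmd).
    """
    return [
        "rsync", "-az", "--delete",
        "-e", "ssh -J {0}@{1}".format(user, bastion),   # hop through bastion
        src,
        "{0}@{1}:{2}".format(user, web_host, dest),
    ]

cmd = rsync_through_bastion("./codebase/", "bastion.example.com",
                            "web1.internal", "/srv/app/")
```

Running this once per web host (e.g. from a thread pool) gives the parallel fan-out the question's diagram shows.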
Django error with ViewSet using SimpleRouter
39,187,408
<p>views.py</p> <pre><code>from rest_framework import viewsets from rest_framework.response import Response from rest_framework import generics from ticker.serializers import PriceSerializer from ticker.models import Price import datetime from nexchange.settings import DEFAULT_HOUR_RANGE class LastPricesViewSet(viewsets.ViewSet): def list(self, request): queryset = Price.objects.filter().order_by('-id')[:2] serializer = PriceSerializer(queryset, many=True) return Response(serializer.data) class PriceHistoryViewSet(generics.ListAPIView): serializer_class = PriceSerializer def get_queryset(self, request): hours = self.request.query_params.get('hours', DEFAULT_HOUR_RANGE) relevant = datetime.datetime.now() - datetime.timedelta(seconds=hours * 3600) queryset = Price.objects.filter(created_on__gte=relevant).order_by('id') return queryset </code></pre> <p>urls.py:</p> <pre><code>from rest_framework.routers import SimpleRouter from ticker.views import LastPricesViewSet, PriceHistoryViewSet router = SimpleRouter() router.register(r'price/latest', LastPricesViewSet, base_name='latest') router.register(r'price/history', PriceHistoryViewSet, base_name='history') api_patterns = router.urls </code></pre> <p>The following error is raised during <code>runserver</code> (without basename kwarg): <code>AssertionError:</code>base_name<code>argument not specified, and could not automatically determine the name from the viewset, as it does not have a</code>.queryset<code>attribute.</code></p> <p>However, when I add the wished <code>basename</code>, the error changes to: <code>TypeError: as_view() takes 1 positional argument but 2 were given</code></p> <p>I suspect that it might be related to combining a ViewSet and a ListAPIView in one router.</p>
0
2016-08-28T02:05:27Z
39,187,711
<p>Solution: you must inherit from <code>viewsets.ViewSetMixin</code> so the class implements the methods required for a ViewSet to be registered with the DRF router. (Alternatively, you can skip the router and wire the view up with plain Django <code>urlconf</code> notation.)</p> <p><code>views.py:</code></p> <pre><code>class PriceHistoryViewSet(viewsets.ViewSetMixin, generics.ListAPIView): serializer_class = PriceSerializer def get_queryset(self, *args, **kwargs): hours = self.request.query_params.get('hours', DEFAULT_HOUR_RANGE) relevant = datetime.datetime.now() - datetime.timedelta(seconds=hours * 3600) queryset = Price.objects.filter(created_on__gte=relevant).order_by('id') return queryset </code></pre> <p><code>urls.py:</code></p> <pre><code>from rest_framework.routers import SimpleRouter from ticker.views import LastPricesViewSet, PriceHistoryViewSet router = SimpleRouter() router.register(r'price/latest', LastPricesViewSet, base_name='latest') router.register(r'price/history', PriceHistoryViewSet, base_name='history') api_patterns = router.urls </code></pre>
0
2016-08-28T03:14:47Z
[ "python", "django-rest-framework" ]
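The reason `viewsets.ViewSetMixin` must come first in the bases is Python's MRO: the mixin's `as_view(actions)` shadows the generic view's zero-argument `as_view`, which explains the earlier `as_view() takes 1 positional argument but 2 were given` error. A toy illustration of that mechanism (these are stand-in classes, not DRF's real ones):

```python
class PlainListView:
    @classmethod
    def as_view(cls):                      # the form the router can't call
        return "plain view callable"

class ViewSetMixinSketch:
    @classmethod
    def as_view(cls, actions=None):        # router passes e.g. {'get': 'list'}
        return "viewset callable for {0}".format(actions)

class HistoryViewSet(ViewSetMixinSketch, PlainListView):
    # mixin listed first, so its as_view wins in the MRO
    pass
```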
Trying to automate voucher code input with Selenium Python
39,187,438
<p>Ok so, I'm 16 and am new to Python. I need projects to do to help me learn, so I came up with a PlayStation Plus code generator as a project and so far it does:</p> <p>-Generates a code -Logs in to Account Management at Sony's site -Enters code at redeem section -Checks if error occurred</p> <pre><code>voucher_box = driver.find_element_by_id("voucherCode") redeem_button = driver.find_element_by_id("redeemGiftCardButton") while i &lt; amount: voucher_box.clear() currentcode = codegen.codegen() voucher_box.send_keys(currentcode) redeem_button.click() if "The Prepaid Card code you entered is incorrect or is no longer valid" in driver.page_source: print("Error found") else: print("Error not found") i += 1 </code></pre> <p>It works completely fine if only done ONCE but if for example, I set amount to 2, I get my error message "Error found" and then it crashes and gives me this:</p> <pre><code>Traceback (most recent call last): File "C:/Users/M4SS3CR3/PycharmProjects/UKP/main.py", line 38, in &lt;module&gt; voucher_box.clear() File "C:\Users\M4SS3CR3\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 87, in clear self._execute(Command.CLEAR_ELEMENT) File "C:\Users\M4SS3CR3\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 461, in _execute return self._parent.execute(command, params) File "C:\Users\M4SS3CR3\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 236, in execute self.error_handler.check_response(response) File "C:\Users\M4SS3CR3\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 192, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.StaleElementReferenceException: Message: Element not found in the cache - perhaps the page has changed since it was looked up Stacktrace: at 
fxdriver.cache.getElementAt (resource://fxdriver/modules/web-element-cache.js:9454) at Utils.getElementAt (file:///C:/Users/M4SS3CR3/AppData/Local/Temp/tmpn3rxlnty/extensions/fxdriver@googlecode.com/components/command-processor.js:9039) at fxdriver.preconditions.visible (file:///C:/Users/M4SS3CR3/AppData/Local/Temp/tmpn3rxlnty/extensions/fxdriver@googlecode.com/components/command-processor.js:10090) at DelayedCommand.prototype.checkPreconditions_ (file:///C:/Users/M4SS3CR3/AppData/Local/Temp/tmpn3rxlnty/extensions/fxdriver@googlecode.com/components/command-processor.js:12644) at DelayedCommand.prototype.executeInternal_/h (file:///C:/Users/M4SS3CR3/AppData/Local/Temp/tmpn3rxlnty/extensions/fxdriver@googlecode.com/components/command-processor.js:12661) at DelayedCommand.prototype.executeInternal_ (file:///C:/Users/M4SS3CR3/AppData/Local/Temp/tmpn3rxlnty/extensions/fxdriver@googlecode.com/components/command-processor.js:12666) at DelayedCommand.prototype.execute/&lt; (file:///C:/Users/M4SS3CR3/AppData/Local/Temp/tmpn3rxlnty/extensions/fxdriver@googlecode.com/components/command-processor.js:12608) </code></pre> <p>Any help or advice would be appreciated, I apologize before hand if this is not detailed enough.</p>
0
2016-08-28T02:12:16Z
39,187,667
<p>This is an idea of what I mentioned in the comment</p> <pre><code>while i &lt; amount: try: counter = 0 # make sure that the elements can be found # try ten times and then break out of loop while counter &lt; 10: try: voucher_box = driver.find_element_by_id("voucherCode") redeem_button = driver.find_element_by_id("redeemGiftCardButton") break except: time.sleep(1) counter += 1 voucher_box.clear() currentcode = codegen.codegen() voucher_box.send_keys(currentcode) redeem_button.click() if "The Prepaid Card code you entered is incorrect or is no longer valid" in driver.page_source: print("Error found") else: print("Error not found") i += 1 driver.get("my page") # may not need this line because the elements were moved inside the while loop time.sleep(3) # give the page enough time to load except: pass </code></pre> <p>You can also go as far as to do something like html.find(ID) > -1: do something else the ID wasn't present on the page so quit or reload. Also more then anything else it's probably better to move </p> <pre><code>voucher_box = driver.find_element_by_id("voucherCode") redeem_button = driver.find_element_by_id("redeemGiftCardButton") </code></pre> <p>inside the while loop because so that the object is the correct object through everyloop. What could be happening is that you're using elements that belonged to a different dom.</p>
0
2016-08-28T03:06:45Z
[ "python", "python-3.x", "selenium" ]
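The `StaleElementReferenceException` in the traceback means the cached element references died when the page reloaded after `redeem_button.click()`. The usual pattern is to re-find the element inside a small retry helper; a generic sketch (the selenium-specific bits appear only in the docstring, so this runs standalone):

```python
import time

def retry(fn, attempts=10, delay=0.01, exceptions=(Exception,)):
    """Call fn(), retrying when one of `exceptions` is raised.

    With selenium you would pass exceptions=(StaleElementReferenceException,)
    and make fn re-find the element on every call, e.g.
    fn = lambda: driver.find_element_by_id("voucherCode").clear()
    """
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise                     # out of attempts: re-raise
            time.sleep(delay)

calls = {"count": 0}
def flaky():
    # stands in for an element lookup that goes stale twice, then works
    calls["count"] += 1
    if calls["count"] < 3:
        raise ValueError("stale element")
    return "ok"
```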
Am I crazy or is it actually going this fast? (MultiThreading)
39,187,449
<pre><code>from tornado import httpclient import time start = time.time() for x in range(1000): httpclient.AsyncHTTPClient().fetch("https://www.google.com", method="GET") print ('{0} seconds'.format(time.time() - start)) </code></pre> <p>Result <code>1.11500000954 seconds</code></p> <p>I wrote this to see how fast I can send 1000 requests to any site (I chose Google). I don't know why, but I feel like I did something wrong and it's not actually going this quick. If I did do something wrong, could someone point out my error?</p> <p>Thank you!</p>
4
2016-08-28T02:14:48Z
39,187,486
<p>Well, yes, you spun off 1000 <em>asynchronous</em> requests to Google and timed that. You did not, however, time the overhead of actually making the HTTP call. That would require a <a href="http://www.tornadoweb.org/en/stable/httpclient.html#tornado.httpclient.AsyncHTTPClient">callback-type</a> of handler.</p>
5
2016-08-28T02:24:01Z
[ "python", "multithreading", "tornado" ]
Am I crazy or is it actually going this fast? (MultiThreading)
39,187,449
<pre><code>from tornado import httpclient import time start = time.time() for x in range(1000): httpclient.AsyncHTTPClient().fetch("https://www.google.com", method="GET") print ('{0} seconds'.format(time.time() - start)) </code></pre> <p>Result <code>1.11500000954 seconds</code></p> <p>I wrote this to see how fast I can send 1000 requests to any site (I chose google), and I don't know why, but I feel like I did something wrong and it's not actually going this quick. If I did do something wrong, could someone point out my error?</p> <p>Thank you!</p>
4
2016-08-28T02:14:48Z
39,187,493
<p>I haven't run this yet, so I could be wrong, but fetch() is an asynchronous method that takes a callback, and the callback is what handles the response from the server when it comes. In other words, I'm pretty sure all you're doing here in this ~1 second is creating the representations of the requests — you're definitely not waiting for responses, and I'm not sure you're necessarily even sending the requests. I don't know the internal implementation here, but you might be buffering them in some sense, or you could be making attempts to send them as fast as possible but some/many are failing — and you don't really know what's actually happening, because these asynchronous requests are "fire and forget". </p> <p>If you want to know how many USEFUL requests you can send, as in ones that will reach the server and get a response, you'd need to do the timing in a callback, not in the thread that's creating the requests. </p>
0
2016-08-28T02:24:55Z
[ "python", "multithreading", "tornado" ]
Am I crazy or is it actually going this fast? (MultiThreading)
39,187,449
<pre><code>from tornado import httpclient import time start = time.time() for x in range(1000): httpclient.AsyncHTTPClient().fetch("https://www.google.com", method="GET") print ('{0} seconds'.format(time.time() - start)) </code></pre> <p>Result <code>1.11500000954 seconds</code></p> <p>I wrote this to see how fast I can send 1000 requests to any site (I chose google), and I don't know why, but I feel like I did something wrong and it's not actually going this quick. If I did do something wrong, could someone point out my error?</p> <p>Thank you!</p>
4
2016-08-28T02:14:48Z
39,275,759
<p>As the other answers already pointed out, <code>tornado</code> works asynchronously. You have to run the requests in an <a href="http://www.tornadoweb.org/en/stable/ioloop.html#ioloop-objects" rel="nofollow"><code>IOLoop</code></a>. Please note that this has nothing to do with multithreading. The requests are executed in parallel, but within a single thread (the core feature of async IO).</p> <p>This is how it could be done:</p> <pre><code>from tornado import httpclient, ioloop import time start = time.time() loop = ioloop.IOLoop.instance() N = 10 finished = 0 def callback(f): global finished finished += 1 print('%d requests finished' % finished) if finished &gt;= N: loop.stop() print ('{0} seconds'.format(time.time() - start)) for x in range(N): f = httpclient.AsyncHTTPClient().fetch("https://www.google.com", method="GET") loop.add_future(f, callback) loop.start() </code></pre> <p>Your code only creates 1000 request objects (<a href="http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.Future" rel="nofollow"><code>Future</code></a> objects, to be precise) that are never actually sent to the server.</p>
0
2016-09-01T15:38:50Z
[ "python", "multithreading", "tornado" ]
Can Python's functool's reduce work with functions in the iterable?
39,187,477
<p>I'm trying to figure out how to return a function in Python. However, I don't know if the issue is with <code>reduce</code> or with my code. What I currently have is this:</p> <h3>Code:</h3> <pre><code>def combine(*args): from functools import reduce def closure(val): return reduce( (lambda x, f: f(x)), args, val) return closure def first(array): return array[0] def shift(array): return array[1:] fifth = combine(first, shift, shift, shift, shift) print(fifth([1,2,3,4,5,6,7,8,9])) # should return 5 </code></pre> <p>However, I get the error message:</p> <blockquote> <p>TypeError: 'int' object is not subscriptable</p> </blockquote> <p>What's causing this?</p>
0
2016-08-28T02:21:19Z
39,187,508
<p>As it is defined in your question, <code>fifth(x)</code> is the same as:</p> <pre><code>fifth(x) = shift(shift(shift(shift(first(x))))) </code></pre> <p>Now, if <code>x = [1,2,3,4,5,6,7,8,9]</code>, then</p> <pre><code>first(x) = 1 </code></pre> <p>But that means that</p> <pre><code>shift(first(x)) = first(x)[1:] = 1[1:] </code></pre> <p>But as the error message tells us, <code>1</code> is not subscriptable.</p> <hr> <p>I assume what's really causing this is a misconception about the order <code>reduce</code> works. It works from left to right, so the leftmost function given is applied first (i.e., innermost):</p> <pre><code>combine(a, b, c)(x) = c(b(a(x))) </code></pre> <p>If you instead wanted <code>a(b(c(x)))</code>, you would have to either pass them to <code>combine</code> in the reverse order, or modify your defintion of <code>combine</code> so that it reverses the order of the arguments for you:</p> <pre><code>def combine(*args): from functools import reduce def closure(val): return reduce( (lambda x, f: f(x)), reversed(args), val) return closure </code></pre>
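<p>As a quick sanity check, here is the reversed-order version of <code>combine</code> run end to end against the question's helper functions (pure Python; nothing is assumed beyond the code already shown):</p>

```python
from functools import reduce

def combine(*args):
    # reversed(args) makes the rightmost function apply first,
    # so combine(a, b, c)(x) == a(b(c(x)))
    def closure(val):
        return reduce(lambda x, f: f(x), reversed(args), val)
    return closure

def first(array):
    return array[0]

def shift(array):
    return array[1:]

# shift is applied four times, then first: the element at index 4
fifth = combine(first, shift, shift, shift, shift)
print(fifth([1, 2, 3, 4, 5, 6, 7, 8, 9]))  # 5
```

<p>Passing the functions to the original (left-to-right) <code>combine</code> in the opposite order would work equally well.</p>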
3
2016-08-28T02:28:45Z
[ "python", "reduce" ]
Scraping ASP Page with Python
39,187,532
<p>I'm trying to access an asp page to get data from a website but it always redirects me to its main page. I've tried setting allow_redirect to false but that throws an error, saying "the object has moved and can be found at href=main.htm". The website requires basic auth.</p> <p>The URL I want the data from is:</p> <pre><code>example.com.au/blah/webpage.asp </code></pre> <p>However, whenever I run the requests.get command, it redirects to example.com.au/blah/main.htm.</p> <p>My code:</p> <pre><code>url = "https://example.com.au/blah/webpage.asp" s = requests.Session() s.get(url, headers = {"Authorization" : auth}) r = requests.get(url, cookies = s.cookies, headers = {"Authorization" : auth}) print(r.url) </code></pre> <p>output:</p> <pre><code>example.com.au/blah/main.htm </code></pre>
0
2016-08-28T02:34:56Z
39,188,252
<p>I was able to access the .asp page by first loading a primary asp page </p> <pre><code>s.get("https://example.com.au/blah/menu.asp") </code></pre> <p>then loading the asp page I wanted</p> <pre><code>r = s.get("https://example.com.au/blah/webpage.asp") </code></pre> <p>under the same session.</p>
0
2016-08-28T05:17:51Z
[ "python", "asp.net" ]
Regular Expressions re.findall()
39,187,726
<p>I have a function which takes a count and a string as input. It should return a list of all words in that string of length count, and greater. Python however doesn't recognise my variable and returns an empty list.</p> <pre><code>def word_number(count, string): return re.findall(r'\w{count,}', string) </code></pre> <p>How do I pass in the variable 'count' so that the function returns words of count and longer?</p>
3
2016-08-28T03:18:10Z
39,187,732
<p>You can use <a href="https://docs.python.org/2/library/string.html#format-examples" rel="nofollow">str.format</a> to achieve your goal.</p> <pre><code>def word_number(count, string): return re.findall(r'\w{{{0},}}'.format(count), string) </code></pre>
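<p>For instance, the doubled braces expand to a literal quantifier (a small sketch, assuming <code>count</code> is an integer):</p>

```python
import re

def word_number(count, string):
    # '{{' and '}}' are literal braces in str.format,
    # so count=5 yields the pattern \w{5,}
    return re.findall(r'\w{{{0},}}'.format(count), string)

print(word_number(5, "the quick brown foxes jumped"))  # ['quick', 'brown', 'foxes', 'jumped']
```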
2
2016-08-28T03:19:11Z
[ "python", "regex" ]
Regular Expressions re.findall()
39,187,726
<p>I have a function which takes a count and a string as input. It should return a list of all words in that string of length count, and greater. Python however doesn't recognise my variable and returns an empty list.</p> <pre><code>def word_number(count, string): return re.findall(r'\w{count,}', string) </code></pre> <p>How do I pass in the variable 'count' so that the function returns words of count and longer?</p>
3
2016-08-28T03:18:10Z
39,187,753
<p>You can use <code>printf</code> style formatting:</p> <pre><code>re.findall(r'\w{%s,}' % count, string) </code></pre>
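<p>A minimal sketch of the %-formatting approach (using <code>%d</code> here so the substituted value must be an integer):</p>

```python
import re

def word_number(count, string):
    # %d substitutes the integer into the quantifier, e.g. count=4 -> \w{4,}
    return re.findall(r'\w{%d,}' % count, string)

print(word_number(4, "a fine regex quantifier"))  # ['fine', 'regex', 'quantifier']
```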
5
2016-08-28T03:25:29Z
[ "python", "regex" ]
Inheritance vs. Composition in Python for this specific case?
39,187,738
<p>Assume I have a series of classes as follows: </p> <pre><code>class A(object): def __init__(self,a): self.a=a class B(A): def __init__(self,a,b): super(B,self).__init__(a) self.b=b class C(A): def __init__(self,a,c1,c2): super(C,self).__init__(a) self.c1=c1 self.c2=c2 </code></pre> <p>If now D is both a B and a C, it can be implemented as </p> <pre><code>class D(B,C): def __init__(self,a,b,c1,c2,d): pass </code></pre> <p>Or <code>D</code> can be understood as a combination of <code>B</code> and <code>C</code> </p> <pre><code>class D(B,C): def __init__(self,a,b,c1,c2,d): self.b=B(a,b) self.c=C(a,c1,c2) </code></pre> <p>which seems a bit more complicated than the multiple-inheritance case.</p> <p>Well, now let's imagine <code>class D2</code> is a combination of 5 pieces of <code>B</code> and 3 pieces of <code>C</code>, and <code>class D3</code> is made of <code>D2</code> plus an additional <code>C</code> (with different parameters than the ones in <code>D2</code>), and <code>D4</code> is made of <code>D2</code> and 2 additional pieces of <code>C</code> (these 2 additional <code>C</code>s have the same parameters as each other, but different from the ones in <code>D2</code>). It seems <code>D2</code> is good to use composition, but <code>D3</code> is good to use inheritance of <code>D2</code> and <code>C</code>, and <code>D4</code> is good to use inheritance of <code>D2</code> and <code>C</code>. </p> <p>Using composition, to me, has the disadvantage that I have to write <code>d.c.c1</code> rather than <code>d.c1</code> (as in the inheritance case) to get the parameter <code>c1</code>, or I need to store the parameter in <code>D</code> directly (besides the one as <code>d.c.c1</code>). Is there any philosophically consistent way to deal with these classes? Thanks!</p>
0
2016-08-28T03:21:55Z
39,187,786
<p>You're bound to get some level of opinion on a question like this -- My answer is to use inheritance <em>iff</em> your classes are designed to be inherited. Use composition in all the other cases. In practice, this usually means that if the code that you are extending exists in a realm out of your control, don't subclass it unless their documentation talks about <em>how</em> you can subclass it effectively.</p> <p>In this case, since it seems like you are the author of the class hierarchy, inheritance seems to make sense (though some of your assertions are incorrect -- <code>D</code> is not so easy to implement as you have supposed since none of the superclass <code>__init__</code> will get called). You probably should familiarize yourself with the design pattern in Raymond Hettinger's <a href="https://rhettinger.wordpress.com/2011/05/26/super-considered-super/" rel="nofollow">"super considered super" article</a></p> <pre><code>class A(object): def __init__(self, a, **kwargs): super(A, self).__init__(**kwargs) self.a = a class B(A): def __init__(self, b, **kwargs): super(B, self).__init__(**kwargs) self.b = b class C(A): def __init__(self, c1, c2, **kwargs): super(C, self).__init__(**kwargs) self.c1 = c1 self.c2 = c2 </code></pre> <p><em>now</em> the implementation of <code>D</code> is trivially simple:</p> <pre><code>class D(B, C): pass </code></pre> <p>Note that we don't even need to define <code>__init__</code> because all of the constructors of the super classes can accept arbitrary keyword arguments. Instantiating a <code>D</code> looks like:</p> <pre><code>d = D(a='a', b='b', c1='c1', c2='c2') </code></pre>
1
2016-08-28T03:32:49Z
[ "python", "inheritance", "combinations" ]
tensorflow: efficient feeding of eval/train data using queue runners
39,187,764
<p>I'm trying to run a tensorflow graph to train a model and periodically evaluate using a separate evaluation dataset. Both training and evaluation data is implemented using queue runners.</p> <p>My current solution is to create both inputs in the same graph and use a <code>tf.cond</code> dependent on an <code>is_training</code> placeholder. My issue is highlighted by the following code:</p> <pre><code>import tensorflow as tf from tensorflow.models.image.cifar10 import cifar10 from time import time def get_train_inputs(is_training): return cifar10.inputs(False) def get_eval_inputs(is_training): return cifar10.inputs(True) def get_mixed_inputs(is_training): train_inputs = get_train_inputs(None) eval_inputs = get_eval_inputs(None) return tf.cond(is_training, lambda: train_inputs, lambda: eval_inputs) def time_inputs(inputs_fn, n_runs=10): graph = tf.Graph() with graph.as_default(): is_training = tf.placeholder(dtype=tf.bool, shape=(), name='is_training') images, labels = inputs_fn(is_training) with tf.Session(graph=graph) as sess: coordinator = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coordinator) t = time() for i in range(n_runs): im, l = sess.run([images, labels], feed_dict={is_training: True}) dt = time() - t coordinator.request_stop() coordinator.join(threads) return dt / n_runs print('Train inputs: %.3f' % time_inputs(get_train_inputs)) print('Eval inputs: %.3f' % time_inputs(get_eval_inputs)) print('Mixed inputs: %.3f' % time_inputs(get_mixed_inputs)) </code></pre> <p>I also had to comment out the <code>image_summary</code> line <code>133</code> of <code>tensorflow/models/image/cifar10/cifar10_inputs.py</code>.</p> <p>This yielded the following results:</p> <pre><code>Train inputs: 0.055 Eval inputs: 0.050 Mixed inputs: 0.105 </code></pre> <p>It would seem in the mixed case both inputs are being read/parsed, even though only 1 is used. Is there a way of avoiding this redundant computation? 
Or is there a nicer way of switching between training/evaluation data that still leverages the queue-runner setup?</p>
2
2016-08-28T03:27:25Z
39,196,611
<p>After some experimentation, my current best solution is to have a main graph featuring training inputs and a separate graph with just evaluation data operations. I open a separate session to get evaluation data and feed this to the training graph when I want to evaluate. Highly inelegant (and evaluation runs take longer than they ideally would as they have to come out of one session only to be fed to another), but assuming evaluation runs are rare compared to training runs, this seems preferable to the original version...</p> <pre><code>import tensorflow as tf from tensorflow.models.image.cifar10 import cifar10 from time import time class DataSupplier: def __init__(self, tensor_fn): graph = tf.Graph() with graph.as_default(): with graph.device('/cpu:0'): self.tensor = tensor_fn() self.sess = tf.Session(graph=graph) self.coord = tf.train.Coordinator() self.threads = tf.train.start_queue_runners(sess=self.sess, coord=self.coord) def get_tensor_val(self): return self.sess.run(self.tensor) def clean_up(self): self.coord.request_stop() self.coord.join(self.threads) eval_batcher = DataSupplier(lambda: cifar10.inputs(True)) graph = tf.Graph() with graph.as_default(): images, labels = cifar10.inputs(False) out_images = tf.identity(images) out_labels = tf.identity(labels) n_runs = 100 with tf.Session(graph=graph) as sess: coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess, coord) for i in range(n_runs): sess.run([out_images, out_labels]) t = time() for i in range(n_runs): sess.run([out_images, out_labels]) dt = (time() - t)/n_runs print('Train time: %.3f' % dt) t = time() for i in range(n_runs): eval_images, eval_labels = eval_batcher.get_tensor_val() sess.run([out_images, out_labels], feed_dict={images: eval_images, labels: eval_labels}) dt = (time() - t)/n_runs print('Eval time: %.3f' % dt) coord.request_stop() coord.join(threads) eval_batcher.clean_up() </code></pre> <p>Results:</p> <pre><code>Train time: 0.050 Eval time: 0.064 </code></pre>
<p>Update: when using this approach in training problems with tf.contrib.layers and regularization, I find the regularization losses go to infinity if the DataSupplier graph is on the same device as the training graph. I cannot for the life of me explain why this is the case, but explicitly setting the device of the DataSupplier to the CPU (given the training graph is on my GPU) seems to work...</p>
0
2016-08-28T23:11:01Z
[ "python", "tensorflow" ]
models Django invalid literal for int() with base 10: 'None'
39,187,765
<p>I am new to Django. I am trying to make a blog; here are my models:</p> <pre><code>from django.db import models from django.contrib.auth.models import User class UserProfile(models.Model): name = models.OneToOneField(User) def __unicode__(self): return self.user.user_name class Category(models.Model): category = models.CharField(max_length=150) def __str__(self): return self.category class Blog(models.Model): title = models.CharField(max_length=150, default="No title") body = models.TextField(default="None") creation_date = models.DateTimeField(auto_now_add=True) author = models.ForeignKey(UserProfile, default=1) category = models.ForeignKey(Category, default="None") def __str__(self): return self.title class Comment(models.Model): comment = models.TextField(default="") blog = models.ForeignKey(Blog) </code></pre> <p>but when I run the command:</p> <blockquote> <p>python manage.py migrate </p> </blockquote> <p>I get the</p> <blockquote> <p>ValueError: invalid literal for int() with base 10: 'None'</p> </blockquote> <p>I tried to delete the old migration file and migrate again but I got the same error, what should I do? </p>
3
2016-08-28T03:27:43Z
39,187,801
<p>Instead of <code>default="None"</code>, specify <code>null=True</code> to allow null for foreign key:</p> <pre><code>category = models.ForeignKey(Category, null=True) </code></pre>
5
2016-08-28T03:35:49Z
[ "python", "django", "django-models" ]
Find rows with non zero values in a subset of columns in pandas dataframe
39,187,788
<p>I have a dataframe with 4 columns of strings and others as integers. Now I need to find out those rows of data where at least one of the columns is a non-zero value (or > 0).</p> <pre><code>manwra,sahAyaH,T7,0,0,0,0,T manwra, akriti,T5,0,0,1,0,K awma, prabrtih,B6, 0,1,1,0,S </code></pre> <p>My output should be</p> <pre><code>manwra, akriti,T5,0,0,1,0,K awma, prabrtih,B6, 0,1,1,0,S </code></pre> <p>I have tried the following to obtain the answer. The string values are in columns 0, 1, 2 and -1 (last column).</p> <pre><code>KT[KT.ix[:,3:-2] != 0] </code></pre> <p>What I am receiving as output is </p> <pre><code>NaN,NaNNaN,NaN,NaN,NaN,NaN,NaN NaN,NaN,NaN,NaN,NaN,1,NaN,NaN NaN,NaN,NaN,NaN,1,1,NaN,NaN </code></pre> <p>How do I obtain the desired output?</p>
4
2016-08-28T03:33:08Z
39,187,821
<p>assume your dataframe is <code>df</code></p> <pre><code>df.loc[(df.loc[:, df.dtypes != object] != 0).any(1)] </code></pre> <p><a href="http://i.stack.imgur.com/A6MMx.png" rel="nofollow"><img src="http://i.stack.imgur.com/A6MMx.png" alt="enter image description here"></a></p>
3
2016-08-28T03:41:50Z
[ "python", "pandas", "dataframe" ]
Find rows with non zero values in a subset of columns in pandas dataframe
39,187,788
<p>I have a dataframe with 4 columns of strings and others as integers. Now I need to find out those rows of data where at least one of the columns is a non-zero value (or > 0).</p> <pre><code>manwra,sahAyaH,T7,0,0,0,0,T manwra, akriti,T5,0,0,1,0,K awma, prabrtih,B6, 0,1,1,0,S </code></pre> <p>My output should be</p> <pre><code>manwra, akriti,T5,0,0,1,0,K awma, prabrtih,B6, 0,1,1,0,S </code></pre> <p>I have tried the following to obtain the answer. The string values are in columns 0, 1, 2 and -1 (last column).</p> <pre><code>KT[KT.ix[:,3:-2] != 0] </code></pre> <p>What I am receiving as output is </p> <pre><code>NaN,NaNNaN,NaN,NaN,NaN,NaN,NaN NaN,NaN,NaN,NaN,NaN,1,NaN,NaN NaN,NaN,NaN,NaN,1,1,NaN,NaN </code></pre> <p>How do I obtain the desired output?</p>
4
2016-08-28T03:33:08Z
39,188,088
<p>You were close: </p> <pre><code> #your's KT[KT.ix[:,3:-2] != 0] #works KT[(KT.ix[:,3:6] &gt; 0).any(1)] 0 1 2 3 4 5 6 7 1 manwra akriti T5 0 0 1 0 K 2 awma prabrtih B6 0 1 1 0 S #key diff (KT.ix[:,3:6] &gt; 0) 3 4 5 6 0 False False False False 1 False False True False 2 False True True False </code></pre>
2
2016-08-28T04:44:10Z
[ "python", "pandas", "dataframe" ]
Find rows with non zero values in a subset of columns in pandas dataframe
39,187,788
<p>I have a dataframe with 4 columns of strings and others as integers. Now I need to find out those rows of data where at least one of the columns is a non-zero value (or > 0).</p> <pre><code>manwra,sahAyaH,T7,0,0,0,0,T manwra, akriti,T5,0,0,1,0,K awma, prabrtih,B6, 0,1,1,0,S </code></pre> <p>My output should be</p> <pre><code>manwra, akriti,T5,0,0,1,0,K awma, prabrtih,B6, 0,1,1,0,S </code></pre> <p>I have tried the following to obtain the answer. The string values are in columns 0, 1, 2 and -1 (last column).</p> <pre><code>KT[KT.ix[:,3:-2] != 0] </code></pre> <p>What I am receiving as output is </p> <pre><code>NaN,NaNNaN,NaN,NaN,NaN,NaN,NaN NaN,NaN,NaN,NaN,NaN,1,NaN,NaN NaN,NaN,NaN,NaN,1,1,NaN,NaN </code></pre> <p>How do I obtain the desired output?</p>
4
2016-08-28T03:33:08Z
39,190,492
<p>Here is an alternative solution which uses <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.select_dtypes.html" rel="nofollow">select_dtypes()</a> method:</p> <pre><code>In [41]: df[(df.select_dtypes(include=['number']) != 0).any(1)] Out[41]: 0 1 2 3 4 5 6 7 1 manwra akriti T5 0 0 1 0 K 2 awma prabrtih B6 0 1 1 0 S </code></pre> <p>Explanation:</p> <pre><code>In [42]: df.select_dtypes(include=['number']) != 0 Out[42]: 3 4 5 6 0 False False False False 1 False False True False 2 False True True False In [43]: (df.select_dtypes(include=['number']) != 0).any(1) Out[43]: 0 False 1 True 2 True dtype: bool </code></pre>
2
2016-08-28T10:50:09Z
[ "python", "pandas", "dataframe" ]
How to I get python2.7 to be the default option when I call python on terminal?
39,187,797
<p>I recently installed the anaconda package while following a tutorial on data science from the guys at dataquest.io and after my install when I type "python" on the terminal I start Python 3 instead of 2.7.</p> <p>How do I get "python" to open up python 2.7 again?</p> <p>When I type</p> <pre><code>which python </code></pre> <p>and</p> <pre><code>which python3 </code></pre> <p>I get the same path :</p> <p>/Users/me/anaconda/bin/</p> <p><strong>Using Mac OS X El Capitan</strong> </p>
1
2016-08-28T03:35:11Z
39,187,820
<p>I was playing around with this some time ago and seem to remember that in my PATH variable in Windows (assuming Windows) - the PATH variable will be read such that the first python key is taken as the default. So in Win 10, I pushed the one I want to the very top of the list and that worked for me.</p>
0
2016-08-28T03:41:24Z
[ "python", "python-2.7" ]
How to I get python2.7 to be the default option when I call python on terminal?
39,187,797
<p>I recently installed the anaconda package while following a tutorial on data science from the guys at dataquest.io and after my install when I type "python" on the terminal I start Python 3 instead of 2.7.</p> <p>How do I get "python" to open up python 2.7 again?</p> <p>When I type</p> <pre><code>which python </code></pre> <p>and</p> <pre><code>which python3 </code></pre> <p>I get the same path :</p> <p>/Users/me/anaconda/bin/</p> <p><strong>Using Mac OS X El Capitan</strong> </p>
1
2016-08-28T03:35:11Z
39,188,641
<p>It seems your default anaconda and python folder has been updated to Python 3.</p> <p>The easy way is to remove the <code>anaconda</code> folder listed, download Python 2.7.11 from Anaconda, and reinstall.</p> <p>In case you have anaconda and anaconda3 in your system, you can update the path according to your needs: export <code>PATH=$HOME/anaconda/bin:$PATH</code> or export <code>PATH=$HOME/anaconda3/bin:$PATH</code>.</p> <p>Hope it helps. </p>
0
2016-08-28T06:38:33Z
[ "python", "python-2.7" ]
How to I get python2.7 to be the default option when I call python on terminal?
39,187,797
<p>I recently installed the anaconda package while following a tutorial on data science from the guys at dataquest.io and after my install when I type "python" on the terminal I start Python 3 instead of 2.7.</p> <p>How do I get "python" to open up python 2.7 again?</p> <p>When I type</p> <pre><code>which python </code></pre> <p>and</p> <pre><code>which python3 </code></pre> <p>I get the same path :</p> <p>/Users/me/anaconda/bin/</p> <p><strong>Using Mac OS X El Capitan</strong> </p>
1
2016-08-28T03:35:11Z
39,191,907
<p>You can create a <em>soft link</em> from <code>python</code> to <code>python2.7</code>, namely:</p> <pre><code>$ pushd /bin &amp;&amp; ln -sf python2.7 python $ ls -ls `which python` 1 lrwxrwxrwx 1 Administrator None 13 Feb 25 2016 /bin/python -&gt; python2.7 </code></pre>
0
2016-08-28T13:43:15Z
[ "python", "python-2.7" ]
url_for() parameter was ignored in flask
39,187,809
<p>My route method is:</p> <pre><code>@app.route('/movie/') def movie(page_num=1): #...detail skipped... </code></pre> <p>And my template is:</p> <pre><code>&lt;li&gt;&lt;a href="{{ url_for('movie',page_num=page_num) }}"&gt;{{ page_num }}&lt;/a&gt;&lt;/li&gt;</code></pre> <p>When I click the link, the address bar shows <em>"127.0.0.1:5000/movie/?page_num=5"</em>, but the <em>pagination.page</em> shows it is still page 1.</p> <p>Why was the parameter ignored, and how can I fix it?</p>
0
2016-08-28T03:38:02Z
39,189,804
<p>Since you skipped the code of your function it's hard to say what's wrong. But I suspect that you just don't catch GET parameters correctly. To do this you can either use <a href="http://flask.pocoo.org/docs/0.11/quickstart/#variable-rules" rel="nofollow">variable rules</a> with a dynamic name component in your route, or access parameters submitted in the URL with <code>request.args.get</code>.</p> <p>Here's a minimal example showing both methods:</p> <pre><code>from flask import Flask, url_for, request app = Flask(__name__) @app.route('/') def index(): link = url_for('movie',page_num=5) return "&lt;a href='{0}'&gt;Click&lt;/a&gt;".format(link) @app.route('/index2') def index_get(): link = url_for('movie_get',page_num=5) return "&lt;a href='{0}'&gt;Click&lt;/a&gt;".format(link) @app.route('/movie/&lt;page_num&gt;') def movie(page_num=1): return str(page_num) @app.route('/movie_get') def movie_get(): param = request.args.get('page_num', '1') return str(param) if __name__ == '__main__': app.run(debug=True) </code></pre>
1
2016-08-28T09:26:02Z
[ "python", "flask", "url-for" ]
Scikit-learn script giving vastly different results than the tutorial, and gives an error when I change the dataframes
39,187,875
<p>I'm working through a tutorial that has this section:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; from sklearn.feature_extraction.text import TfidfVectorizer &gt;&gt;&gt; from sklearn.linear_model.logistic import LogisticRegression &gt;&gt;&gt; from sklearn.cross_validation import train_test_split, cross_val_score &gt;&gt;&gt; df = pd.read_csv('data/sms.csv') &gt;&gt;&gt; X_train_raw, X_test_raw, y_train, y_test = train_test_split(df['message'], df['label']) &gt;&gt;&gt; vectorizer = TfidfVectorizer() &gt;&gt;&gt; X_train = vectorizer.fit_transform(X_train_raw) &gt;&gt;&gt; X_test = vectorizer.transform(X_test_raw) &gt;&gt;&gt; classifier = LogisticRegression() &gt;&gt;&gt; classifier.fit(X_train, y_train) &gt;&gt;&gt; precisions = cross_val_score(classifier, X_train, y_train, cv=5, scoring='precision') &gt;&gt;&gt; print 'Precision', np.mean(precisions), precisions &gt;&gt;&gt; recalls = cross_val_score(classifier, X_train, y_train, cv=5, scoring='recall') &gt;&gt;&gt; print 'Recalls', np.mean(recalls), recalls </code></pre> <p>Which I then copied with few modifications:</p> <pre><code>ddir = (sys.argv[1]) df = pd.read_csv(ddir + '/SMSSpamCollection', sep='\t', quoting=csv.QUOTE_NONE, names=["label", "message"]) X_train_raw, X_test_raw, y_train, y_test = train_test_split(df['label'], df['message']) vectorizer = TfidfVectorizer() X_train = vectorizer.fit_transform(X_train_raw) X_test = vectorizer.transform(X_test_raw) classifier = LogisticRegression() classifier.fit(X_train, y_train) precisions = cross_val_score(classifier, X_train, y_train, cv=5, scoring='precision') recalls = cross_val_score(classifier, X_train, y_train, cv=5, scoring='recall') print 'Precision', np.mean(precisions), precisions print 'Recalls', np.mean(recalls), recalls </code></pre> <p>However, despite there being next to no differences in the code, the results in the book are far better than mine:</p> <p>Book: <code>Precision 0.992137651822 [ 
0.98717949 0.98666667 1. 0.98684211 1. ]</code> <code>Recall 0.677114261885 [ 0.7 0.67272727 0.6 0.68807339 0.72477064]</code></p> <p>Mine: <code>Precision 0.108435683974 [ 2.33542342e-06 1.22271611e-03 1.68918919e-02 1.97530864e-01 3.26530612e-01]</code> <code>Recalls 0.235220281632 [ 0.00152053 0.03370787 0.125 0.44444444 0.57142857]</code></p> <p>Going back over the script to see what went wrong, I thought that line 18: </p> <pre><code>X_train_raw, X_test_raw, y_train, y_test = train_test_split(df['label'], df['message']) </code></pre> <p>was the culprit, and changed <code>(df['label'], df['message'])</code> to <code>(df['message'], df['label'])</code>. But that gave me an error:</p> <pre><code>Traceback (most recent call last): File "Chapter4[B-FLGTLG]C[Y-BCPM][G-PAR--[00].py", line 30, in &lt;module&gt; precisions = cross_val_score(classifier, X_train, y_train, cv=5, scoring='precision') File "/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py", line 1433, in cross_val_score for train, test in cv) File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 800, in __call__ while self.dispatch_one_batch(iterator): File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 658, in dispatch_one_batch self._dispatch(tasks) File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 566, in _dispatch job = ImmediateComputeBatch(batch) File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 180, in __init__ self.results = batch() File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 72, in __call__ return [func(*args, **kwargs) for func, args, kwargs in self.items] File "/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py", line 1550, in _fit_and_score test_score = _score(estimator, X_test, y_test, scorer) File "/usr/local/lib/python2.7/dist-packages/sklearn/cross_validation.py", 
line 1606, in _score score = scorer(estimator, X_test, y_test) File "/usr/local/lib/python2.7/dist-packages/sklearn/metrics/scorer.py", line 90, in __call__ **self._kwargs) File "/usr/local/lib/python2.7/dist-packages/sklearn/metrics/classification.py", line 1203, in precision_score sample_weight=sample_weight) File "/usr/local/lib/python2.7/dist-packages/sklearn/metrics/classification.py", line 984, in precision_recall_fscore_support (pos_label, present_labels)) ValueError: pos_label=1 is not a valid label: array(['ham', 'spam'], dtype='|S4') </code></pre> <p>What could be the problem here? The data is here: <a href="http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection" rel="nofollow">http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection</a> in case anyone wants to try.</p>
0
2016-08-28T03:58:18Z
39,197,222
<p>The error at the end of the stacktrace is the key to understand what's going on here.</p> <blockquote> <p>ValueError: pos_label=1 is not a valid label: array(['ham', 'spam'], dtype='|S4')</p> </blockquote> <p>You're trying to score your model with precision and recall. Recall that these scoring methods are formulated in terms of true positives, false positives, and false negatives. But how does <code>sklearn</code> know what is positive and what is negative? Is it 'ham' or 'spam'? We need a way to tell <code>sklearn</code> that we consider 'spam' the positive label and 'ham' the negative label. According to the <code>sklearn</code> documentation, the precision and recall scorers by default expect a positive label of <code>1</code>, hence the <code>pos_label=1</code> part of the error message.</p> <p>There are at least 3 ways to go about fixing this.</p> <p><b>1. Encode 'ham' and 'spam' values as 0 and 1 directly from the data source in order to accommodate the precision/recall scorers:</b></p> <pre><code># Map dataframe to encode values and put values into a numpy array encoded_labels = df['label'].map(lambda x: 1 if x == 'spam' else 0).values # ham will be 0 and spam will be 1 # Continue as normal X_train_raw, X_test_raw, y_train, y_test = train_test_split(df['message'], encoded_labels) </code></pre> <p><b>2. Use <code>sklearn</code>'s built-in function (<code>label_binarize</code>) to transform the categorical data into encoded integers in order to accommodate precision/recall scorers: </b></p> <p>This will transform your categorical data to integers.</p> <pre><code># Encode labels from sklearn.preprocessing import label_binarize encoded_column_vector = label_binarize(df['label'], classes=['ham','spam']) # ham will be 0 and spam will be 1 encoded_labels = np.ravel(encoded_column_vector) # Reshape array # Continue as normal X_train_raw, X_test_raw, y_train, y_test = train_test_split(df['message'], encoded_labels) </code></pre> <p><b> 3. 
Create scorer objects with custom arguments for <code>pos_label</code>:</b></p> <p>As the documentation says, the precision and recall scores by default have a <code>pos_label</code> argument of <code>1</code>, but this can be changed to inform the scorer which string represents the positive label. You can construct scorer objects that have different arguments with <code>make_scorer</code>.</p> <pre><code># Start out as you did originally with string labels X_train_raw, X_test_raw, y_train, y_test = train_test_split(df['message'], df['label']) # Fit classifier as normal ... # Get precision and recall from sklearn.metrics import precision_score, recall_score, make_scorer # Precision precision_scorer = make_scorer(precision_score, pos_label='spam') precisions = cross_val_score(classifier, X_train, y_train, cv=5, scoring=precision_scorer) print 'Precision', np.mean(precisions), precisions # Recall recall_scorer = make_scorer(recall_score, pos_label='spam') recalls = cross_val_score(classifier, X_train, y_train, cv=5, scoring=recall_scorer) print 'Recalls', np.mean(recalls), recalls </code></pre> <p>After making any of these changes to your code, I'm getting average precision and recall scores of around <code>0.990</code> and <code>0.704</code>, consistent with the book's numbers.</p> <p>Of all the 3 options, I recommend #3 the most because it is harder to get wrong.</p>
1
2016-08-29T01:02:30Z
[ "python", "csv", "pandas", "dataframe", "scikit-learn" ]
What is the difference between subprocess.popen and subprocess.run
39,187,886
<p>I'm new to the <code>subprocess</code> module and the documentation leaves me wondering what the difference is between <code>subprocess.Popen</code> and <code>subprocess.run</code>. Is there a difference in what the command does? Is one just newer? Which is better to use? </p>
0
2016-08-28T04:00:30Z
39,187,984
<p><code>subprocess.run</code> <a href="https://docs.python.org/3/library/subprocess.html#subprocess.run" rel="nofollow">was added in Python 3.5</a> as a simplification over <code>subprocess.Popen</code> for the case where you just want to execute a command and wait until it finishes, and you don't want to do anything else meanwhile. For other cases, you still need to use <code>subprocess.Popen</code>.</p> <p>The main difference is that <code>subprocess.run</code> executes a command and <strong>waits</strong> for it to finish, while with <code>subprocess.Popen</code> you can continue doing your stuff while the process runs and then call <code>Popen.communicate</code> yourself to pass data to and receive data from your process.</p> <p>Note that <code>subprocess.run</code> essentially invokes <code>Popen</code> and <code>communicate</code> for you, so you don't need to pass/receive the data or wait for the process to finish yourself.</p> <p>Check <a href="https://docs.python.org/3/library/subprocess.html#subprocess.run" rel="nofollow">this site</a> for information on which parameters of <code>subprocess.run</code> are passed to <code>Popen</code> and which to <code>communicate</code>.</p>
1
2016-08-28T04:23:27Z
[ "python" ]
HackerRank Project Euler #1
39,187,952
<p>What is wrong with this code? It is not passing case 2 &amp; 3 on HackerRank.</p> <pre><code>T=long(input()) while T&gt;0: N=long(input()) sum=0 for i in range (1,N): if i%3==0 or i%5==0: sum+=i print (sum) T-=1 </code></pre> <p>I'm new in programming and can't figure out what I did wrong.</p> <p><a href="https://www.hackerrank.com/contests/projecteuler/challenges/euler001" rel="nofollow">https://www.hackerrank.com/contests/projecteuler/challenges/euler001</a></p>
-2
2016-08-28T04:17:15Z
39,188,199
<p>There are a couple of issues with the code. First, since you're using <code>long</code> I guess you're running it on Python 2. On Python 2 <a href="https://docs.python.org/2/library/functions.html#range" rel="nofollow"><code>range</code></a> will generate a <code>list</code> and you'll run out of memory, since the problem description states that max <code>N</code> is <code>10^9</code>. You could fix that problem by switching to <a href="https://docs.python.org/2/library/functions.html#xrange" rel="nofollow"><code>xrange</code></a>, which returns an <a href="https://docs.python.org/2/library/stdtypes.html#typesseq-xrange" rel="nofollow">xrange object</a> instead.</p> <p>If you make the change described above, the second issue would be speed. Since there can be up to <code>10^5</code> test cases and max <code>N</code> is <code>10^9</code>, you'd potentially have to iterate over <code>10^14</code> numbers, which takes far too long. In order to fix this, the algorithm needs to be changed. You can use the formula <code>n * (n + 1) * mul / 2</code>, where <code>n = num / mul</code>, to calculate the sum of all multiples of <code>mul</code> in range <code>0...num</code>. Then you solve a case by just adding the sums of the multiples of <code>3</code> and <code>5</code> and subtracting the multiples of <code>15</code>, which would otherwise be counted twice:</p> <pre><code>def sum_multiples(num, mul): n = num / mul return n * (n + 1) * mul / 2 for _ in xrange(int(raw_input())): num = int(raw_input()) - 1 print sum_multiples(num, 3) + sum_multiples(num, 5) - sum_multiples(num, 15) </code></pre>
0
2016-08-28T05:08:00Z
[ "python" ]
How to select Python Tuple/Dictionary Values where Index > x
39,187,958
<p>How do you select python Tuple/Dictionary values where the index is greater than some number. I would think the code should look similar to the following assuming we create a Tuple:</p> <p><code>dt = (100, 200, 300,400) dt[dt.index &gt; 1] </code></p>
0
2016-08-28T04:18:56Z
39,187,993
<p>You could just slice the tuple. </p> <pre><code>&gt;&gt;&gt; dt = (100, 200, 300,400) &gt;&gt;&gt; dt[2:] (300, 400) </code></pre>
3
2016-08-28T04:24:22Z
[ "python" ]
How to select Python Tuple/Dictionary Values where Index > x
39,187,958
<p>How do you select python Tuple/Dictionary values where the index is greater than some number. I would think the code should look similar to the following assuming we create a Tuple:</p> <p><code>dt = (100, 200, 300,400) dt[dt.index &gt; 1] </code></p>
0
2016-08-28T04:18:56Z
39,188,083
<p>You have to use slicing with <code>:</code> instead of <code>&gt;</code> with tuples, like in the answer ahsanul haque provided. Thumbs up for him.</p>
1
2016-08-28T04:43:13Z
[ "python" ]
Python Conjecture
39,188,001
<p>Recently I have been trying to make a spin-off of the Collatz conjecture using Python 3.0. The program works as it should with positive integers, but it will not work with negative integers. In the program I check if the number is negative and if so I square it, and then proceed with the Collatz rules. Unfortunately it gives off no error messages. Code below:</p> <pre><code> import sys while True: number = input("Enter any positive integer: ") count = 0 negative = "-" try: int(number) except ValueError: print("Invalid Input...") sys.exit(0) else: number = int(number) ORIGINAL = number while not number != 1: count += 1 number ** 2 if number % 2: number = 3*number+1 elif not number % 2: number = number // 2 print(number) while number != 1: count += 1 if number % 2: number = 3*number+1 elif not number % 2: number = number // 2 print(number) print("The number "+str(ORIGINAL)+" took "+str(count)+" calculations to reach 1") </code></pre> <p>Feel free to try out my code, it only uses Python 3 and sys!</p> <p>-thanks!</p>
-1
2016-08-28T04:25:46Z
39,189,170
<p>In the mapping used by the Collatz conjecture, a positive value always maps to another positive value. So you only need to check for negative values as part of your initialisation:</p> <pre><code>number = input("Enter any integer: ") count = 0 try: number = int(number) except ValueError: print("Invalid Input...") sys.exit(0) if number &lt; 0: number = number ** 2 </code></pre> <p>The loop <code>while not number != 1:</code> can be deleted. It will only run if number is equal to 1 (there is a double negative). If you enter 1, this loop will run once, assign 3*1+1 to number, and then the second loop will start. In other words, this loop is pointless: delete it. Note the line <code>number ** 2</code> will not change the value of number, since there is no assignment.</p>
1
2016-08-28T08:01:32Z
[ "python", "python-3.x", "sys" ]
How to break out of function and back into loop?
39,188,042
<p>I am currently making a web builder where it will generate a random gif based on the GIPHY API. I am having a problem where I test a case on when the api returns with 0 results.</p> <pre><code>def get_image_link(link): global flag set_count = 0 r = requests.get(link) api_response = json.loads(r.text) response = api_response['data'] if not response: print('GIPHY API returned no results... finding another word...') pass elif response: for set in response: set_count += 1 random_gif_num = random.randint(0, set_count) - 1 try: flag = True return response[random_gif_num]['images']['original']['url'] except TypeError: print(TypeError + '... rerunning application...') pass </code></pre> <p><code>while not flag: get_image_link(get_random_query())</code></p> <p>Essentially, if the results come back with no data in the results, i want it to retry the function to grab another word. The program works when a word with results comes back but when it comes back with 0 results I get a <code>TypeError</code> and it doesn't go back into the loop. I am sure it does this because it doesn't break out of the function and instead returns a <code>[]</code> type. How can I break out of the function and get back into the while loop so I can generate another result? Thank you.</p>
1
2016-08-28T04:32:41Z
39,188,060
<p>You can move catching the exception outside the function, i.e.:</p> <pre><code>while not flag: try: get_image_link(get_random_query()) except TypeError: flag = False print('TypeError... rerunning application...') pass </code></pre>
2
2016-08-28T04:36:26Z
[ "python" ]
How to break out of function and back into loop?
39,188,042
<p>I am currently making a web builder where it will generate a random gif based on the GIPHY API. I am having a problem where I test a case on when the api returns with 0 results.</p> <pre><code>def get_image_link(link): global flag set_count = 0 r = requests.get(link) api_response = json.loads(r.text) response = api_response['data'] if not response: print('GIPHY API returned no results... finding another word...') pass elif response: for set in response: set_count += 1 random_gif_num = random.randint(0, set_count) - 1 try: flag = True return response[random_gif_num]['images']['original']['url'] except TypeError: print(TypeError + '... rerunning application...') pass </code></pre> <p><code>while not flag: get_image_link(get_random_query())</code></p> <p>Essentially, if the results come back with no data in the results, i want it to retry the function to grab another word. The program works when a word with results comes back but when it comes back with 0 results I get a <code>TypeError</code> and it doesn't go back into the loop. I am sure it does this because it doesn't break out of the function and instead returns a <code>[]</code> type. How can I break out of the function and get back into the while loop so I can generate another result? Thank you.</p>
1
2016-08-28T04:32:41Z
39,188,142
<p>Reset the flag to <code>False</code> (note the capital F; Python is case-sensitive, and concatenating <code>TypeError</code> with a string would itself raise a <code>TypeError</code>): </p> <pre><code>try: flag = True return response[random_gif_num]['images']['original']['url'] except TypeError: flag = False print('TypeError... rerunning application...') pass </code></pre>
2
2016-08-28T04:55:46Z
[ "python" ]
Turn list of JSON objects into Django model instances
39,188,070
<p>I have a huge array of JSON objects in a json file. some objects have more keys than others, but they are all keys that are fields in a model class that I have. I am wondering, what is the best way to iterate over each JSON object to create a model instance with the data from that object, creating null values for any field that the object does not include ? </p> <p>minerals.json (snippet)</p> <pre><code>[ { "name": "Abelsonite", "image filename": "240px-Abelsonite_-_Green_River_Formation%2C_Uintah_County%2C_Utah%2C_USA.jpg", "image caption": "Abelsonite from the Green River Formation, Uintah County, Utah, US", "category": "Organic", "formula": "C&lt;sub&gt;31&lt;/sub&gt;H&lt;sub&gt;32&lt;/sub&gt;N&lt;sub&gt;4&lt;/sub&gt;Ni", "strunz classification": "10.CA.20", "crystal system": "Triclinic", "unit cell": "a = 8.508 Å, b = 11.185 Åc=7.299 Å, α = 90.85°β = 114.1°, γ = 79.99°Z = 1", "color": "Pink-purple, dark greyish purple, pale purplish red, reddish brown", "crystal symmetry": "Space group: P1 or P1Point group: 1 or 1", "cleavage": "Probable on {111}", "mohs scale hardness": "2–3", "luster": "Adamantine, sub-metallic", "streak": "Pink", "diaphaneity": "Semitransparent", "optical properties": "Biaxial", "group": "Organic Minerals" }, { "name": "Abernathyite", "image filename": "240px-Abernathyite%2C_Heinrichite-497484.jpg", "image caption": "Pale yellow abernathyite crystals and green heinrichite crystals", "category": "Arsenate", "formula": "K(UO&lt;sub&gt;2&lt;/sub&gt;)(AsO&lt;sub&gt;4&lt;/sub&gt;)·&lt;sub&gt;3&lt;/sub&gt;H&lt;sub&gt;2&lt;/sub&gt;O", "strunz classification": "08.EB.15", "crystal system": "Tetragonal", "unit cell": "a = 7.176Å, c = 18.126ÅZ = 4", "color": "Yellow", "crystal symmetry": "H-M group: 4/m 2/m 2/mSpace group: P4/ncc", "cleavage": "Perfect on {001}", "mohs scale hardness": "2.5–3", "luster": "Sub-Vitreous, resinous, waxy, greasy", "streak": "Pale yellow", "diaphaneity": "Transparent", "optical properties": "Uniaxial (-)", "refractive 
index": "nω = 1.597 – 1.608nε = 1.570", "group": "Arsenates" }, { "name": "Abhurite", "image filename": "240px-Abhurite_-_Shipwreck_Hydra%2C_South_coast_of_Norway.jpg", "image caption": "Brownish tabular crystals of abhurite from Shipwreck \"Hydra\", South coast of Norway", "category": "Halide", "formula": "Sn&lt;sub&gt;21&lt;/sub&gt;O&lt;sub&gt;6&lt;/sub&gt;(OH)&lt;sub&gt;14&lt;/sub&gt;Cl&lt;sub&gt;16&lt;/sub&gt;", "strunz classification": "03.DA.30", "crystal symmetry": "Trigonal", "group": "Halides" }, ] </code></pre> <p>models.py</p> <pre><code>from django.db import models class Mineral(models.Model): name = models.CharField(max_length=300, null=True, blank=True) category = models.CharField(max_length=300, null=True, blank=True) formula = models.CharField(max_length=300, null=True, blank=True) crystal_system = models.CharField(max_length=300, null=True, blank=True) unit_cell = models.CharField(max_length=300, null=True, blank=True) color = models.CharField(max_length=300, null=True, blank=True) cleavage = models.CharField(max_length=300, null=True, blank=True) crystal_symmetry = models.CharField(max_length=300, null=True, blank=True) mohs_scale = models.CharField(max_length=300, null=True, blank=True) image_caption = models.CharField(max_length=300, null=True, blank=True) image_filename = models.CharField(max_length=300, null=True, blank=True) strunz_classification = models.CharField(max_length=300, null=True, blank=True) def __str__(self): return self.name </code></pre>
1
2016-08-28T04:38:39Z
39,188,527
<p><code>hasattr</code> and <code>setattr</code> can be very useful to solve your problem. (<a href="https://docs.python.org/3/library/functions.html#hasattr" rel="nofollow">docs</a>)</p> <pre><code>def convert(jsonObject, model): modelObject = model() for key in jsonObject: if hasattr(modelObject, key): setattr(modelObject, key, jsonObject[key]) return modelObject converted = list() for item in jsonArray: mineral = convert(item, Mineral) mineral.save() converted.append(mineral) </code></pre>
2
2016-08-28T06:18:09Z
[ "python", "json", "django", "model" ]
Running Flask with Gunicorn raises TypeError: index() takes 0 positional arguments but 2 were given
39,188,136
<p>I'm trying to run a Flask app with Gunicorn. When I run <code>gunicorn app:index</code>, I get the error <code>TypeError: index() takes 0 positional arguments but 2 were given</code>. None of the examples I've seen show <code>index</code> needing two parameters. What does this error mean? How do I run Flask with Gunicorn?</p> <pre><code>from flask import Flask app = Flask(__name__) @app.route("/") def index(): return 'Hello, World!' </code></pre> <pre><code>gunicorn app:index </code></pre> <pre><code> respiter = self.wsgi(environ, resp.start_response) TypeError: index() takes 0 positional arguments but 2 were given </code></pre> <p>I changed <code>index</code> to take two arguments, but got a different error.</p> <pre><code>@app.route("/") def index(arg1, arg2): return 'Hello, World!' </code></pre> <pre><code> /python3.4/site-packages/flask/templating.py, line 132, in render_template ctx.app.update_template_context(context) AttributeError: 'NoneType' object has no attribute 'app' </code></pre>
1
2016-08-28T04:54:21Z
39,190,113
<p>You have to point <code>gunicorn</code> at a Flask application instance, not at a view function like you did with <code>index</code>. So if your app is saved as <code>app.py</code> and the Flask instance is named <code>app</code>, then you have to run <code>gunicorn</code> as follows:</p> <pre><code>$ gunicorn --bind 127.0.0.1:8000 app:app </code></pre>
1
2016-08-28T10:02:17Z
[ "python", "flask", "gunicorn" ]
Get weekend timestamp in python
39,188,137
<p>I have to get a weekend timestamp, converting code from PHP to Python. I googled many times but couldn't find the answer. In PHP I can do</p> <pre><code>endOfWeek = strtotime('next saturday, 11:59:59pm America/Los_Angeles'); </code></pre> <p>Do you guys know how to do something like that in Python? Please help me out.</p>
-1
2016-08-28T04:54:23Z
39,188,928
<p>You can use the <a href="https://dateutil.readthedocs.io/en/stable/" rel="nofollow">dateutil</a> package. Note that the singular <code>hour</code>/<code>minute</code>/<code>second</code> arguments of <code>relativedelta</code> set absolute values, while the plural forms (<code>hours</code>, ...) add an offset, and the datetime should be built in the target time zone so that 11:59:59pm really is local time: </p> <pre><code>from datetime import datetime from dateutil.relativedelta import relativedelta, SA from dateutil.tz import gettz la = gettz("America/Los_Angeles") today = datetime.now(la) new_time = today + relativedelta(weekday=SA, hour=23, minute=59, second=59, microsecond=0) print(new_time) </code></pre> <h3>Output (for example):</h3> <blockquote> <p>2016-09-03 23:59:59-07:00</p> </blockquote>
0
2016-08-28T07:24:33Z
[ "python", "timestamp" ]
Get weekend timestamp in python
39,188,137
<p>I have to get a weekend timestamp, converting code from PHP to Python. I googled many times but couldn't find the answer. In PHP I can do</p> <pre><code>endOfWeek = strtotime('next saturday, 11:59:59pm America/Los_Angeles'); </code></pre> <p>Do you guys know how to do something like that in Python? Please help me out.</p>
-1
2016-08-28T04:54:23Z
39,199,473
<p>@Jose Yeah, thank you. I've seen it work locally for me. But we plan to use Python on Google App Engine, and it does not allow external libraries. However, I found how to do it using the standard <code>datetime</code> and <code>time</code> modules:</p> <blockquote> <pre><code> year_no = datetime.datetime.utcnow().isocalendar()[0] week_no = datetime.datetime.utcnow().isocalendar()[1] + 1 # next week next_monday_datetime = time.strptime(str(year_no) + ' ' + str(week_no) + ' ' + '1', '%Y %W %w') # monday of next week date # get 23:59:59 of previous sunday timestamp for backward compatibility with old stack end_of_week_refer = time.mktime(next_monday_datetime) - 1 </code></pre> </blockquote> <p>Although I didn't change it to the America/Los_Angeles time zone yet, it should be easy, and I already get the weekend timestamp.</p>
0
2016-08-29T06:04:55Z
[ "python", "timestamp" ]
only rotate part of image python
39,188,198
<p>Can someone tell me how to rotate only part of an image like this:</p> <p><a href="http://i.stack.imgur.com/8MEVW.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/8MEVW.jpg" alt="this"></a></p> <p>How to find the coordinate / center of this image:</p> <p><a href="http://i.stack.imgur.com/PxGtN.png" rel="nofollow"><img src="http://i.stack.imgur.com/PxGtN.png" alt="image"></a></p> <p>I can rotate the whole picture using this:</p> <pre><code>from PIL import Image def rotate_image(): img = Image.open("nime1.png") img.rotate(45).save("plus45.png") img.rotate(-45).save("minus45.png") img.rotate(90).save("90.png") img.transpose(Image.ROTATE_90).save("90_trans.png") img.rotate(180).save("180.png") if __name__ == '__main__': rotate_image() </code></pre>
2
2016-08-28T05:07:45Z
39,188,320
<p>You can solve this problem as follows. Say you have <code>img = Image.open("nime1.png")</code>:</p> <ol> <li>Create a copy of the image using img2 = img.copy()</li> <li>Create a crop of img2 at the desired location using img2.crop(). You can read how to do this <a href="http://stackoverflow.com/questions/890051/how-do-i-generate-circular-thumbnails-with-pil">here</a></li> <li>Rotate the cropped region with its rotate() method</li> <li>Paste the rotated crop back onto img at the appropriate location using img.paste()</li> </ol> <h1>Notes:</h1> <p>To find the center coordinate, you can divide the width and height by 2 :)</p>
0
2016-08-28T05:35:34Z
[ "python", "image", "rotation", "python-imaging-library", "python-3.4" ]
only rotate part of image python
39,188,198
<p>Can someone tell me how to rotate only part of an image like this:</p> <p><a href="http://i.stack.imgur.com/8MEVW.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/8MEVW.jpg" alt="this"></a></p> <p>How to find the coordinate / center of this image:</p> <p><a href="http://i.stack.imgur.com/PxGtN.png" rel="nofollow"><img src="http://i.stack.imgur.com/PxGtN.png" alt="image"></a></p> <p>I can rotate the whole picture using this:</p> <pre><code>from PIL import Image def rotate_image(): img = Image.open("nime1.png") img.rotate(45).save("plus45.png") img.rotate(-45).save("minus45.png") img.rotate(90).save("90.png") img.transpose(Image.ROTATE_90).save("90_trans.png") img.rotate(180).save("180.png") if __name__ == '__main__': rotate_image() </code></pre>
2
2016-08-28T05:07:45Z
39,188,324
<p>You can crop an area of the picture as a new variable. In this case, I cropped a 120x120 pixel box out of the original image. It is rotated by 90 and then pasted back on the original. </p> <pre><code>from PIL import Image img = Image.open('./image.jpg') sub_image = img.crop(box=(200,0,320,120)).rotate(90) img.paste(sub_image, box=(200,0)) </code></pre> <p><a href="http://i.stack.imgur.com/6qc9r.png" rel="nofollow"><img src="http://i.stack.imgur.com/6qc9r.png" alt="enter image description here"></a></p> <p>So I thought about this a bit more and crafted a function that applies a circular mask to the cropped image before rotating. This allows an arbitrary angle without weird effects.</p> <pre><code>import numpy from PIL import Image def circle_rotate(image, x, y, radius, degree): box = (x-radius, y-radius, x+radius+1, y+radius+1) crop = image.crop(box=box) crop_arr = numpy.asarray(crop) # build the circle mask mask = numpy.zeros((2*radius+1, 2*radius+1)) for i in range(crop_arr.shape[0]): for j in range(crop_arr.shape[1]): if (i-radius)**2 + (j-radius)**2 &lt;= radius**2: mask[i,j] = 1 # create the new circular image (RGB channels plus an alpha channel from the mask) sub_img_arr = numpy.empty(crop_arr.shape[:2] + (4,), dtype='uint8') sub_img_arr[:,:,:3] = crop_arr[:,:,:3] sub_img_arr[:,:,3] = mask*255 sub_img = Image.fromarray(sub_img_arr, "RGBA").rotate(degree) i2 = image.copy() i2.paste(sub_img, box[:2], sub_img.convert('RGBA')) return i2 i2 = circle_rotate(img, 260, 60, 60, 45) i2 </code></pre> <p><a href="http://i.stack.imgur.com/CkIVH.png" rel="nofollow"><img src="http://i.stack.imgur.com/CkIVH.png" alt="enter image description here"></a></p>
0
2016-08-28T05:37:07Z
[ "python", "image", "rotation", "python-imaging-library", "python-3.4" ]
Python program with arguments on Windows
39,188,406
<p>How to run this program saved in the file test.py on Windows XP with python 2.7 installed.</p> <pre><code>import argparse parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('integers', metavar='N', type=int, nargs='+',help='an integer for the accumulator') parser.add_argument('--sum', dest='accumulate', action='store_const',const=sum, default=max,help='sum the integers (default: find the max)') args = parser.parse_args() print args.accumulate(args.integers) </code></pre> <p>I tried to run it with command line. For example</p> <pre><code>$ python test.py 1 2 3 4 </code></pre> <p>or</p> <pre><code>$ python test.py 1 2 3 4 --sum </code></pre> <p>gives error "invalid syntax".</p>
0
2016-08-28T05:56:38Z
39,188,473
<p>This is just me being naive, but considering the short error message you posted...</p> <p>Any chance you're getting this code off some book and try to run this on a command line?</p> <p>The book uses <code>$</code> to mark command line/terminal commands, but the character is actually not part of the syntax or command you're supposed to use.</p> <p>So instead of running this:</p> <pre><code>$ python 1 2 3 </code></pre> <p>Run this:</p> <pre><code>python 1 2 3 </code></pre>
0
2016-08-28T06:10:10Z
[ "python", "argparse" ]
Python program with arguments on Windows
39,188,406
<p>How to run this program saved in the file test.py on Windows XP with python 2.7 installed.</p> <pre><code>import argparse parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('integers', metavar='N', type=int, nargs='+',help='an integer for the accumulator') parser.add_argument('--sum', dest='accumulate', action='store_const',const=sum, default=max,help='sum the integers (default: find the max)') args = parser.parse_args() print args.accumulate(args.integers) </code></pre> <p>I tried to run it with command line. For example</p> <pre><code>$ python test.py 1 2 3 4 </code></pre> <p>or</p> <pre><code>$ python test.py 1 2 3 4 --sum </code></pre> <p>gives error "invalid syntax".</p>
0
2016-08-28T05:56:38Z
39,188,523
<p>I tried running your script at the command line and it works perfectly:</p> <pre><code>$ python arg.py 1 2 3 4 --sum 10 </code></pre> <p>In the above, the <code>$</code> is the shell's prompt. What I entered is <code>python arg.py 1 2 3 4 --sum</code>. It works.</p> <p>Now, let's do what I suspect that you are doing: let's start an interactive python shell and enter the above:</p> <pre><code>$ python Python 2.7.12+ (default, Aug 4 2016, 20:04:34) [GCC 6.1.1 20160724] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; python test.py 1 2 3 4 --sum File "&lt;stdin&gt;", line 1 python test.py 1 2 3 4 --sum ^ SyntaxError: invalid syntax </code></pre> <p>This generates the <code>SyntaxError: invalid syntax</code> error that you see. (There is one minor difference: I am on Linux and you are on Windows.)</p> <p>The solution is to exit the python interactive shell and enter the command at the command prompt.</p>
3
2016-08-28T06:17:34Z
[ "python", "argparse" ]
Python program with arguments on Windows
39,188,406
<p>How to run this program saved in the file test.py on Windows XP with python 2.7 installed.</p> <pre><code>import argparse parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('integers', metavar='N', type=int, nargs='+',help='an integer for the accumulator') parser.add_argument('--sum', dest='accumulate', action='store_const',const=sum, default=max,help='sum the integers (default: find the max)') args = parser.parse_args() print args.accumulate(args.integers) </code></pre> <p>I tried to run it with command line. For example</p> <pre><code>$ python test.py 1 2 3 4 </code></pre> <p>or</p> <pre><code>$ python test.py 1 2 3 4 --sum </code></pre> <p>gives error "invalid syntax".</p>
0
2016-08-28T05:56:38Z
39,193,793
<p>Your <code>test</code> script is the first example on the Python argparse documentation. <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow">https://docs.python.org/3/library/argparse.html</a></p> <p>Your comment with new lines added is</p> <pre><code>Python 2.7.8 (default, Jun 30 2014, 16:03:49) [MSC v.1500 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. &gt;&gt;&gt; ================================ RESTART ================================ &gt;&gt;&gt; usage: test [-h] [--sum] N [N ...] test: error: too few arguments &gt;&gt;&gt; $ python test.py 1 2 3 4 SyntaxError: invalid syntax &gt;&gt;&gt; python test.py 1 2 3 4 SyntaxError: invalid syntax &gt;&gt;&gt; $ python test.py 1 2 3 4 SyntaxError: invalid syntax &gt;&gt;&gt; python test.py 1 2 3 4 --sum SyntaxError: invalid syntax &gt;&gt;&gt; python test.py 1 2 3 4 --sum </code></pre> <p>From this I deduce that you saved the script as <code>test</code> ('test.py` would have been better), and ran it, from a Windows command line, as</p> <pre><code>python -i test </code></pre> <p>which produces</p> <pre><code>usage: test [-h] [--sum] N [N ...] test: error: too few arguments </code></pre> <p>That usage message from the <code>parser</code>; <code>test</code> is the name of the script.</p> <p>I'm not sure about the <code>RESTART</code> line. My tests (at the end) suggest your Python call (or some default environment feature) includes the <code>-i</code> option, which leaves you in the interactive Python session, even after the <code>argparse</code> step fails.</p> <p>The next command is straight out of the Python example:</p> <pre><code>&gt;&gt;&gt; $ python test.py 1 2 3 4 SyntaxError: invalid syntax </code></pre> <p>But the context is all wrong. The docs include <code>$</code> to indicate that this is being typed in a commandline (Linux shell or Windows commmand). 
And the meaning, in the correct context is:</p> <ul> <li>run Python</li> <li>tell it to run the test.py script</li> <li>and pass it the arguments '1','2', etc</li> </ul> <p>But if you are already inside a Python interpreter (indicated by the <code>&gt;&gt;&gt;</code> prompt string), this does not make sense. <code>python</code> and <code>test.py</code> are strings that don't have a default meaning inside Python. So the interpreter gives you a syntax error. And none of the variations fix that.</p> <p>A little further along, the <code>argparse</code> documentation gives an example of calling a <code>parser</code> from within a Python interactive session:</p> <pre><code>&gt;&gt;&gt; parser.parse_args(['--sum', '7', '-1', '42']) </code></pre> <p>That has a very different syntax. In this <code>python -i</code> context it should run.</p> <p>Going back to the Windows command window and typing</p> <pre><code>python test 1 2 3 4 </code></pre> <p>has a better chance of working. If that doesn't work, then you/we need to focus on running an even more basic Python script.</p> <p>=========</p> <p>Here's an example of running another simple script from a Linux shell. The <code>...$</code> is the shell prompt; the <code>&gt;&gt;&gt;</code> is the python prompt. Adding the <code>-i</code> to the initial python call ensures it stays in python after parsing.</p> <pre><code>0957:~/mypy$ python -i simple.py usage: simple.py [-h] foo simple.py: error: too few arguments Traceback (most recent call last): File "simple.py", line 4, in &lt;module&gt; print(parser.parse_args()) ... SystemExit: 2 &gt;&gt;&gt; python simple.py 1 2 File "&lt;stdin&gt;", line 1 python simple.py 1 2 ^ SyntaxError: invalid syntax </code></pre> <p>The main difference between my test and yours is that I don't get the <code>RESTART</code> and I get a traceback. 
Without the <code>-i</code> I simply get the usage message and a return the command line.</p> <pre><code>1000:~/mypy$ python simple.py usage: simple.py [-h] foo simple.py: error: too few arguments 1000:~/mypy$ </code></pre>
0
2016-08-28T17:09:21Z
[ "python", "argparse" ]
Speech recognition not working in ubuntu
39,188,531
<p>I am starting work which needs to convert audio into text. I am using the SpeechRecognition library for Python. I saw a tutorial on GitHub on how to use it. The program is not able to recognise my voice through the microphone.</p> <p>I am using python 2.7 on ubuntu 16.04.</p> <p>Code:</p> <pre><code>import speech_recognition as sr # obtain audio from the microphone r = sr.Recognizer() with sr.Microphone() as source: print("Say something!") r.adjust_for_ambient_noise(source) audio = r.listen(source) # recognize speech using Sphinx try: print("Sphinx thinks you said " + r.recognize_sphinx(audio)) except sr.UnknownValueError: print("Sphinx could not understand audio") except sr.RequestError as e: print("Sphinx error; {0}".format(e)) </code></pre> <p><a href="https://github.com/Uberi/speech_recognition/blob/master/examples/microphone_recognition.py" rel="nofollow">github code link</a></p> <p>Output on terminal:</p> <pre><code>shivam@shivam-HP-Pavilion-15-Notebook-PC:~/Python/audio$ python temp.py ALSA lib pcm_dsnoop.c:606:(snd_pcm_dsnoop_open) unable to open slave ALSA lib pcm_dmix.c:1029:(snd_pcm_dmix_open) unable to open slave ALSA lib pcm_dmix.c:1029:(snd_pcm_dmix_open) unable to open slave Say something! </code></pre> <p>After "Say something!", it keeps on blinking but my voice is not recognised.</p> <p><a href="http://i.stack.imgur.com/kBx15.png" rel="nofollow"><img src="http://i.stack.imgur.com/kBx15.png" alt="mic settings"></a> <a href="http://i.stack.imgur.com/fxN98.png" rel="nofollow"><img src="http://i.stack.imgur.com/fxN98.png" alt=""></a></p>
4
2016-08-28T06:19:00Z
39,232,186
<p>I couldn't get the <code>PocketSphinx</code> module installed; I kept running into issues. But if I swap <code>recognize_sphinx</code> for <code>recognize_google</code>, it's working for me.</p> <p>Here is your modified code:</p> <pre><code>import speech_recognition as sr # obtain audio from the microphone r = sr.Recognizer() with sr.Microphone() as source: print("Say something!") r.adjust_for_ambient_noise(source) audio = r.listen(source) # recognize speech using Sphinx try: #print("Sphinx thinks you said " + r.recognize_sphinx(audio)) print("Sphinx thinks you said " + r.recognize_google(audio)) except sr.UnknownValueError: print("Sphinx could not understand audio") except sr.RequestError as e: print("Sphinx error; {0}".format(e)) </code></pre> <p>Output</p> <pre><code>Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. &gt;&gt;&gt; ================================ RESTART ================================ &gt;&gt;&gt; Say something! Sphinx thinks you said hello &gt;&gt;&gt; </code></pre> <p>Hope this is useful.</p>
1
2016-08-30T15:58:09Z
[ "python", "speech-recognition", "speech-to-text" ]
precision and recall on biased data set
39,188,687
<p>Suppose a two-class classification problem. One class has more than 95% of the labelled data, and the other class has 5%. The two classes are heavily imbalanced.</p> <p>I am doing cross validation to evaluate different classifiers. I found that if a classifier simply predicts the majority (95%) class, then even if its predictions on the other class are inaccurate, this is hard to detect from precision/recall, since the other class has only 5% of the labelled data.</p> <p>Here are the methods/metrics (using precision/recall) I am using. I am wondering if there are better metrics or methods to evaluate that take the minority 5% class into account? I currently assign a weight to the minority 5% class, but I am asking here for a more systematic way to measure performance on an imbalanced data set.</p> <p>Using scikit-learn + Python 2.7.</p> <pre><code>scores = cross_validation.cross_val_score(bdt, X, Y, cv=10, scoring='recall_weighted') print("Recall: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) scores = cross_validation.cross_val_score(bdt, X, Y, cv=10, scoring='precision_weighted') print("Precision: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) </code></pre>
0
2016-08-28T06:45:56Z
39,188,751
<p>This is a common problem in statistics, so you will find plenty of resources on the internet. Check, e.g., <a href="http://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/" rel="nofollow">8 Tactics To Combat Imbalanced Training Data</a>.</p> <p>Probably the easiest method is to resample your data. The simplest way would be to duplicate the minority class until both classes are equally represented. A statistically more sound approach would be to first learn a probability distribution for each of your classes, and then draw <em>n</em> samples for every class. Either way, you then have a balanced dataset.</p> <p>How well this works depends, of course, on your data - alternatively, simply learn on a balanced subset of your data. See the article for more options.</p>
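A minimal sketch of the duplication approach described above (random oversampling of the minority class), using only the standard library; the toy data and the 95/5 split are made up purely for illustration:

```python
import random

random.seed(0)

# Toy imbalanced dataset: 95 majority samples, 5 minority samples.
# Each sample is (features, label); the values here are placeholders.
majority = [([0.0], 0)] * 95
minority = [([1.0], 1)] * 5

# Duplicate minority samples (sampling with replacement) until both
# classes are equally represented.
extra = [random.choice(minority)
         for _ in range(len(majority) - len(minority))]
balanced = majority + minority + extra
```

After this, standard accuracy-style metrics are no longer dominated by the majority class; note, though, that oversampling must be done inside each cross-validation fold, not before splitting, or the duplicated points leak into the test set.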
3
2016-08-28T06:57:49Z
[ "python", "python-2.7", "machine-learning", "scikit-learn", "precision-recall" ]
Trying to understand Python loop using underscore and input
39,188,827
<p>One more tip - if anyone is learning Python on HackerRank, knowing this is critical for starting out.</p> <p>I'm trying to understand this code:</p> <pre><code> stamps = set() for _ in range(int(raw_input())): print 'underscore is', _ stamps.add(raw_input().strip()) print stamps </code></pre> <p>Output:</p> <pre><code> &gt;&gt;&gt;2 underscore is 0 &gt;&gt;&gt;first set(['first']) underscore is 1 &gt;&gt;&gt;second set(['second', 'first']) </code></pre> <ol> <li><p>I put 2 as the first raw input. How does the function know that I'm only looping twice? This is throwing me off because it isn't the typical...for i in xrange(0,2) structure. </p></li> <li><p>At first my thinking was the underscore repeats the last command in shell. So I added print statements in the code to see the value of underscore...but the values just show the 0 and 1, like the typical loop structure.</p></li> </ol> <p>I read through this post already and I still can't understand which of those 3 usages of underscore is being used. </p> <p><a href="http://stackoverflow.com/questions/5893163/what-is-the-purpose-of-the-single-underscore-variable-in-python">What is the purpose of the single underscore &quot;_&quot; variable in Python?</a></p> <p>I'm just starting to learn Python so easy explanations would be much appreciated!</p>
2
2016-08-28T07:10:21Z
39,188,897
<p><a href="http://stackoverflow.com/a/5893946/918959">ncoghlan's answer</a> lists 3 conventional uses for <code>_</code> in Python:</p> <blockquote> <ol> <li>To hold the result of the last executed statement in an interactive interpreter session. This precedent was set by the standard CPython interpreter, and other interpreters have followed suit</li> <li><p>For translation lookup in i18n (imported from the corresponding C conventions, I believe), as in code like: </p> <pre><code>raise forms.ValidationError(_("Please enter a correct username")) </code></pre></li> <li><p>As a general purpose "throwaway" variable name to indicate that part of a function result is being deliberately ignored, as in code like:</p> <pre><code> label, has_label, _ = text.partition(':') </code></pre></li> </ol> </blockquote> <hr> <p>Your question is which one of these is being used in the example in your code. The answer is that it is a throwaway variable (case 3), but its contents are printed <strong>here</strong> for debugging purposes.</p> <p>It is however not a general Python convention to use <code>_</code> as a loop variable if its value is used in any way. Thus you regularly might see:</p> <pre><code> for _ in range(10): print("Hello world") </code></pre> <p>where <code>_</code> immediately signals the reader that the value is not important and the loop is just repeated 10 times. </p> <p>However in code such as</p> <pre><code> for i in range(10): do_something(i) </code></pre> <p>where the value of the loop variable is used, it is the convention to use a variable name such as <code>i</code> or <code>j</code> instead of <code>_</code>.</p>
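A small self-contained illustration of case 3, the throwaway variable, combining both patterns from the answer above:

```python
# Case 3: "_" as a throwaway name for a value we deliberately ignore.
# partition() returns (head, separator, tail); the separator is discarded.
label, _, rest = 'name:sam'.partition(':')

# Repeating an action a fixed number of times without using the counter:
greetings = []
for _ in range(3):
    greetings.append('hello')
```

Nothing enforces this; `_` is an ordinary identifier, and the convention exists only to tell the reader the value is unused.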
3
2016-08-28T07:20:54Z
[ "python", "loops", "input" ]
Trying to understand Python loop using underscore and input
39,188,827
<p>One more tip - if anyone is learning Python on HackerRank, knowing this is critical for starting out.</p> <p>I'm trying to understand this code:</p> <pre><code> stamps = set() for _ in range(int(raw_input())): print 'underscore is', _ stamps.add(raw_input().strip()) print stamps </code></pre> <p>Output:</p> <pre><code> &gt;&gt;&gt;2 underscore is 0 &gt;&gt;&gt;first set(['first']) underscore is 1 &gt;&gt;&gt;second set(['second', 'first']) </code></pre> <ol> <li><p>I put 2 as the first raw input. How does the function know that I'm only looping twice? This is throwing me off because it isn't the typical...for i in xrange(0,2) structure. </p></li> <li><p>At first my thinking was the underscore repeats the last command in shell. So I added print statements in the code to see the value of underscore...but the values just show the 0 and 1, like the typical loop structure.</p></li> </ol> <p>I read through this post already and I still can't understand which of those 3 usages of underscore is being used. </p> <p><a href="http://stackoverflow.com/questions/5893163/what-is-the-purpose-of-the-single-underscore-variable-in-python">What is the purpose of the single underscore &quot;_&quot; variable in Python?</a></p> <p>I'm just starting to learn Python so easy explanations would be much appreciated!</p>
2
2016-08-28T07:10:21Z
39,188,899
<p>The underscore is like a normal variable in your program.</p>
1
2016-08-28T07:21:04Z
[ "python", "loops", "input" ]
Trying to understand Python loop using underscore and input
39,188,827
<p>One more tip - if anyone is learning Python on HackerRank, knowing this is critical for starting out.</p> <p>I'm trying to understand this code:</p> <pre><code> stamps = set() for _ in range(int(raw_input())): print 'underscore is', _ stamps.add(raw_input().strip()) print stamps </code></pre> <p>Output:</p> <pre><code> &gt;&gt;&gt;2 underscore is 0 &gt;&gt;&gt;first set(['first']) underscore is 1 &gt;&gt;&gt;second set(['second', 'first']) </code></pre> <ol> <li><p>I put 2 as the first raw input. How does the function know that I'm only looping twice? This is throwing me off because it isn't the typical...for i in xrange(0,2) structure. </p></li> <li><p>At first my thinking was the underscore repeats the last command in shell. So I added print statements in the code to see the value of underscore...but the values just show the 0 and 1, like the typical loop structure.</p></li> </ol> <p>I read through this post already and I still can't understand which of those 3 usages of underscore is being used. </p> <p><a href="http://stackoverflow.com/questions/5893163/what-is-the-purpose-of-the-single-underscore-variable-in-python">What is the purpose of the single underscore &quot;_&quot; variable in Python?</a></p> <p>I'm just starting to learn Python so easy explanations would be much appreciated!</p>
2
2016-08-28T07:10:21Z
39,217,043
<p>For anyone that is trying to understand how underscore and input work in a loop - after spending quite some time debugging and printing - here's the code that made me understand what was going on.</p> <pre><code> for _ in range(int(raw_input())): print raw_input() </code></pre> <p>User input:</p> <pre><code> 2 Dog Cat </code></pre> <p>Output:</p> <pre><code> # no output despite entering 2, but 2 is set as range - loops 2 times Dog Cat </code></pre> <p>Bonus - notice how there's an int() conversion for the first line in the for loop? </p> <p>The first input is 2, so int() converts that just fine. You can tell the first input line is being consumed by the loop header (and not printed) because putting the second input, 'Dog', through int() would yield an error - you can't turn words into integer numbers.</p>
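The same flow can be reproduced without a console by feeding the inputs from a list; the `read()` helper below is a made-up stand-in for `raw_input()`, not part of the original code, but it makes the `int()` conversion of the first line explicit:

```python
# Simulated console input: the first line is the count, the rest are values.
lines = iter(['2', 'Dog', 'Cat'])

def read():
    """Stand-in for raw_input(): return the next simulated input line."""
    return next(lines)

collected = []
for _ in range(int(read())):   # int('2') -> loop twice; int('Dog') would raise
    collected.append(read())
```

The first call to `read()` is consumed entirely by `range(int(...))`, which is why `'2'` never appears in the collected values.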
0
2016-08-30T00:16:32Z
[ "python", "loops", "input" ]
Check if a process is still running
39,188,869
<p>I have the following problem:</p> <p>I need my Python script run a bash script. In case the bash script is running more than let's say 10 seconds, I need to kill it. This is what I have so far:</p> <pre><code>cmd = ["bash", "script.sh", self.get_script_path()] process = subprocess.Popen(cmd) time.sleep(10) # process running here... procinfo = psutil.Process(process.pid) children = procinfo.children(recursive=True) for child in children: os.kill(child.pid, signal.SIGKILL) </code></pre> <p>The thing I am afraid of is this scenario: The bash script finishes in 1 second, frees its PID and the system passes the PID to another process. After 10 seconds, I kill the PID which I think it belongs to my script but it is not true and I kill some other process. The script needs to be run as root because I require <code>chroot</code> in it.</p> <p>Any ideas?</p>
1
2016-08-28T07:16:43Z
39,188,887
<p>I use the command <code>stop process_name</code> on Ubuntu to stop my process. Maybe it will be helpful.</p>
-3
2016-08-28T07:19:54Z
[ "python", "linux", "bash", "process", "psutil" ]
Check if a process is still running
39,188,869
<p>I have the following problem:</p> <p>I need my Python script run a bash script. In case the bash script is running more than let's say 10 seconds, I need to kill it. This is what I have so far:</p> <pre><code>cmd = ["bash", "script.sh", self.get_script_path()] process = subprocess.Popen(cmd) time.sleep(10) # process running here... procinfo = psutil.Process(process.pid) children = procinfo.children(recursive=True) for child in children: os.kill(child.pid, signal.SIGKILL) </code></pre> <p>The thing I am afraid of is this scenario: The bash script finishes in 1 second, frees its PID and the system passes the PID to another process. After 10 seconds, I kill the PID which I think it belongs to my script but it is not true and I kill some other process. The script needs to be run as root because I require <code>chroot</code> in it.</p> <p>Any ideas?</p>
1
2016-08-28T07:16:43Z
39,189,189
<p>I think the <a href="http://linux.die.net/man/1/timeout" rel="nofollow"><code>timeout</code></a> command is perfect for you. From the doc page:</p> <blockquote> <h2>Synopsis</h2> <p><strong>timeout</strong> <em>[OPTION] NUMBER[SUFFIX] COMMAND [ARG]...</em><br/> <strong>timeout</strong> <em>[OPTION]</em></p> <hr> <h2>Description</h2> <p>Start COMMAND, and kill it if still running after NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days.</p> <p><strong>-s</strong>, <strong>--signal</strong>=<em>SIGNAL</em><br/> specify the signal to be sent on timeout.<br/> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<em>SIGNAL</em> may be a name like 'HUP' or a number.<br/> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;See 'kill -l' for a list of signals </p> </blockquote> <p>By depending on <code>timeout</code>, you don't have to worry about the messy details of PID reuse, race conditions, etc. Those concerns are nicely encapsulated in this standard Unix utility. Another benefit is your script will resume execution immediately upon early termination by the subprocess, rather than needlessly sleeping for the full 10 seconds.</p> <p>Demo in bash:</p> <pre><code>timeout -s9 10 sleep 11; echo $?; ## Killed ## 137 timeout -s9 10 sleep 3; echo $?; ## 0 </code></pre> <p>Demo in python:</p> <pre><code>import subprocess; subprocess.Popen(['timeout','-s9','10','sleep','11']).wait(); ## -9 subprocess.Popen(['timeout','-s9','10','sleep','3']).wait(); ## 0 </code></pre>
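As an aside, if Python 3.5+ is available, the `subprocess` module can enforce the limit itself via the `timeout` argument of `subprocess.run`, with no external `timeout` utility; on expiry the child is killed and waited for before `TimeoutExpired` is raised. A sketch (the `sleep 5` child and the 1-second limit are arbitrary):

```python
import subprocess

try:
    # Kill the child if it runs longer than 1 second.
    subprocess.run(['sleep', '5'], timeout=1)
    timed_out = False
except subprocess.TimeoutExpired:
    timed_out = True
```

Like the `timeout` command, this avoids sleeping the full interval when the child exits early, and it sidesteps the PID-reuse concern because the library only signals the child it spawned.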
0
2016-08-28T08:03:28Z
[ "python", "linux", "bash", "process", "psutil" ]
Check if a process is still running
39,188,869
<p>I have the following problem:</p> <p>I need my Python script run a bash script. In case the bash script is running more than let's say 10 seconds, I need to kill it. This is what I have so far:</p> <pre><code>cmd = ["bash", "script.sh", self.get_script_path()] process = subprocess.Popen(cmd) time.sleep(10) # process running here... procinfo = psutil.Process(process.pid) children = procinfo.children(recursive=True) for child in children: os.kill(child.pid, signal.SIGKILL) </code></pre> <p>The thing I am afraid of is this scenario: The bash script finishes in 1 second, frees its PID and the system passes the PID to another process. After 10 seconds, I kill the PID which I think it belongs to my script but it is not true and I kill some other process. The script needs to be run as root because I require <code>chroot</code> in it.</p> <p>Any ideas?</p>
1
2016-08-28T07:16:43Z
39,189,276
<p>Since you are already using <code>psutil</code> I suggest you replace the calls to the <code>subprocess</code> module with calls to <a href="http://pythonhosted.org/psutil/#psutil.Popen" rel="nofollow"><code>psutil.Popen</code></a>. This class has the same interface of <code>subprocess.Popen</code> but provides all the functionality of <code>psutil.Process</code>.</p> <p>Also note that <strong>the <code>psutil</code> library pre-emptively checks for PID reuse already</strong>, at least for a number of methods including <code>terminate</code> and <code>kill</code> (just read the <a href="http://pythonhosted.org/psutil/#psutil.Popen" rel="nofollow">documentation for <code>Process</code></a>).</p> <p>This means that the following code:</p> <pre><code>cmd = ["bash", "script.sh", self.get_script_path()] process = psutil.Popen(cmd) time.sleep(10) # process running here... children = process.children(recursive=True) for child in children: child.terminate() # try to close the process "gently" first child.kill() </code></pre> <p>Note that the documentation for <code>children</code> says:</p> <blockquote> <p><strong>children(recursive=False)</strong></p> <p>Return the children of this process as a list of <code>Process</code> objects, <strong>preemptively checking whether PID has been reused.</strong></p> </blockquote> <p>In summary this means that:</p> <ol> <li>When you call <code>children</code> the <code>psutil</code> library checks that you want the children of the correct process and not one that happen to have the same pid</li> <li>When you call <code>terminate</code> or <code>kill</code> the library makes sure that you are killing your child process and not a random process with the same pid.</li> </ol>
2
2016-08-28T08:15:52Z
[ "python", "linux", "bash", "process", "psutil" ]
TFRecordReader in data input pipeline
39,188,951
<p>I'm currently stuck with an implementation problem of <code>TFRecordReader</code></p> <p>This is the setup : </p> <pre><code>trainQ = tf.train.string_input_producer(fileList) RecReader = tf.TFRecordReader() batch_strings = RecReader.read(trainQ) con,seq=tf.parse_single_sequence_example(batch_strings.value,context_features=lengths_context,sequence_features=convo_pair,name='parse_ex') encoder_inputs,decoder_inputs,enc_len,dec_len = seq['utterance'],seq['response'],con['utter_length'],con['resp_length'] mini_batch = tf.train.batch([encoder_inputs,decoder_inputs,enc_len,dec_len,decoder_inputs],batch_size,2,capacity=50*batch_size,dynamic_pad = True,enqueue_many=False) encoder_inp,decoder_inp,encoder_lens,decoder_lens,labels = mini_batch ... &lt;build rest of the model&gt; ... loss = &lt;some loss&gt; train_ops = &lt;optimizer&gt;.minimize(loss) </code></pre> <p>Now when I do <code>train_ops.run()</code>, it automatically reads off the queue and trains the model over a batch. But if I want to evaluate some intermediate variable, I cannot do <code>variable.eval()</code> since that would mean a new batch being read off the trainQ queue with different values </p> <p>One way I can think of circumventing this to use a placeholder to feed <code>parse_single_example</code> and populating the placeholder in the train loop each time. But is there a better way of doing this i.e. evaluating variables without reading off the queue again?</p> <p>Hope this is not confusing</p>
1
2016-08-28T07:29:12Z
39,189,797
<p>If you want to evaluate an intermediate layer (call it <code>conv3</code>) that depends on the batch input every 100 iterations, you can do the following:</p> <pre class="lang-py prettyprint-override"><code>for step in range(100000): if step % 100 != 0: # only run the training operation sess.run(train_op) else: # run train_op AND `conv3` at the same time _, conv3_value = sess.run([train_op, conv3]) </code></pre> <p>The trick here is to call <code>train_op</code> and <code>conv3</code> in the same call to <code>tf.Session.run</code>. That way, a batch is read off the training queue to train one step, but it is also used at the same time to compute <code>conv3</code>.</p>
0
2016-08-28T09:25:12Z
[ "python", "queue", "tensorflow" ]
Sublime Text build system for input/output in python
39,189,125
<p>I want to use Sublime Text on Ubuntu for Python programming and want to write a build system that takes input from a given file and prints output to another given file. I have done the same for C++ using this build system:</p> <pre><code>{ "cmd": ["g++ -std=c++11 ${file} -o ${file_path}/${file_base_name} &amp;&amp; ${file_path}/${file_base_name}&lt;${file_path}/inputf.in&gt;${file_path}/outputf.in"], "shell" : true } </code></pre>
0
2016-08-28T07:52:15Z
39,243,853
<p>Tushar found the solution himself by modifying the above build system as follows:</p> <pre><code>{ "cmd": ["python ${file}&lt;${file_path}/inputf.in&gt;${file_path}/outputf.in"], "shell":true } </code></pre>
0
2016-08-31T07:57:29Z
[ "python", "ubuntu", "sublimetext3" ]
Python 2.7, i want to get values between = and , in a string without using multiple split.
39,189,172
<p>Ex: "name=sam,city=london, age=24,location=abc" I want to get sam, london, 24 and abc in separate variables, without using split because it results in many useless variables. Will a regular expression help? Note: the last value will not be followed by a comma - in this case there is no comma after abc. Thank you</p>
-7
2016-08-28T08:01:43Z
39,189,196
<p>What's the useless info obtained with split?</p> <pre><code>In [365]: a = "name=sam,city=london, age=24,location=abc" In [366]: [x.split('=')[1] for x in a.split(',')] Out[366]: ['sam', 'london', '24', 'abc'] </code></pre> <p>Using regex:</p> <pre><code>In [368]: [x[1] for x in re.findall(r'(\w+=(\w+))', a)] Out[368]: ['sam', 'london', '24', 'abc'] </code></pre> <p>Explanation for regex:</p> <pre><code>(\w+=(\w+)) </code></pre> <p><img src="https://www.debuggex.com/i/8N62JtTzjt1SAJhI.png" alt="Regular expression visualization"></p> <p><a href="https://www.debuggex.com/r/8N62JtTzjt1SAJhI" rel="nofollow">Debuggex Demo</a></p>
2
2016-08-28T08:04:11Z
[ "python" ]
Python List comprehension and JSON parsing
39,189,272
<p>I'm new to Python and trying to figure out the best way to parse the values of a JSON object into an array, using a list comprehension. </p> <p>Here is my code - I'm querying the publicly available iNaturalist API and would like to take the JSON object that it returns, so that I can take specific parts of the JSON object into a numpy array:</p> <pre><code>import json import urllib2 #Set Observations URL request for Resplendent Quetzal of Costa Rica query = urllib2.urlopen("http://api.inaturalist.org/v1/observations?place_id=6924&amp;taxon_id=20856&amp;per_page=200&amp;order=desc&amp;order_by=created_at") obSet = json.load(query) #Print out Lat Long of observation n = obSet['total_results'] for i in range(n) : print obSet['results'][i]['location'] </code></pre> <p>This all works fine and gives the following output:</p> <pre><code>9.5142456535,-83.8011438905 10.2335478381,-84.8517773638 10.3358965682,-84.9964271008 10.3744851815,-84.9871494128 10.2468720343,-84.9298072822 ... </code></pre> <p>What I'd like to do next is replace the for loop with a list comprehension, and store the location value in a tuple. I'm struggling with the syntax in that I'm guessing it's something like this:</p> <pre><code>[(long,lat) for i in range(n) for (long,lat) in obSet['results'][i]['location']] </code></pre> <p>But this doesn't work...thanks for any help.</p>
0
2016-08-28T08:15:22Z
39,189,305
<p>You can iterate over the list of results directly:</p> <pre><code>print([tuple(result['location'].split(',')) for result in obSet['results']]) &gt;&gt; [('9.5142456535', '-83.8011438905'), ('10.2335478381', '-84.8517773638'), ... ] </code></pre>
2
2016-08-28T08:20:14Z
[ "python", "json", "list-comprehension" ]
Python List comprehension and JSON parsing
39,189,272
<p>I'm new to Python and trying to figure out the best way to parse the values of a JSON object into an array, using a list comprehension. </p> <p>Here is my code - I'm querying the publicly available iNaturalist API and would like to take the JSON object that it returns, so that I can take specific parts of the JSON object into a numpy array:</p> <pre><code>import json import urllib2 #Set Observations URL request for Resplendent Quetzal of Costa Rica query = urllib2.urlopen("http://api.inaturalist.org/v1/observations?place_id=6924&amp;taxon_id=20856&amp;per_page=200&amp;order=desc&amp;order_by=created_at") obSet = json.load(query) #Print out Lat Long of observation n = obSet['total_results'] for i in range(n) : print obSet['results'][i]['location'] </code></pre> <p>This all works fine and gives the following output:</p> <pre><code>9.5142456535,-83.8011438905 10.2335478381,-84.8517773638 10.3358965682,-84.9964271008 10.3744851815,-84.9871494128 10.2468720343,-84.9298072822 ... </code></pre> <p>What I'd like to do next is replace the for loop with a list comprehension, and store the location value in a tuple. I'm struggling with the syntax in that I'm guessing it's something like this:</p> <pre><code>[(long,lat) for i in range(n) for (long,lat) in obSet['results'][i]['location']] </code></pre> <p>But this doesn't work...thanks for any help.</p>
0
2016-08-28T08:15:22Z
39,189,311
<pre><code>[tuple(obSet['results'][i]['location'].split(',')) for i in range(n)] </code></pre> <p>This will return a list of tuples; the elements of the tuples are <code>unicode</code> strings.</p> <p>If you want the tuple elements as floats, do the following:</p> <pre><code>[tuple(map(float,obSet['results'][i]['location'].split(','))) for i in range(n)] </code></pre>
2
2016-08-28T08:20:51Z
[ "python", "json", "list-comprehension" ]
Python List comprehension and JSON parsing
39,189,272
<p>I'm new to Python and trying to figure out the best way to parse the values of a JSON object into an array, using a list comprehension. </p> <p>Here is my code - I'm querying the publicly available iNaturalist API and would like to take the JSON object that it returns, so that I can take specific parts of the JSON object into a numpy array:</p> <pre><code>import json import urllib2 #Set Observations URL request for Resplendent Quetzal of Costa Rica query = urllib2.urlopen("http://api.inaturalist.org/v1/observations?place_id=6924&amp;taxon_id=20856&amp;per_page=200&amp;order=desc&amp;order_by=created_at") obSet = json.load(query) #Print out Lat Long of observation n = obSet['total_results'] for i in range(n) : print obSet['results'][i]['location'] </code></pre> <p>This all works fine and gives the following output:</p> <pre><code>9.5142456535,-83.8011438905 10.2335478381,-84.8517773638 10.3358965682,-84.9964271008 10.3744851815,-84.9871494128 10.2468720343,-84.9298072822 ... </code></pre> <p>What I'd like to do next is replace the for loop with a list comprehension, and store the location value in a tuple. I'm struggling with the syntax in that I'm guessing it's something like this:</p> <pre><code>[(long,lat) for i in range(n) for (long,lat) in obSet['results'][i]['location']] </code></pre> <p>But this doesn't work...thanks for any help.</p>
0
2016-08-28T08:15:22Z
39,189,338
<p>Another way to get a list of [long, lat] pairs without a list comprehension:</p> <pre><code>In [14]: map(lambda x: obSet['results'][x]['location'].split(','), range(obSet['total_results'])) Out[14]: [[u'9.5142456535', u'-83.8011438905'], [u'10.2335478381', u'-84.8517773638'], [u'10.3358965682', u'-84.9964271008'], [u'10.3744851815', u'-84.9871494128'], ... </code></pre> <p>If you would like a list of tuples instead:</p> <pre><code>In [14]: map(lambda x: tuple(obSet['results'][x]['location'].split(',')), range(obSet['total_results'])) Out[14]: [(u'9.5142456535', u'-83.8011438905'), (u'10.2335478381', u'-84.8517773638'), (u'10.3358965682', u'-84.9964271008'), (u'10.3744851815', u'-84.9871494128'), ... </code></pre> <p>If you want to convert to floats too:</p> <pre><code>In [17]: map(lambda x: tuple(map(float, obSet['results'][x]['location'].split(','))), range(obSet['total_results'])) Out[17]: [(9.5142456535, -83.8011438905), (10.2335478381, -84.8517773638), (10.3358965682, -84.9964271008), (10.3744851815, -84.9871494128), (10.2468720343, -84.9298072822), (10.3456659939, -84.9451804822), ... </code></pre>
2
2016-08-28T08:25:45Z
[ "python", "json", "list-comprehension" ]
Python List comprehension and JSON parsing
39,189,272
<p>I'm new to Python and trying to figure out the best way to parse the values of a JSON object into an array, using a list comprehension. </p> <p>Here is my code - I'm querying the publicly available iNaturalist API and would like to take the JSON object that it returns, so that I can take specific parts of the JSON object into a numpy array:</p> <pre><code>import json import urllib2 #Set Observations URL request for Resplendent Quetzal of Costa Rica query = urllib2.urlopen("http://api.inaturalist.org/v1/observations?place_id=6924&amp;taxon_id=20856&amp;per_page=200&amp;order=desc&amp;order_by=created_at") obSet = json.load(query) #Print out Lat Long of observation n = obSet['total_results'] for i in range(n) : print obSet['results'][i]['location'] </code></pre> <p>This all works fine and gives the following output:</p> <pre><code>9.5142456535,-83.8011438905 10.2335478381,-84.8517773638 10.3358965682,-84.9964271008 10.3744851815,-84.9871494128 10.2468720343,-84.9298072822 ... </code></pre> <p>What I'd like to do next is replace the for loop with a list comprehension, and store the location value in a tuple. I'm struggling with the syntax in that I'm guessing it's something like this:</p> <pre><code>[(long,lat) for i in range(n) for (long,lat) in obSet['results'][i]['location']] </code></pre> <p>But this doesn't work...thanks for any help.</p>
0
2016-08-28T08:15:22Z
39,189,350
<p>The direct translation of your code into a list comprehension is:</p> <pre><code>positions = [obSet['results'][i]['location'] for i in range(obSet['total_results'])] </code></pre> <p>The <code>obSet['total_results']</code> is informative but not needed, you could just loop over <code>obSet['results']</code> directly and use each resulting dictionary:</p> <pre><code>positions = [res['location'] for res in obSet['results']] </code></pre> <p>Now you have a list of <em>strings</em> however, as each <code>'location'</code> is still the <code>long,lat</code> formatted string you printed before.</p> <p>Split that string and convert the result into a sequence of floats:</p> <pre><code>positions = [map(float, res['location'].split(',')) for res in obSet['results']] </code></pre> <p>Now you have a list of lists with floating point values:</p> <pre><code>&gt;&gt;&gt; [map(float, res['location'].split(',')) for res in obSet['results']] [[9.5142456535, -83.8011438905], [10.2335478381, -84.8517773638], [10.3358965682, -84.9964271008], [10.3744851815, -84.9871494128], [10.2468720343, -84.9298072822], [10.3456659939, -84.9451804822], [10.3611732346, -84.9450302597], [10.3174360636, -84.8798676791], [10.325110706, -84.939710318], [9.4098152454, -83.9255607577], [9.4907141714, -83.9240819199], [9.562637289, -83.8170178428], [9.4373885911, -83.8312881263], [9.4766746409, -83.8120952573], [10.2651190176, -84.6360466565], [9.6572995298, -83.8322965118], [9.6997991784, -83.9076919066], [9.6811177044, -83.8487647156], [9.7416717045, -83.929327673], [9.4885099275, -83.9583968683], [10.1233252667, -84.5751029683], [9.4411815757, -83.824401543], [9.4202687169, -83.9550344212], [9.4620656621, -83.665183105], [9.5861809119, -83.8358881552], [9.4508914243, -83.9054016165], [9.4798058284, -83.9362558497], [9.5970449879, -83.8969131893], [9.5855562829, -83.8354434596], [10.2366179555, -84.854847472], [9.718459702, -83.8910277016], [9.4424384874, -83.8880459793], [9.5535916157, 
-83.9578166199], [10.4124554163, -84.9796942349], [10.0476688795, -84.298227929], [10.2129436252, -84.8384097435], [10.2052632717, -84.6053701877], [10.3835784147, -84.8677930134], [9.6079669672, -83.9084281155], [10.3583643315, -84.8069762134], [10.3975986735, -84.9196996767], [10.2060835381, -84.9698814407], [10.3322929317, -84.8805587129], [9.4756504472, -83.963818143], [10.3997876964, -84.9127311339], [10.1777433853, -84.0673088686], [10.3346128571, -84.9306278215], [9.5193346195, -83.9404786293], [9.421538224, -83.7689452093], [9.430427837, -83.9532672942], [10.3243212895, -84.9653175843], [10.021698503, -83.885674888]] </code></pre> <p>If you <em>must</em> have tuples rather than lists, add a <code>tuple()</code> call:</p> <pre><code>positions = [tuple(map(float, res['location'].split(','))) for res in obSet['results']] </code></pre> <p>The latter also makes sure the expression works in Python 3 (where <code>map()</code> returns an iterator, not a list); you'd otherwise have to use a nested list comprehension:</p> <pre><code># produce a list of lists in Python 3 positions = [[float(p) for p in res['location'].split(',')] for res in obSet['results']] </code></pre>
2
2016-08-28T08:27:52Z
[ "python", "json", "list-comprehension" ]
Python List comprehension and JSON parsing
39,189,272
<p>I'm new to Python and trying to figure out the best way to parse the values of a JSON object into an array, using a list comprehension. </p> <p>Here is my code - I'm querying the publicly available iNaturalist API and would like to take the JSON object that it returns, so that I can take specific parts of the JSON object into a numpy array:</p> <pre><code>import json import urllib2 #Set Observations URL request for Resplendent Quetzal of Costa Rica query = urllib2.urlopen("http://api.inaturalist.org/v1/observations?place_id=6924&amp;taxon_id=20856&amp;per_page=200&amp;order=desc&amp;order_by=created_at") obSet = json.load(query) #Print out Lat Long of observation n = obSet['total_results'] for i in range(n) : print obSet['results'][i]['location'] </code></pre> <p>This all works fine and gives the following output:</p> <pre><code>9.5142456535,-83.8011438905 10.2335478381,-84.8517773638 10.3358965682,-84.9964271008 10.3744851815,-84.9871494128 10.2468720343,-84.9298072822 ... </code></pre> <p>What I'd like to do next is replace the for loop with a list comprehension, and store the location value in a tuple. I'm struggling with the syntax in that I'm guessing it's something like this:</p> <pre><code>[(long,lat) for i in range(n) for (long,lat) in obSet['results'][i]['location']] </code></pre> <p>But this doesn't work...thanks for any help.</p>
0
2016-08-28T08:15:22Z
39,189,361
<p>The correct way to get <strong>a list of tuples</strong> using a list comprehension would be:</p> <pre><code>def to_tuple(coords_str):
    return tuple(coords_str.split(','))

output_list = [to_tuple(obSet['results'][i]['location'])
               for i in range(obSet['total_results'])]
</code></pre> <p>You can of course replace <code>to_tuple()</code> with a lambda function; I just wanted to make the example clear. Moreover, you could use <code>map()</code> to get a tuple of floats instead of strings: <code>return tuple(map(float, coords_str.split(',')))</code>.</p>
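<p>For illustration, here is the lambda variant mentioned above, applied to a small made-up stand-in for <code>obSet</code> (shape matches the question, coordinate values shortened):</p>

```python
# Stand-in for the API response (shape from the question, values made up).
obSet = {
    'total_results': 2,
    'results': [{'location': '9.51,-83.80'},
                {'location': '10.23,-84.85'}],
}

# to_tuple() as a lambda, with the float conversion folded in.
to_tuple = lambda coords_str: tuple(map(float, coords_str.split(',')))

output_list = [to_tuple(obSet['results'][i]['location'])
               for i in range(obSet['total_results'])]
print(output_list)  # [(9.51, -83.8), (10.23, -84.85)]
```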
1
2016-08-28T08:29:18Z
[ "python", "json", "list-comprehension" ]
Python List comprehension and JSON parsing
39,189,272
<p>I'm new to Python and trying to figure out the best way to parse the values of a JSON object into an array, using a list comprehension.</p> <p>Here is my code - I'm querying the publicly available iNaturalist API and would like to take specific parts of the JSON object that it returns into a numpy array:</p> <pre><code>import json
import urllib2

#Set Observations URL request for Resplendent Quetzal of Costa Rica
query = urllib2.urlopen("http://api.inaturalist.org/v1/observations?place_id=6924&amp;taxon_id=20856&amp;per_page=200&amp;order=desc&amp;order_by=created_at")
obSet = json.load(query)

#Print out Lat Long of observation
n = obSet['total_results']
for i in range(n):
    print obSet['results'][i]['location']
</code></pre> <p>This all works fine and gives the following output:</p> <pre><code>9.5142456535,-83.8011438905
10.2335478381,-84.8517773638
10.3358965682,-84.9964271008
10.3744851815,-84.9871494128
10.2468720343,-84.9298072822
...
</code></pre> <p>What I'd like to do next is replace the for loop with a list comprehension, and store the location values in tuples. I'm struggling with the syntax; my guess is something like this:</p> <pre><code>[(long,lat) for i in range(n) for (long,lat) in obSet['results'][i]['location']]
</code></pre> <p>But this doesn't work... thanks for any help.</p>
0
2016-08-28T08:15:22Z
39,189,369
<p><code>obSet['results']</code> is a list, no need to use <code>range</code> to iterate over it:</p> <pre><code>for item in obSet['results']:
    print(item['location'])
</code></pre> <p>To make this into a list comprehension you can write:</p> <pre><code>[item['location'] for item in obSet['results']]
</code></pre> <p>But each location is coded as a string, instead of a list or tuple of floats. To get it into the proper format, use</p> <pre><code>[tuple(float(coord) for coord in item['location'].split(',')) for item in obSet['results']]
</code></pre> <p>That is, split the <code>item['location']</code> string into parts using <code>,</code> as the delimiter, then convert each part into a float, and make a tuple of these float coordinates.</p>
1
2016-08-28T08:30:56Z
[ "python", "json", "list-comprehension" ]
Python List comprehension and JSON parsing
39,189,272
<p>I'm new to Python and trying to figure out the best way to parse the values of a JSON object into an array, using a list comprehension.</p> <p>Here is my code - I'm querying the publicly available iNaturalist API and would like to take specific parts of the JSON object that it returns into a numpy array:</p> <pre><code>import json
import urllib2

#Set Observations URL request for Resplendent Quetzal of Costa Rica
query = urllib2.urlopen("http://api.inaturalist.org/v1/observations?place_id=6924&amp;taxon_id=20856&amp;per_page=200&amp;order=desc&amp;order_by=created_at")
obSet = json.load(query)

#Print out Lat Long of observation
n = obSet['total_results']
for i in range(n):
    print obSet['results'][i]['location']
</code></pre> <p>This all works fine and gives the following output:</p> <pre><code>9.5142456535,-83.8011438905
10.2335478381,-84.8517773638
10.3358965682,-84.9964271008
10.3744851815,-84.9871494128
10.2468720343,-84.9298072822
...
</code></pre> <p>What I'd like to do next is replace the for loop with a list comprehension, and store the location values in tuples. I'm struggling with the syntax; my guess is something like this:</p> <pre><code>[(long,lat) for i in range(n) for (long,lat) in obSet['results'][i]['location']]
</code></pre> <p>But this doesn't work... thanks for any help.</p>
0
2016-08-28T08:15:22Z
39,189,407
<p>Let's try to give this a shot, starting with just 1 location:</p> <p><code>&gt;&gt;&gt; (long, lat) = obSet['results'][0]['location']</code></p> <pre><code>Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
ValueError: too many values to unpack
</code></pre> <p>Alright, so that didn't work, but why? It's because the longitude and latitude coordinates are just 1 string, so you can't unpack it immediately as a tuple. We must first separate it into two different strings.</p> <p><code>&gt;&gt;&gt; (long, lat) = obSet['results'][0]['location'].split(",")</code></p> <p>From here we will want to iterate through the whole set of results, which we know are indexed from 0 to n. <code>tuple(obSet['results'][i]['location'].split(","))</code> will give us the tuple of longitude, latitude for the result at index i, so:<br> <code>&gt;&gt;&gt; [tuple(obSet['results'][i]['location'].split(",")) for i in range(n)]</code><br> ought to give us the set of tuples we want.</p>
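<p>The walkthrough above, condensed into a runnable sketch (the coordinate string is one line of the question's output):</p>

```python
location = "9.5142456535,-83.8011438905"  # one 'location' value from the API

# Unpacking the raw string fails: iterating a string yields single characters.
try:
    (long, lat) = location
except ValueError as exc:
    print(exc)  # too many values to unpack (expected 2)

# Splitting on the comma first gives exactly two fields to unpack.
(long, lat) = location.split(",")
print((long, lat))  # ('9.5142456535', '-83.8011438905')
```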
1
2016-08-28T08:35:55Z
[ "python", "json", "list-comprehension" ]
Read a block of text from a text file in Python
39,189,324
<p><strong>Input File - input.csv</strong></p> <pre><code>#######A Result:#########
2016-07-27 bar 51 14
2015-06-27 roujri 30 86
#######B Result:#########
2016-08-26 foo 34 83
2016-08-26 foo 34 83
#########################
</code></pre> <p><strong>Output result</strong></p> <pre><code>A result:
    Col-1: 81
    Col-2: 100
B result:
    Col-1: 68
    Col-2: 166
</code></pre> <p>I am trying to produce the output above from the input above. So far I can only read the first block of text. I want a more generic function, so that I only initialise the variables that need to be read within a block instead of hard-coding the delimiters (e.g. <code>#######A Result:#########</code>), and then pass the block's contents to another function that sums up the values. Any suggestion will be greatly appreciated. Thanks :)</p> <pre><code>import re

def reading_block_text_file(infile):
    with open(infile) as fp:
        for result in re.findall('#######A Result:#########(.*?)#######B Result:#########', fp.read(), re.S):
            print result,

reading_block_text_file(input_file)
</code></pre>
-2
2016-08-28T08:23:33Z
39,189,538
<p>Throw in a little bit of regex:</p> <pre><code>$ cat a
#######A Result:#########
2016-07-27 bar 51 14
2015-06-27 roujri 30 86
#######B Result:#########
2016-08-26 foo 34 83
2016-08-26 foo 34 83
#########################
$ cat a.py
import re

col_names = ['abc', 'xyz']
with open("/tmp/a", "r") as f:
    tables = re.findall(r'#+(\w+ Result:)#+([^#]*)', f.read(), re.S)
for table in tables:
    name = table[0]
    rows = table[1].strip().split('\n')
    print name
    for i in range(len(col_names)):
        print "\t{}: {}".format(col_names[i], sum(map(lambda x: int(x.split()[i + 2]), rows)))
$ python a.py
A Result:
        abc: 81
        xyz: 100
B Result:
        abc: 68
        xyz: 166
</code></pre> <p>Regex explanation:</p> <pre><code>#+(\w+ Result:)#+([^#]*)
</code></pre> <p><img src="https://www.debuggex.com/i/Bito9rGLY6SLIMCx.png" alt="Regular expression visualization"></p> <p><a href="https://www.debuggex.com/r/Bito9rGLY6SLIMCx" rel="nofollow">Debuggex Demo</a></p>
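<p>The same regex approach as a self-contained sketch (Python 3 syntax, the question's sample text inlined, column labels omitted for brevity):</p>

```python
import re

# The question's input file, inlined for the example.
text = """#######A Result:#########
2016-07-27 bar 51 14
2015-06-27 roujri 30 86
#######B Result:#########
2016-08-26 foo 34 83
2016-08-26 foo 34 83
#########################"""

totals = {}
for name, body in re.findall(r'#+(\w+ Result:)#+([^#]*)', text):
    rows = body.strip().split('\n')
    # Numeric values live in columns 3 and 4 of each data row.
    totals[name] = [sum(int(row.split()[i + 2]) for row in rows)
                    for i in range(2)]

print(totals)
# {'A Result:': [81, 100], 'B Result:': [68, 166]}
```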
1
2016-08-28T08:55:02Z
[ "python", "file", "text", "block" ]
pandas plot automatically assigning color to categories
39,189,368
<p>I want to do a bar plot of my dataframe such that the categorical column (<code>'ad'</code>) defines the colors of my bar charts. This is my data:</p> <pre><code>"date","shown","clicked","converted","avg_cost_per_click","total_revenue","ad"
2015-10-01,65877,2339,43,0.9,641.62,"ad_group_1"
2015-10-02,65100,2498,38,0.94,756.37,"ad_group_1"
2015-10-03,70658,2313,49,0.86,970.9,"ad_group_2"
2015-10-04,69809,2833,51,1.01,907.39,"ad_group_2"
2015-10-05,68186,2696,41,1,879.45,"ad_group_3"
2015-10-06,66864,2617,46,0.98,746.48,"ad_group_3"
2015-10-07,68227,2390,42,0.94,462.33,"ad_group_4"
2015-10-08,68520,2909,46,1.07,441.28,"ad_group_4"
2015-10-09,67250,2385,49,0.88,602.14,"ad_group_5"
</code></pre> <p>I have up to 40 ad groups.</p> <p>My code:</p> <pre><code>columns = ['date','converted','clicked','ad']
df2 = pd.DataFrame(df, columns=columns)
df2.set_index(df2.date, inplace=True)
</code></pre> <p>Here I take only 1 column (<code>'clicked'</code>) to plot against <code>'date'</code>:</p> <pre><code>plt.figure()
df2.loc[:,['clicked','ad']].plot(kind='bar')
plt.show()
#df2.loc[:,['clicked','ad']].plot(kind='bar',colormap='ad')
</code></pre> <p><a href="http://i.stack.imgur.com/2sGwR.png" rel="nofollow"><img src="http://i.stack.imgur.com/2sGwR.png" alt="enter image description here"></a></p> <p>The line of code with <code>colormap</code> doesn't work because I don't have a color list. I've seen answers where the user manually created a mapping of categories to colors, but I have over 40 categories and can't do it manually.</p> <p>Is there a way to have each distinct value of the <code>'ad'</code> column automatically assigned a color?</p> <p>Looking for something like this: <a href="http://i.stack.imgur.com/EJezC.png" rel="nofollow"><img src="http://i.stack.imgur.com/EJezC.png" alt="enter image description here"></a></p>
0
2016-08-28T08:30:52Z
39,189,874
<p>You could get distinct colours for each group present in the <code>ad</code> column as shown:</p> <pre><code>df.set_index(['date'], inplace=True)

# Empty lists to append to later
grouped_list = []
label_list = []

# Iterating through each group of 'clicked', grouped by ad
for label, key in df.groupby(['ad'])['clicked']:
    grouped_list.append(key)
    label_list.append(label)

# Concatenating the grouped list column-wise and filling NaNs with 0's
df_grouped_bar = pd.concat(grouped_list, axis=1).fillna(0)

# Renaming columns created to take on new names
df_grouped_bar.columns = label_list

# Bar plot with a chosen colormap
ax = df_grouped_bar.plot(kind='bar', stacked=True, figsize=(6,6),
                         width=0.2, ylim=(0,5000), cmap=plt.cm.rainbow)

# Figure aesthetics
ax.set_xticklabels(df_grouped_bar.index.format())
plt.ylabel('clicked')
plt.tight_layout()
plt.show()
</code></pre> <p><a href="http://i.stack.imgur.com/KwdYe.png" rel="nofollow"><img src="http://i.stack.imgur.com/KwdYe.png" alt="Image"></a></p>
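<p>For comparison, a lighter sketch of the same idea (using only the first four rows of the question's data): <code>DataFrame.pivot</code> yields one column per ad group, and <code>plot</code> then draws each column in a distinct colour taken from the chosen colormap:</p>

```python
import pandas as pd

# First four rows of the question's data.
df = pd.DataFrame({
    'date': ['2015-10-01', '2015-10-02', '2015-10-03', '2015-10-04'],
    'clicked': [2339, 2498, 2313, 2833],
    'ad': ['ad_group_1', 'ad_group_1', 'ad_group_2', 'ad_group_2'],
})

# One column per ad group; 0 where a group has no observation for a date.
wide = df.pivot(index='date', columns='ad', values='clicked').fillna(0)
print(wide)

# wide.plot(kind='bar', stacked=True, cmap='rainbow')  # one colour per group
```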
0
2016-08-28T09:33:46Z
[ "python", "pandas", "matplotlib" ]