title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
How to update/refresh qTreewidget? | 39,279,792 | <p>I have this little UI that lists text and python script files for a given path, but when I try to UPDATE or REPLACE the list with new items, I think the data is placed there, but they appear blank. I assume it has something to do with not telling the UI that data has been updated and that the UI has to be redrawn?</p>
<p>Here is my code:</p>
<pre><code>import os
import platform
import shutil
import subprocess
import sys
import time

from PyQt4 import QtCore, QtGui

try:
    _fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
    def _fromUtf8(s):
        return s

try:
    _encoding = QtGui.QApplication.UnicodeUTF8

    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig, _encoding)
except AttributeError:
    def _translate(context, text, disambig):
        return QtGui.QApplication.translate(context, text, disambig)

class Window(QtGui.QWidget):
    def __init__(self):
        super(Window, self).__init__()
        self.fileListerObject = fileListGenerator(self)
        self.setObjectName(_fromUtf8("Form"))
        self.resize(350, 300)
        mainSizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.Fixed)
        mainSizePolicy.setHorizontalStretch(0)
        mainSizePolicy.setVerticalStretch(0)
        mainSizePolicy.setHeightForWidth(self.sizePolicy().hasHeightForWidth())
        self.setSizePolicy(mainSizePolicy)
        self.setSizePolicy(mainSizePolicy)
        self.uiName = 'Tree test'
        self.setWindowTitle(self.uiName)
        self.windowElements()

    def windowElements(self):
        verticalLayout = QtGui.QVBoxLayout(self)
        verticalLayout.setObjectName(_fromUtf8("verticalLayout"))
        self.inputLineEdit = QtGui.QLineEdit(self)
        verticalLayout.addWidget(self.inputLineEdit)
        buttonThing = QtGui.QPushButton(self)
        buttonThing.setText('SET')
        verticalLayout.addWidget(buttonThing)
        self.connect(buttonThing, QtCore.SIGNAL("clicked()"), self.fileListerObject.fileLister)
        self.treeWidget = QtGui.QTreeWidget(self)
        self.treeWidget.setIndentation(5)
        self.treeWidget.setAllColumnsShowFocus(True)
        self.treeWidget.setObjectName(_fromUtf8("treeWidget"))
        self.treeWidget.header().setSortIndicatorShown(True)
        self.treeWidget.header().setSortIndicator(0, QtCore.Qt.AscendingOrder)
        self.treeWidget.setSortingEnabled(True)
        self.treeWidget.headerItem().setText(0, "file name")
        self.treeWidget.headerItem().setText(1, "date")
        self.treeWidget.headerItem().setText(2, "type")
        self.__sortingEnabled = self.treeWidget.isSortingEnabled()
        self.treeWidget.setSortingEnabled(False)
        verticalLayout.addWidget(self.treeWidget)
        buttonLaunchApp = QtGui.QPushButton(self)
        buttonLaunchApp.setText('OPEN')
        verticalLayout.addWidget(buttonLaunchApp)
        self.show()

class fileListGenerator():
    def __init__(self, parentWindow):
        self.parentWindow = parentWindow

    def fileLister(self):
        directoryPath = self.parentWindow.inputLineEdit.text()
        dirPath = directoryPath
        fileList = os.listdir(dirPath)
        projectFileList = []
        if len(fileList) != 0:
            for file in fileList:
                if (file.endswith('.txt') != False) or (file.endswith('.py') != False):
                    projectFileList.append(file)
        print projectFileList
        itemID = 0
        for item in projectFileList:
            filePath = dirPath + '\\' + item
            threePartSplit = item.rpartition(".")
            extension = threePartSplit[2]
            dateTimeStamp = time.ctime(os.path.getmtime(filePath))
            QDate = QtCore.QDateTime.fromString(str(dateTimeStamp), 'ddd MMM dd HH:mm:ss yyyy')
            QtGui.QTreeWidgetItem(self.parentWindow.treeWidget)
            self.parentWindow.treeWidget.topLevelItem(itemID).setText(0, item)
            self.parentWindow.treeWidget.topLevelItem(itemID).setData(1, QtCore.Qt.DisplayRole, QDate.toString('yyyy-MM-dd HH:mm:ss'))
            self.parentWindow.treeWidget.topLevelItem(itemID).setText(2, extension)
            itemID += 1
        self.parentWindow.treeWidget.setSortingEnabled(self.parentWindow._Window__sortingEnabled)

def run():
    app = QtGui.QApplication(sys.argv)
    GUI = Window()
    sys.exit(app.exec_())

run()
</code></pre>
<p>I tried looking this up and I came up with an answer of using a QAbstractItem model? But from what I understand you can only use those with a qTreeVIEW, and I'm using the widget version. I'm not doing much with the data in the item so I assume I wouldn't be needing a qTreeview, so I'd like to keep it as a widget. Is there a way with this setup to have the UI update the data correctly? Or do I need to completely reformat this?</p>
<p>Thank you in advance.</p>
| 0 | 2016-09-01T19:48:58Z | 39,289,995 | <p>Honestly, I don't quite get your code. But if you want to add new items to a <code>QTreeWidget</code>, here's how you do it:</p>
<pre><code>item = QTreeWidgetItem()
item.setText(0, "John") # first name
item.setText(1, "Doe") # last name
item.setText(2, "35") # age
self.treeWidget.addTopLevelItem(item)
</code></pre>
| 0 | 2016-09-02T10:17:15Z | [
"python",
"python-2.7",
"qt",
"pyqt",
"pyqt4"
] |
How to define and instantiate a derived class at once in python? | 39,279,810 | <p>I have a base class that I want to derive and instantiate together. I can do that in java like: </p>
<pre><code>BaseClass derivedClassInstance = new BaseClass() {
    @Override
    void someBaseClassMethod() { /* my statements */ }
};
</code></pre>
<p>In python I can derive and instantiate a base class like: </p>
<pre><code>class DerivedClass(BaseClass):
    def some_base_class_method():
        # my statements

derived_class_instance = DerivedClass()
</code></pre>
<p>I need to sub-class single instances of some objects with minor changes. Deriving and assigning them separately seems like overkill. </p>
<p>Is there a Java-like <em>one-liner</em> way to derive and instantiate a class on the fly? Or is there a more concise way to do what I did in python?</p>
| 0 | 2016-09-01T19:50:03Z | 39,280,023 | <p>I think you are looking for metaclass programming. </p>
<pre><code>class Base(object):
    def test(self):
        print 'hit'

a = type('Derived', (Base,), {})()
a.test()
# output: hit
</code></pre>
| 0 | 2016-09-01T20:04:09Z | [
"python",
"inheritance"
] |
How to define and instantiate a derived class at once in python? | 39,279,810 | <p>I have a base class that I want to derive and instantiate together. I can do that in java like: </p>
<pre><code>BaseClass derivedClassInstance = new BaseClass() {
    @Override
    void someBaseClassMethod() { /* my statements */ }
};
</code></pre>
<p>In python I can derive and instantiate a base class like: </p>
<pre><code>class DerivedClass(BaseClass):
    def some_base_class_method():
        # my statements

derived_class_instance = DerivedClass()
</code></pre>
<p>I need to sub-class single instances of some objects with minor changes. Deriving and assigning them separately seems like overkill. </p>
<p>Is there a Java-like <em>one-liner</em> way to derive and instantiate a class on the fly? Or is there a more concise way to do what I did in python?</p>
 | 0 | 2016-09-01T19:50:03Z | 39,280,047 | <p>In general you won't see this kind of code, because it is difficult to read and understand. I really suggest you find an alternative and avoid what comes next. Having said that, you can create a class and an instance in one single line, like this:</p>
<pre><code>>>> class BaseClass(object):
...     def f1(self, x):
...         return 2
...     def f2(self, y):
...         return self.f1(y) + y
...
>>>
>>> W = BaseClass()
>>> W.f2(2)
4
>>> X = type('DerivedClass', (BaseClass,), {'f1': (lambda self, x: (x + x))})()
>>> X.f2(2)
6
</code></pre>
| 2 | 2016-09-01T20:05:30Z | [
"python",
"inheritance"
] |
Use None instead of np.nan for null values in pandas DataFrame | 39,279,824 | <p>I have a pandas DataFrame with mixed data types. I would like to replace all null values with None (instead of default np.nan). For some reason, this appears to be nearly impossible. </p>
<p>In reality my DataFrame is read in from a csv, but here is a simple DataFrame with mixed data types to illustrate my problem. </p>
<pre><code>df = pd.DataFrame(index=[0], columns=range(5))
df.iloc[0] = [1, 'two', np.nan, 3, 4]
</code></pre>
<p>I can't do:</p>
<pre><code>>>> df.fillna(None)
ValueError: must specify a fill method or value
</code></pre>
<p>nor:</p>
<pre><code>>>> df[df.isnull()] = None
TypeError: Cannot do inplace boolean setting on mixed-types with a non np.nan value
</code></pre>
<p>nor:</p>
<pre><code>>>> df.replace(np.nan, None)
TypeError: cannot replace [nan] with method pad on a DataFrame
</code></pre>
<p>I used to have a DataFrame with only string values, so I could do:</p>
<pre><code>>>> df[df == ""] = None
</code></pre>
<p>which worked. But now that I have mixed datatypes, it's a no go.</p>
<p>For various reasons about my code, it would be helpful to be able to use None as my null value. Is there a way I can set the null values to None? Or do I just have to go back through my other code and make sure I'm using np.isnan or pd.isnull everywhere? </p>
| 2 | 2016-09-01T19:51:12Z | 39,279,898 | <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.where.html" rel="nofollow">Use <code>pd.DataFrame.where</code></a><br>
Uses <code>df</code> value when condition is met, otherwise uses <code>None</code></p>
<pre><code>df.where(df.notnull(), None)
</code></pre>
<p><a href="http://i.stack.imgur.com/sMhz6.png" rel="nofollow"><img src="http://i.stack.imgur.com/sMhz6.png" alt="enter image description here"></a></p>
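If the goal is for <code>None</code> to survive in a mixed-dtype frame (for example before inserting into a database), casting to <code>object</code> dtype first keeps pandas from coercing <code>None</code> back to <code>NaN</code> in numeric columns. A minimal sketch, assuming a small made-up frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan], 'b': ['two', None]})

# Cast to object first so numeric columns can hold None, then keep
# original values where non-null and fill the null positions with None.
out = df.astype(object).where(df.notnull(), None)
```

After this, every null cell holds a real `None` (e.g. `out.iloc[1, 0] is None` is true) instead of `np.nan`.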
| 2 | 2016-09-01T19:55:24Z | [
"python",
"pandas",
"dataframe"
] |
How to draw a graphical count table in pandas | 39,279,858 | <p>I have a dataframe df with two columns <code>customer1</code> and <code>customer2</code> which are string valued. I would like to make a square graphical representation of the count number for each pair from those two columns. </p>
<p>I can do</p>
<pre><code>df[['customer1', 'customer2']].value_counts()
</code></pre>
<p>which will give me the counts. But how can I make something that looks a little like:</p>
<p><a href="http://i.stack.imgur.com/GZ6yI.png" rel="nofollow"><img src="http://i.stack.imgur.com/GZ6yI.png" alt="enter image description here"></a> </p>
<p>from the result?</p>
<p>I can't provide my real dataset but here is a toy example with three labels in csv.</p>
<pre><code>customer1,customer2
a,b
a,c
a,c
b,a
b,c
b,c
c,c
a,a
b,c
b,c
</code></pre>
| 2 | 2016-09-01T19:53:24Z | 39,280,203 | <p><strong>UPDATE:</strong> </p>
<blockquote>
<p>Is it possible to sort the rows/columns so the highest count rows are
at the top ? In this case the order would be b,a,c</p>
</blockquote>
<p>IIUC you can do it this way:</p>
<pre><code>In [80]: x = df.pivot_table(index='customer1',columns='customer2',aggfunc='size',fill_value=0)
In [81]: idx = x.max(axis=1).sort_values(ascending=0).index
In [82]: idx
Out[82]: Index(['b', 'a', 'c'], dtype='object', name='customer1')
In [87]: sns.heatmap(x[idx].reindex(idx), annot=True)
Out[87]: <matplotlib.axes._subplots.AxesSubplot at 0x9ee3f98>
</code></pre>
<p><a href="http://i.stack.imgur.com/sIVen.png" rel="nofollow"><img src="http://i.stack.imgur.com/sIVen.png" alt="enter image description here"></a></p>
<p><strong>OLD answer:</strong></p>
<p>you can use <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.heatmap.html" rel="nofollow">heatmap()</a> method from <code>seaborn</code> module:</p>
<pre><code>In [42]: import seaborn as sns
In [43]: df
Out[43]:
customer1 customer2
0 a b
1 a c
2 a c
3 b a
4 b c
5 b c
6 c c
7 a a
8 b c
9 b c
In [44]: x = df.pivot_table(index='customer1',columns='customer2',aggfunc='size',fill_value=0)
In [45]: x
Out[45]:
customer2 a b c
customer1
a 1 1 2
b 1 0 4
c 0 0 1
In [46]: sns.heatmap(x)
Out[46]: <matplotlib.axes._subplots.AxesSubplot at 0xb150b70>
</code></pre>
<p><a href="http://i.stack.imgur.com/YMVgk.png" rel="nofollow"><img src="http://i.stack.imgur.com/YMVgk.png" alt="enter image description here"></a></p>
<p>or with annotations:</p>
<pre><code>In [48]: sns.heatmap(x, annot=True)
Out[48]: <matplotlib.axes._subplots.AxesSubplot at 0xc596d68>
</code></pre>
<p><a href="http://i.stack.imgur.com/dN2YV.png" rel="nofollow"><img src="http://i.stack.imgur.com/dN2YV.png" alt="enter image description here"></a></p>
| 2 | 2016-09-01T20:15:14Z | [
"python",
"pandas"
] |
How to draw a graphical count table in pandas | 39,279,858 | <p>I have a dataframe df with two columns <code>customer1</code> and <code>customer2</code> which are string valued. I would like to make a square graphical representation of the count number for each pair from those two columns. </p>
<p>I can do</p>
<pre><code>df[['customer1', 'customer2']].value_counts()
</code></pre>
<p>which will give me the counts. But how can I make something that looks a little like:</p>
<p><a href="http://i.stack.imgur.com/GZ6yI.png" rel="nofollow"><img src="http://i.stack.imgur.com/GZ6yI.png" alt="enter image description here"></a> </p>
<p>from the result?</p>
<p>I can't provide my real dataset but here is a toy example with three labels in csv.</p>
<pre><code>customer1,customer2
a,b
a,c
a,c
b,a
b,c
b,c
c,c
a,a
b,c
b,c
</code></pre>
| 2 | 2016-09-01T19:53:24Z | 39,280,256 | <p>As @MaxU mentioned, <code>seaborn.heatmap</code> should work. It appears that you can use the Pandas DataFrame as the input.</p>
<p><code>seaborn.heatmap(data, vmin=None, vmax=None, cmap=None, center=None, robust=False, annot=None, fmt='.2g', annot_kws=None, linewidths=0, linecolor='white', cbar=True, cbar_kws=None, cbar_ax=None, square=False, ax=None, xticklabels=True, yticklabels=True, mask=None, **kwargs)</code></p>
<p><a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.heatmap.html#seaborn.heatmap" rel="nofollow">https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.heatmap.html#seaborn.heatmap</a></p>
| 0 | 2016-09-01T20:20:07Z | [
"python",
"pandas"
] |
create a pandas frame of shape (x**0.5,x**0.5) from a array of size x python | 39,279,881 | <p>I have an array of size 64, such has:</p>
<pre><code>In[164]: x_y.values / x_ysquare.stack().values
Out[164]:
array([ 1. , 0.01623716, -0.03305102, 0.03264311, -0.0175754 ,
0.04017079, 0.15731795, -0.01797369, 0.01623716, 1. ,
0.08387368, -0.09562322, 0.02700502, 0.0614588 , 0.03461564,
-0.12421004, -0.03305102, 0.08387368, 1. , -0.00248859,
-0.00391474, 0.01603743, 0.05942098, 0.08989135, 0.03264311,
-0.09562322, -0.00248859, 1. , -0.16354249, -0.00887474,
0.30343543, 0.12873483, -0.0175754 , 0.02700502, -0.00391474,
-0.16354249, 1. , 0.02347214, -0.30337839, -0.09302462,
0.04017079, 0.0614588 , 0.01603743, -0.00887474, 0.02347214,
1. , -0.01125003, -0.31859215, 0.15731795, 0.03461564,
0.05942098, 0.30343543, -0.30337839, -0.01125003, 1. ,
0.18483076, -0.01797369, -0.12421004, 0.08989135, 0.12873483,
-0.09302462, -0.31859215, 0.18483076, 1. ])
</code></pre>
<p>I am trying to create a pandas dataframe of size 8x8 from those 64 data.</p>
<p>this is a correlation matrix where I am forcing the mean to be 0, I am trying to get the exact same output shape as if I were using df.corr() method.</p>
<p>thanks for the help!</p>
| 1 | 2016-09-01T19:54:39Z | 39,280,044 | <p>Maybe you can do:</p>
<pre><code>arr_reshaped = arr.reshape(8,8)
df = pd.DataFrame(arr_reshaped)
</code></pre>
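To make the result look like <code>df.corr()</code> output, the row and column labels from the original frame can be reused. A sketch with hypothetical labels (the real column names would come from the original DataFrame):

```python
import numpy as np
import pandas as pd

arr = np.arange(64, dtype=float)   # stand-in for the 64 correlation values
labels = list('abcdefgh')          # hypothetical column names
corr = pd.DataFrame(arr.reshape(8, 8), index=labels, columns=labels)

# corr now has the same square, labelled 8x8 shape that df.corr() would produce
```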
| 2 | 2016-09-01T20:05:15Z | [
"python",
"pandas"
] |
python find FAILED numbers in log file | 39,279,884 | <p>I would like Python code to search a log file (file.txt) for the FAILED count (here it is 5), whether it is 0 or any other number, so that I can then send an email with the number of failures. I have tried grep but it does not work.</p>
<pre><code>searchfile = open("file.txt", "r")
for line in searchfile:
    if "FAILED" in line: print line
searchfile.close()
</code></pre>
<p>The log file (robocopy output) looks like this:</p>
<pre><code>               Total    Copied   Skipped  Mismatch    FAILED    Extras
    Dirs :      2575         0      2575         0         5         0
   Files :      6039         0      6039         0         0         0
   Bytes :   1.547 g         0   1.547 g         0         0         0
   Times :   0:00:53   0:00:00                       0:00:00   0:00:53
   Ended : Tue Aug 30 04:32:48 2016
</code></pre>
| 0 | 2016-09-01T19:54:44Z | 39,280,553 | <p>You can find the position of the FAILED column and look at the values at that position in the rows of interest: </p>
<pre><code>result = {}
col = 'FAILED'
col_index = None
row_names = {'Dirs', 'Files'}
for l in open('file.txt').readlines():
    if col_index is None:
        if col in l:
            col_index = l.find(col)
    else:
        l_split = l.split()
        if len(l_split) > 0 and l_split[0] in row_names:
            result[l_split[0]] = int(l[col_index:col_index+len(col)])
print(result)
</code></pre>
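A whitespace-splitting variant avoids depending on exact column alignment; it assumes the robocopy summary layout shown in the question, where data rows carry a <code>Dirs :</code>/<code>Files :</code> prefix that shifts their values two tokens to the right of the header columns:

```python
def failed_counts(path):
    """Return {'Dirs': n, 'Files': n} FAILED counts from a robocopy log."""
    counts = {}
    col = None
    with open(path) as f:
        for line in f:
            parts = line.split()
            if col is None:
                if 'FAILED' in parts:
                    col = parts.index('FAILED')   # column position in the header
            elif parts and parts[0] in ('Dirs', 'Files'):
                # "Dirs :" / "Files :" adds two tokens before the numbers
                counts[parts[0]] = int(parts[col + 2])
    return counts
```

With the sample log above, this returns 5 for Dirs and 0 for Files, which can then be summed and emailed.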
| 0 | 2016-09-01T20:42:18Z | [
"python",
"windows",
"robocopy"
] |
how to access an element of the get_context_data in django | 39,279,885 | <p>I'm trying to access the elements of the get_context_data like:</p>
<pre><code>context = super(DetallePlanillasContratado,self).get_context_data(**kwargs)
</code></pre>
<p>and then access in this way:</p>
<pre><code>context['new_context'] = context.element_of_the_context
</code></pre>
<p>do we have something like this on django?</p>
| 1 | 2016-09-01T19:54:47Z | 39,280,024 | <p><code>context['new_context'] = context['elements_of_the_context']</code></p>
| 1 | 2016-09-01T20:04:16Z | [
"python",
"django"
] |
How to import custom module the same way as pip-installed modules? | 39,279,943 | <p>I feel really dumb asking this question, but it's a quirk of python I've put up with for awhile now that I finally want to fix.</p>
<p>On CentOS 7, given that I have "roflmao.py" and "__init__.py" in the directory:</p>
<pre><code> /usr/lib/python2.7/site-packages/roflmao
</code></pre>
<p>Why is it that when I'm using the python interpreter (and not in the directory containing roflmao.py), I must type:</p>
<pre><code>from roflmao import roflmao
</code></pre>
<p>Instead of simply:</p>
<pre><code>import roflmao
</code></pre>
<p>To gain access to "roflmao.py"'s functions and variables? I can import re, collections, requests, or any PIP-installed module just fine, but not my own custom one.</p>
<p>How can I set things up to accomplish this?</p>
| 3 | 2016-09-01T19:58:41Z | 39,279,965 | <p>Put <code>from roflmao import *</code> into <code>__init__.py</code>.</p>
<p>If you do this, then you don't really need to use <code>roflmao.py</code>. Because it would then be pointless to do <code>from roflmao import roflmao</code>. So it's best to just put the code from <code>roflmao.py</code> into <code>__init__.py</code>.</p>
| 3 | 2016-09-01T20:00:22Z | [
"python",
"import",
"module"
] |
Recursion and breaking out of (or ignoring) a cycle | 39,279,958 | <p>I have the following:</p>
<pre><code>def rfunction(x, y):
    new_x = assignment1
    new_y = assignment2
    print "x depends on new_x of form y"
    rfunction(new_x, new_y)
</code></pre>
<p>My true code is a lot more complex, relies on multiple jsons, etc. but this is the gist of the issue. Running this will yield pleasant results, up until I get the following output: <code>x depends on x of label y</code>, and of course run into an infinite loop. How can I ensure this prints once, and then break (or ignore) the recursion, and continue on. The structure of what I'm recursing over is a dependency graph in which arcs only flow downward toward children, but nodes may have a loop. It's this existence of a loop which is throwing me off.</p>
| 2 | 2016-09-01T19:59:57Z | 39,281,204 | <p>If the only cycles you might encounter are self-references (e.g. <code>new_x, new_y</code> is <code>x, y</code>), then you could fix this with a simple <code>if</code> check:</p>
<pre><code>def rfunction(x, y):
    new_x = assignment1
    new_y = assignment2
    print "x depends on new_x of form y"
    if new_x != x or new_y != y:
        rfunction(new_x, new_y)
    else:
        do_something_else()  # or maybe just return?
<p>If you need to be concerned about more indirect cycles (e.g. <code>rfunction(1, 1)</code> calls <code>rfunction(1, 2)</code>, which calls <code>rfunction(1, 1)</code> again), you need to keep track of the parameter pairs you've seen before. Here's one way to do that with a <code>set</code>:</p>
<pre><code>def rfunction(x, y, seen=None):
    if seen is None:
        seen = set()
    seen.add((x, y))
    new_x = assignment1
    new_y = assignment2
    print "x depends on new_x of form y"
    if (new_x, new_y) not in seen:
        rfunction(new_x, new_y, seen)
    else:
        do_something_else()
<p>I'd note that neither of these code examples have a base case (other than the recursive cycle that I'm breaking). Presumably your real code does, so I'll leave it up to you to include it as you adapt my examples to fit your code.</p>
| 2 | 2016-09-01T21:33:31Z | [
"python",
"recursion",
"infinite-loop"
] |
What is the simplest way to run a constant loop in a tkinter frame? | 39,280,004 | <p>I want to run a method in the background of my tkinter frame that will constantly check if certain files exist in a specific folder. As long as the files dont exist, there will be a red <code>tk.label</code> that says "Incomplete", and as soon as it detects these specific files, the <code>tk.label</code> will turn green and say "Complete".</p>
<p>The problem is that my method only runs when the frame is initialized, which is as soon as the program opens. If these files are added or removed after the program is opened, the method won't realize, and the <code>tk.label</code> wont change.</p>
<p>What is the best way to run a constant checker in the background? Preferably one that only runs when the frame is opened. Is it just a neverending <code>while</code> loop?</p>
| 1 | 2016-09-01T20:02:36Z | 39,280,143 | <p>Define a function that does whatever you want, and have that function schedule itself to be run again in the future. It will run until the program quits.</p>
<p>This example assumes a global variable named <code>root</code> that refers to the root window, but any widget reference will work.</p>
<pre><code>def do_something():
    <your code here>
    root.after(3000, do_something)
</code></pre>
<p>Call it once to start it, and then it will run forever</p>
<pre><code>do_something()
</code></pre>
| 1 | 2016-09-01T20:11:07Z | [
"python",
"python-2.7",
"tkinter",
"tk"
] |
strange python destructor behaviour | 39,280,050 | <p>While playing with OO Python I came across following curiosity. Consider following simple class:</p>
<pre><code>>>> class Monty():
...     def __init__(self):
...         print 'start m'
...     def __del__(self):
...         print 'deleted m'
...
</code></pre>
<p>Instantiating object goes as expected:</p>
<pre><code>>>> a = Monty()
start m
>>> a
<__main__.Monty instance at 0x7fdf9f084830>
</code></pre>
<p>and now the funny part:</p>
<pre><code>>>> del a
>>> a = Monty()
start m
>>> a
deleted m
<__main__.Monty instance at 0x7fdf9f083fc8>
>>> a
<__main__.Monty instance at 0x7fdf9f083fc8>
>>> del a
>>> a = Monty()
start m
>>> del(a)
deleted m
>>> a = Monty()
start m
>>> a
deleted m
<__main__.Monty instance at 0x7fdf9f084830>
</code></pre>
<p>The confusing part here is that I get the message from the destructor with a delay. My understanding is that:</p>
<pre><code>del a
</code></pre>
<p>Deletes the object reference, so the object is left for garbage collection. But for some reason the interpreter waits before printing the destructor's message to the console. </p>
<p>Another thing is the difference between </p>
<pre><code>del x
</code></pre>
<p>and </p>
<pre><code>del(x)
</code></pre>
<p>since if you run the code using only the latter, everything goes as expected: you get immediate output from the destructor.</p>
<p>It is possible to reproduce this on Python 2.7 as well as using Python 3.3.</p>
| 1 | 2016-09-01T20:05:43Z | 39,280,120 | <p>The Python interpreter creates an <em>additional reference</em>. Every time an expression doesn't return <code>None</code>, the result is echoed <strong>and</strong> stored in the <code>_</code> built-in name.</p>
<p>When you then echo a different result, <code>_</code> is rebound to the new result, and the old object reference count drops. In your case, that means that <em>only then</em> is the previous <code>Monty()</code> instance reaped.</p>
<p>In other words, when you execute <code>del a</code>, you are not removing the last reference. Only when you later echo the <em>new</em> object is the last reference gone:</p>
<pre><code>>>> a = Monty() # reference count 1
start m
>>> a # _ reference added, count 2
<__main__.Monty instance at 0x7fdf9f084830>
>>> del a # reference count down to 1 again
>>> a = Monty()
start m
>>> a # _ now references the new object, count drops to 0
deleted m # so the old object is deleted.
<__main__.Monty instance at 0x7fdf9f083fc8>
</code></pre>
<p>You can see the reference by echoing <code>_</code>, and you can clear the reference by echoing something entirely unrelated:</p>
<pre><code>>>> a = Monty() # reference count 1
start m
>>> a # echoing, so _ is set and count is now 2
<__main__.Monty instance at 0x1056a2bd8>
>>> _ # see, _ is referencing the same object
<__main__.Monty instance at 0x1056a2bd8>
>>> del a # reference count down to 1
>>> _ # _ is still referencing the result
<__main__.Monty instance at 0x1056a2bd8>
>>> 'some other expression' # point to something else
deleted m # so reference count is down to 0
'some other expression'
>>> _
'some other expression'
</code></pre>
<p>There is no difference between <code>del x</code> and <code>del(x)</code>. Both execute the <code>del</code> statement, the parentheses are part of the expression and are a no-op in this case. There is no <code>del()</code> function.</p>
<p>The real difference is that you <em>did not echo <code>a</code></em> in that section of the code, so no additional <code>_</code> references were created.</p>
<p>You'd get the exact same results wether or not you use <code>(..)</code> parentheses:</p>
<pre><code>>>> a = Monty() # ref count 1
start m
>>> del a # no _ echo reference, so count back to 0
deleted m
>>> a = Monty() # ref count 1
start m
>>> del(a) # no _ echo reference, so count back to 0
deleted m
</code></pre>
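The effect of a lingering reference can also be shown in a plain script, without the interactive <code>_</code> variable, since CPython's reference counting runs <code>__del__</code> as soon as the last reference disappears:

```python
deleted = []

class Monty(object):
    def __del__(self):
        deleted.append('deleted m')

a = Monty()
b = a            # a second reference, playing the role of the interactive _
del a            # refcount drops 2 -> 1: __del__ does NOT run yet
print(deleted)   # still empty
del b            # refcount drops to 0: __del__ runs immediately (CPython)
print(deleted)   # now contains 'deleted m'
```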
| 3 | 2016-09-01T20:09:53Z | [
"python",
"oop",
"garbage-collection"
] |
how to check each letter in a string and do some action, in Python | 39,280,060 | <p>So I was messing around in python, and developed a problem.
I start out with a string like the following:</p>
<pre><code>a = "1523467aa252aaa98a892a8198aa818a18238aa82938a"
</code></pre>
<p>For every number, you have to add it to a <code>sum</code> variable. Also, with every encounter of a letter, the index iterator must move back 2. My program keeps crashing at <code>isinstance()</code>. This is the code I have so far: </p>
<pre><code>def sum():
    a = '93752aaa746a27a1754aa90a93aaaaa238a44a75aa08750912738a8461a8759383aa328a4a4935903a6a55503605350'
    z = 0
    for i in a:
        if isinstance(a[i], int):
            z = z + a[i]
        elif isinstance(a[i], str):
            a = a[:i] + a[(i+1):]
            i = i - 2
            continue
    print z
    return z

sum()
</code></pre>
| -1 | 2016-09-01T20:06:22Z | 39,280,150 | <p>This part is not doing what you think:</p>
<pre><code>for i in a:
    if isinstance(a[i], int):
</code></pre>
<p>Since <code>i</code> is already each character of the string (not an index), there is no need to use <code>a[i]</code>; it will confuse Python.</p>
<p>Also, since <code>a</code> is a string, no element of it will be an <code>int</code>, they will all be <code>string</code>. You want something like this:</p>
<pre><code>for i in a:
    if i.isdigit():
        z += int(i)
</code></pre>
<p><strong><em>EDIT:</em></strong> removing elements of an iterable while iterating over it is a common problem on SO, so I would recommend creating a new string with only the elements you want to keep:</p>
<pre><code>z = 0
b = ''
for i in a:
    if i.isdigit():
        z += int(i)
        b += str(i)

a = b  # set a back to b so the "original string" is set to a string with all non-numeric characters removed.
| 1 | 2016-09-01T20:11:54Z | [
"python",
"string"
] |
how to check each letter in a string and do some action, in Python | 39,280,060 | <p>So I was messing around in python, and developed a problem.
I start out with a string like the following:</p>
<pre><code>a = "1523467aa252aaa98a892a8198aa818a18238aa82938a"
</code></pre>
<p>For every number, you have to add it to a <code>sum</code> variable. Also, with every encounter of a letter, the index iterator must move back 2. My program keeps crashing at <code>isinstance()</code>. This is the code I have so far: </p>
<pre><code>def sum():
    a = '93752aaa746a27a1754aa90a93aaaaa238a44a75aa08750912738a8461a8759383aa328a4a4935903a6a55503605350'
    z = 0
    for i in a:
        if isinstance(a[i], int):
            z = z + a[i]
        elif isinstance(a[i], str):
            a = a[:i] + a[(i+1):]
            i = i - 2
            continue
    print z
    return z

sum()
</code></pre>
| -1 | 2016-09-01T20:06:22Z | 39,280,194 | <p>You have a few problems with your code. You don't seem to understand how <code>for... in</code> loops work, but @Will already addressed that problem in his answer. Furthermore, you have a misunderstanding of how <code>isinstance()</code> works. As the numbers are characters of a string, when you iterate over that string each character will also be a (one-length) string. <code>isinstance(a[i], int)</code> will fail for every character regardless of whether or not it can be converted to an <code>int</code>. What you actually want to do is just try converting each character to an <code>int</code> and adding it to the total. If it works, great, and if not just catch the exception and keep on going. You don't need to worry about non-numeric characters because when each one raises a <code>ValueError</code> it will simply be ignored and the next character in the string will be processed.</p>
<pre><code>string = '93752aaa746a27a1754aa90a93aaaaa238a44a75aa08750912738a8461a8759383aa328a4a4935903a6a55503605350'

def sum_(string):
    total = 0
    for c in string:
        try:
            total += int(c)
        except ValueError:
            pass
    return total

sum_(string)
</code></pre>
<p>Furthermore, this function is equivalent to the following one-liners:</p>
<pre><code>sum(int(c) for c in string if c.isdigit())
</code></pre>
<p>Or the functional style...</p>
<pre><code>sum(map(int, filter(str.isdigit, string)))
</code></pre>
| 0 | 2016-09-01T20:14:26Z | [
"python",
"string"
] |
closing python command subprocesses | 39,280,171 | <p>I want to continue with commands after closing a subprocess. I have the following code, but <code>fsutil</code> is not executed. How can I do it?</p>
<pre><code>import os
from subprocess import Popen, PIPE, STDOUT
os.system('mkdir c:\\temp\\vhd')
p = Popen( ["diskpart"], stdin=PIPE, stdout=PIPE )
p.stdin.write("create vdisk file=c:\\temp\\vhd\\test.vhd maximum=2000 type=expandable\n")
p.stdin.write("attach vdisk\n")
p.stdin.write("create partition primary size=10\n")
p.stdin.write("format fs=ntfs quick\n")
p.stdin.write("assign letter=r\n")
p.stdin.write("exit\n")
p.stdout.close
os.system('fsutil file createnew r:\dummy.txt 6553600')  # this doesn't get executed
</code></pre>
| 4 | 2016-09-01T20:12:50Z | 39,280,436 | <p>At the least, I think you need to change your code to look like this:</p>
<pre><code>import os
from subprocess import Popen, PIPE
os.system('mkdir c:\\temp\\vhd')
p = Popen(["diskpart"], stdin=PIPE, stdout=PIPE, stderr=PIPE)
p.stdin.write("create vdisk file=c:\\temp\\vhd\\test.vhd maximum=2000 type=expandable\n")
p.stdin.write("attach vdisk\n")
p.stdin.write("create partition primary size=10\n")
p.stdin.write("format fs=ntfs quick\n")
p.stdin.write("assign letter=r\n")
p.stdin.write("exit\n")
results, errors = p.communicate()
os.system('fsutil file createnew r:\dummy.txt 6553600')
</code></pre>
<p>From the <a href="https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate" rel="nofollow">documentation for <code>Popen.communicate()</code></a>:</p>
<blockquote>
<p>Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.</p>
</blockquote>
<p>You could replace the <code>p.communicate()</code> with <code>p.wait()</code>, but there is this warning in the <a href="https://docs.python.org/2/library/subprocess.html#subprocess.Popen.wait" rel="nofollow">documentation for <code>Popen.wait()</code></a></p>
<blockquote>
<p><strong>Warning</strong> This will deadlock when using stdout=PIPE and/or stderr=PIPE and the child process generates enough output to a pipe such that it blocks waiting for the OS pipe buffer to accept more data. Use communicate() to avoid that. </p>
</blockquote>
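<p>Beyond the quoted docs (this sketch is my own addition, not from the original answer): the per-line <code>stdin.write</code> calls can be replaced by building the whole script up front and passing it to <code>communicate()</code> via its <code>input</code> argument. A cross-platform sketch using a Python child process in place of <code>diskpart</code>:</p>

```python
import sys
from subprocess import Popen, PIPE

# Build the whole stdin "script" up front, one command per line
script = "".join(cmd + "\n" for cmd in ["hello", "world"])

# Child process: upper-cases everything it reads on stdin
# (a stand-in for diskpart, so the example runs anywhere)
child = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"]
p = Popen(child, stdin=PIPE, stdout=PIPE, stderr=PIPE,
          universal_newlines=True)

# communicate() writes stdin, closes it, drains stdout/stderr, and waits
out, err = p.communicate(input=script)
print(out)            # HELLO and WORLD, upper-cased by the child
print(p.returncode)   # 0 once the child has exited
```

<p>Only after <code>communicate()</code> returns is the child guaranteed to have exited, so a follow-up command such as <code>fsutil</code> will then see its side effects.</p>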
| 1 | 2016-09-01T20:33:49Z | [
"python",
"subprocess",
"stdout"
] |
UTF-16 codepoint counting in python | 39,280,183 | <p>I'm getting some data from an API (telegram-bot) I'm using.
I'm using the <a href="https://github.com/python-telegram-bot/python-telegram-bot" rel="nofollow">python-telegram-bot</a> library which interacts with the <a href="https://core.telegram.org/bots/api" rel="nofollow">Telegram Bot api</a>.
The data is returned in the UTF-8 encoding in JSON format.
Example (snippet):</p>
<pre><code>{'message': {'text': '👨\u200d👩\u200d👦\u200d👦http://google.com/æøå', 'entities': [{'type': 'url', 'length': 21, 'offset': 11}], 'message_id': 2655}}
</code></pre>
<p>It can be seen that 'entities' contains a single entity of type url and it has a length and an offset.
Now say I wanted to extract the url of the link in the 'text' attribute:</p>
<pre><code>data = {'message': {'text': 'í ½í±¨\u200dí ½í±©\u200dí ½í±¦\u200dí ½í±¦http://google.com/æøå', 'entities': [{'type': 'url', 'length': 21, 'offset': 11}], 'message_id': 2655}}
entities = data['entities']
for entity in entities:
start = entity['offset']
end = start + entity['length']
print('Url: ', text[start:end])
</code></pre>
<p>The code above, however, returns: <code>'://google.com/æøå'</code> which is clearly not the actual url.<br>
The reason for this is that the offset and length are in UTF-16 codepoints. So my question is: Is there any way to work with UTF-16 codepoints in python? I don't need more than to be able to count them. </p>
<p>I've already tried:</p>
<pre><code>text.encode('utf-8').decode('utf-16')
</code></pre>
<p>But that gives the error: <code>UnicodeDecodeError: 'utf-16-le' codec can't decode byte 0xa5 in position 48: truncated data</code></p>
<p>Any help would be greatly appreciated.
I'm using python 3.5, but since it's for a unified library it would be lovely to get it to work in python 2.x too.</p>
| 4 | 2016-09-01T20:13:49Z | 39,280,419 | <p>Python has already correctly decoded the UTF-8 encoded JSON data to Python (Unicode) strings, so there is no need to handle UTF-8 here.</p>
<p>You'd have to encode to UTF-16, take the length of the encoded data, and divide by two. I'd encode to either <code>utf-16-le</code> or <code>utf-16-be</code> to prevent a BOM from being added:</p>
<pre><code>>>> len(text.encode('utf-16-le')) // 2
32
</code></pre>
<p>To use the entity offsets, you can encode to UTF-16, slice on <em>doubled</em> offsets, then decode again:</p>
<pre><code>text_utf16 = text.encode('utf-16-le')
for entity in entities:
start = entity['offset']
end = start + entity['length']
entity_text = text_utf16[start * 2:end * 2].decode('utf-16-le')
print('Url: ', entity_text)
</code></pre>
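<p>To see why dividing by two works (my own illustration, not from the original answer): every BMP character occupies one UTF-16 code unit (two bytes), while astral characters such as emoji need a surrogate pair, i.e. two units:</p>

```python
def utf16_units(s):
    # Number of UTF-16 code units = encoded byte length / 2
    return len(s.encode('utf-16-le')) // 2

print(utf16_units('abc'))          # 3: BMP characters are 1 unit each
print(utf16_units('\U0001F468'))   # 2: U+1F468 MAN is a surrogate pair
print(utf16_units('æøå'))          # 3: Latin letters are still BMP
```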
| 3 | 2016-09-01T20:32:18Z | [
"python",
"python-3.x",
"encoding",
"utf-8",
"utf-16"
] |
Handling DisambiguationError? | 39,280,195 | <p>I'm using the <code>wikipedia</code> library and I want to handle the <code>DisambiguationError</code> as an exception. My first try was</p>
<pre><code>try:
wikipedia.page('equipment') # could be any ambiguous term
except DisambiguationError:
pass
</code></pre>
<p>During execution line 3 isn't reached. A more general question is: how can I find the error type for a library-specific class like this?</p>
| 0 | 2016-09-01T20:14:28Z | 39,280,294 | <p>Here's a working example:</p>
<pre><code>import wikipedia
try:
wikipedia.page('equipment')
except wikipedia.exceptions.DisambiguationError as e:
print("Error: {0}".format(e))
</code></pre>
<p>Regarding your more general question, <code>how can I find the error type for a library-specific class like this?</code>, my trick is quite simple: I catch <code>Exception</code> and just print the <code>__class__</code>; that way I know which specific exception I need to catch.</p>
<p>One example of figuring out which specific exception to capture here:</p>
<pre><code>try:
0/0
except Exception as e:
print("Exception.__class__: {0}".format(e.__class__))
</code></pre>
<p>This would print <code>Exception.__class__: <type 'exceptions.ZeroDivisionError'></code>, so I know <code>exceptions.ZeroDivisionError</code> is the exact exception to deal with instead of something more generic.</p>
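<p>A small variant of the same trick (my addition): <code>type(e)</code> is the same object as <code>e.__class__</code>, and <code>isinstance</code> lets you check a base class of the exception as well:</p>

```python
try:
    0 / 0
except Exception as e:
    exc_type = type(e)                     # same object as e.__class__
    print(exc_type.__name__)               # ZeroDivisionError
    print(isinstance(e, ArithmeticError))  # True: base classes match too
```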
| 1 | 2016-09-01T20:22:38Z | [
"python",
"error-handling",
"try-except",
"pywikipedia"
] |
Pandas - how to remove spaces in each column in a dataframe? | 39,280,278 | <p>I'm trying to remove spaces, apostrophes, and double quote in each column data using this for loop</p>
<p><code>for c in data.columns:
data[c] = data[c].str.strip().replace(',', '').replace('\'', '').replace('\"', '').strip()</code></p>
<p>but I keep getting this error:</p>
<p><code>AttributeError: 'Series' object has no attribute 'strip'</code></p>
<p>data is the data frame and was obtained from an excel file</p>
<p><code>xl = pd.ExcelFile('test.xlsx');
data = xl.parse(sheetname='Sheet1')</code></p>
<p>Am I missing something? I added the <code>str</code> but that didn't help. Is there a better way to do this.</p>
<p>I don't want to use the column labels, like so <code>data['column label']</code>, because the text can be different. I would like to iterate each column and remove the characters mentioned above.</p>
<p>incoming data:</p>
<p><code>id city country
1 Ontario Canada
2 Calgary ' Canada'
3 'Vancouver Canada</code></p>
<p>desired output:</p>
<p><code>id city country
1 Ontario Canada
2 Calgary Canada
3 Vancouver Canada</code></p>
| -1 | 2016-09-01T20:21:42Z | 39,280,341 | <p>data[c] does not return a value, it returns a series (a whole column of data). </p>
<p>You can apply the strip operation to an entire column df.apply. You can apply the strip function this way.</p>
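<p>A hedged sketch on the question's sample data (my own illustration, using the <code>.str</code> accessor and skipping non-string columns such as <code>id</code>, which is one way a blanket strip can fail):</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3],
                   'city': ['Ontario', 'Calgary', "'Vancouver"],
                   'country': ['Canada', " ' Canada'", 'Canada']})

for c in df.columns:
    if df[c].dtype == object:          # only string columns support .str
        df[c] = (df[c].str.replace("'", "")
                      .str.replace('"', '')
                      .str.strip())

print(df)   # city: Ontario/Calgary/Vancouver, country: all Canada
```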
| 0 | 2016-09-01T20:26:28Z | [
"python",
"pandas"
] |
Pandas - how to remove spaces in each column in a dataframe? | 39,280,278 | <p>I'm trying to remove spaces, apostrophes, and double quote in each column data using this for loop</p>
<p><code>for c in data.columns:
data[c] = data[c].str.strip().replace(',', '').replace('\'', '').replace('\"', '').strip()</code></p>
<p>but I keep getting this error:</p>
<p><code>AttributeError: 'Series' object has no attribute 'strip'</code></p>
<p>data is the data frame and was obtained from an excel file</p>
<p><code>xl = pd.ExcelFile('test.xlsx');
data = xl.parse(sheetname='Sheet1')</code></p>
<p>Am I missing something? I added the <code>str</code> but that didn't help. Is there a better way to do this.</p>
<p>I don't want to use the column labels, like so <code>data['column label']</code>, because the text can be different. I would like to iterate each column and remove the characters mentioned above.</p>
<p>incoming data:</p>
<p><code>id city country
1 Ontario Canada
2 Calgary ' Canada'
3 'Vancouver Canada</code></p>
<p>desired output:</p>
<p><code>id city country
1 Ontario Canada
2 Calgary Canada
3 Vancouver Canada</code></p>
| -1 | 2016-09-01T20:21:42Z | 39,280,710 | <p><strong>UPDATE:</strong> using your sample DF:</p>
<pre><code>In [80]: df
Out[80]:
id city country
0 1 Ontario Canada
1 2 Calgary ' Canada'
2 3 'Vancouver Canada
In [81]: df.replace(r'[,\"\']','', regex=True).replace(r'\s*([^\s]+)\s*', r'\1', regex=True)
Out[81]:
id city country
0 1 Ontario Canada
1 2 Calgary Canada
2 3 Vancouver Canada
</code></pre>
<p><strong>OLD answer:</strong></p>
<p>you can use <code>DataFrame.replace()</code> method:</p>
<pre><code>In [75]: df.to_dict('r')
Out[75]:
[{'a': ' x,y ', 'b': 'a"b"c', 'c': 'zzz'},
{'a': "x'y'z", 'b': 'zzz', 'c': ' ,s,,'}]
In [76]: df
Out[76]:
a b c
0 x,y a"b"c zzz
1 x'y'z zzz ,s,,
In [77]: df.replace(r'[,\"\']','', regex=True).replace(r'\s*([^\s]+)\s*', r'\1', regex=True)
Out[77]:
a b c
0 xy abc zzz
1 xyz zzz s
</code></pre>
<p><code>r'\1'</code> - is a <a href="http://www.regular-expressions.info/replacebackref.html" rel="nofollow">numbered capturing RegEx group</a></p>
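<p>A standalone demo of that capturing group (my addition, using plain <code>re</code>), including one caveat: the pattern also swallows spaces <em>between</em> words, so multi-word cells collapse:</p>

```python
import re

# \s*([^\s]+)\s* captures each non-space run and drops surrounding spaces
print(re.sub(r'\s*([^\s]+)\s*', r'\1', "  Canada  "))    # Canada
print(re.sub(r'\s*([^\s]+)\s*', r'\1', "New  Zealand"))  # NewZealand
```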
| 1 | 2016-09-01T20:55:20Z | [
"python",
"pandas"
] |
Python Entry point 'console_scripts' not found | 39,280,326 | <p>I'm unable to import entry point console scripts in my python package. Looking for help debugging my current issue, as I have read every relevant post on the issue.</p>
<p>Here is what my directory structure looks like:</p>
<pre><code>├── ContentAnalysis
│   ├── __init__.py
│   ├── command_line.py
│   ├── document.py
│   ├── entities.py
│   ├── sentiment.py
│   ├── summary.py
│   ├── text_tokenize.py
│   └── tokens.py
├── local-requirements.txt
├── requirements.txt
├── server-requirements.txt
├── setup.py
└── tests
    ├── tests.py
    └── tests.pyc
</code></pre>
<p>Here is what my setup.py looks like</p>
<pre><code>from setuptools import setup
config = {
'description': 'Tools to extract information from web links',
'author': 'sample',
'version': '0.1',
'install_requires': ['nose'],
'packages': ['ContentAnalysis'],
'entry_points': {
'console_scripts': ['content_analysis=ContentAnalysis.command_line:main'],
},
'name':'ContentAnalysis',
'include_package_data':True
}
setup(**config)
</code></pre>
<p>I've installed the package and verified that content_analysis is reachable from the command line. I've also verified that my ContentAnalysis package is importable from the python interpreter from any directory on the computer. Yet I still get an "Entry point not found" error on execution.</p>
<pre><code>grant@DevBox2:/opt/content-analysis$ content_analysis -l 'http://101beauty.org/how-to-use-baking-soda-to-reduce-dark-circles-and-bags-under-the-eyes/'
Traceback (most recent call last):
File "/opt/anaconda2/bin/content_analysis", line 11, in <module>
load_entry_point('ContentAnalysis==0.1', 'console_scripts', 'content_analysis')()
File "/opt/anaconda2/lib/python2.7/site-packages/setuptools-26.1.1-py2.7.egg/pkg_resources/__init__.py", line 565, in load_entry_point
File "/opt/anaconda2/lib/python2.7/site-packages/setuptools-26.1.1-py2.7.egg/pkg_resources/__init__.py", line 2588, in load_entry_point
ImportError: Entry point ('console_scripts', 'content_analysis') not found
</code></pre>
<p>Any help or tips towards debugging this is appreciated</p>
<p><strong>Edit #1:</strong></p>
<p>Attempting to debug the issue, I noticed the command_line is not reachable as a submodule within ContentAnalysis</p>
<pre><code>>>> import ContentAnalysis
>>> ContentAnalysis.tokens
<module 'ContentAnalysis.tokens' from '/opt/anaconda2/lib/python2.7/site-packages/ContentAnalysis/tokens.pyc'>
>>> ContentAnalysis.command_line
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'command_line'
>>>
</code></pre>
<p>It appears that command_line is not being added to the relevant site_packages folder. </p>
<pre><code>grant@DevBox2:/opt/anaconda2/lib/python2.7/site-packages/ContentAnalysis$ ls
data entities.py __init__.pyc summary.py text_tokenize.pyc
document.py entities.pyc sentiment.py summary.pyc tokens.py
document.pyc __init__.py sentiment.pyc text_tokenize.py tokens.pyc
</code></pre>
<p>I wonder why?</p>
| 1 | 2016-09-01T20:25:22Z | 39,281,387 | <p>Investigation of the relevant site-packages folder clued me that my <code>python setup.py install</code> command was not putting all the relevant files where they needed to be. </p>
<p>I'm still not 100% of the underlying cause of the issue, but I was only able to get my site-packages folder to truly update by passing setup.py the <code>--force</code> argument as in </p>
<pre><code>python setup.py install --force
</code></pre>
<p>Now my site-packages folder contains the relevant command_line.py, and the console entry point works as expected.</p>
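<p>For reference, the entry point only resolves if the target module exposes the named callable. A minimal, hypothetical <code>ContentAnalysis/command_line.py</code> matching <code>content_analysis=ContentAnalysis.command_line:main</code> might look like:</p>

```python
import sys

def main():
    # sys.argv[1:] carries the CLI arguments, e.g. ['-l', '<url>']
    print("args:", sys.argv[1:])
    return 0

if __name__ == "__main__":
    main()
```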
| 0 | 2016-09-01T21:48:39Z | [
"python",
"debian",
"anaconda",
"setuptools",
"setup.py"
] |
How to construct a puppet resource from Python | 39,280,335 | <p>I would like to construct a puppet resource from within Python. If I had a hash of keys and values, or variables with values, how could this be done?</p>
<p>This is a simple example of a puppet resource.</p>
<pre><code>file { '/etc/passwd':
owner => root,
group => root,
mode => 644
}
</code></pre>
<p>If I had the string <code>/etc/passwd</code>, variable with a value of <code>root</code>, another variable with a value of <code>root</code>, and a variable <code>mode</code> with a value of <code>644</code>, how would I generate the above resource from within Python?</p>
| 0 | 2016-09-01T20:26:04Z | 39,323,645 | <p>From your comment it seems you just want to be able to output your python objects into puppet manifest format. Since there is not a python package that does this I propose writing your own classes to handle the resource types you need, then overriding the <strong>str</strong> function so that it outputs the manifest you need.</p>
<pre><code>class fileresource:
def __init__(self, mfile, owner, group, mode):
self.mfile = mfile
self.owner = owner
self.group = group
self.mode = mode
def __str__(self):
        mystring = "file { '" + self.mfile + "':\n"
        mystring += "  owner => " + self.owner + ",\n"
        mystring += "  group => " + self.group + ",\n"
        mystring += "  mode  => " + self.mode + "\n"
        mystring += "}\n"
        return mystring

if __name__ == "__main__":
    myfile = fileresource("/etc/passwd", "root", "root", "0644")
    print myfile
</code></pre>
<p>This would be the output (note the commas between attributes, which Puppet's syntax requires):</p>
<pre><code>$ python fileresource.py
file { '/etc/passwd':
  owner => root,
  group => root,
  mode  => 0644
}
</code></pre>
<p>You could conceivably write an entire package that handles all the different types of puppet resources and use this in your code. Hopefully, this is what you are looking for.</p>
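<p>Since the question starts from a hash of keys and values, here is a more generic sketch (my own, not a standard library) that renders any resource type from a mapping or a list of attribute pairs:</p>

```python
def render_resource(rtype, title, attrs):
    # attrs: dict or list of (name, value) pairs
    pairs = list(attrs.items()) if hasattr(attrs, 'items') else list(attrs)
    width = max(len(k) for k, _ in pairs)          # align the arrows
    body = ",\n".join("    %-*s => %s" % (width, k, v) for k, v in pairs)
    return "%s { '%s':\n%s\n}\n" % (rtype, title, body)

print(render_resource('file', '/etc/passwd',
                      [('owner', 'root'), ('group', 'root'), ('mode', '644')]))
```

<p>Using a list of pairs keeps the attribute order deterministic on older Pythons where plain dicts are unordered.</p>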
| 0 | 2016-09-05T03:46:35Z | [
"python",
"puppet"
] |
Python os.environ throws key error? | 39,280,435 | <p>I'm accessing an environment variable in a script with <code>os.environ.get</code> and it's throwing a <code>KeyError</code>. It doesn't throw the error from the Python prompt. This is running on OS X 10.11.6, and is Python 2.7.10.</p>
<p>What is going on?</p>
<pre><code>$ python score.py
Traceback (most recent call last):
File "score.py", line 4, in <module>
setup_logging()
File "/score/log.py", line 29, in setup_logging
config = get_config()
File "/score/log.py", line 11, in get_config
environment = os.environ.get('NODE_ENV')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'NODE_ENV'
$ python -c "import os; os.environ.get('NODE_ENV')"
$
</code></pre>
<p><strong>As requested, here's the source code for <code>score.py</code></strong></p>
<pre><code>from __future__ import print_function
from log import get_logger, setup_logging
setup_logging()
log = get_logger('score')
</code></pre>
<p><strong>And here's <code>log.py</code></strong></p>
<pre><code>import json
import os
import sys
from iron_worker import IronWorker
from logbook import Logger, Processor, NestedSetup, StderrHandler, SyslogHandler
IRON_IO_TASK_ID = IronWorker.task_id()
def get_config():
environment = os.environ.get('NODE_ENV')
if environment == 'production':
filename = '../config/config-production.json'
elif environment == 'integration':
filename = '../config/config-integration.json'
else:
filename = '../config/config-dev.json'
with open(filename) as f:
return json.load(f)
def setup_logging():
# This defines a remote Syslog handler
# This will include the TASK ID, if defined
app_name = 'scoreworker'
if IRON_IO_TASK_ID:
app_name += '-' + IRON_IO_TASK_ID
config = get_config()
default_log_handler = NestedSetup([
StderrHandler(),
SyslogHandler(
app_name,
address = (config['host'], config['port']),
level = 'ERROR',
bubble = True
)
])
default_log_handler.push_application()
def get_logger(name):
return Logger(name)
</code></pre>
| 8 | 2016-09-01T20:33:48Z | 39,280,619 | <p>I'd recommend you start debugging os.py, for instance, on windows it's being used this implementation:</p>
<pre><code>def get(self, key, failobj=None):
print self.data.__class__
print key
return self.data.get(key.upper(), failobj)
</code></pre>
<p>And if I test it with this:</p>
<pre><code>import os
try:
os.environ.get('NODE_ENV')
except Exception as e:
print("-->{0}".format(e.__class__))
os.environ['NODE_ENV'] = "foobar"
try:
os.environ.get('NODE_ENV')
except Exception as e:
print("{0}".format(e.__class__))
</code></pre>
<p>The output will be:</p>
<pre><code><type 'dict'>
PYTHONUSERBASE
<type 'dict'>
APPDATA
<type 'dict'>
NODE_ENV
<type 'dict'>
NODE_ENV
</code></pre>
<p>So, reading the <a href="https://docs.python.org/2/library/stdtypes.html#dict.get" rel="nofollow">dict.get</a> docs, it makes sense that no exception is raised.</p>
<p>In any case, if you don't want to mess with or debug the Python modules, try cleaning up the *.pyc files and make sure <code>NODE_ENV</code> is set properly. If all that doesn't work, restart your terminal to clear things up.</p>
| -3 | 2016-09-01T20:47:44Z | [
"python",
"python-2.7"
] |
Python os.environ throws key error? | 39,280,435 | <p>I'm accessing an environment variable in a script with <code>os.environ.get</code> and it's throwing a <code>KeyError</code>. It doesn't throw the error from the Python prompt. This is running on OS X 10.11.6, and is Python 2.7.10.</p>
<p>What is going on?</p>
<pre><code>$ python score.py
Traceback (most recent call last):
File "score.py", line 4, in <module>
setup_logging()
File "/score/log.py", line 29, in setup_logging
config = get_config()
File "/score/log.py", line 11, in get_config
environment = os.environ.get('NODE_ENV')
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'NODE_ENV'
$ python -c "import os; os.environ.get('NODE_ENV')"
$
</code></pre>
<p><strong>As requested, here's the source code for <code>score.py</code></strong></p>
<pre><code>from __future__ import print_function
from log import get_logger, setup_logging
setup_logging()
log = get_logger('score')
</code></pre>
<p><strong>And here's <code>log.py</code></strong></p>
<pre><code>import json
import os
import sys
from iron_worker import IronWorker
from logbook import Logger, Processor, NestedSetup, StderrHandler, SyslogHandler
IRON_IO_TASK_ID = IronWorker.task_id()
def get_config():
environment = os.environ.get('NODE_ENV')
if environment == 'production':
filename = '../config/config-production.json'
elif environment == 'integration':
filename = '../config/config-integration.json'
else:
filename = '../config/config-dev.json'
with open(filename) as f:
return json.load(f)
def setup_logging():
# This defines a remote Syslog handler
# This will include the TASK ID, if defined
app_name = 'scoreworker'
if IRON_IO_TASK_ID:
app_name += '-' + IRON_IO_TASK_ID
config = get_config()
default_log_handler = NestedSetup([
StderrHandler(),
SyslogHandler(
app_name,
address = (config['host'], config['port']),
level = 'ERROR',
bubble = True
)
])
default_log_handler.push_application()
def get_logger(name):
return Logger(name)
</code></pre>
| 8 | 2016-09-01T20:33:48Z | 39,280,914 | <p>Try running:</p>
<pre><code>find . -name \*.pyc -delete
</code></pre>
<p>To delete your <code>.pyc</code> files. </p>
<p>Researching your problem I came across <a href="http://stackoverflow.com/q/26861856/3642398">this question</a>, where a user was experiencing the same thing: <code>.get()</code> seemingly raising a <code>KeyError</code>. In that case, it was caused, according to <a href="http://stackoverflow.com/a/26906934/3642398">this accepted answer</a>, by a <code>.pyc</code> file which contained code where a <code>dict</code> value was being accessed by key (i.e., <code>mydict['potentially_nonexistent_key']</code>), while the traceback was showing the code from the updated <code>.py</code> file where <code>.get()</code> was used. I have never heard of this happening, where the traceback references current code from a <code>.py</code> file, but shows an error raised by an outdated <code>.pyc</code> file, but it seems to have happened at least once in the history of Python...</p>
<p>It is a long shot, but worth a try I thought.</p>
| 4 | 2016-09-01T21:11:34Z | [
"python",
"python-2.7"
] |
Select rows where a particular column value is two characters long | 39,280,448 | <p>I know how to select rows by value in a particular column. For example:</p>
<pre><code>df.loc[df['column_name'] == some_value]
</code></pre>
<p>How do I modify that so it selects rows where the column value is exactly two capital letters, e.g. AB or FZ?</p>
| 0 | 2016-09-01T20:34:32Z | 39,280,479 | <p>you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.match.html" rel="nofollow">.str.match()</a> method:</p>
<pre><code>In [55]: df
Out[55]:
col
0 xy
1 ABC
2 ZS
3 AAAAA
4 XC
In [56]: df.col.str.match(r'^[A-Z]{2}$')
Out[56]:
0 False
1 False
2 True
3 False
4 True
Name: col, dtype: bool
In [57]: df[df.col.str.match(r'^[A-Z]{2}$')]
Out[57]:
col
2 ZS
4 XC
</code></pre>
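<p>An equivalent filter without a regex (my own variant, not part of the answer), combining the vectorized length and case checks of the <code>.str</code> accessor:</p>

```python
import pandas as pd

df = pd.DataFrame({'col': ['xy', 'ABC', 'ZS', 'AAAAA', 'XC']})
mask = (df.col.str.len() == 2) & df.col.str.isupper()
print(df[mask])   # keeps only ZS and XC
```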
| 4 | 2016-09-01T20:36:59Z | [
"python",
"pandas",
"dataframe"
] |
Can't import 'datasets' with scikit-learn | 39,280,466 | <p>I just installed Python (tried both 3.5.2 and 2.7.12 with the exact same result). I've tried Googling it and looking through issues but can't find anything on it.</p>
<p>The code I'm trying to run is simply the beginning of the basic tutorial:</p>
<pre><code>from sklearn import datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
</code></pre>
<p>The error is <code>ImportError: cannot import name 'datasets'</code>.</p>
<p>I've tried re-installing everything. Same result over and over again. I'm on a Macbook with El Capitan which is newly installed as well.</p>
<p>I installed Python with pyenv, and scipy and numpy through pip. I've also upgraded pip, by the way, to the latest version.</p>
<pre><code>import _frozen_importlib # frozen
import _imp # builtin
import sys # builtin
import '_warnings' # <class '_frozen_importlib.BuiltinImporter'>
import '_thread' # <class '_frozen_importlib.BuiltinImporter'>
import '_weakref' # <class '_frozen_importlib.BuiltinImporter'>
import '_frozen_importlib_external' # <class '_frozen_importlib.FrozenImporter'>
import '_io' # <class '_frozen_importlib.BuiltinImporter'>
import 'marshal' # <class '_frozen_importlib.BuiltinImporter'>
import 'posix' # <class '_frozen_importlib.BuiltinImporter'>
import _thread # previously loaded ('_thread')
import '_thread' # <class '_frozen_importlib.BuiltinImporter'>
import _weakref # previously loaded ('_weakref')
import '_weakref' # <class '_frozen_importlib.BuiltinImporter'>
# installing zipimport hook
import 'zipimport' # <class '_frozen_importlib.BuiltinImporter'>
# installed zipimport hook
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__pycache__/__init__.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__init__.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__pycache__/__init__.cpython-35.pyc'
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/codecs.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/codecs.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/codecs.cpython-35.pyc'
import '_codecs' # <class '_frozen_importlib.BuiltinImporter'>
import 'codecs' # <_frozen_importlib_external.SourceFileLoader object at 0x108ac7390>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__pycache__/aliases.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/aliases.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__pycache__/aliases.cpython-35.pyc'
import 'encodings.aliases' # <_frozen_importlib_external.SourceFileLoader object at 0x108af7f60>
import 'encodings' # <_frozen_importlib_external.SourceFileLoader object at 0x108ac0f60>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__pycache__/utf_8.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/utf_8.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__pycache__/utf_8.cpython-35.pyc'
import 'encodings.utf_8' # <_frozen_importlib_external.SourceFileLoader object at 0x108b07d30>
import '_signal' # <class '_frozen_importlib.BuiltinImporter'>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__pycache__/latin_1.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/latin_1.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/encodings/__pycache__/latin_1.cpython-35.pyc'
import 'encodings.latin_1' # <_frozen_importlib_external.SourceFileLoader object at 0x108af97f0>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/io.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/io.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/io.cpython-35.pyc'
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/abc.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/abc.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/abc.cpython-35.pyc'
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_weakrefset.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/_weakrefset.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_weakrefset.cpython-35.pyc'
import '_weakrefset' # <_frozen_importlib_external.SourceFileLoader object at 0x108b10470>
import 'abc' # <_frozen_importlib_external.SourceFileLoader object at 0x108af9c50>
import 'io' # <_frozen_importlib_external.SourceFileLoader object at 0x108af99e8>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/site.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/site.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/site.cpython-35.pyc'
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/os.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/os.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/os.cpython-35.pyc'
import 'errno' # <class '_frozen_importlib.BuiltinImporter'>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/stat.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/stat.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/stat.cpython-35.pyc'
import '_stat' # <class '_frozen_importlib.BuiltinImporter'>
import 'stat' # <_frozen_importlib_external.SourceFileLoader object at 0x108b954a8>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/posixpath.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/posixpath.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/posixpath.cpython-35.pyc'
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/genericpath.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/genericpath.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/genericpath.cpython-35.pyc'
import 'genericpath' # <_frozen_importlib_external.SourceFileLoader object at 0x108b97d30>
import 'posixpath' # <_frozen_importlib_external.SourceFileLoader object at 0x108b957f0>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_collections_abc.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/_collections_abc.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_collections_abc.cpython-35.pyc'
import '_collections_abc' # <_frozen_importlib_external.SourceFileLoader object at 0x108ba03c8>
import 'os' # <_frozen_importlib_external.SourceFileLoader object at 0x108b24278>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_sitebuiltins.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/_sitebuiltins.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_sitebuiltins.cpython-35.pyc'
import '_sitebuiltins' # <_frozen_importlib_external.SourceFileLoader object at 0x108b24550>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/sysconfig.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/sysconfig.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/sysconfig.cpython-35.pyc'
import 'sysconfig' # <_frozen_importlib_external.SourceFileLoader object at 0x108bd9668>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_sysconfigdata.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/_sysconfigdata.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_sysconfigdata.cpython-35.pyc'
import '_sysconfigdata' # <_frozen_importlib_external.SourceFileLoader object at 0x108bdff28>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_osx_support.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/_osx_support.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_osx_support.cpython-35.pyc'
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/re.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/re.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/re.cpython-35.pyc'
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/sre_compile.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/sre_compile.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/sre_compile.cpython-35.pyc'
import '_sre' # <class '_frozen_importlib.BuiltinImporter'>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/sre_parse.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/sre_parse.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/sre_parse.cpython-35.pyc'
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/sre_constants.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/sre_constants.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/sre_constants.cpython-35.pyc'
import 'sre_constants' # <_frozen_importlib_external.SourceFileLoader object at 0x108c0a6a0>
import 'sre_parse' # <_frozen_importlib_external.SourceFileLoader object at 0x108bfed68>
import 'sre_compile' # <_frozen_importlib_external.SourceFileLoader object at 0x108bf5da0>
import '_locale' # <class '_frozen_importlib.BuiltinImporter'>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/copyreg.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/copyreg.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/copyreg.cpython-35.pyc'
import 'copyreg' # <_frozen_importlib_external.SourceFileLoader object at 0x108c47470>
import 're' # <_frozen_importlib_external.SourceFileLoader object at 0x108bf1eb8>
import '_osx_support' # <_frozen_importlib_external.SourceFileLoader object at 0x108bf1080>
# /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_bootlocale.cpython-35.pyc matches /Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/_bootlocale.py
# code object from '/Users/fredrik/.pyenv/versions/3.5.2/lib/python3.5/__pycache__/_bootlocale.cpython-35.pyc'
import '_bootlocale' # <_frozen_importlib_external.SourceFileLoader object at 0x108c47390>
import 'site' # <_frozen_importlib_external.SourceFileLoader object at 0x108b17d68>
Python 3.5.2 (default, Aug 30 2016, 00:56:52)
[GCC 4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.31)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
# /Users/fredrik/code/scikit-learn/__pycache__/sklearn.cpython-35.pyc matches /Users/fredrik/code/scikit-learn/sklearn.py
# code object from '/Users/fredrik/code/scikit-learn/__pycache__/sklearn.cpython-35.pyc'
Traceback (most recent call last):
File "sklearn.py", line 1, in <module>
from sklearn import datasets
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/Users/fredrik/code/scikit-learn/sklearn.py", line 1, in <module>
from sklearn import datasets
ImportError: cannot import name 'datasets'
# clear builtins._
# clear sys.path
# clear sys.argv
# clear sys.ps1
# clear sys.ps2
# clear sys.last_type
# clear sys.last_value
# clear sys.last_traceback
# destroy sklearn
# clear sys.path_hooks
# clear sys.path_importer_cache
# clear sys.meta_path
# clear sys.__interactivehook__
# clear sys.flags
# clear sys.float_info
# restore sys.stdin
# restore sys.stdout
# restore sys.stderr
# cleanup[2] removing encodings.latin_1
# cleanup[2] removing _locale
# cleanup[2] removing __main__
# destroy __main__
# cleanup[2] removing os.path
# cleanup[2] removing _sysconfigdata
# destroy _sysconfigdata
# cleanup[2] removing _bootlocale
# destroy _bootlocale
# cleanup[2] removing sysconfig
# destroy sysconfig
# cleanup[2] removing zipimport
# cleanup[2] removing sre_compile
# cleanup[2] removing _stat
# cleanup[2] removing _collections_abc
# destroy _collections_abc
# cleanup[2] removing posixpath
# cleanup[2] removing _imp
# cleanup[2] removing _weakrefset
# destroy _weakrefset
# cleanup[2] removing marshal
# cleanup[2] removing _frozen_importlib_external
# cleanup[2] removing posix
# cleanup[2] removing sre_constants
# destroy sre_constants
# cleanup[2] removing builtins
# cleanup[2] removing site
# destroy site
# cleanup[2] removing sre_parse
# cleanup[2] removing _weakref
# cleanup[2] removing encodings
# destroy encodings
# cleanup[2] removing errno
# cleanup[2] removing encodings.utf_8
# cleanup[2] removing _codecs
# cleanup[2] removing os
# cleanup[2] removing _frozen_importlib
# cleanup[2] removing _warnings
# cleanup[2] removing sys
# cleanup[2] removing codecs
# cleanup[2] removing abc
# cleanup[2] removing _io
# cleanup[2] removing stat
# cleanup[2] removing encodings.aliases
# cleanup[2] removing copyreg
# cleanup[2] removing io
# destroy io
# destroy abc
# cleanup[2] removing _osx_support
# destroy _osx_support
# cleanup[2] removing _thread
# cleanup[2] removing _sre
# cleanup[2] removing genericpath
# cleanup[2] removing re
# cleanup[2] removing _signal
# cleanup[2] removing _sitebuiltins
# destroy zipimport
# destroy _signal
# destroy _sitebuiltins
# destroy posixpath
# destroy errno
# destroy _stat
# destroy genericpath
# destroy stat
# destroy os
# destroy re
# destroy sre_compile
# destroy copyreg
# destroy sre_parse
# destroy _sre
# destroy _locale
# cleanup[3] wiping encodings.latin_1
# cleanup[3] wiping _imp
# cleanup[3] wiping marshal
# cleanup[3] wiping _frozen_importlib_external
# destroy marshal
# cleanup[3] wiping posix
# destroy posix
# cleanup[3] wiping _weakref
# cleanup[3] wiping encodings.utf_8
# cleanup[3] wiping _codecs
# cleanup[3] wiping _frozen_importlib
# destroy _frozen_importlib_external
# destroy _weakref
# destroy _imp
# cleanup[3] wiping _warnings
# destroy _warnings
# cleanup[3] wiping codecs
# cleanup[3] wiping _io
# destroy io
# cleanup[3] wiping encodings.aliases
# cleanup[3] wiping _thread
# destroy _thread
# cleanup[3] wiping sys
# cleanup[3] wiping builtins
# destroy _frozen_importlib
</code></pre>
| 1 | 2016-09-01T20:35:50Z | 39,280,646 | <p>The error is caused by the file that you have named <code>/Users/fredrik/code/scikit-learn/sklearn.py</code>.</p>
<p>The <code>sklearn</code> library is being overridden by your local file, so you just need to rename the <code>sklearn.py</code> file in your project to something else and it should work.</p>
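<p>A quick way to confirm this kind of shadowing is to ask the import system which file a name resolves to before importing anything from it. A minimal sketch (it uses the standard-library <code>statistics</code> module as a stand-in for <code>sklearn</code>, and a temporary directory plays the role of your project folder):</p>

```python
import importlib.util
import os
import shutil
import sys
import tempfile

def resolve(module_name):
    """Return the file a module name would currently be loaded from."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

# where the standard-library module normally comes from
stdlib_path = resolve("statistics")

# simulate the question: a local file with the same name as a library,
# sitting at the front of sys.path (just like the script's own directory)
tmp = tempfile.mkdtemp()
shadow = os.path.join(tmp, "statistics.py")
with open(shadow, "w") as f:
    f.write("# local file shadowing the real module\n")
sys.path.insert(0, tmp)
importlib.invalidate_caches()
sys.modules.pop("statistics", None)  # make sure the path finders run again
shadowed_path = resolve("statistics")  # now resolves to the local file

sys.path.remove(tmp)
importlib.invalidate_caches()
shutil.rmtree(tmp)
print(stdlib_path)
print(shadowed_path)
```

<p>If the second printed path points into your own project instead of the library's install location, you have found the shadowing file.</p>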
| 2 | 2016-09-01T20:50:24Z | [
"python",
"numpy",
"scipy",
"scikit-learn"
] |
altering a word encoder and decoder to recognize spaces and punctuation | 39,280,585 | <p>The 2nd function encodes a word phrase and the 3rd one decodes that same phrase, but it doesn't skip over the spaces and punctuation. </p>
<pre><code>def buildCipher(key):
alpha="abcdefghijklmnopqrstuvwxyz"
rest = ""
for letter in alpha:
if not(letter in key):
rest = rest + letter
print key+rest
def encode(string,keyletters):
alpha="abcdefghijklmnopqrstuvwxyz"
secret = ""
for letter in string:
index = alpha.find(letter)
secret = secret+keyletters[index]
print secret
def decode(secret,keyletters):
alpha="abcdefghijklmnopqrstuvwxyz"
clear = ""
for letter in secret:
index = keyletters.find(letter)
clear = clear+alpha[index]
encode("this is zest!!!" , "earthbcdfgijklmnopqsuvwxyz")
#gives me sdfqfqzhqs
#need it to give me sdfq fq zhqs!!!
decode("tdfq fq zhqs!!!" , "earthbcdfgijklmnopqsuvwxyz")
</code></pre>
| 1 | 2016-09-01T20:44:44Z | 39,280,740 | <p>At the moment the space character isn't in either alpha or your keyletters - if you don't want space encrypted then add it in the same position in both.</p>
<p>NOTE your code currently ignores the fact that space is in the string to encode but not in the keyletters. It would be a good idea to be explicit about this in your code - check that the letter is present first, because <code>find()</code> returns -1 for a missing character, so <code>keyletters[-1]</code> silently substitutes the last key letter instead of raising an error.</p>
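<p>Putting both suggestions together - pass non-alphabetic characters through unchanged, and only substitute letters that are actually present - gives a symmetric pair of functions. A Python 3 sketch using the question's key:</p>

```python
ALPHA = "abcdefghijklmnopqrstuvwxyz"

def encode(text, keyletters):
    secret = ""
    for letter in text:
        if letter in ALPHA:
            # substitute known letters via their position in the alphabet
            secret += keyletters[ALPHA.index(letter)]
        else:
            # spaces and punctuation pass through unchanged
            secret += letter
    return secret

def decode(secret, keyletters):
    clear = ""
    for letter in secret:
        if letter in keyletters:
            clear += ALPHA[keyletters.index(letter)]
        else:
            clear += letter
    return clear

key = "earthbcdfgijklmnopqsuvwxyz"
print(encode("this is zest!!!", key))  # sdfq fq zhqs!!!
print(decode("sdfq fq zhqs!!!", key))  # this is zest!!!
```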
| 0 | 2016-09-01T20:57:04Z | [
"python"
] |
altering a word encoder and decoder to recognize spaces and punctuation | 39,280,585 | <p>The 2nd function encodes a word phrase and the 3rd one decodes that same phrase, but it doesn't skip over the spaces and punctuation. </p>
<pre><code>def buildCipher(key):
alpha="abcdefghijklmnopqrstuvwxyz"
rest = ""
for letter in alpha:
if not(letter in key):
rest = rest + letter
print key+rest
def encode(string,keyletters):
alpha="abcdefghijklmnopqrstuvwxyz"
secret = ""
for letter in string:
index = alpha.find(letter)
secret = secret+keyletters[index]
print secret
def decode(secret,keyletters):
alpha="abcdefghijklmnopqrstuvwxyz"
clear = ""
for letter in secret:
index = keyletters.find(letter)
clear = clear+alpha[index]
encode("this is zest!!!" , "earthbcdfgijklmnopqsuvwxyz")
#gives me sdfqfqzhqs
#need it to give me sdfq fq zhqs!!!
decode("tdfq fq zhqs!!!" , "earthbcdfgijklmnopqsuvwxyz")
</code></pre>
| 1 | 2016-09-01T20:44:44Z | 39,280,751 | <p>What happens if <code>letter</code> isn't alphabetic? Exactly, it doesn't get added since it can't find its index in <code>alpha</code>. You need to have an <code>if/else</code> statement:</p>
<pre><code>def encode(string,keyletters):
alpha="abcdefghijklmnopqrstuvwxyz"
secret = ""
for letter in string:
if letter in alpha:
index = alpha.find(letter)
secret = secret+keyletters[index]
else:
secret = secret + letter
print secret
</code></pre>
| 0 | 2016-09-01T20:57:40Z | [
"python"
] |
How to share conda environments across platforms | 39,280,638 | <p>The conda docs at <a href="http://conda.pydata.org/docs/using/envs.html" rel="nofollow">http://conda.pydata.org/docs/using/envs.html</a> explain how to share environments with other people.</p>
<p>However, the docs tell us this is not cross platform:</p>
<pre><code>NOTE: These explicit spec files are not usually cross platform, and
therefore have a comment at the top such as # platform: osx-64 showing the
platform where they were created. This platform is the one where this spec
file is known to work. On other platforms, the packages specified might not
be available or dependencies might be missing for some of the key packages
already in the spec.
NOTE: Conda does not check architecture or dependencies when installing
from an explicit specification file. To ensure the packages work correctly,
be sure that the file was created from a working environment and that it is
used on the same architecture, operating system and platform, such as linux-
64 or osx-64.
</code></pre>
<p>Is there a good method to share and recreate a conda environment in one platform (e.g. CentOS) in another platform (e.g. Windows)?</p>
| 2 | 2016-09-01T20:49:23Z | 39,299,669 | <h2>Answer</h2>
<p>This answer is given with the assumption that you would like to make sure that
the same versions of the packages that you generally care about are on
different platforms and that you don't care about the exact same versions of
all packages in the entire dependency tree. If you are trying to install the
exact same version of all packages in your entire dependency tree, that has a
high likelihood of failure, since some conda packages have different
dependencies for osx/win/linux. For example, <a href="https://github.com/conda-forge/otrobopt-feedstock/blob/master/recipe/meta.yaml#L43-L45" rel="nofollow">the recipe for
otrobopt</a>
will install different packages on win vs osx/linux, so the environment list
would be different.</p>
<p>Recommendation: Manually create an environment.yaml file and specify or pin
only the dependencies that you care about. Let the conda solver do the rest.
It is probably worth noting that conda-env (the tool that you use to manage conda
environments) explicitly recommends that you "Always create your
environment.yml file by hand."</p>
<p>Then you would just do <code>conda create --file environment.yaml</code>.</p>
<p>Have a look at the readme for
<a href="https://github.com/conda/conda-env#environment-file-example" rel="nofollow">conda-env</a>.</p>
<p>They can be quite simple:</p>
<pre><code>name: basic_analysis
dependencies:
- numpy
- pandas
</code></pre>
<p>Or more complex where you pin dependencies and specify anaconda.org channels to
install from:</p>
<pre><code>name: stats-web
channels:
- javascript
dependencies:
- python=3.4 # or 2.7 if you are feeling nostalgic
- bokeh=0.9.2
- numpy=1.9.*
- nodejs=0.10.*
- flask
- pip:
- Flask-Testing
</code></pre>
<p>In either case, you can create an environment with <code>conda create --file environment.yaml</code></p>
<p>If you have a more complex use case or further questions, update the original
question and I'll see if I can't help you a bit more.</p>
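<p>If you want to bootstrap such a hand-written file programmatically, the shape is simple enough to emit directly. An illustrative Python sketch (the package list is made up; pin only what you care about and let the solver resolve the rest per platform):</p>

```python
import os
import tempfile

# the few dependencies you actually care about, pinned or loose as needed
deps = ["python=3.4", "numpy", "pandas"]

# build the minimal environment.yml text shown in this answer's first example
env_yaml = "name: basic_analysis\ndependencies:\n" + "".join(
    "  - {}\n".format(d) for d in deps
)

path = os.path.join(tempfile.mkdtemp(), "environment.yml")
with open(path, "w") as f:
    f.write(env_yaml)
print(env_yaml)
```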
| 2 | 2016-09-02T19:41:37Z | [
"python",
"cross-platform",
"conda"
] |
Connect and handle multiple raw sockets at once in Python 2.x | 39,280,687 | <p>I am new to Python, and trying to learn it "on the job". And I am required to do this.</p>
<p>I am required to communicate with 3 servers with a raw socket connection. I can easily do that in a sequential manner. But I was wondering if there is a way I can communicate with these 3 servers at once? All 3 servers have different IP addresses. </p>
<p>Basically try to do the following but in 1 step:</p>
<pre><code>s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST1, PORT1))
s.connect((HOST2, PORT1))
s.connect((HOST3, PORT1))
</code></pre>
<p>I also need <code>s.sendall()</code> &amp; <code>s.recv()</code> to be parallelized later. </p>
| 1 | 2016-09-01T20:53:54Z | 39,280,725 | <p>If you only have one listening thread, you can use select to wait on multiple sockets and get woken when any of them return data:</p>
<p><a href="https://docs.python.org/2/library/select.html" rel="nofollow">https://docs.python.org/2/library/select.html</a></p>
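<p>A minimal sketch of that pattern - local <code>socketpair()</code> connections stand in for the three servers; with real servers each socket would come from a <code>connect()</code> call first:</p>

```python
import select
import socket

# three local socket pairs stand in for the three server connections;
# with real servers each would come from socket.create_connection((host, port))
pairs = [socket.socketpair() for _ in range(3)]
conns = [a for a, _ in pairs]
for i, (_, b) in enumerate(pairs):
    b.send(("reply from server %d" % i).encode())

received = {}
while len(received) < len(conns):
    # select() blocks until at least one connection has data to read
    readable, _, _ = select.select(conns, [], [], 5.0)
    for s in readable:
        received[s.fileno()] = s.recv(1024)

for a, b in pairs:
    a.close()
    b.close()
print(sorted(received.values()))
```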
| 1 | 2016-09-01T20:56:09Z | [
"python",
"multithreading",
"sockets",
"multiprocessing",
"raw-sockets"
] |
Connect and handle multiple raw sockets at once in Python 2.x | 39,280,687 | <p>I am new to Python, and trying to learn it "on the job". And I am required to do this.</p>
<p>I am required to communicate with 3 servers with a raw socket connection. I can easily do that in a sequential manner. But I was wondering if there is a way I can communicate with these 3 servers at once? All 3 servers have different IP addresses. </p>
<p>Basically try to do the following but in 1 step:</p>
<pre><code>s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST1, PORT1))
s.connect((HOST2, PORT1))
s.connect((HOST3, PORT1))
</code></pre>
<p>I also need <code>s.sendall()</code> &amp; <code>s.recv()</code> to be parallelized later. </p>
| 1 | 2016-09-01T20:53:54Z | 39,281,157 | <p>It's hard to prescribe a wealth of knowledge without knowing more about your server protocol, what you are listening for and what you intend to do with it, etc., but I can imagine, given no other additional information, a scenario where the communication is handled by a multiprocessing.Pool(3) where each member of the pool is mapped to an IP address and where all three send data into the same multiprocessing.Queue() which is being evaluated by a loop thread.</p>
<p>Any additional details?</p>
<p>Best of luck on your job!</p>
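<p>The shape of that design - one worker per server, all feeding a single queue that a loop drains - can be sketched like this. Threads are used here for brevity, but a <code>multiprocessing.Pool</code> has the same structure; the hosts are hypothetical and the socket work is stubbed out:</p>

```python
import queue
import threading

results = queue.Queue()

def talk_to(host):
    # real code would connect a socket here, sendall() a request and
    # recv() the reply; this stand-in just reports which host it handled
    results.put((host, "response from " + host))

hosts = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical server addresses
workers = [threading.Thread(target=talk_to, args=(h,)) for h in hosts]
for w in workers:
    w.start()
for w in workers:
    w.join()

# the "loop thread" side: drain one result per server
collected = dict(results.get() for _ in hosts)
print(sorted(collected))
```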
| 0 | 2016-09-01T21:30:36Z | [
"python",
"multithreading",
"sockets",
"multiprocessing",
"raw-sockets"
] |
Connect and handle multiple raw sockets at once in Python 2.x | 39,280,687 | <p>I am new to Python, and trying to learn it "on the job". And I am required to do this.</p>
<p>I am required to communicate with 3 servers with a raw socket connection. I can easily do that in a sequential manner. But I was wondering if there is a way I can communicate with these 3 servers at once? All 3 servers have different IP addresses. </p>
<p>Basically try to do the following but in 1 step:</p>
<pre><code>s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST1, PORT1))
s.connect((HOST2, PORT1))
s.connect((HOST3, PORT1))
</code></pre>
<p>I also need <code>s.sendall()</code> &amp; <code>s.recv()</code> to be parallelized later. </p>
| 1 | 2016-09-01T20:53:54Z | 39,281,226 | <p>Have a look at the <a href="https://docs.python.org/3/library/asyncio.html" rel="nofollow">asyncio</a> module.</p>
<p>It requires Python 3, but allows you to write single-threaded applications with multiple execution contexts - a kind of cooperative multithreading, where the context is switched only when the user says so. You really get the best of thread- and event-based concurrency.</p>
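<p>A small sketch of the idea. The hosts are hypothetical and the network I/O is stubbed out; with real servers you would use <code>asyncio.open_connection()</code> inside each coroutine, and <code>asyncio.run()</code> is the modern entry point (Python 3.7+):</p>

```python
import asyncio

async def talk_to(host, port):
    # with real servers:
    #   reader, writer = await asyncio.open_connection(host, port)
    await asyncio.sleep(0)  # stand-in for the actual network I/O
    return host, "response from {}:{}".format(host, port)

async def main():
    coros = [talk_to(h, 5000) for h in ("10.0.0.1", "10.0.0.2", "10.0.0.3")]
    # gather drives all three coroutines concurrently on a single thread
    return dict(await asyncio.gather(*coros))

replies = asyncio.run(main())
print(sorted(replies))
```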
| 0 | 2016-09-01T21:34:51Z | [
"python",
"multithreading",
"sockets",
"multiprocessing",
"raw-sockets"
] |
Connect and handle multiple raw sockets at once in Python 2.x | 39,280,687 | <p>I am new to Python, and trying to learn it "on the job". And I am required to do this.</p>
<p>I am required to communicate with 3 servers with a raw socket connection. I can easily do that in a sequential manner. But I was wondering if there is a way I can communicate with these 3 servers at once? All 3 servers have different IP addresses. </p>
<p>Basically try to do the following but in 1 step:</p>
<pre><code>s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST1, PORT1))
s.connect((HOST2, PORT1))
s.connect((HOST3, PORT1))
</code></pre>
<p>I also need <code>s.sendall()</code> &amp; <code>s.recv()</code> to be parallelized later. </p>
| 1 | 2016-09-01T20:53:54Z | 39,282,157 | <p><a href="http://stackoverflow.com/questions/2957116/make-2-functions-run-at-the-same-time/2957131#2957131">This answer</a> using <a href="https://docs.python.org/2/library/threading.html" rel="nofollow">threading</a> actually worked out for me. </p>
| 0 | 2016-09-01T23:13:06Z | [
"python",
"multithreading",
"sockets",
"multiprocessing",
"raw-sockets"
] |
String Formatting Confusion | 39,280,741 | <p>O'Reilly's Learn Python Powerful Object Oriented Programming by Mark Lutz teaches different ways to format strings.</p>
<p>The following code has me confused. I am interpreting 'ham' as filling the format place marker at index zero, and yet it still pops up at index one of the outputted string. Please help me understand what is actually going on.</p>
<p>Here is the code:</p>
<pre><code>template = '{motto}, {0} and {food}'
template.format('ham', motto='spam', food='eggs')
</code></pre>
<p>And here is the output:</p>
<pre><code>'spam, ham and eggs'
</code></pre>
<p>I expected:</p>
<pre><code>'ham, spam and eggs'
</code></pre>
| 0 | 2016-09-01T20:57:05Z | 39,280,793 | <p>The only thing you have to understand is that <code>{0}</code> refers to the first (zeroeth) <strong>unnamed</strong> argument sent to <code>format()</code>. We can see this to be the case by removing all unnamed references and trying to use a linear fill-in:</p>
<pre><code>>>> "{motto}".format("boom")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
KeyError: 'motto'
</code></pre>
<p>You would expect that 'boom' would fill in 'motto' if this is how it works. But, instead, <code>format()</code> looks for a parameter <em>named</em> 'motto'. The key hint here is the <code>KeyError</code>. Similarly, if it were just taking the sequence of parameters passed to <code>format()</code>, then this wouldn't error, either:</p>
<pre><code>>>> "{0} {1}".format('ham', motto='eggs')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: tuple index out of range
</code></pre>
<p>Here, <code>format()</code> is looking for the second unnamed argument in the parameter list - but that doesn't exist so it gets a 'tuple index out of range' error. This is just the difference between the unnamed (which are positionally sensitive) and named arguments passed in Python. </p>
<p><a href="http://stackoverflow.com/questions/3394835/args-and-kwargs">See this post to understand the difference between these types of arguments, known as 'args' and 'kwargs'.</a></p>
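<p>To make the rule concrete with the question's own template:</p>

```python
template = '{motto}, {0} and {food}'

# '{0}' always means "first positional argument", wherever it appears;
# '{motto}' and '{food}' are looked up among the keyword arguments
result = template.format('ham', motto='spam', food='eggs')
print(result)  # spam, ham and eggs

# to get 'ham' first, move the positional placeholder to the front
reordered = '{0}, {motto} and {food}'.format('ham', motto='spam', food='eggs')
print(reordered)  # ham, spam and eggs
```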
| 2 | 2016-09-01T20:59:59Z | [
"python",
"string-formatting"
] |
visualization of convolutional layer in keras model | 39,280,813 | <p>I created a model in Keras (I am a newbie) and somehow managed to train it nicely. It takes 300x300 images and tries to classify them into two groups.</p>
<pre><code># size of image in pixel
img_rows, img_cols = 300, 300
# number of classes (here digits 1 to 10)
nb_classes = 2
# number of convolutional filters to use
nb_filters = 16
# size of pooling area for max pooling
nb_pool = 20
# convolution kernel size
nb_conv = 20
X = np.vstack([X_train, X_test]).reshape(-1, 1, img_rows, img_cols)
y = np_utils.to_categorical(np.concatenate([y_train, y_test]), nb_classes)
# build model
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv, border_mode='valid', input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
# run model
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
</code></pre>
<p>Now I would like to visualize the second convolutional layer and if possible also the first dense layer. "Inspiration" was taken from <a href="https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html" rel="nofollow">keras blog</a>. By using <code>model.summary()</code> I found out the name of the layers. Then I created the following frankenstein code:</p>
<pre><code>from __future__ import print_function
from scipy.misc import imsave
import numpy as np
import time
#from keras.applications import vgg16
import keras
from keras import backend as K
# dimensions of the generated pictures for each filter.
img_width = 300
img_height = 300
# the name of the layer we want to visualize
# (see model definition at keras/applications/vgg16.py)
layer_name = 'convolution2d_2'
#layer_name = 'dense_1'
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
if K.image_dim_ordering() == 'th':
x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
# load model
loc_json = 'my_model_short_architecture.json'
loc_h5 = 'my_model_short_weights.h5'
with open(loc_json, 'r') as json_file:
loaded_model_json = json_file.read()
model = keras.models.model_from_json(loaded_model_json)
# load weights into new model
model.load_weights(loc_h5)
print('Model loaded.')
model.summary()
# this is the placeholder for the input images
input_img = model.input
# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
def normalize(x):
# utility function to normalize a tensor by its L2 norm
return x / (K.sqrt(K.mean(K.square(x))) + 1e-5)
kept_filters = []
for filter_index in range(0, 200):
# we only scan through the first 200 filters,
# but there are actually 512 of them
print('Processing filter %d' % filter_index)
start_time = time.time()
# we build a loss function that maximizes the activation
# of the nth filter of the layer considered
layer_output = layer_dict[layer_name].output
if K.image_dim_ordering() == 'th':
loss = K.mean(layer_output[:, filter_index, :, :])
else:
loss = K.mean(layer_output[:, :, :, filter_index])
# we compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, input_img)[0]
# normalization trick: we normalize the gradient
grads = normalize(grads)
# this function returns the loss and grads given the input picture
iterate = K.function([input_img], [loss, grads])
# step size for gradient ascent
step = 1.
# we start from a gray image with some random noise
if K.image_dim_ordering() == 'th':
input_img_data = np.random.random((1, 3, img_width, img_height))
else:
input_img_data = np.random.random((1, img_width, img_height, 3))
input_img_data = (input_img_data - 0.5) * 20 + 128
# we run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([input_img_data])
input_img_data += grads_value * step
print('Current loss value:', loss_value)
if loss_value <= 0.:
# some filters get stuck to 0, we can skip them
break
# decode the resulting input image
if loss_value > 0:
img = deprocess_image(input_img_data[0])
kept_filters.append((img, loss_value))
end_time = time.time()
print('Filter %d processed in %ds' % (filter_index, end_time - start_time))
# we will stich the best 64 filters on a 8 x 8 grid.
n = 8
# the filters that have the highest loss are assumed to be better-looking.
# we will only keep the top 64 filters.
kept_filters.sort(key=lambda x: x[1], reverse=True)
kept_filters = kept_filters[:n * n]
# build a black picture with enough space for
# our 8 x 8 filters of size 128 x 128, with a 5px margin in between
margin = 5
width = n * img_width + (n - 1) * margin
height = n * img_height + (n - 1) * margin
stitched_filters = np.zeros((width, height, 3))
# fill the picture with our saved filters
for i in range(n):
for j in range(n):
img, loss = kept_filters[i * n + j]
stitched_filters[(img_width + margin) * i: (img_width + margin) * i + img_width,
(img_height + margin) * j: (img_height + margin) * j + img_height, :] = img
# save the result to disk
imsave('stitched_filters_%dx%d.png' % (n, n), stitched_filters)
</code></pre>
<p>After executing it I get:</p>
<pre><code>ValueError Traceback (most recent call last)
/home/user/conv_filter_visualization.py in <module>()
97 # we run gradient ascent for 20 steps
/home/user/.local/lib/python3.4/site-packages/theano/compile/function_module.py in __call__(self, *args, **kwargs)
857 t0_fn = time.time()
858 try:
--> 859 outputs = self.fn()
860 except Exception:
861 if hasattr(self.fn, 'position_of_error'):
ValueError: CorrMM images and kernel must have the same stack size
Apply node that caused the error: CorrMM{valid, (1, 1)}(convolution2d_input_1, Subtensor{::, ::, ::int64, ::int64}.0)
Toposort index: 8
Inputs types: [TensorType(float32, 4D), TensorType(float32, 4D)]
Inputs shapes: [(1, 3, 300, 300), (16, 1, 20, 20)]
Inputs strides: [(1080000, 360000, 1200, 4), (1600, 1600, -80, -4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Elemwise{add,no_inplace}(CorrMM{valid, (1, 1)}.0, Reshape{4}.0), Elemwise{Composite{(i0 * (Abs(i1) + i2 + i3))}}[(0, 1)](TensorConstant{(1, 1, 1, 1) of 0.5}, Elemwise{add,no_inplace}.0, CorrMM{valid, (1, 1)}.0, Reshape{4}.0)]]
Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "/home/user/.local/lib/python3.4/site-packages/keras/models.py", line 787, in from_config
model.add(layer)
File "/home/user/.local/lib/python3.4/site-packages/keras/models.py", line 114, in add
layer.create_input_layer(batch_input_shape, input_dtype)
File "/home/user/.local/lib/python3.4/site-packages/keras/engine/topology.py", line 341, in create_input_layer
self(x)
File "/home/user/.local/lib/python3.4/site-packages/keras/engine/topology.py", line 485, in __call__
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/home/user/.local/lib/python3.4/site-packages/keras/engine/topology.py", line 543, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/home/user/.local/lib/python3.4/site-packages/keras/engine/topology.py", line 148, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/home/user/.local/lib/python3.4/site-packages/keras/layers/convolutional.py", line 356, in call
filter_shape=self.W_shape)
File "/home/user/.local/lib/python3.4/site-packages/keras/backend/theano_backend.py", line 862, in conv2d
filter_shape=filter_shape)
</code></pre>
<p>I guess I am having some bad dimensions, but don't even know where to start. Any help would be appreciated. Thanks.</p>
| 1 | 2016-09-01T21:01:24Z | 39,385,599 | <p>Keras makes it quite easy to get layers' weights and outputs. Have a look at <a href="https://keras.io/layers/about-keras-layers/" rel="nofollow">https://keras.io/layers/about-keras-layers/</a> or <a href="https://keras.io/getting-started/functional-api-guide/#the-concept-of-layer-node" rel="nofollow">https://keras.io/getting-started/functional-api-guide/#the-concept-of-layer-node</a>.</p>
<p>You can basically get it with the properties <code>weights</code> and <code>output</code> of each layer.</p>
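<p>For a weights-based view (as opposed to the gradient-ascent approach in the question), you could pull a layer's kernels with <code>layer.get_weights()</code> and normalize each one into a grayscale image. A numpy-only sketch of that normalization, mirroring the question's <code>deprocess_image</code> helper - the random kernel is a stand-in for real weights:</p>

```python
import numpy as np

def deprocess(weights):
    # same recipe as deprocess_image in the question: center the values,
    # scale them to a small std around mid-gray, then clamp to 0..255
    x = weights - weights.mean()
    x = x / (x.std() + 1e-5) * 0.1 + 0.5
    return np.clip(x * 255, 0, 255).astype("uint8")

# stand-in for one 20x20 kernel taken from layer.get_weights()[0]
rng = np.random.RandomState(0)
img = deprocess(rng.randn(20, 20))
print(img.shape, img.dtype)
```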
| 0 | 2016-09-08T08:01:19Z | [
"python",
"visualization",
"keras"
] |
visualization of convolutional layer in keras model | 39,280,813 | <p>I created a model in Keras (I am a newbie) and somehow managed to train it nicely. It takes 300x300 images and tries to classify them into two groups.</p>
<pre><code># size of image in pixel
img_rows, img_cols = 300, 300
# number of classes (here digits 1 to 10)
nb_classes = 2
# number of convolutional filters to use
nb_filters = 16
# size of pooling area for max pooling
nb_pool = 20
# convolution kernel size
nb_conv = 20
X = np.vstack([X_train, X_test]).reshape(-1, 1, img_rows, img_cols)
y = np_utils.to_categorical(np.concatenate([y_train, y_test]), nb_classes)
# build model
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv, border_mode='valid', input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
# run model
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
</code></pre>
<p>Now I would like to visualize the second convolutional layer and if possible also the first dense layer. "Inspiration" was taken from <a href="https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html" rel="nofollow">keras blog</a>. By using <code>model.summary()</code> I found out the name of the layers. Then I created the following frankenstein code:</p>
<pre><code>from __future__ import print_function
from scipy.misc import imsave
import numpy as np
import time
#from keras.applications import vgg16
import keras
from keras import backend as K
# dimensions of the generated pictures for each filter.
img_width = 300
img_height = 300
# the name of the layer we want to visualize
# (see model definition at keras/applications/vgg16.py)
layer_name = 'convolution2d_2'
#layer_name = 'dense_1'
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
if K.image_dim_ordering() == 'th':
x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
# load model
loc_json = 'my_model_short_architecture.json'
loc_h5 = 'my_model_short_weights.h5'
with open(loc_json, 'r') as json_file:
loaded_model_json = json_file.read()
model = keras.models.model_from_json(loaded_model_json)
# load weights into new model
model.load_weights(loc_h5)
print('Model loaded.')
model.summary()
# this is the placeholder for the input images
input_img = model.input
# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
def normalize(x):
# utility function to normalize a tensor by its L2 norm
return x / (K.sqrt(K.mean(K.square(x))) + 1e-5)
kept_filters = []
for filter_index in range(0, 200):
# we only scan through the first 200 filters,
# but there are actually 512 of them
print('Processing filter %d' % filter_index)
start_time = time.time()
# we build a loss function that maximizes the activation
# of the nth filter of the layer considered
layer_output = layer_dict[layer_name].output
if K.image_dim_ordering() == 'th':
loss = K.mean(layer_output[:, filter_index, :, :])
else:
loss = K.mean(layer_output[:, :, :, filter_index])
# we compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, input_img)[0]
# normalization trick: we normalize the gradient
grads = normalize(grads)
# this function returns the loss and grads given the input picture
iterate = K.function([input_img], [loss, grads])
# step size for gradient ascent
step = 1.
# we start from a gray image with some random noise
if K.image_dim_ordering() == 'th':
input_img_data = np.random.random((1, 3, img_width, img_height))
else:
input_img_data = np.random.random((1, img_width, img_height, 3))
input_img_data = (input_img_data - 0.5) * 20 + 128
# we run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([input_img_data])
input_img_data += grads_value * step
print('Current loss value:', loss_value)
if loss_value <= 0.:
# some filters get stuck to 0, we can skip them
break
# decode the resulting input image
if loss_value > 0:
img = deprocess_image(input_img_data[0])
kept_filters.append((img, loss_value))
end_time = time.time()
print('Filter %d processed in %ds' % (filter_index, end_time - start_time))
# we will stich the best 64 filters on a 8 x 8 grid.
n = 8
# the filters that have the highest loss are assumed to be better-looking.
# we will only keep the top 64 filters.
kept_filters.sort(key=lambda x: x[1], reverse=True)
kept_filters = kept_filters[:n * n]
# build a black picture with enough space for
# our 8 x 8 filters of size 128 x 128, with a 5px margin in between
margin = 5
width = n * img_width + (n - 1) * margin
height = n * img_height + (n - 1) * margin
stitched_filters = np.zeros((width, height, 3))
# fill the picture with our saved filters
for i in range(n):
for j in range(n):
img, loss = kept_filters[i * n + j]
stitched_filters[(img_width + margin) * i: (img_width + margin) * i + img_width,
(img_height + margin) * j: (img_height + margin) * j + img_height, :] = img
# save the result to disk
imsave('stitched_filters_%dx%d.png' % (n, n), stitched_filters)
</code></pre>
<p>After executing it I get:</p>
<pre><code>ValueError Traceback (most recent call last)
/home/user/conv_filter_visualization.py in <module>()
97 # we run gradient ascent for 20 steps
/home/user/.local/lib/python3.4/site-packages/theano/compile/function_module.py in __call__(self, *args, **kwargs)
857 t0_fn = time.time()
858 try:
--> 859 outputs = self.fn()
860 except Exception:
861 if hasattr(self.fn, 'position_of_error'):
ValueError: CorrMM images and kernel must have the same stack size
Apply node that caused the error: CorrMM{valid, (1, 1)}(convolution2d_input_1, Subtensor{::, ::, ::int64, ::int64}.0)
Toposort index: 8
Inputs types: [TensorType(float32, 4D), TensorType(float32, 4D)]
Inputs shapes: [(1, 3, 300, 300), (16, 1, 20, 20)]
Inputs strides: [(1080000, 360000, 1200, 4), (1600, 1600, -80, -4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Elemwise{add,no_inplace}(CorrMM{valid, (1, 1)}.0, Reshape{4}.0), Elemwise{Composite{(i0 * (Abs(i1) + i2 + i3))}}[(0, 1)](TensorConstant{(1, 1, 1, 1) of 0.5}, Elemwise{add,no_inplace}.0, CorrMM{valid, (1, 1)}.0, Reshape{4}.0)]]
Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
File "/home/user/.local/lib/python3.4/site-packages/keras/models.py", line 787, in from_config
model.add(layer)
File "/home/user/.local/lib/python3.4/site-packages/keras/models.py", line 114, in add
layer.create_input_layer(batch_input_shape, input_dtype)
File "/home/user/.local/lib/python3.4/site-packages/keras/engine/topology.py", line 341, in create_input_layer
self(x)
File "/home/user/.local/lib/python3.4/site-packages/keras/engine/topology.py", line 485, in __call__
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/home/user/.local/lib/python3.4/site-packages/keras/engine/topology.py", line 543, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/home/user/.local/lib/python3.4/site-packages/keras/engine/topology.py", line 148, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/home/user/.local/lib/python3.4/site-packages/keras/layers/convolutional.py", line 356, in call
filter_shape=self.W_shape)
File "/home/user/.local/lib/python3.4/site-packages/keras/backend/theano_backend.py", line 862, in conv2d
filter_shape=filter_shape)
</code></pre>
<p>I guess I am having some bad dimensions, but don't even know where to start. Any help would be appreciated. Thanks.</p>
| 1 | 2016-09-01T21:01:24Z | 39,988,934 | <p>In your network there are only 16 filters in the first convolution layer and 16 in the next, so you have 32 convolution filters in total, but you are running the filter loop for 200 iterations. Try changing it to 16 or 32. I am running this code with the TF backend and it works for my small CNN.
Also, change the image-stitching code so it does not index past the filters you actually kept:</p>
<pre><code>for i in range(n):
    for j in range(n):
        if (i * n + j) <= len(kept_filters) - 1:
            img, loss = kept_filters[i * n + j]
            stitched_filters[(img_width + margin) * i: (img_width + margin) * i + img_width,
                             (img_height + margin) * j: (img_height + margin) * j + img_height, :] = img
</code></pre>
<p>Best of Luck...</p>
| 0 | 2016-10-12T00:56:59Z | [
"python",
"visualization",
"keras"
] |
ImportError: No module named 'matplotlib' | 39,280,899 | <p>Brand new to Python (typically program in MSDN C#) and I'm trying to make use of the matplotlib to generate some graphics from .csv files</p>
<p>I've downloaded & installed Python as well as Anaconda onto my Windows 10 machine, the versions are Python 3.5.2 and Anaconda 4.1.1</p>
<p>I open up the Python "notepad" interface and do</p>
<pre><code>import matplotlib.pyplot as plt
plt.plot([1,2,3],[3,2,1])
plt.show()
</code></pre>
<p>but when I run the code I get the error: </p>
<blockquote>
<p>ImportError: No module named 'matplotlib'</p>
</blockquote>
<p>I've looked at some other posts for this but they all seem to be in regard to Mac OSX or Linux. Some have pointed to multiple installs of matplotlib, but I haven't turned up such a situation so far. What might be causing this, or how can I troubleshoot it?</p>
<p>**Edit:</p>
<p>The path returned to me from the import sys recommended in the comments gave me this response</p>
<blockquote>
<p>['C:\Users\a.watts.ISAM-NA\Desktop',</p>
<p>'C:\Users\a.watts.ISAM-NA\AppData\Local\Programs\Python\Python35-32\python35.zip',</p>
<p>'C:\Users\a.watts.ISAM-NA\AppData\Local\Programs\Python\Python35-32\DLLs',</p>
<p>'C:\Users\a.watts.ISAM-NA\AppData\Local\Programs\Python\Python35-32\lib',</p>
<p>'C:\Users\a.watts.ISAM-NA\AppData\Local\Programs\Python\Python35-32',</p>
<p>'C:\Users\a.watts.ISAM-NA\AppData\Local\Programs\Python\Python35-32\lib\site-packages',</p>
<p>'C:\Users\a.watts.ISAM-NA\AppData\Local\Programs\Python\Python35-32\lib\site-packages\setuptools-26.1.1-py3.5.egg']</p>
</blockquote>
| 0 | 2016-09-01T21:09:42Z | 39,282,133 | <p>You essentially have 2 versions of python on your system - the standard one you downloaded and the one that ships with Anaconda. When you are running code in the IDLE you are using the standard version (in <code>C:\Users\a.watts.ISAM-NA\AppData\Local\Programs\Python\Python35-32\python.exe</code>) where <code>matplotlib</code> isn't installed which is why you are getting the error.</p>
<p>You need to use the Anaconda version (<code>C:\Users\a.watts.ISAM-NA\AppData\Local\continuum\anaconda3\python.exe</code>), which comes with the scientific stack already set up. It looks like your system is using this one from <code>cmd</code>, so if you run scripts from there they should use the Anaconda version. If you want something more interactive you can also use <code>spyder</code> - Anaconda's counterpart to IDLE - or run <code>jupyter notebook</code> from cmd to get a browser-based platform for interactive development.</p>
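<p>To confirm which of the two interpreters a given shell or IDE is actually running, you can print its path and version from inside Python (a quick diagnostic, not part of the fix itself):</p>

```python
import sys

# Which interpreter is executing this script, and its version string.
print(sys.executable)
print(sys.version)
```

<p>If the path printed from IDLE points at <code>Python35-32</code> rather than the Anaconda directory, that confirms the situation described above.</p>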
| 3 | 2016-09-01T23:09:16Z | [
"python",
"matplotlib"
] |
A way to pass c++ object to another object's method in cython | 39,280,945 | <p>I have two classes (let's assume the most simple ones, implementation is not important). My <code>defs.pxd</code> file (with cython defs) looks like this:</p>
<pre><code>cdef extern from "A.hpp":
    cdef cppclass A:
        A() except +

cdef extern from "B.hpp":
    cdef cppclass B:
        B() except +
        int func (A)
</code></pre>
<p>My <code>pyx</code> file (with python defs) looks like this:</p>
<pre><code>from cython.operator cimport dereference as deref
from libcpp.memory cimport shared_ptr
cimport defs
cdef class A:
    cdef shared_ptr[cquacker_defs.A] _this

    @staticmethod
    cdef inline A _from_this(shared_ptr[cquacker_defs.A] _this):
        cdef A result = A.__new__(A)
        result._this = _this
        return result

    def __init__(self):
        self._this.reset(new cquacker_defs.A())

cdef class B:
    cdef shared_ptr[cquacker_defs.B] _this

    @staticmethod
    cdef inline B _from_this(shared_ptr[cquacker_defs.B] _this):
        cdef B result = B.__new__(B)
        result._this = _this
        return result

    def __init__(self):
        self._this.reset(new cquacker_defs.B())

    def func(self, a):
        return deref(self._this).func(deref(a._this))
</code></pre>
<p>The thing is that <code>deref(self._this)</code> works right, but <code>deref(a._this)</code> doesn't with this error:</p>
<pre><code>Invalid operand type for '*' (Python object)
</code></pre>
<p>How can I pass one python object's internal c++ object to another one's method in python?</p>
| 1 | 2016-09-01T21:13:48Z | 39,285,420 | <pre><code>def func(self, A a):
    return # ... as before
</code></pre>
<p>You need to tell Cython that <code>a</code> is of type <code>A</code> (the type is checked when you call it). That way it knows about <code>a._this</code> and doesn't treat it as a Python attribute lookup. You can only access <code>cdef</code>ed attributes if Cython knows the type at comple time.</p>
| 1 | 2016-09-02T06:10:55Z | [
"python",
"c++",
"cython"
] |
pandas equivalent of SELECT * FROM table WHERE column1=column2 | 39,280,953 | <p>What is the pandas equivalent of 'SELECT * FROM table WHERE column1=column2'?</p>
<p>You have a dataframe, two columns with values. You want all rows where the numbers in both columns are the same. What's the code for that?</p>
<pre><code>dataframe:
column1 column2
a b
b a
c c
d d
a b
a b
The result I want:
column1 column2
c c
d d
</code></pre>
<p>Thank you.</p>
| -1 | 2016-09-01T21:14:27Z | 39,281,169 | <p>In this case, you will use something from Pandas called Masking</p>
<p>Basically, DataFrame[condition, on a column or the entire dataframe itself] returns a DataFrame where the condition is True.</p>
<pre><code>import pandas as pd
import numpy as np
data = {'a': np.random.randint(0, 10, 100),
        'b': np.random.randint(0, 10, 100)}
df = pd.DataFrame(data)
df[df.a==df.b]
</code></pre>
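<p>Applied to the column names from the question, the same masking idea looks like this (a small illustrative sketch using the question's sample values):</p>

```python
import pandas as pd

df = pd.DataFrame({'column1': list('abcdaa'),
                   'column2': list('bacdbb')})

# boolean mask: keep only the rows where the two columns hold the same value
equal_rows = df[df['column1'] == df['column2']]
print(equal_rows)
```

<p>This keeps exactly the <code>c, c</code> and <code>d, d</code> rows from the example.</p>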
| 1 | 2016-09-01T21:31:23Z | [
"python",
"pandas"
] |
How to update Django Rest User attribute? | 39,281,017 | <p>I have this model:</p>
<pre><code>class UserProfile(models.Model):
    user = models.OneToOneField(User, related_name='profile')
    stuff = models.TextField(default='')

User.profile = property(UserProfile)
</code></pre>
<p>With this serializer:</p>
<pre><code>class UserProfileSerializer(UserSerializer):
    stuff = serializers.SerializerMethodField()

    def get_stuff(self, obj):
        return obj.profile.stuff

    def update(self, instance, validated_data):
        instance.profile.stuff = validated_data.get('stuff', instance.profile.stuff)
        instance.save()
        return instance
</code></pre>
<p>With this endpoint:</p>
<pre><code>class UpdateUserProfile(UpdateAPIView):
    model = UserProfile
    serializer_class = UserProfileSerializer
    permission_classes = [permissions.IsAuthenticated]

    def get_object(self):
        return self.request.user
</code></pre>
<p>When I call HTTP/PUT to update my stuff field, it doesn't update. I don't know why.</p>
| 1 | 2016-09-01T21:19:02Z | 39,284,246 | <p>If your <code>update</code> method is executing and <code>instance.profile</code> is populated inside it, then the problem is in these lines:</p>
<pre><code>instance.profile.stuff = validated_data.get('stuff', instance.profile.stuff)
instance.save()
</code></pre>
<p>you need to update the profile instance. </p>
<pre><code>instance.profile.stuff = validated_data.get('stuff', instance.profile.stuff)
instance.profile.save()
</code></pre>
<p><code>instance.profile.save()</code> will update your profile instance.</p>
| 0 | 2016-09-02T04:18:44Z | [
"python",
"django",
"rest",
"django-rest-framework"
] |
pygobject add item to container within signal callback | 39,281,102 | <p>I'm working on a simple GUI application using PyGObject and GTK+ 3. In this case, I'm wanting to have a button which brings up a dialog box that when you click OK will add an item to a list. I have that part working but the final part that doesn't work is adding the item to the list. It appears that an item does get added but it's empty. It's selectable, though, just very small. I've tried adding other kinds of widgets like Gtk.Button to see if it was something weird with Gtk.Label. When I add the Gtk.Label in the constructor it works just fine.</p>
<p>Also I know this isn't quite the way to do things and there are some oddities with how I'm doing stuff in my code but I'm still just learning how to use PyGObject/GTK+ 3. I imagine this problem is just something stupid I'm overlooking.</p>
<p>MainWindow.py</p>
<pre><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
import PromptDialog
class MainWindow(Gtk.Window):
    def addURLResponse(self, dialog, response, listBox):
        if(response == Gtk.ResponseType.OK):
            print(dialog.get_text())
            label = Gtk.Label(dialog.get_text())
            print(label.get_text())
            listBox.add(label)
        if(response != Gtk.ResponseType.DELETE_EVENT):
            dialog.destroy()

    def addURL(self, button):
        URLDialog = PromptDialog.PromptDialog("Add URL", self)
        URLDialog.connect('response', self.addURLResponse, button.get_parent().get_parent().get_parent().get_children()[1])
        URLDialog.show_all()

    def __init__(self):
        Gtk.Window.__init__(self, title="MPV-VJ")
        self.playlistsBar = Gtk.FlowBox()
        self.newBtn = Gtk.Button.new_with_label('+')
        self.playlistsBar.add(self.newBtn)
        self.playlistsList = Gtk.ListBox()
        self.playlistsView = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=1)
        self.playlistsView.pack_start(self.playlistsBar, False, False, 0)
        self.playlistsView.pack_start(self.playlistsList, True, True, 0)
        self.playlist1View = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=1)
        self.playlist1Bar = Gtk.FlowBox()
        self.addUrl1Btn = Gtk.Button.new_with_label('+URL')
        self.addUrl1Btn.connect('clicked', self.addURL)
        self.playlist1Bar.add(self.addUrl1Btn)
        self.addFile1Btn = Gtk.Button.new_with_label('+file')
        self.playlist1Bar.add(self.addFile1Btn)
        self.addDir1Btn = Gtk.Button.new_with_label('+dir')
        self.playlist1Bar.add(self.addDir1Btn)
        self.playlist1List = Gtk.ListBox()
        self.playlist1View.pack_start(self.playlist1Bar, False, False, 0)
        self.playlist1View.pack_start(self.playlist1List, True, True, 0)
        self.playlist2View = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=1)
        self.playlist2Bar = Gtk.FlowBox()
        self.addUrl2Btn = Gtk.Button.new_with_label('+URL')
        self.playlist2Bar.add(self.addUrl2Btn)
        self.addFile2Btn = Gtk.Button.new_with_label('+file')
        self.playlist2Bar.add(self.addFile2Btn)
        self.addDir2Btn = Gtk.Button.new_with_label('+dir')
        self.playlist2Bar.add(self.addDir2Btn)
        self.playlist2List = Gtk.ListBox()
        self.playlist2View.pack_start(self.playlist2Bar, False, False, 0)
        self.playlist2View.pack_start(self.playlist2List, True, True, 0)
        self.plViewsBox = Gtk.HPaned()
        self.plViewsBox.pack1(self.playlist1View, True, False)
        self.plViewsBox.pack2(self.playlist2View, True, False)
        self.viewBox = Gtk.HPaned()
        self.viewBox.pack1(self.playlistsView, True, False)
        self.viewBox.pack2(self.plViewsBox, True, False)
        self.viewBox.set_position(200)
        self.logView = Gtk.ListBox()
        self.contentBox = Gtk.VPaned()
        self.contentBox.pack1(self.viewBox, True, False)
        self.contentBox.pack2(self.logView, True, False)
        self.contentBox.set_position(400)
        self.toolBar = Gtk.FlowBox()
        self.newBtn = Gtk.Button.new_with_label('new')
        self.toolBar.add(self.newBtn)
        self.loadBtn = Gtk.Button.new_with_label('load')
        self.toolBar.add(self.loadBtn)
        self.saveBtn = Gtk.Button.new_with_label('save')
        self.toolBar.add(self.saveBtn)
        self.mainBox = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=1)
        self.mainBox.pack_start(self.toolBar, False, False, 0)
        self.mainBox.pack_start(self.contentBox, True, True, 0)
        self.add(self.mainBox)
        self.resize(1000, 500)
</code></pre>
<p>PromptDialog.py</p>
<pre><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
class PromptDialog(Gtk.Dialog):
    def get_text(self):
        return(self.entry.get_buffer().get_text())

    def __init__(self, message, mainWindow):
        Gtk.Dialog.__init__(self, title="Prompt")
        self.set_modal(True)
        self.set_transient_for(mainWindow)
        self.label = Gtk.Label(message)
        self.entry = Gtk.Entry()
        self.get_content_area().pack_start(self.label, True, True, 0)
        self.get_content_area().pack_start(self.entry, True, True, 0)
        self.add_button("OK", Gtk.ResponseType.OK)
        self.add_button("Cancel", Gtk.ResponseType.CANCEL)
</code></pre>
| 1 | 2016-09-01T21:25:09Z | 39,281,322 | <p>A widget added to an already-visible window needs to have its <code>show()</code> method called before it will appear. For example, call <code>label.show()</code> right after <code>listBox.add(label)</code>, or call <code>listBox.show_all()</code> on the container.</p>
| 0 | 2016-09-01T21:42:49Z | [
"python",
"gtk3",
"gobject"
] |
Solving matrix equation A B = C. with B(n* 1) and C(n *1) | 39,281,149 | <p>I am trying to solve a matrix equation such as <code>A.B = C</code>. The A is the unknown matrix and i must find it.
I have <code>B(n*1)</code> and <code>C(n*1)</code>, so <code>A</code> must be <code>n*n</code>. </p>
<p>I used the transpose identity <code>B.T * A.T = C.T</code> (<code>numpy.linalg.solve(B.T, C.T)</code>),
but it produces an error:</p>
<blockquote>
<p>LinAlgError: Last 2 dimensions of the array must be square.</p>
</blockquote>
<p>So the problem is that B isn't square.</p>
| 0 | 2016-09-01T21:30:06Z | 39,281,212 | <p>Here's a little example for you:</p>
<pre><code>import numpy as np
a = np.array([[1, 2], [3, 4]])
b = np.array([5, 6])
x = np.linalg.solve(a, b)
print "A={0}".format(a)
print "B={0}".format(b)
print "x={0}".format(x)
</code></pre>
<p>For more information, please read the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html#numpy.linalg.solve" rel="nofollow">docs</a></p>
| 0 | 2016-09-01T21:34:05Z | [
"python",
"matrix",
"equation-solving"
] |
Solving matrix equation A B = C. with B(n* 1) and C(n *1) | 39,281,149 | <p>I am trying to solve a matrix equation such as <code>A.B = C</code>. The A is the unknown matrix and i must find it.
I have <code>B(n*1)</code> and <code>C(n*1)</code>, so <code>A</code> must be <code>n*n</code>. </p>
<p>I used the transpose identity <code>B.T * A.T = C.T</code> (<code>numpy.linalg.solve(B.T, C.T)</code>),
but it produces an error:</p>
<blockquote>
<p>LinAlgError: Last 2 dimensions of the array must be square.</p>
</blockquote>
<p>So the problem is that B isn't square.</p>
| 0 | 2016-09-01T21:30:06Z | 39,290,812 | <p>If you're solving for the matrix, there is an infinite number of solutions (assuming that <code>B</code> is nonzero). Here's one of the possible solutions: </p>
<p>Choose a nonzero element of <code>B</code>, <code>Bi</code>. Now construct a matrix <code>A</code> such that the <code>i</code>th column is <code>C / Bi</code>, and the other columns are zero.</p>
<p>It should be easy to verify that multiplying this matrix by <code>B</code> gives <code>C</code>.</p>
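<p>A minimal pure-Python sketch of that construction (the values of <code>B</code> and <code>C</code> here are made up for illustration):</p>

```python
# Build one valid A for A.B = C: pick an index i with B[i] != 0 and set
# column i of A to C / B[i]; every other column stays zero.
n = 3
B = [5.0, 6.0, 7.0]   # n*1, illustrative values
C = [1.0, 2.0, 3.0]   # n*1, illustrative values

i = next(k for k, b in enumerate(B) if b != 0)
A = [[0.0] * n for _ in range(n)]
for row in range(n):
    A[row][i] = C[row] / B[i]

# Verify: (A.B)[r] = sum over c of A[r][c] * B[c] should equal C[r]
AB = [sum(A[r][c] * B[c] for c in range(n)) for r in range(n)]
```

<p>Multiplying the constructed <code>A</code> by <code>B</code> reproduces <code>C</code> up to floating-point rounding.</p>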
| 1 | 2016-09-02T11:01:30Z | [
"python",
"matrix",
"equation-solving"
] |
@method_decorator(csrf_exempt) NameError: name 'method_decorator' is not defined | 39,281,162 | <p>I was following the following guide (<a href="https://abhaykashyap.com/blog/post/tutorial-how-build-facebook-messenger-bot-using-django-ngrok" rel="nofollow">https://abhaykashyap.com/blog/post/tutorial-how-build-facebook-messenger-bot-using-django-ngrok</a>) on how to create a chatbot, until the part where I updated the views.py. There seems to be some issue with the method declaration and I don't know what is wrong. Other than that, the code is almost exactly the same as the guide (with some imports that the guide creator forgot to add). Here is the error I got when trying to run the server in my virtual environment:</p>
<pre><code>(ivanteongbot) Ivans-MacBook-Pro:ivanteongbot ivanteong$ python manage.py runserver
Performing system checks...
Unhandled exception in thread started by <function wrapper at 0x1097a2050>
Traceback (most recent call last):
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/utils/autoreload.py", line 226, in wrapper
fn(*args, **kwargs)
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/core/management/commands/runserver.py", line 121, in inner_run
self.check(display_num_errors=True)
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/core/management/base.py", line 385, in check
include_deployment_checks=include_deployment_checks,
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/core/management/base.py", line 372, in _run_checks
return checks.run_checks(**kwargs)
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/core/checks/registry.py", line 81, in run_checks
new_errors = check(app_configs=app_configs)
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/core/checks/urls.py", line 14, in check_url_config
return check_resolver(resolver)
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/core/checks/urls.py", line 24, in check_resolver
for pattern in resolver.url_patterns:
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/urls/resolvers.py", line 310, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/urls/resolvers.py", line 303, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/ivanteong/Desktop/comp9323/ivanteongbot/ivanteongbot/urls.py", line 24, in <module>
url(r'^fb_ivanteongbot/', include('fb_ivanteongbot.urls')),
File "/Users/ivanteong/Envs/ivanteongbot/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 50, in include
urlconf_module = import_module(urlconf_module)
File "/usr/local/Cellar/python/2.7.12/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/Users/ivanteong/Desktop/comp9323/ivanteongbot/fb_ivanteongbot/urls.py", line 3, in <module>
from .views import IvanTeongBotView
File "/Users/ivanteong/Desktop/comp9323/ivanteongbot/fb_ivanteongbot/views.py", line 8, in <module>
class IvanTeongBotView(generic.View):
File "/Users/ivanteong/Desktop/comp9323/ivanteongbot/fb_ivanteongbot/views.py", line 15, in IvanTeongBotView
@method_decorator(csrf_exempt)
NameError: name 'method_decorator' is not defined
</code></pre>
<p>The following code is what I have in views.py for my app directory:</p>
<pre><code>from django.shortcuts import render
# ivanteongbot/fb_ivanteongbot/views.py
from django.views import generic
from django.http.response import HttpResponse
from django.views.decorators.csrf import csrf_exempt # add this
# Create your views here.
class IvanTeongBotView(generic.View):
    def get(self, request, *args, **kwargs):
        if self.request.GET['hub.verify_token'] == '2318934571':
            return HttpResponse(self.request.GET['hub.challenge'])
        else:
            return HttpResponse('Error, invalid token')

    @method_decorator(csrf_exempt)
    def dispatch(self, request, *args, **kwargs):
        return generic.View.dispatch(self, request, *args, **kwargs)

    # Post function to handle Facebook messages
    def post(self, request, *args, **kwargs):
        # Converts the text payload into a python dictionary
        incoming_message = json.loads(self.request.body.decode('utf-8'))
        # Facebook recommends going through every entry since they might send
        # multiple messages in a single call during high load
        for entry in incoming_message['entry']:
            for message in entry['messaging']:
                # Check to make sure the received call is a message call
                # This might be delivery, optin, postback for other events
                if 'message' in message:
                    # Print the message to the terminal
                    print(message)
        return HttpResponse()
</code></pre>
<p>This is what I have in my urls.py:</p>
<pre><code># ivanteongbot/fb_ivanteongbot/urls.py
from django.conf.urls import include, url # add this
from .views import IvanTeongBotView
urlpatterns = [
    url(r'^99789126bd00b5454d999cf3a9c3f8a9274d4e1460ac4b9863/?$', IvanTeongBotView.as_view())
]
</code></pre>
<p>Not sure what is wrong with the method declaration, because I did some googling and it seems to be declared the same way as in the user guide.</p>
| 1 | 2016-09-01T21:30:53Z | 39,281,283 | <p>The tutorial you're reading leaves out a number of crucial imports, such as the one that provides <code>method_decorator</code>:</p>
<pre><code>from django.utils.decorators import method_decorator
</code></pre>
<p>It may help to follow the "View code on github" link, which takes you to a <a href="https://github.com/abhay1/django-facebook-messenger-bot-tutorial/blob/master/yomamabot/fb_yomamabot/views.py" rel="nofollow">more complete file</a>.</p>
| 1 | 2016-09-01T21:39:36Z | [
"python",
"django",
"server",
"bots",
"method-declaration"
] |
Flask: NameError: 'app' is not defined? | 39,281,199 | <p>My Flask application has been running fine for the last 24 hours, however I just took it offline to work on it and i'm trying to start it up again and i'm getting this error:</p>
<pre><code>Traceback (most recent call last):
File "runserver.py", line 1, in <module>
from app import app
File "/home/MY NAME/APP FOLDER NAME/app.py", line 15, in <module>
from Views import *
File "/home/MY NAME/APP FOLDER NAME/Views.py", line 1, in <module>
@app.route('/contact', methods=('GET', 'POST'))
NameError: name 'app' is not defined
</code></pre>
<p>I am running the application currently by calling <code>python runserver.py</code></p>
<p><strong>runserver.py:</strong></p>
<pre><code>from app import app
app.run(threaded = True, debug=True, host='0.0.0.0')
</code></pre>
<p><strong>Views.py</strong>: contains all of my routes, I won't post them all, as the error is coming from the first time <code>app</code> is mentioned in this file.</p>
<pre><code>@app.route('/contact', methods=('GET', 'POST'))
def contact():
    form = ContactForm()
    if request.method == 'POST':
        msg = Message("CENSORED",
                      sender='CENSORED',
                      recipients=['CENSORED'])
        msg.body = """
        From: %s <%s>,
        %s
        """ % (form.name.data, form.email.data, form.message.data)
        mail.send(msg)
        return "><p><br>Successfully sent message!</p></body>"
    elif request.method == 'GET':
        return render_template('contact.html', form=form)
</code></pre>
<p><strong>app.py:</strong> Here is the top of my app.py file, where I define <code>app = Flask(__name__)</code> as well as import my statements.</p>
<pre><code>from flask import Flask, request, render_template, redirect, url_for, send_file
from geopy.geocoders import Bing
from geopy.exc import GeocoderTimedOut
import re
import urllib
from bs4 import BeautifulSoup
from openpyxl import load_workbook
from openpyxl.styles import Style, Font
import os
import pandas as pd
import numpy as np
import datetime
from Helper_File import *
from Lists import *
from Views import *
from flask_mail import Mail, Message
from forms import ContactForm
global today
geolocator = Bing('Censored')
app = Flask(__name__)
</code></pre>
<p><strong>EDIT: I have made the changes suggested in the answer below, but am getting this when I access the page:</strong></p>
<blockquote>
<p>Not Found</p>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
</blockquote>
<p>Here is my file structure:</p>
<pre><code>DHLSoftware.com
|-static
|-style.css
|-templates
|- seperate html file for each page template
|-app.py
|-forms.py
|-helper_file.py
|-Lists.py
|-runserver.py
|-Views.py
</code></pre>
| 0 | 2016-09-01T21:33:09Z | 39,281,275 | <p>In <code>app.py</code> you should remove the <code>from Views import *</code>. Instead in your <code>Views.py</code> you need to have <code>from app import app</code></p>
<p>That will bring <code>app</code> into the views namespace. To make sure the routes defined in <code>Views.py</code> actually get registered (and to avoid the circular import), do the <code>from Views import *</code> at the <em>bottom</em> of <code>app.py</code>, after <code>app = Flask(__name__)</code> is defined.</p>
| 1 | 2016-09-01T21:39:14Z | [
"python",
"flask"
] |
Python encode string to html | 39,281,329 | <p>How would one take a string and encode special characters to html?</p>
<p>for example, if I have "test@test" how would I encode it so it becomes "test%40test"</p>
<p>Is there an easy way of doing this instead of using replace to manually list every one I want to replace?</p>
| 0 | 2016-09-01T21:43:46Z | 39,281,354 | <p>Try the <code>urllib.quote</code> function for percent-encoding a single string (which is what you want here), or <code>urllib.urlencode</code> for encoding a dict of query parameters. Documentation here:</p>
<p><a href="https://docs.python.org/2/library/urllib.html#urllib.urlencode" rel="nofollow">https://docs.python.org/2/library/urllib.html#urllib.urlencode</a></p>
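<p>A small sketch covering both functions on the question's example (the <code>try</code>/<code>except</code> handles the Python 3 module move as well as Python 2, which the linked docs describe):</p>

```python
try:
    # Python 3 moved these into urllib.parse
    from urllib.parse import quote, urlencode
except ImportError:
    # Python 2, as in the linked docs
    from urllib import quote, urlencode

single = quote("test@test", safe="")       # percent-encode one string
query = urlencode({"user": "test@test"})   # build a full query string
print(single)
print(query)
```

<p><code>quote</code> turns <code>test@test</code> into <code>test%40test</code>, with no need to list replacements manually.</p>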
| 1 | 2016-09-01T21:45:41Z | [
"python",
"html",
"string",
"encode"
] |
Facebook "Not Delivering Not Approved" Ad Sets via Python SDK | 39,281,371 | <p>Having a little trouble here with filtering ad sets via the Facebook Ads Python SDK.</p>
<p>I'm making the following call (the variable account is an instance of <a href="https://github.com/facebook/facebook-python-ads-sdk/blob/f02c6d5a5a3b211751f4eb3975cd3659cf001f17/facebookads/adobjects/adset.py" rel="nofollow">AdAccount</a>):</p>
<pre><code>account_adsets = account.get_ad_sets(fields=fields,
                                     params={'effective_status': ['ACTIVE'],
                                             'status': ['ACTIVE'],
                                             'date_preset': 'today',
                                             'is_completed': False,
                                             'include_deleted': False})
</code></pre>
<p>And I'm getting back adsets that are marked "Not Delivering, Not Approved" in the "Delivery" column of Power Editor, in addition to the ones marked "Active".</p>
<p>As you can see in the call above, I'm already restricting things to 'status':['ACTIVE'], which I would have thought would filter out disapproved ad sets (as suggested in the <a href="https://github.com/facebook/facebook-python-ads-sdk/blob/f02c6d5a5a3b211751f4eb3975cd3659cf001f17/facebookads/adobjects/adset.py" rel="nofollow">AdSet source here</a>, and copied below):</p>
<pre><code>class AdSet(
    AbstractCrudObject,
    HasAdLabels,
    CanValidate,
):
    def __init__(self, fbid=None, parent_id=None, api=None):
        self._isAdSet = True
        super(AdSet, self).__init__(fbid, parent_id, api)

    class EffectiveStatus:
        active = 'ACTIVE'
        paused = 'PAUSED'
        deleted = 'DELETED'
        pending_review = 'PENDING_REVIEW'
        disapproved = 'DISAPPROVED'
        preapproved = 'PREAPPROVED'
        pending_billing_info = 'PENDING_BILLING_INFO'
        campaign_paused = 'CAMPAIGN_PAUSED'
        archived = 'ARCHIVED'
        adset_paused = 'ADSET_PAUSED'
</code></pre>
<p>Anyone have any ideas how I could stop those ad sets from appearing?</p>
| 0 | 2016-09-01T21:47:42Z | 39,353,835 | <p>Ad Sets aren't approved or disapproved, Ads are.</p>
<p>I'm not 100% sure what you're seeing in the Power Editor UI, but I suspect it's showing 'Not Delivering, Not Approved' by detecting that all of the Ads in that Ad Set are DISAPPROVED.</p>
<p>In your case, you should filter on the Ads' status by fetching at that level and working back up to the Ad Sets and Campaigns, or do filtering after your current call based on the Ads in the Ad Set</p>
| 0 | 2016-09-06T16:32:10Z | [
"python",
"facebook-graph-api",
"facebook-ads-api"
] |
Concatenating strings from two different for loops at each turn | 39,281,402 | <p>I have two different <code>for</code> loops that run the same number of times and produce a string at each iteration. (I am scraping an html file) I want the string from first loop to merge/concatenate/append with the string from the second loop FOR EACH ITERATION ( this is the tricky part )Here is the code i have:</p>
<pre><code>from bs4 import BeautifulSoup
bsObj = BeautifulSoup(open("samfull.html"), "html.parser")
tableList = bsObj.find_all("table", {"class":"width100 menu_header_top_emr"})
tdList = bsObj.find_all("td", {"class":"menu_header width100"})
for table in tableList:
    first_part_of_row_string = ''
    item = table.find_all("span", {"class":"results_body_text"})
    for i in range(len(item)):
        first_part_of_row_string += (item[i].get_text().strip() + ", ")

for td in tdList:
    second_part_of_row_string = ''
    items = td.find_all("span", {"class":"results_body_text"})
    for i in range(len(items)):
        second_part_of_row_string += (items[i].get_text().strip() + ", ")
</code></pre>
<p>To give an example: </p>
<p>Sample results for the <code>for table in tableList</code> loop are:</p>
<pre><code>a,b,
1,2,
father, mother,
</code></pre>
<p>and for the <code>for td in tdList</code> loop are:</p>
<pre><code>c, d, e,
3, 4, 5,
son, daughter, twin,
</code></pre>
<p>I want to combine the <code>first_part_of_row_string</code> of each iteration with the <code>second_part_of_row_string</code> of each iteration as well</p>
<p>so I want to print out this:</p>
<pre><code>a, b, c, d, e,
1, 2, 3, 4, 5
father, mother, son, daughter, twin,
</code></pre>
<p>which are effectively <code>first_part_of_row_string + second_part_of_row_string</code> of each iteration of both loops</p>
<p>The length of tableList and tdList is the same, so both loops will always return the same number of rows. I could have done this in one loop if the td were in the same table referred to in tableList, but unfortunately it is not. In the html, the table with the class specified in the tableList definition is always followed by another table that has no class but always contains a td with the class specified in tdList. A sample occurrence of this html is included below. The whole page is some thousands of lines, so I am putting it on a separate link.<a href="http://pastebin.com/N0qih4Hj" rel="nofollow">link</a></p>
<pre><code><table cellspacing="0" cellpadding="0"
style="margin-left: auto; margin-right: auto;" class="width100 menu_header_top_emr">
<tbody>
<tr>
<td style="width:80px;">
<div style="width:70px;background-color:#B2EE98; border:1px solid grey; padding:2px 5px 2px 5px; text-align:center;">Entity</div>
</td>
<td style="padding-left:5px;">
<span class="results_body_text"><h5 style="vertical-align: middle;">Rascal X-Press, Inc.</h5></span>
</td>
<td style="width:130px;">
<div class="right">
<span class="results_title_text">Status:</span>
<span class="results_body_text">
Submitted
</span>
</div>
</td>
<td style="width:22px;">
<a href="" class="more_duns_link_emr right" style="display: inline;"><img
id="more_duns_link_emr"
src="/SAMSearch/styles/img/expand-small-blue.png" style="padding:8px 8px 8px 2px;"
alt="Expand Search Result for Rascal X-Press, Inc."></a>
<a href="" class="hide_duns_link_emr off right" style="display: none;"><img
id="hide_duns_link_emr"
src="/SAMSearch/styles/img/collapse-small-blue.png" style="padding:8px 8px 8px 2px;"
alt="Collapse Search Result for Rascal X-Press, Inc."></a>
</td>
</tr>
</tbody>
</table>
<table>
<tbody>
<tr>
<td class="menu_header width100">
<table>
<tr>
<td style="width:25%;">
<span class="results_title_text">DUNS:</span> <span class="results_body_text"> 012361296</span>
</td>
<td style="width:25%;">
</td>
<!-- label as CAGE when US Territory is listed as Country -->
<td style="width:27%;">
<span class="results_title_text">CAGE Code:</span> <span class="results_body_text"></span>
</td>
<td style="width:15%" rowspan="2">
<input type="button" value="View Details" title="View Details for Rascal X-Press, Inc." class="center" style="height:25px; width:90px; vertical-align:middle; margin:7px 3px 7px 3px;" onClick="viewEntry('4420848', '1472652382619')" />
</td>
</tr>
<tr>
<td colspan="2">
<span class="results_title_text">Has Active Exclusion?: </span>
<span class="results_body_text">
No
</span>
</td>
<td>
<span class="results_title_text">DoDAAC:</span> <span class="results_body_text"></span>
</td>
</tr>
<tr>
<td colspan="2">
<span class="results_title_text">Expiration Date:</span>
<span class="results_body_text">
</span>
</td>
<td colspan="2"><span class="results_title_text">Delinquent Federal Debt?</span>
<span class="results_body_text">
No
</span>
</td>
</tr>
<tr>
<td colspan="2"><span class="results_title_text">Purpose of Registration:</span>
<span class="results_body_text">
Federal Assistance Awards Only
</span>
</td>
</tr>
</table>
<div class="off_duns_emr" style="display: none;">
<table class="resultbox1 menu_header width100"
style="margin-left: auto; margin-right: auto;" cellpadding="2">
<tbody>
<tr>
<td colspan="3"><span class="results_title_text">Address:</span>
<span class="results_body_text">1372 State Hwy 37</span></td>
</tr>
<tr>
<td style="width:212px;"><span class="results_title_text">City:</span>
<span class="results_body_text">West Frankfort</span></td>
<td style="width:200px;"><span class="results_title_text">State/Province:</span>
<span class="results_body_text">IL</span></td>
</tr>
<tr>
<td style="width:130px;"><span class="results_title_text">ZIP Code:</span>
<span class="results_body_text">62896-5007</span></td>
<td style="width:200px;"><span class="results_title_text">Country:</span>
<span class="results_body_text">UNITED STATES</span></td>
</tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table></td>
</tr>
</tbody>
</table> </li>
</td>
</tr>
</table>
</code></pre>
| 0 | 2016-09-01T21:50:27Z | 39,281,525 | <p>There are lots of ways to do what you're asking for; here's a very simple one:</p>
<pre><code>tableList = [
["a", "b"],
["1", "2"],
["father", "mother"]
]
tdList = [
["c", "d", "e"],
["3", "4", "5"],
["son", "daughter", "twin"]
]
len_list = max(len(tableList), len(tdList))  # both lists have the same length here
for i in range(len_list):
    print(", ".join(tableList[i] + tdList[i]))  # parenthesized so it works in Python 2 and 3
</code></pre>
| 0 | 2016-09-01T22:00:34Z | [
"python",
"string",
"python-2.7",
"loops",
"concatenation"
] |
Concatenating strings from two different for loops at each turn | 39,281,402 | <p>I have two different <code>for</code> loops that run the same number of times and produce a string at each iteration (I am scraping an HTML file). I want the string from the first loop to merge/concatenate/append with the string from the second loop FOR EACH ITERATION (this is the tricky part). Here is the code I have:</p>
<pre><code>from bs4 import BeautifulSoup
bsObj = BeautifulSoup(open("samfull.html"), "html.parser")
tableList = bsObj.find_all("table", {"class":"width100 menu_header_top_emr"})
tdList = bsObj.find_all("td", {"class":"menu_header width100"})
for table in tableList:
first_part_of_row_string = ''
item = table.find_all("span", {"class":"results_body_text"})
for i in range(len(item)):
first_part_of_row_string += (item[i].get_text().strip() + ", ")
for td in tdList:
second_part_of_row_string = ''
items = td.find_all("span", {"class":"results_body_text"})
for i in range(len(items)):
second_part_of_row_string += (items[i].get_text().strip() + ", ")
</code></pre>
<p>To give an example: </p>
<p>Sample results for the <code>for table in tableList</code> loop are:</p>
<pre><code>a,b,
1,2,
father, mother,
</code></pre>
<p>and for the <code>for td in tdList</code> loop are:</p>
<pre><code>c, d, e,
3, 4, 5,
son, daughter, twin,
</code></pre>
<p>I want to combine the <code>first_part_of_row_string</code> of each iteration with the <code>second_part_of_row_string</code> of each iteration as well</p>
<p>so I want to print out this:</p>
<pre><code>a, b, c, d, e,
1, 2, 3, 4, 5
father, mother, son, daughter, twin,
</code></pre>
<p>which are effectively <code>first_part_of_row_string + second_part_of_row_string</code> of each iteration of both loops</p>
<p>The length of tableList and tdList is the same, so both loops will always return the same number of rows. I could have done it in one loop if the td were in the same table being referred to in tableList; unfortunately it is not. In the HTML, the table with the class specified in the tableList definition is always followed by another table that has no class but always contains a td with the class specified in tdList. A sample occurrence of this HTML is included below. The whole page is some thousands of lines, so I am putting it on a separate <a href="http://pastebin.com/N0qih4Hj" rel="nofollow">link</a>.</p>
<pre><code><table cellspacing="0" cellpadding="0"
style="margin-left: auto; margin-right: auto;" class="width100 menu_header_top_emr">
<tbody>
<tr>
<td style="width:80px;">
<div style="width:70px;background-color:#B2EE98; border:1px solid grey; padding:2px 5px 2px 5px; text-align:center;">Entity</div>
</td>
<td style="padding-left:5px;">
<span class="results_body_text"><h5 style="vertical-align: middle;">Rascal X-Press, Inc.</h5></span>
</td>
<td style="width:130px;">
<div class="right">
<span class="results_title_text">Status:</span>
<span class="results_body_text">
Submitted
</span>
</div>
</td>
<td style="width:22px;">
<a href="" class="more_duns_link_emr right" style="display: inline;"><img
id="more_duns_link_emr"
src="/SAMSearch/styles/img/expand-small-blue.png" style="padding:8px 8px 8px 2px;"
alt="Expand Search Result for Rascal X-Press, Inc."></a>
<a href="" class="hide_duns_link_emr off right" style="display: none;"><img
id="hide_duns_link_emr"
src="/SAMSearch/styles/img/collapse-small-blue.png" style="padding:8px 8px 8px 2px;"
alt="Collapse Search Result for Rascal X-Press, Inc."></a>
</td>
</tr>
</tbody>
</table>
<table>
<tbody>
<tr>
<td class="menu_header width100">
<table>
<tr>
<td style="width:25%;">
<span class="results_title_text">DUNS:</span> <span class="results_body_text"> 012361296</span>
</td>
<td style="width:25%;">
</td>
<!-- label as CAGE when US Territory is listed as Country -->
<td style="width:27%;">
<span class="results_title_text">CAGE Code:</span> <span class="results_body_text"></span>
</td>
<td style="width:15%" rowspan="2">
<input type="button" value="View Details" title="View Details for Rascal X-Press, Inc." class="center" style="height:25px; width:90px; vertical-align:middle; margin:7px 3px 7px 3px;" onClick="viewEntry('4420848', '1472652382619')" />
</td>
</tr>
<tr>
<td colspan="2">
<span class="results_title_text">Has Active Exclusion?: </span>
<span class="results_body_text">
No
</span>
</td>
<td>
<span class="results_title_text">DoDAAC:</span> <span class="results_body_text"></span>
</td>
</tr>
<tr>
<td colspan="2">
<span class="results_title_text">Expiration Date:</span>
<span class="results_body_text">
</span>
</td>
<td colspan="2"><span class="results_title_text">Delinquent Federal Debt?</span>
<span class="results_body_text">
No
</span>
</td>
</tr>
<tr>
<td colspan="2"><span class="results_title_text">Purpose of Registration:</span>
<span class="results_body_text">
Federal Assistance Awards Only
</span>
</td>
</tr>
</table>
<div class="off_duns_emr" style="display: none;">
<table class="resultbox1 menu_header width100"
style="margin-left: auto; margin-right: auto;" cellpadding="2">
<tbody>
<tr>
<td colspan="3"><span class="results_title_text">Address:</span>
<span class="results_body_text">1372 State Hwy 37</span></td>
</tr>
<tr>
<td style="width:212px;"><span class="results_title_text">City:</span>
<span class="results_body_text">West Frankfort</span></td>
<td style="width:200px;"><span class="results_title_text">State/Province:</span>
<span class="results_body_text">IL</span></td>
</tr>
<tr>
<td style="width:130px;"><span class="results_title_text">ZIP Code:</span>
<span class="results_body_text">62896-5007</span></td>
<td style="width:200px;"><span class="results_title_text">Country:</span>
<span class="results_body_text">UNITED STATES</span></td>
</tr>
</tbody>
</table>
</div>
</td>
</tr>
</tbody>
</table></td>
</tr>
</tbody>
</table> </li>
</td>
</tr>
</table>
</code></pre>
| 0 | 2016-09-01T21:50:27Z | 39,281,539 | <p>Use <code>zip</code>, and also use <code>join</code> instead of concatenating commas:</p>
<pre><code>for table,td in zip(tableList,tdList):
a = ', '.join(table.find_all("span", {"class":"results_body_text"}))
b = ', '.join(td.find_all("span", {"class":"results_body_text"}))
print(a, b, sep=', ')
</code></pre>
<p>If you're using Python 3.5, you can use the more powerful unpacking syntax instead:</p>
<pre><code>for table,td in zip(tableList,tdList):
a = table.find_all("span", {"class":"results_body_text"})
b = td.find_all("span", {"class":"results_body_text"})
print(*a, *b, sep=', ')
</code></pre>
<p>If you're using Python 2, either put the line <code>from __future__ import print_function</code> at the top of your code and use Python 3's print function syntax, or just manually join everything:</p>
<pre><code>for table,td in zip(tableList,tdList):
a = table.find_all("span", {"class":"results_body_text"})
b = td.find_all("span", {"class":"results_body_text"})
print ', '.join(a+b)
</code></pre>
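<p>To see the pattern in isolation, here is a hedged, self-contained sketch with plain lists standing in for the stripped <code>find_all</code> results (no BeautifulSoup required):</p>

```python
# Plain lists standing in for the stripped span texts produced by the two loops
first_parts = [["a", "b"], ["1", "2"], ["father", "mother"]]
second_parts = [["c", "d", "e"], ["3", "4", "5"], ["son", "daughter", "twin"]]

# zip pairs up row i of the first list with row i of the second one
rows = [", ".join(a + b) for a, b in zip(first_parts, second_parts)]
for row in rows:
    print(row)
```

This prints the three combined rows from the question (<code>a, b, c, d, e</code> and so on).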
| 0 | 2016-09-01T22:01:54Z | [
"python",
"string",
"python-2.7",
"loops",
"concatenation"
] |
Converting Integer programming to binary in python | 39,281,434 | <p><a href="http://i.stack.imgur.com/TpuQy.png" rel="nofollow"><img src="http://i.stack.imgur.com/TpuQy.png" alt="Excel Image"></a></p>
<p>See the image above to understand my explanation of the problem.</p>
<pre><code>from scipy.optimize import linprog
a = [-800,-1200,-800]
b = [[200,100,400],[200,200,400],[200,300,200]]
c = [600,600,600]
result = linprog(a,b,c)
print(result)
</code></pre>
<p>The code I have written is above. The number in the output I am interested is <code>fun : -2400.0</code> <-- This is the current result. This is not what I want though.</p>
<p>Here is the problem and what I am looking for.</p>
<p>In the picture, there are two tables. The table on the RHS is the integer part and the LHS table is the binary part. The answer that I want is <code>fun : -2800.0</code>, which is part of the 4th column in LHS table.</p>
<p>Let me tell you how I determined that answer.
First, as you can see, all possible combinations of 0's and 1's are written for all three assignments in the LHS table. I have to find the highest number from the 4th column of the LHS table that satisfies all the conditions in the RHS table. For example, starting from the top, with numbers taken from both the LHS and RHS tables: 1 means choose that assignment and 0 means don't choose it. So the first row has 1 1 1, meaning all the assignments are chosen. Now, we add up the numbers for days 0, 1, and 2, and we can see that they all go above 600: adding 200*1+100*1+400*1=700, which is greater than 600, and the same happens for days 1 and 2. So 2801 is not the number we want. Next, we choose 2800, and it does satisfy the condition that every sum is <= 600, because assignment 1 is 0, so 200*0+100*1+400*1=500, which is less than 600, and the same holds for the remaining two days.</p>
<p>I just don't understand how to put every single thing in python and get result of -2800 instead -2400.</p>
<p>There has to be a simple way with linprog.</p>
<p>The way I got the numbers in the 4th column of the LHS table was by using the green highlighted numbers from the RHS table, which are also on the last row of the LHS table.
I did =((B5+$B$13)+(C5*$C$13)+(D5*$D$13)) to get 2801 and so on for the remaining ones.</p>
<p>Without using linprog, I can't think anything better than the following code:</p>
<pre><code>B = [800,1200,800]
A = [[200,100,400],[200,200,400],[200,300,200]]
for num in A[0],A[1],A[2]:
t0 = num[0]*1 + num[1]*1 + num[2]*1
t1 = num[0]*0 + num[1]*1 + num[2]*1
print(t0)
</code></pre>
| 0 | 2016-09-01T21:53:00Z | 39,282,781 | <p>First of all: <strong>your question was very badly phrased</strong> (and your usage of linprog is not common; <code>Variable A</code> is typically the 2D one).</p>
<p>Usually I'm hesitant to work on these kinds of questions (= very incomplete ones), but I think I got the problem now.</p>
<h2>What's the big problem with your question/formulation?</h2>
<ul>
<li>You did not tell us where the 4th column of the left table came from! Are these values fixed (independent of the remaining table), or are they somehow correlated?</li>
<li>After checking the values, I understood that these values themselves (let's call them z) are a linear function of the decision variables, of the following form:
<ul>
<li><code>z = 1*x0 + 1200*x1 + 800*x2 + 800</code></li>
<li><strong>This is nowhere explained in your question and is important here!</strong></li>
</ul></li>
</ul>
<h2>How to handle the problem then?</h2>
<h3>Correct approach: ignore the constant offset of 800 & correct objective later</h3>
<p>The following code (I'm using the variable names in the common way) ignores the offset (<strong>a constant offset does not change the optimal decision variables</strong>). Therefore, you would want to add the offset to your obtained solution later, or ignore the returned objective and calculate it yourself with the formula above!</p>
<pre><code>from scipy.optimize import linprog
import numpy as np
a_ub = [[200, 100, 400], [200, 200, 300], [400, 400, 200]]
b_ub = [600, 600, 600]
c = [-1, -1200, -800]
result = linprog(c,a_ub,b_ub, bounds=(0,1))
print('*** result: (negated, offset of 800 missing) ***\n')
print(result)
</code></pre>
<p>Result:</p>
<pre><code>*** result: (negated, offset of 800 missing) ***
fun: -2000.0
message: 'Optimization terminated successfully.'
nit: 3
slack: array([ 100., 100., 0., 1., 0., 0.])
status: 0
success: True
x: array([ 0., 1., 1.])
</code></pre>
<p>After negating the objective, just add the offset and you receive your desired result of 2800!</p>
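<p>With only three binary variables there are just 2^3 = 8 candidate solutions, so the LP result can be sanity-checked by brute force. This is my own sketch, using the objective <code>z = 1*x0 + 1200*x1 + 800*x2 + 800</code> and the constraint rows from the code above:</p>

```python
from itertools import product

rows = [[200, 100, 400], [200, 200, 300], [400, 400, 200]]  # one row per day, each sum <= 600
coeffs = [1, 1200, 800]  # objective coefficients; the constant offset 800 is added below

best = None
for x in product([0, 1], repeat=3):  # all 8 binary assignments
    if all(sum(r[i] * x[i] for i in range(3)) <= 600 for r in rows):
        z = sum(c * xi for c, xi in zip(coeffs, x)) + 800
        if best is None or z > best[0]:
            best = (z, x)

print(best)  # (2800, (0, 1, 1)) -- matches the LP solution above
```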
<h3>Silly approach: add some variable which is fixed to describe offset</h3>
<pre><code>""" Alternative formulation
We introduce a 4th variable W, which is set to 1 and introduce the offset
into the objective
"""
from scipy.optimize import linprog
import numpy as np
a_ub = [[200, 100, 400, 0], [200, 200, 300, 0], [400, 400, 200, 0]] # no upper bound for W
b_ub = [600, 600, 600]
c = [-1, -1200, -800, -800] # W*800 added to objective
a_eq = [[0, 0, 0, 1]] # W=1 fixed
b_eq = [1] # "" ""
result = linprog(c,a_ub,b_ub, a_eq, b_eq, bounds=(0,1))
print('*** alternative result (negated, including offset): ***\n')
print(result)
</code></pre>
<p>Result:</p>
<pre><code>*** alternative result (negated, including offset): ***
fun: -2800.0
message: 'Optimization terminated successfully.'
nit: 4
slack: array([ 100., 100., 0., 1., 0., 0., 0.])
status: 0
success: True
x: array([ 0., 1., 1., 1.])
</code></pre>
<h2>But will this work in general?</h2>
<p><strong>The linear-programming approach will not work in general</strong>. As ayhan mentioned in the comments, a unimodular constraint matrix would mean that an LP solver guarantees an optimal integer solution. But without some rules about your data, this unimodularity property is not given in general!</p>
<p>Look at the following example code, which generates a random-instance and compares the result of a LP-solver & and a MIP-solver. The very first random-instance fails, because the LP-solution is continuous!</p>
<p>Of course you could stick to a MIP-solver then, but:</p>
<ul>
<li>MIP-problems are hard in general</li>
<li>There is no MIP-solver within numpy/scipy</li>
</ul>
<p>Code:</p>
<pre><code>from scipy.optimize import linprog
import numpy as np
from cylp.cy import CyCbcModel, CyClpSimplex
from cylp.py.modeling.CyLPModel import CyLPModel, CyLPArray
np.random.seed(1)
def solve_lp(a_ub, b_ub, c):
result = linprog(c,a_ub,b_ub, bounds=(0,1))
print(result)
return result.fun
def solve_mip(a_ub, b_ub, c):
a_ub, b_ub, c = np.matrix(a_ub), np.array(b_ub), np.array(c)
n = b_ub.shape[0]
model = CyLPModel()
x = model.addVariable('x', n, isInt=True)
model += a_ub*x <= b_ub
for i in range(n):
model += 0 <= x[i] <= 1
c = CyLPArray(c)
model.objective = c*x
s = CyClpSimplex(model)
cbcModel = s.getCbcModel()
cbcModel.branchAndBound()
print('sol: ', cbcModel.primalVariableSolution['x'])
print('obj: ', cbcModel.objectiveValue)
return cbcModel.objectiveValue
def solve_random_until_unequal():
while True:
a_ub = np.random.randint(0, 1000, size=(3,3))
b_ub = np.random.randint(0, 1000, size=3)
c = [-1, -1200, -800]
lp_obj = solve_lp(a_ub, b_ub, c)
mip_obj = solve_mip(a_ub, b_ub, c)
print('A_ub: ', a_ub)
print('b_ub: ', b_ub)
assert np.isclose(lp_obj, mip_obj)
solve_random_until_unequal()
</code></pre>
<p>Output:</p>
<pre><code>fun: -225.29335071707953
message: 'Optimization terminated successfully.'
nit: 1
slack: array([ 9.15880052e+02, 0.00000000e+00, 7.90482399e+00,
1.00000000e+00, 8.12255541e-01, 1.00000000e+00])
status: 0
success: True
x: array([ 0. , 0.18774446, 0. ])
Clp0000I Optimal - objective value 0
Clp0000I Optimal - objective value 0
Node 0 depth 0 unsatisfied 0 sum 0 obj 0 guess 0 branching on -1
Clp0000I Optimal - objective value 0
Cbc0004I Integer solution of -0 found after 0 iterations and 0 nodes (0.00 seconds)
Cbc0001I Search completed - best objective -0, took 0 iterations and 0 nodes (0.00 seconds)
Cbc0035I Maximum depth 0, 0 variables fixed on reduced cost
Clp0000I Optimal - objective value 0
Clp0000I Optimal - objective value 0
('sol: ', array([ 0., 0., 0.]))
('obj: ', -0.0)
('A_ub: ', array([[ 37, 235, 908],
[ 72, 767, 905],
[715, 645, 847]]))
('b_ub: ', array([960, 144, 129]))
Traceback (most recent call last):
File "so_linprog_v3.py", line 45, in <module>
solve_random_until_unequal()
File "so_linprog_v3.py", line 43, in solve_random_until_unequal
assert np.isclose(lp_obj, mip_obj)
AssertionError
</code></pre>
| 1 | 2016-09-02T00:43:17Z | [
"python",
"binary",
"integer",
"linear-programming"
] |
SWIG with Eigen3::Vector3d and std::vector<Vector3d> | 39,281,461 | <p>The corresponding gist is <a href="https://gist.github.com/nschloe/5e213272671db5dd745508093f37a4a1" rel="nofollow">here</a>.</p>
<hr>
<p>I'd like to use SWIG to call a bunch of C++ function from Python, specifically functions that accept vectors. So far, I've implemented it all with <code>std_vector.i</code> and <code>std::vector<double></code>, but since I end of converting it all into <code>Eigen::Vector3d</code> anyways, I thought I might better make it native. A small C++ example is</p>
<pre><code>#ifndef MYTEST_HPP
#define MYTEST_HPP
#include <iostream>
#include <Eigen/Dense>
void
print_norm(const Eigen::Vector3d & x) {
std::cout << x.norm() << std::endl;
}
void
print_norms(const std::vector<Eigen::Vector3d> & xs) {
for (const auto & x: xs) {
std::cout << x.norm() << std::endl;
}
}
#endif // MYTEST_HPP
</code></pre>
<p>I have no idea though how to best call this from Python. Perhaps</p>
<pre><code>import mytest
a = [1, 1, 0]
mytest.print_norm(a)
</code></pre>
<p>this is reasonable? A <code>numpy.array</code> might also work. Either way, I have no idea what to put in my <code>mytest.i</code>.</p>
<p>Any hints?</p>
| 0 | 2016-09-01T21:55:08Z | 39,451,545 | <p>There are a few examples of wrapping Eigen types with NumPy floating around on the web, where the <a href="https://github.com/Biomechanical-ToolKit/BTKCore/blob/master/Utilities/SWIG/eigen.i" rel="nofollow">Biomechanical Toolkit</a> implementation is the most widely copied, and I would recommend using that one too. It looks relatively big, but that's mostly from all the sanity checks and separate templates for different types.</p>
<p>Conversion from NumPy to Eigen works by using the <code>obj_to_array_contiguous_allow_conversion</code> function from the accompanying <a href="https://github.com/Biomechanical-ToolKit/BTKCore/blob/master/Utilities/SWIG/numpy.i" rel="nofollow"><code>numpy.i</code></a>, followed by <code>PyArray_DATA</code> to get the data as a contiguous C(++) array, from which the data is simply assigned to each coefficient in the Eigen matrix separately.</p>
<p>The other way around is pretty much the inverse: a python NumPy array is created with <code>PyArray_SimpleNew</code>, which is filled with data from the Eigen matrix.</p>
<p>It doesn't directly do the wrapping for <code>std::vector<Eigen::Vector3d></code>, you can set that up using <code>%include <stl.i></code> probably, but in my experience it would be better to use Nx3 numpy arrays as lists of 3D vectors, because of the issues with Eigen, alignment and STL containers.</p>
| 1 | 2016-09-12T13:31:23Z | [
"python",
"c++",
"swig",
"eigen",
"eigen3"
] |
Unable to bundle app with libusb1 in PyInstaller | 39,281,554 | <p>I'm running Python 2.7.12 with a simple python file and importing a couple of modules (PyQt5 and usb1). No additional assets or files. </p>
<p>When I try to bundle the app, with a default spec file, the app works fine on my host machine. But trying to run it on another machine (with Python 2.7.10) fails with this error: OSError: dlopen(libusb-1.0.dylib, 6): image not found. So I added the following in the spec file in Analysis object :</p>
<pre><code>binaries=[('/usr/local/Cellar/libusb/1.0.20/lib/libusb-1.0.0.dylib', 'libusb-1.0.0.dylib')],
</code></pre>
<p>Doesn't work either. Any help with getting libusb1 to work with PyInstaller bundle? I'm using OSX 10.10.3</p>
<p>Thanks</p>
| 0 | 2016-09-01T22:02:48Z | 39,283,667 | <p>OK, finally solved it!
I manually copied the libusb-1.0.0.dylib from /usr/local/Cellar/libusb/1.0.20/lib/ and pasted/overwrote the one in the distributed folder. Then created a symlink in the folder as libusb-1.0.dylib and that seemed to do it. Hope this helps someone!</p>
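<p>The steps above can be sketched as shell commands. This is a hedged sketch: the Homebrew path and the <code>dist/MyApp</code> folder name are assumptions that will differ per machine, so the snippet below simulates the dist folder with a temporary directory instead of touching a real build:</p>

```shell
# Hypothetical paths -- adapt to your own PyInstaller output and libusb install.
DIST="$(mktemp -d)/dist/MyApp"   # stands in for your real dist folder
mkdir -p "$DIST"

# The real command from the answer would be:
#   cp /usr/local/Cellar/libusb/1.0.20/lib/libusb-1.0.0.dylib "$DIST/"
touch "$DIST/libusb-1.0.0.dylib"   # stand-in for the copied dylib

# Create the symlink under the name that usb1 tries to dlopen
ln -s libusb-1.0.0.dylib "$DIST/libusb-1.0.dylib"
ls -l "$DIST"
```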
| 0 | 2016-09-02T02:58:49Z | [
"python",
"pyinstaller",
"libusb-1.0"
] |
efficient function to find harmonic mean across different pandas dataframes | 39,281,575 | <p>I have several dataframes with identical shape/types, but slightly different numeric values. I can easily produce a new dataframe with the mean of all input dataframes via:</p>
<pre><code>df = pd.concat([input_dataframes])
df = df.groupby(df.index).mean()
</code></pre>
<p>I want to do the same with harmonic mean (probably the scipy.stats.hmean function). I have attempted to do this using:</p>
<pre><code>.groupby(df.index).apply(scipy.stats.hmean)
</code></pre>
<p>But this alters the structure of the dataframe. Is there a better way to do this, or do I need to use a more lengthly/manual implementation?</p>
<p>To illustrate :</p>
<pre><code>df_input1:
'a' 'b' 'c'
'x' 1 1 2
'y' 2 2 4
'z' 3 3 6
df_input2:
'a' 'b' 'c'
'x' 2 2 4
'y' 3 3 6
'z' 4 4 8
desired output (but w/ hmean):
'a' 'b' 'c'
'x' 1.5 1.5 3
'y' 2.5 2.5 5
'z' 3.5 3.5 7
</code></pre>
| 2 | 2016-09-01T22:04:36Z | 39,281,843 | <p>Create a pandas Panel, and apply the harmonic mean function over the 'item' axis.</p>
<p>Example with your dataframes <code>df1</code> and <code>df2</code>:</p>
<pre><code>import pandas as pd
from scipy import stats
d = {'1':df1,'2':df2}
pan = pd.Panel(d)
pan.apply(axis='items',func=stats.hmean)
</code></pre>
<p>yields:</p>
<pre><code> 'a' 'b' 'c'
'x' 1.333333 1.333333 2.666667
'y' 2.400000 2.400000 4.800000
'z' 3.428571 3.428571 6.857143
</code></pre>
| 3 | 2016-09-01T22:32:56Z | [
"python",
"pandas",
"scipy"
] |
Check if a number is a perfect power of another number | 39,281,632 | <p>For example, 243 is a perfect power of 3 because 243=3^5.</p>
<p>I've previously been using <code>(math.log(a) / math.log(b)).is_integer()</code>, which I thought worked fine, but then I tried it with the example above and it actually returns 4.999999999999999 due to floating point arithmetic. So it's only reliable for very small numbers, less than around 100 I've found.</p>
<p>I suppose I could use a loop to do repeated multiplication... i.e. set i to 3, then 9, then 27, then 81, then 243, which equals the target, so we know it's a perfect power. If it reaches a point where it's bigger than 243 then we know it's not a perfect power. But I'm running this check within a loop as it is, so this seems like it'd be very inefficient.</p>
<p>So is there any other way of reliably checking if a number is a perfect power of another?</p>
| 2 | 2016-09-01T22:10:23Z | 39,281,679 | <p>Try:</p>
<pre><code>b ** int(round(math.log(a, b))) == a
</code></pre>
<p>That is, only use <code>log()</code> (note there is a 2-argument form!) to get a guess at an integer power, then verify whether "that works".</p>
<p>Note that <code>math.log()</code> returns a sensible result even for integer arguments much too large to represent as a float. Also note that integer <code>**</code> in Python is exact, and uses an efficient algorithm internally (doing a number of multiplications proportional to the number of bits in the exponent).</p>
<p>This is straightforward and much more efficient (in general) than, say, repeated division.</p>
<p>But then I'm answering the question you asked ;-) If you had some other question in mind, some of the other answers may be more appropriate.</p>
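<p>A minimal runnable sketch of that check (the wrapper function and its name are my own; edge cases like <code>b &lt; 2</code> get a simple guard):</p>

```python
import math

def is_power_of(a, b):
    """Return True if a == b**k for some non-negative integer k."""
    if b < 2 or a < 1:
        return False
    if a == 1:
        return True  # b**0 == 1
    guess = int(round(math.log(a, b)))  # float log only supplies a candidate exponent
    return b ** guess == a              # exact integer arithmetic verifies it

print(is_power_of(243, 3))   # True: 243 == 3**5
print(is_power_of(244, 3))   # False
```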
| 3 | 2016-09-01T22:14:45Z | [
"python",
"math"
] |
Check if a number is a perfect power of another number | 39,281,632 | <p>For example, 243 is a perfect power of 3 because 243=3^5.</p>
<p>I've previously been using <code>(math.log(a) / math.log(b)).is_integer()</code>, which I thought worked fine, but then I tried it with the example above and it actually returns 4.999999999999999 due to floating point arithmetic. So it's only reliable for very small numbers, less than around 100 I've found.</p>
<p>I suppose I could use a loop to do repeated multiplication... i.e. set i to 3, then 9, then 27, then 81, then 243, which equals the target, so we know it's a perfect power. If it reaches a point where it's bigger than 243 then we know it's not a perfect power. But I'm running this check within a loop as it is, so this seems like it'd be very inefficient.</p>
<p>So is there any other way of reliably checking if a number is a perfect power of another?</p>
| 2 | 2016-09-01T22:10:23Z | 39,281,767 | <p>maybe something like:</p>
<pre><code>def is_perfect_power(a, b):
while a % b == 0:
a = a / b
if a == 1:
return True
return False
print is_perfect_power(8,2)
</code></pre>
| 1 | 2016-09-01T22:24:08Z | [
"python",
"math"
] |
Check if a number is a perfect power of another number | 39,281,632 | <p>For example, 243 is a perfect power of 3 because 243=3^5.</p>
<p>I've previously been using <code>(math.log(a) / math.log(b)).is_integer()</code>, which I thought worked fine, but then I tried it with the example above and it actually returns 4.999999999999999 due to floating point arithmetic. So it's only reliable for very small numbers, less than around 100 I've found.</p>
<p>I suppose I could use a loop to do repeated multiplication... i.e. set i to 3, then 9, then 27, then 81, then 243, which equals the target, so we know it's a perfect power. If it reaches a point where it's bigger than 243 then we know it's not a perfect power. But I'm running this check within a loop as it is, so this seems like it'd be very inefficient.</p>
<p>So is there any other way of reliably checking if a number is a perfect power of another?</p>
| 2 | 2016-09-01T22:10:23Z | 39,281,928 | <p>If both the base and exponent are wildcards here and you are starting with only the "big number" result, then you're going to face a speed-memory trade-off. Since you seem to have implied that nested loops would be "inefficient", I assume that CPU cycle efficiency is important to you. In this case, may I suggest pre-computing a bunch of base-exponent pairs: a kind of rainbow table if you will. This will consume more memory, but much less CPU cycles. Then you can simply iterate over your table and see if you find a match. Or, if you're <em>very</em> concerned about speed, you can even sort the table and do a a binary search. You can even store the pre-computed table in a file, if you like.</p>
<p>Also, you didn't specify whether you just want a yes/no answer, or whether you are trying to find the corresponding base and exponent of the power. So I'll assume you want to find <strong>a</strong> base-exponent pair (some perfect powers will have multiple base-exponent solutions).</p>
<p>Try this on for size (I'm using Python 3):</p>
<pre><code>#Setup the table as a list
min_base = 2 #Smallest possible base
max_base = 100 #Largest possible base
min_exp = 2 #Smallest possible exponent
max_exp = 10 #Largest possible exponent
powers = []
#Pre-compute the table - this takes time, but only needs to be done once
for i in range(min_base, max_base+1):
for j in range(min_exp, max_exp+1):
powers.append([i, j, i ** j])
#Now sort the table by the 3rd element - again this is done only once
powers.sort(key=lambda x: x[2])
#Binary search the table to check if a number is a perfect power
def is_perfect_power(a, powers_arr):
lower = 0
upper = len(powers_arr)
while lower < upper:
x = lower + (upper - lower) // 2
val = powers_arr[x][2] #[2] for the pre-computed power
if a == val:
return powers_arr[x][0:2]
elif a > val:
if lower == x:
break
lower = x
elif a < val:
upper = x
return False #Number supplied is not a perfect power
#A simple demonstration
print(is_perfect_power(243, powers)) #Output is [3, 5]
print(is_perfect_power(105413504, powers)) #Output is [14, 7]
print(is_perfect_power(468209, powers)) #Output is False - not a perfect power
</code></pre>
<p>Someone more mathematically inclined may have a more efficient answer, but this should get you started. On my machine this runs pretty fast.</p>
| 0 | 2016-09-01T22:43:13Z | [
"python",
"math"
] |
Check if a number is a perfect power of another number | 39,281,632 | <p>For example, 243 is a perfect power of 3 because 243=3^5.</p>
<p>I've previously been using <code>(math.log(a) / math.log(b)).is_integer()</code>, which I thought worked fine, but then I tried it with the example above and it actually returns 4.999999999999999 due to floating point arithmetic. So it's only reliable for very small numbers, less than around 100 I've found.</p>
<p>I suppose I could use a loop to do repeated multiplication... i.e. set i to 3, then 9, then 27, then 81, then 243, which equals the target, so we know it's a perfect power. If it reaches a point where it's bigger than 243 then we know it's not a perfect power. But I'm running this check within a loop as it is, so this seems like it'd be very inefficient.</p>
<p>So is there any other way of reliably checking if a number is a perfect power of another?</p>
| 2 | 2016-09-01T22:10:23Z | 39,282,125 | <p>If you will be working with large numbers, you may want to look at <a href="https://pypi.python.org/pypi/gmpy2" rel="nofollow">gmpy2</a>. <code>gmpy2</code> provides access to the GMP multiple-precision library. One of the functions provided is <code>is_power()</code>. It will return True if the argument is a perfect power of some base number. It won't provide the power or the base number but it will quickly filter out numbers that can not be perfect powers.</p>
<pre><code>>>> import gmpy2
>>> [n for n in range(1,1000) if gmpy2.is_power(n)]
[1, 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 81, 100, 121, 125, 128, 144, 169, 196, 216, 225, 243, 256, 289, 324, 343, 361, 400, 441, 484, 512, 529, 576, 625, 676, 729, 784, 841, 900, 961]
</code></pre>
<p>Once you've identified possible powers, then you can use <code>iroot_rem(x,n)</code> to find the nth root and remainder of x. Then once you find a valid exponent, you can find the base number. Here is an example that searches through a range for all possible perfect powers.</p>
<pre><code>>>> for x in range(1,1000):
... if gmpy2.is_power(x):
... for e in range(x.bit_length(), 1, -1):
... temp_root, temp_rem = gmpy2.iroot_rem(x, e)
... if not temp_rem:
... print x, temp_root, e
...
4 2 2
8 2 3
9 3 2
16 2 4
16 4 2
25 5 2
27 3 3
32 2 5
36 6 2
49 7 2
64 2 6
64 4 3
64 8 2
81 3 4
81 9 2
<< remainder clipped>>
</code></pre>
<p>Disclaimer: I maintain <code>gmpy2</code>.</p>
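<p>If <code>gmpy2</code> is not available, the <code>iroot_rem(x, n)</code> call used above can be approximated in pure Python (a sketch: the name mirrors the gmpy2 function, but this stand-in is my own, using bisection on the integer n-th root):</p>

```python
def iroot_rem(x, n):
    # Integer n-th root by bisection: returns (r, x - r**n) with r**n <= x,
    # matching the (root, remainder) shape of gmpy2.iroot_rem.
    lo, hi = 0, 1 << (x.bit_length() // n + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo, x - lo ** n

print(iroot_rem(243, 5))  # (3, 0) -> 243 is a perfect 5th power
print(iroot_rem(250, 5))  # (3, 7) -> not a perfect 5th power
```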
| 1 | 2016-09-01T23:08:15Z | [
"python",
"math"
] |
Check if a number is a perfect power of another number | 39,281,632 | <p>For example, 243 is a perfect power of 3 because 243=3^5.</p>
<p>I've previously been using <code>(math.log(a) / math.log(b)).is_integer()</code>, which I thought worked fine, but then I tried it with the example above and it actually returns 4.999999999999999 due to floating point arithmetic. So it's only reliable for very small numbers, less than around 100 I've found.</p>
<p>I suppose I could use a loop to do repeated multiplication... i.e. set i to 3, then 9, then 27, then 81, then 243, which equals the target, so we know it's a perfect power. If it reaches a point where it's bigger than 243 then we know it's not a perfect power. But I'm running this check within a loop as it is, so this seems like it'd be very inefficient.</p>
<p>So is there any other way of reliably checking if a number is a perfect power of another?</p>
| 2 | 2016-09-01T22:10:23Z | 39,286,024 | <p>If you want a speed up over repeated division for large numbers, you can make a list of all the exponents of the base where the exponent is a power of 2, and test those only.</p>
<p>For example, take 3<sup>9</sup>. 9 is 1001 in binary, which is 2<sup>3</sup> + 2<sup>0</sup> = 8 + 1.</p>
<p>So 3<sup>9</sup> = 3<sup>8+1</sup> = 3<sup>2<sup>3</sup>+2<sup>0</sup></sup> = 3<sup>2<sup>3</sup></sup> x 3<sup>2<sup>0</sup></sup></p>
<p>You only need to trial divide twice, instead of 9 times.</p>
<pre><code>def is_power(a, b):
if b == 0 or b == 1:
return a == b
if a < b:
return False
c = []
    # build the repeated squarings of b (b, b**2, b**4, ...) that are <= a:
while b * b <= a:
c.append(b)
b = b * b
# test against each of them where b <= remaining a:
while True:
if a % b != 0:
return False
a //= b
while b > a:
if len(c) == 0:
return a == 1
b = c.pop()
return True
</code></pre>
<p>Works for positive integers, e.g:</p>
<pre><code>print is_power(3**554756,3)
print is_power((3**554756)-1,3)
</code></pre>
<p>Output:</p>
<pre><code>True
False
</code></pre>
| 0 | 2016-09-02T06:51:46Z | [
"python",
"math"
] |
ROS2: ImportError: No module named genmsg | 39,281,644 | <p>I starting with <em>ROS2</em> which is currently in the alpha phase. While building the package <code>ros1_bridge</code> I got this error:</p>
<pre><code>Traceback (most recent call last):
File "bin/ros1_bridge_generate_factories", line 11, in <module>
from ros1_bridge import generate_cpp
File "/home/ros/ros2_ws/src/ros2/ros1_bridge/ros1_bridge/__init__.py", line 13, in <module>
import genmsg
ImportError: No module named 'genmsg'
</code></pre>
<p>This is quite strange. On the same computer I build the same code without any problem. The only thing that changed: I have installed <em>ROS Kinetic</em>.</p>
<p>I found out in synaptic that I have now two different packages of <code>genmsg</code> installed: <code>python-genmsg</code> and <code>ros-kinetic-genmsg</code>. The first one comes as dependency of <em>ROS2</em> the second one with <em>ROS</em>. So may both are necessary. I think that is <em>Python</em>-stuff and I am not familiar with <em>Python</em>. What can I do to get it run again?</p>
<p>Thanks in advance,<br>
Alex</p>
| 2 | 2016-09-01T22:11:31Z | 39,282,311 | <p>This happens while dependencies installed for <em>ROS</em> and <em>ROS2</em> on the same
machine. Especially the package <code>python-genmsg</code> and <code>ros-kinetic-genmsg</code>.
<code>genmsg</code> can now found at these places:</p>
<ol>
<li>/opt/ros/kinetic/lib/python2.7/dist-packages</li>
<li>/usr/lib/python2.7/dist-packages</li>
</ol>
<p>This will bring Python run into trouble. In respect that <code>ros1_bridge</code> shall fit to <em>ROS Kinetic</em> the environment Python
variable <code>PYTHONPATH</code> will set to the <em>Kinetic</em> one:</p>
<pre><code>export PYTHONPATH=/opt/ros/kinetic/lib/python2.7/dist-packages/
</code></pre>
<p>Now restart the build and the build runs now...</p>
| 2 | 2016-09-01T23:33:31Z | [
"python",
"ros"
] |
Contrary of .join() pyspark | 39,281,687 | <p><code>join</code> returns an RDD containing all pairs of elements with matching keys.</p>
<p><a href="https://spark.apache.org/docs/1.6.2/api/python/pyspark.html#pyspark.RDD.join" rel="nofollow">https://spark.apache.org/docs/1.6.2/api/python/pyspark.html#pyspark.RDD.join</a></p>
<p>Example:</p>
<pre><code> trueDupsRDD = (rdd1.join(rdd2))
</code></pre>
<p><b>How can I perform a disjoin?</b></p>
<p>I tried:</p>
<pre><code>notMatchingRDD = (rdd1.join(!rdd2))
</code></pre>
| 2 | 2016-09-01T22:15:37Z | 39,281,736 | <p>Use <code>subtractByKey</code>:</p>
<blockquote>
<p>Return each (key, value) pair in C{self} that has no pair with matching
key in C{other}.</p>
</blockquote>
<pre><code>rdd1.subtractByKey(rdd2)
</code></pre>
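<p>To see what <code>subtractByKey</code> computes, here is the same operation sketched on plain Python key/value pairs (no Spark needed for the illustration; the sample data is made up):</p>

```python
rdd1 = [("a", 1), ("b", 2), ("c", 3)]  # stand-in for rdd1's contents
rdd2 = [("a", 9), ("d", 4)]            # stand-in for rdd2's contents

# Keep every pair from rdd1 whose key never appears in rdd2.
keys2 = {k for k, _ in rdd2}
not_matching = [(k, v) for k, v in rdd1 if k not in keys2]
print(not_matching)  # [('b', 2), ('c', 3)]
```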
| 5 | 2016-09-01T22:20:40Z | [
"python",
"apache-spark",
"pyspark"
] |
Running DFS on large graph | 39,281,720 | <p>I am trying to find Strongly Connected Components in a large graph implementing Kosaraju's algorithm. It requires running a DFS on a graph in reverse, and then forward. If you're interested the list of edges for this graph are here: <a href="https://dl.dropboxusercontent.com/u/28024296/SCC.txt.tar.gz" rel="nofollow">https://dl.dropboxusercontent.com/u/28024296/SCC.txt.tar.gz</a></p>
<p>I can't implement this recursively in Python, it exceeds its recursive limits and crashes if I increase them. I'm trying to implement through iteration.</p>
<p>Below is my code for 1. Loading the graph in reverse into a Dictionary, and 2. Running the DFS on it iteratively for each node from n -> 1.</p>
<p>This code runs perfect for small sample graphs but just doesn't run for this large graph. I get it's inefficient but any tips on how to make it work?</p>
<pre><code>def reverseFileLoader():
graph = collections.defaultdict(lambda: {'g': [], 's': False, 't': None, 'u': None })
for line in open('/home/edd91/Documents/SCC.txt'):
k, v = map(int, line.split())
graph[v]['g'].append(k)
return graph
def DFS(graph, i):
global t
global source
stack = []
seen = []
seen.append(i)
stack.append(i)
while stack:
s = stack[-1]
j = len(graph[s]['g']) - 1
h = 0
while (j >= 0):
if graph[s]['g'][j] not in seen and graph[graph[s]['g'][j]]['t'] == None:
seen.append(graph[s]['g'][j])
stack.append(graph[s]['g'][j])
h += 1
j -= 1
if h == 0:
if graph[s]['t'] == None:
t += 1
graph[s]['u'] = source
graph[s]['t'] = t
stack.pop()
def DFSLoop(graph):
global t
t = 0
global source
source = None
i = len(graph)
while (i >= 1):
print "running for " + str(i)
source = i
DFS(graph, i)
i -= 1
</code></pre>
| 4 | 2016-09-01T22:19:19Z | 39,282,185 | <p>Kosaraju's algorithm probably requires that checking whether an element has been seen is an O(1) operation. But your seen data structure has O(n) time membership testing. Converting <code>seen</code> from a list to a set makes the code execute in a few seconds on my system (after also removing the prints which took up most of the remaining execution time).</p>
<p>For completeness the changes you need to make are</p>
<ul>
<li>Change <code>seen = []</code> to <code>seen = set()</code></li>
<li>Change each <code>seen.append(...)</code> to <code>seen.add(...)</code>.</li>
</ul>
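<p>The difference is easy to measure (a rough illustration; exact numbers depend on your machine):</p>

```python
import timeit

items = list(range(100000))
as_list = items
as_set = set(items)

# Worst-case membership test: the element is at the end of the list.
t_list = timeit.timeit(lambda: 99999 in as_list, number=100)
t_set = timeit.timeit(lambda: 99999 in as_set, number=100)
print("list:", t_list)  # O(n) scan per lookup
print("set: ", t_set)   # O(1) hash lookup on average
```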
| 1 | 2016-09-01T23:16:26Z | [
"python",
"graph",
"graph-algorithm"
] |
Python string.replace is stripping off my quote characters | 39,281,782 | <p>What I'm doing is feeding my python script a CSV file which contains millions of records separated by commas. Any strings are "contained by double qoutes".</p>
<p>I pass this .csv file through my python script</p>
<pre><code>import csv
import string
import sys, getopt
inFile = open(sys.argv[1], 'r')
outFile = open(sys.argv[1][:-4] + '_no-nulls.csv', 'w')
data = csv.reader(inFile)
writer = csv.writer(outFile)
specials = "NULL"
for line in data:
line = [value.replace(specials, '') for value in line]
writer.writerow(line)
inFile.close()
outFile.close()
</code></pre>
<p>And the end result has all the quotes stripped off my strings.<br>
What am I doing wrong?</p>
<h2>Edit</h2>
<p>Sample input:</p>
<pre><code>897555,2021-03-31 00:00:00.000,NULL,"45687","B","QA",29,NULL,NULL,NULL,NULL,NULL,NULL,NULL,"5648987QEXXX",6,NULL,NULL,"DOE","JOHN",NULL,NULL,NULL,NULL,NULL,"Q",1994-04-24 00:00:00.000,"R","CX","ZZ",NULL,NULL,NULL,NULL,NULL,"Y",NULL,"GA","R","DE",NULL,NULL,NULL,NULL,NULL,"EN",NULL,"Y","OP",NULL,"R","XZ",NULL,NULL,NULL,"8945564",2005-03-01 12:00:00.000,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL
</code></pre>
<p>Sample output:</p>
<pre><code>897555,2021-03-31 00:00:00.000,,"45687","B","QA",29,,,,,,,,"5648987QEXXX",6,,,"DOE","JOHN",,,,,,"Q",1994-04-24 00:00:00.000,"R","CX","ZZ",,,,,,"Y",,"GA","R","DE",,,,,,"EN",,"Y","OP",,"R","XZ",,,,"8945564",2005-03-01 12:00:00.000,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
</code></pre>
| 0 | 2016-09-01T22:25:29Z | 39,281,886 | <p>This is normal. When reading, <code>csv.reader</code> will strip off the quotes because it's assumed that the program consuming the data doesn't want or need them. <code>csv.writer</code> will then put them back on if necessary, depending on the setting of <a href="https://docs.python.org/3/library/csv.html#csv.Dialect.quoting" rel="nofollow"><code>quoting</code></a> that you pass, the default being <code>QUOTE_MINIMAL</code> - it will only add quotes if there are characters in the string that could be misinterpreted.</p>
<p>You could set both the reader and the writer to <code>QUOTE_NONE</code> to preserve the quotes that are in the original file, or set the writer to <code>QUOTE_ALL</code> to requote all output.</p>
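<p>For example (a Python 3 sketch using an in-memory buffer; the sample row is made up):</p>

```python
import csv
import io

row = ["897555", "NULL", "45687"]

# Default behavior: quotes are added only when a field needs them.
minimal = io.StringIO()
csv.writer(minimal, quoting=csv.QUOTE_MINIMAL).writerow(row)
print(minimal.getvalue())  # 897555,NULL,45687

# QUOTE_ALL: every field is requoted on output.
everything = io.StringIO()
csv.writer(everything, quoting=csv.QUOTE_ALL).writerow(row)
print(everything.getvalue())  # "897555","NULL","45687"
```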
| 1 | 2016-09-01T22:38:17Z | [
"python",
"string",
"csv",
"replace"
] |
Openstack Python SDK - Glance does not return image MD5 | 39,281,790 | <p>I'm trying to download an OpenStack image from glance using only the Openstack Python SDK, but I only get this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/openstack/discovery/discovery.py", line 222, in <module>
main(sys.argv[1:])
File "/home/openstack/discovery/discovery.py", line 117, in main
image_service.download_image(image)
File "/usr/local/lib/python2.7/dist-packages/openstack/image/v2/_proxy.py", line 72, in download_image
return image.download(self.session)
File "/usr/local/lib/python2.7/dist-packages/openstack/image/v2/image.py", line 166, in download
checksum = resp.headers["Content-MD5"]
File "/usr/local/lib/python2.7/dist-packages/requests/structures.py", line 54, in __getitem__
return self._store[key.lower()][1]
KeyError: 'content-md5'
</code></pre>
<p>The weird part is that if I run the code using an IDE (PyCharm with remote debug) or as a script (python script.py -i ...) I get the error, but if I run each line using a python interpreter (ipython/python) the error does not happen! Have no idea why.</p>
<p>Here is the code I'm using:</p>
<pre><code>...
image_name = node.name + "_" + time.strftime("%Y-%m-%d_%H-%M-%S")
print "Getting data from", node.name
compute_service.create_server_image(node, image_name)
image = image_service.find_image(image_name)
image_service.wait_for_status(image, 'active')
fileName = "%s.img" % image.name
with open(str(fileName), 'w+') as imgFile:
imgFile.write(image.download(conn.image.session))
...
</code></pre>
<p>This code ends up calling the API in this file <code>/usr/local/lib/python2.7/dist-packages/openstack/image/v2/image.py</code>, with this method:</p>
<pre><code>def download(self, session):
"""Download the data contained in an image"""
# TODO(briancurtin): This method should probably offload the get
# operation into another thread or something of that nature.
url = utils.urljoin(self.base_path, self.id, 'file')
resp = session.get(url, endpoint_filter=self.service)
checksum = resp.headers["Content-MD5"]
digest = hashlib.md5(resp.content).hexdigest()
if digest != checksum:
raise exceptions.InvalidResponse("checksum mismatch")
return resp.content
</code></pre>
<p>The resp.headers variable has no key "Content-MD5". This is the value I found for it:</p>
<pre><code>{'Date': 'Thu, 01 Sep 2016 20:17:01 GMT', 'Transfer-Encoding': 'chunked',
'Connection': 'keep-alive', 'Content-Type': 'application/octet-stream',
'X-Openstack-Request-Id': 'req-9eb16897-1398-4ab2-9cd4-45706e92819c'}
</code></pre>
<p>But according to the REST API documentationm the response should return with the key Content-MD5:
<a href="http://developer.openstack.org/api-ref/image/v2/?expanded=download-binary-image-data-detail" rel="nofollow">http://developer.openstack.org/api-ref/image/v2/?expanded=download-binary-image-data-detail</a></p>
<p>If I just comment the MD5 check the download works fine, but this is inside the SDK so I can't/shouldn't change it. Anyone have any suggestion on how to achieve this using the OpenStack Python SDK? Is this an SDK bug?</p>
| 0 | 2016-09-01T22:26:30Z | 39,457,662 | <p>Turns out this was indeed a bug SDK/Glance. More details about it can be found here:
<a href="https://bugs.launchpad.net/python-openstacksdk/+bug/1619675" rel="nofollow">https://bugs.launchpad.net/python-openstacksdk/+bug/1619675</a></p>
<p>And the fix implemented can be seen here:
<a href="https://github.com/openstack/python-openstacksdk/commit/759651f4a9eae2ba546f46613550a4cb10ddd964" rel="nofollow">https://github.com/openstack/python-openstacksdk/commit/759651f4a9eae2ba546f46613550a4cb10ddd964</a></p>
| 0 | 2016-09-12T19:42:51Z | [
"python",
"openstack",
"openstack-glance",
"openstack-api"
] |
How do I send a JavaScript confirm response over Django to a script to perform operations based on the response? | 39,281,884 | <p>I'm very very new to Django and I'm currently trying to build a simple To-Do application. I'm currently able to display my To-Do list on the webpage using HTML Tables, but to be able to delete it, I added a new row at the end of every entry called <code>"Delete"</code> with an <code>a href</code> element whose on-click event sends it to a JS function that pops up a confirmation to the user asking if he actually wants to delete it. I'm storing the response in a variable called <code>response</code>. Now, my question is how do I send this response across django to actually be able to delete the entry? This is what I have done so far and it's not working. All help will be appreciated. Thank You. </p>
<pre><code>{% for item in task %}
<tr>
<td>{{ item.task_name }}</td>
<td>{{ item.task_priority }}</td>
<td>{{ item.task_status }}</td>
<td>{{ item.target_date }}</td>
<td><a href="" onclick="return delete_function()">Delete</a></td>
<p id="delete"></p>
</tr>
{% endfor %}
</table>
<script>
function delete_function() {
var response = confirm("Are you sure you want to delete this item?");
if (response == true){
response.post()
}
}
</script>
</code></pre>
<p>In my views page, I have this function to handle this post method:</p>
<pre><code>def view_task(request):
if request.method == 'GET':
context = dict()
context['task'] = Task.objects.all()
return render(request, 'todo/view_tasks.html', context)
if request.method == 'POST':
task_name = request.post['task_name']
Task.objects.delete(task_name)
Task.objects.save()
</code></pre>
<p>What am I doing wrong? Also, could you also explain to me with your answer when and where should I use post and get or at least direct me to some very well answered questions about the same topic?</p>
| 1 | 2016-09-01T22:38:00Z | 39,282,726 | <p>I don't think that this would be a good way to solve the problem.</p>
<p>Try this instead:</p>
<p>in your template.html:</p>
<pre><code><form method="POST" action="">
{% csrf_token %}
<input type="submit" name="delete_amount" onclick="delete_me(event)" />
</form>
</code></pre>
<p>in your view.py:</p>
<pre><code># just in case you have other forms
if 'delete_amount' in request.POST:
# do your deleting query here
print('deleting amount')
</code></pre>
<p>and in your javascript.js:</p>
<pre><code><script>
function delete_me(e)
{
if(!confirm('Are you sure you want to delete this?')){
//prevent sending the request when user clicked 'Cancel'
e.preventDefault();
}
}
</script>
</code></pre>
<p>Hope this will help!</p>
| 0 | 2016-09-02T00:36:28Z | [
"javascript",
"python",
"django",
"post",
"get"
] |
Changing text font and colour | 39,281,903 | <p>Whether it's a module or something I can define in my code, is there a way to change the output text like making it <em>italic</em> or <strong>BOLD</strong>? Also, are colours a thing?</p>
| 0 | 2016-09-01T22:39:49Z | 39,282,055 | <p>Taking the question @LukeK posted, you can use <a href="https://en.wikipedia.org/wiki/ANSI_escape_code?wprov=sfla1" rel="nofollow">ANSI escape sequences</a> anywhere in your strings:</p>
<pre><code>print("Roses are \033[31mred\033[0m") # Will make red... red
</code></pre>
<p>You can combine them this way (I hope it's clear enough):</p>
<pre><code>BOLD_AND_RED = "\033[1;31m"
</code></pre>
<p>On the other hand, you can find modules that allow simpler (or nicer) ways of doing it, but I prefer writing code with minimal dependencies, so I usually put some global variables and I later concatenate them with my strings:</p>
<pre><code>RED = "\033[31m"
BOLD = "\033[1m"
ITALIC = "\033[3m"
RESET = "\033[0m"
print(BOLD + RED + "ERROR: " + RESET + ITALIC + "blah blah...")
</code></pre>
<p>Last note! These escape sequences only work in terminals, and I don't think they work in Windows cmd.</p>
| 0 | 2016-09-01T22:59:48Z | [
"python"
] |
access/update global/shared defaultdict from different tornado.RequestHandler instances | 39,281,912 | <p>Is it safe to access/update a global <code>defaultdict</code> from different <code>RequestHandler</code> instances? E.g.</p>
<pre><code>GlobalMap = defaultdict(list)
class Event(tornado.web.RequestHandler):
def get(self, unit):
# This is where the access/modify might happen
# The list.append() is just an arbitrary example
GlobalMap[unit].append(datetime.utcnow())
self.write(b'')
</code></pre>
<p>If not, what's the proper way to do a lookup/store of keyed data between different <code>RequestHandler</code> instances?</p>
| 0 | 2016-09-01T22:41:26Z | 39,282,083 | <p>Yes, that's fine to do. Tornado code typically runs in the main thread, so code like this that accesses a Python data structure is safe.</p>
<p>However, if you're deploying a Tornado application in production you'll want multiple Tornado processes, maybe running on multiple machines, so you'll want to put data in a shared database server to share it among processes.</p>
| 1 | 2016-09-01T23:03:02Z | [
"python",
"python-3.x",
"tornado",
"defaultdict"
] |
access/update global/shared defaultdict from different tornado.RequestHandler instances | 39,281,912 | <p>Is it safe to access/update a global <code>defaultdict</code> from different <code>RequestHandler</code> instances? E.g.</p>
<pre><code>GlobalMap = defaultdict(list)
class Event(tornado.web.RequestHandler):
def get(self, unit):
# This is where the access/modify might happen
# The list.append() is just an arbitrary example
GlobalMap[unit].append(datetime.utcnow())
self.write(b'')
</code></pre>
<p>If not, what's the proper way to do a lookup/store of keyed data between different <code>RequestHandler</code> instances?</p>
| 0 | 2016-09-01T22:41:26Z | 39,297,942 | <p>In an async/threaded/multiprocess app, accessing global variables is as safe as you make it. As in, it's never really a good idea to have shared state. But for the most part it's fairly safe to access values from a global <code>dict</code>, especially in a single-threaded async app.</p>
<p>On the other hand, eventually it will become unruly to house all <code>RequestHandler</code> classes in a single module. Having a dictionary as a data store will not be able to scale out effectively. What <code>A. Jesse Jiryu Davis</code> was implying in his answer is that the data you have in <code>GlobalMap</code> should be stored in some sort of database so that it can be shared. There are a plethora of database solutions which "feel" a lot like traditional <code>dict</code> objects, such as <a href="https://motor.readthedocs.io/en/stable/" rel="nofollow">MongoDB</a> and Redis.</p>
<h1>Update</h1>
<p>You can also pass in additional parameters when you building your <code>Application</code> object. So in your main module you can do something like:</p>
<pre><code>GlobalMap = defaultdict(list)
class Event(tornado.web.RequestHandler):
def initialize(self, shared_dict):
self.shared_dict = shared_dict
app = Application([
#...
(r"/", Event, dict(shared_dict=GlobalMap))
])
</code></pre>
<p>Take a look at the <a href="http://www.tornadoweb.org/en/stable/guide/structure.html#the-application-object" rel="nofollow"><code>Application</code> docs</a> and the <a href="http://www.tornadoweb.org/en/stable/web.html#tornado.web.URLSpec" rel="nofollow"><code>URLSpec</code> docs</a>. I feel a bit foolish for not recalling this earlier. This will handle accessibility amongst Tornado apps, but not so much with other modules, in which case you would definitely need an external database.</p>
| 1 | 2016-09-02T17:32:31Z | [
"python",
"python-3.x",
"tornado",
"defaultdict"
] |
Remove columns where all items in column are identical (excluding header) and match a specified string | 39,281,956 | <p>My question is an extension of <a href="http://stackoverflow.com/questions/21164910/delete-column-in-pandas-based-on-condition">Delete Column in Pandas based on Condition</a>, but I have headers and the information isn't binary. Instead of removing a column containing all zeros, I'd like to be able to pass a variable "search_var" (containing a string) to filter out columns containing only that string.</p>
<p>I initially thought I should read in the df and iterate across each column, read each column in as a list, and print columns where len(col_list) > 2 and search_var not in col_list. The solution provided in the previous post, involving a boolean dataframe (df != search_var), suggested there might be a simpler way, but how do I get around the issue that the header will not match and therefore I cannot purely filter on True/False?</p>
<p>What I have (non-working):</p>
<pre><code>import pandas as pd
df = pd.read_table('input.tsv', dtype=str)
with open('output.tsv', 'aw') as ofh:
df['col_list'] = list(df.values)
if len(col_list) < 3 and search_var not in col_list:
df.to_csv(ofh, sep='\t', encoding='utf-8', header=False)
</code></pre>
<h1>Example input, search_var = 'red'</h1>
<pre><code>Name Header1 Header2 Header3
name1 red red red
name2 red orange red
name3 red yellow red
name4 red green red
name5 red blue blue
</code></pre>
<h1>Expected Output</h1>
<pre><code>Name Header2 Header3
name1 red red
name2 orange red
name3 yellow red
name4 green red
name5 blue blue
</code></pre>
| 0 | 2016-09-01T22:47:25Z | 39,282,009 | <p>You can count the number of non-<code>red</code> items in each column; if the count is not zero, select that column using <code>loc</code>:</p>
<pre><code>df.loc[:, (df != 'red').sum() != 0]
# Name Header2 Header3
# 0 name1 red red
# 1 name2 orange red
# 2 name3 yellow red
# 3 name4 green red
# 4 name5 blue blue
</code></pre>
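<p>The same idea generalizes to the <code>search_var</code> from the question: <code>(df != search_var).any()</code> is True for every column holding at least one non-matching value (a sketch with made-up data):</p>

```python
import pandas as pd

search_var = 'red'
df = pd.DataFrame({'Name': ['name1', 'name2'],
                   'Header1': ['red', 'red'],
                   'Header2': ['orange', 'red']})

# Keep only columns where not every value equals search_var.
out = df.loc[:, (df != search_var).any()]
print(out.columns.tolist())  # ['Name', 'Header2']
```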
| 1 | 2016-09-01T22:53:10Z | [
"python",
"pandas"
] |
Keeping code DRY when repeatedly referring to command-line options' configuration | 39,282,047 | <p>I just got a Raspberry Pi, and I'm building a Reddit API-based application for it, using the <a href="http://praw.readthedocs.io/en/stable/" rel="nofollow">PRAW</a> library. I'm executing my python files by using:</p>
<pre class="lang-sh prettyprint-override"><code>sudo python3 main.py
</code></pre>
<p>However, I would like to pass arguments to this file from the command line (so I can run the app silently by passing it the argument <code>silent</code> for example), and I know I can do so with <code>sys.argv[0], sys.argv[1], etc..</code>.</p>
<p>My problem is how to do this while following <strong>DRY</strong> -- Don't Repeat Yourself -- in referring to the configuration established by those options.</p>
<p>This is a portion of my code:</p>
<pre class="lang-py prettyprint-override"><code>def init():
if (len(sys.argv) >= 1):
global semisilent
global silent
for arg in sys.argv:
if arg == "semisilent":
semisilent = True
if arg == "silent":
silent = True
print ("--------[ Reddipi v0.1 ]--------\n")
if silent:
print (" ++Running in silent mode++ \n")
elif semisilent:
print ("++Running in semi-silent mode++ \n")
else:
print (" Logging in to reddit ")
print (" .... ")
global r
r = oauth.login()
if not silent:
print (" Login successful \n")
if not silent and not semisilent:
print (" Connecting to database ")
print (" .... ")
db.init(tablename)
if not silent:
print (" Connection successful \n")
if not silent and not semisilent:
global sub_name
q_sub = input(" Use custom subreddit? \n")
if (q_sub[0]=="y"):
q_sub = input(" Enter custom subreddit: \n")
if ((len(q_sub)==0) or (q_sub==" ")):
print (" No valid input. Using default \"" + sub_name + "\" \n")
else:
sub_name = q_sub
print ("\r Using subreddit \"" + sub_name + "\"\n")
else:
print (" Using default \"" + sub_name + "\" \n")
</code></pre>
<p>I find myself making the code very hard to read by constantly putting <code>if not silent</code> and such before other pieces of code. I've also thought about having multiple methods with the same essential functions but with code left out if the user had put in <code>silent</code> or <code>semisilent</code>, but this would lead to unnecessary code duplication.</p>
<p><strong>Is there another / a good way to change the behaviour of my methods without having to make it unreadable or rewrite the code multiple times?</strong></p>
<p>Thanks a lot for the help!
- Jeroen</p>
| 0 | 2016-09-01T22:58:44Z | 39,282,153 | <p>Define your own custom "print" function that checks whether some variable (e.g. <code>silent</code>) is set, and skips printing when it is. All your <code>print('value')</code> lines then turn into something like <code>myprint('value')</code>. I personally like using the function names <code>verbose()</code> and <code>debug()</code> and having two log levels. In your case, maybe call them <code>silent()</code> and <code>semisilent()</code> or something.</p>
| 1 | 2016-09-01T23:12:21Z | [
"python",
"methods",
"arguments",
"line",
"code-duplication"
] |
Confused about passing by reference in this Python function | 39,282,051 | <p>I have this simple Python 2.7 function:</p>
<pre><code>def sort_rows(mat):
mat = [sorted(i) for i in mat]
</code></pre>
<p>However, when I run: </p>
<pre><code>M = [[4, 5, 2, 8], [3, 9, 6, 7]]
sort_rows(M)
print(M)
</code></pre>
<p>I get</p>
<pre><code>[[4, 5, 2, 8], [3, 9, 6, 7]]
</code></pre>
<p>Why wasn't <code>M</code> edited? I thought python functions passed lists by reference? Am I missing something?</p>
| -1 | 2016-09-01T22:59:26Z | 39,282,105 | <p>The reason is that the <code>sorted</code> function doesn't sort the list in place. If you want it to be sorted in place, you can use the <code>list.sort()</code> method:</p>
<pre><code>def sort_rows(mat):
[i.sort() for i in mat]
M = [[4, 5, 2, 8], [3, 9, 6, 7]]
sort_rows(M)
print(M)
# [[2, 4, 5, 8], [3, 6, 7, 9]]
</code></pre>
<p><code>sorted</code> returns a new list but the original list doesn't change:</p>
<pre><code>x = [3, 2, 1, 4]
sorted(x)
# [1, 2, 3, 4]
x
# [3, 2, 1, 4] # x doesn't change here
</code></pre>
| 0 | 2016-09-01T23:05:37Z | [
"python",
"python-2.7"
] |
Confused about passing by reference in this Python function | 39,282,051 | <p>I have this simple Python 2.7 function:</p>
<pre><code>def sort_rows(mat):
mat = [sorted(i) for i in mat]
</code></pre>
<p>However, when I run: </p>
<pre><code>M = [[4, 5, 2, 8], [3, 9, 6, 7]]
sort_rows(M)
print(M)
</code></pre>
<p>I get</p>
<pre><code>[[4, 5, 2, 8], [3, 9, 6, 7]]
</code></pre>
<p>Why wasn't <code>M</code> edited? I thought python functions passed lists by reference? Am I missing something?</p>
| -1 | 2016-09-01T22:59:26Z | 39,282,114 | <p>When your <code>sort_rows</code> is called, <code>mat</code> points to the same list that <code>M</code> points to. This statement, however, changes that:</p>
<pre><code>mat = [sorted(i) for i in mat]
</code></pre>
<p>After the above is executed, <code>mat</code> now points to a different list. <code>M</code> is unchanged.</p>
<p>To change <code>M</code> in place:</p>
<pre><code>>>> def sort_rows(mat):
... for i, row in enumerate(mat):
... mat[i] = sorted(row)
...
>>> sort_rows(M)
>>> M
[[2, 4, 5, 8], [3, 6, 7, 9]]
</code></pre>
<p>The statement <code>mat[i] = sorted(row)</code> changes the i-th element of <code>mat</code> but does not change the list that <code>mat</code> points to. Hence, <code>M</code> is changed also.</p>
<p>Alternatively, you can have your function return a list with sorted rows:</p>
<pre><code>>>> def rowsort(mat):
... return [sorted(i) for i in mat]
...
>>> M = [[4, 5, 2, 8], [3, 9, 6, 7]]
>>> M = rowsort(M)
>>> M
[[2, 4, 5, 8], [3, 6, 7, 9]]
</code></pre>
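<p>The distinction can be seen side by side (a sketch: <code>rebind</code> mirrors the question's function, while <code>mutate</code> changes the caller's list via slice assignment):</p>

```python
def rebind(mat):
    mat = [sorted(r) for r in mat]     # rebinds the local name only

def mutate(mat):
    mat[:] = [sorted(r) for r in mat]  # writes into the caller's list

M = [[2, 1], [4, 3]]
rebind(M)
print(M)  # [[2, 1], [4, 3]] -- unchanged
mutate(M)
print(M)  # [[1, 2], [3, 4]] -- changed in place
```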
| 2 | 2016-09-01T23:06:26Z | [
"python",
"python-2.7"
] |
Confused about passing by reference in this Python function | 39,282,051 | <p>I have this simple Python 2.7 function:</p>
<pre><code>def sort_rows(mat):
mat = [sorted(i) for i in mat]
</code></pre>
<p>However, when I run: </p>
<pre><code>M = [[4, 5, 2, 8], [3, 9, 6, 7]]
sort_rows(M)
print(M)
</code></pre>
<p>I get</p>
<pre><code>[[4, 5, 2, 8], [3, 9, 6, 7]]
</code></pre>
<p>Why wasn't <code>M</code> edited? I thought python functions passed lists by reference? Am I missing something?</p>
| -1 | 2016-09-01T22:59:26Z | 39,282,190 | <p>You could use a <code>return</code> statement to return the variable <code>mat</code> at the end of <code>sort_rows</code>.</p>
<pre><code>return mat
</code></pre>
<p>Then if you try:</p>
<pre><code>print(sort_rows(M))
</code></pre>
<p>you should get the result you want. Thanks.</p>
| 0 | 2016-09-01T23:17:05Z | [
"python",
"python-2.7"
] |
Confused about passing by reference in this Python function | 39,282,051 | <p>I have this simple Python 2.7 function:</p>
<pre><code>def sort_rows(mat):
mat = [sorted(i) for i in mat]
</code></pre>
<p>However, when I run: </p>
<pre><code>M = [[4, 5, 2, 8], [3, 9, 6, 7]]
sort_rows(M)
print(M)
</code></pre>
<p>I get</p>
<pre><code>[[4, 5, 2, 8], [3, 9, 6, 7]]
</code></pre>
<p>Why wasn't <code>M</code> edited? I thought python functions passed lists by reference? Am I missing something?</p>
| -1 | 2016-09-01T22:59:26Z | 39,282,242 | <p>You could use a <code>return</code> statement to return the variable <code>mat</code> at the end of <code>sort_rows</code>:</p>
<pre><code>return mat
</code></pre>
<p>Then if you try</p>
<pre><code>print(sort_rows(M))
</code></pre>
<p>you should get the result you want.</p>
| 0 | 2016-09-01T23:22:35Z | [
"python",
"python-2.7"
] |
Nonlinear regression on tensorflow | 39,282,060 | <p>What activation and cost functions on <code>tensorflow</code> could be suitable below for <strong>tf.nn</strong> to <strong>learn</strong> a simple single variate nonlinear relationship <code>f(x) = x * x</code> that is a priori unknown? </p>
<p>Certainly, this impractical model is used for the sole purpose of understanding <code>tf.nn mechanics 101</code>.</p>
<pre><code>import numpy as np
import tensorflow as tf
x = tf.placeholder(tf.float32, [None, 1])
W = tf.Variable(tf.zeros([1,1]))
b = tf.Variable(tf.zeros([1]))
y = some_nonlinear_activation_function_HERE(tf.matmul(x,W) + b)
y_ = tf.placeholder(tf.float32, [None, 1])
cost = tf.reduce_mean(some_related_cost_function_HERE(y, y_))
learning_rate = 0.001
optimize = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
steps = 1000
for i in range(steps):
sess.run(optimize,
         feed_dict={x: np.array([[i]]), y_: np.array([[i*i]])})
print("prediction: %f" % sess.run(y,
feed_dict={x: np.array([[1100]])}))
# expected output near 1210000
</code></pre>
| 0 | 2016-09-01T23:00:02Z | 39,293,770 | <p>The COST that is often used is simply the squared difference:</p>
<pre><code>def squared_error(y1,y2):
return tf.square(y1-y2)
</code></pre>
<p>Plus an L1 or L2 penalty if you feel like it. </p>
<p>However it seems to me that you need a hidden layer in your neural network if you want something remotely interesting. Plus if you squash your output and your target is the squared function you might not be able to do much.
I would do:</p>
<pre><code>x = tf.placeholder(tf.float32, [None, 1])
#Hidden layer with ten neurons
W1 = tf.Variable(tf.zeros([1,10]))
b1 = tf.Variable(tf.zeros([10]))
h1 = some_nonlinear_activation_function(tf.matmul(x, W1) + b1)
W2 = tf.Variable(tf.zeros([10,1]))
b2 = tf.Variable(tf.zeros([1]))
#I am not squashing the output
y = tf.matmul(h1, W2) + b2
cost = tf.reduce_mean(squared_error(y, y_))
</code></pre>
<p>Also I would not use 0 weights but a more clever initialization scheme like Xavier's or He's, which really comes down to starting with practically-zero weights (but not exactly zero) for various reasons.
For activations you might use tanh, sigmoid or ReLU, or anything really.</p>
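To illustrate the initialization point, a numpy-only sketch of He-style initialization. This is not the TensorFlow API, just the underlying idea; <code>he_init</code> is a made-up helper:

```python
import numpy as np

def he_init(fan_in, fan_out, seed=0):
    # He-style initialization: random weights drawn with
    # std = sqrt(2 / fan_in) -- "practically zero, but not exactly zero".
    rng = np.random.RandomState(seed)
    return rng.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)

W1 = he_init(100, 50)
print(W1.shape)  # (100, 50); empirical std should be close to sqrt(2/100) ~ 0.141
```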
| 0 | 2016-09-02T13:34:45Z | [
"python",
"machine-learning",
"neural-network",
"tensorflow",
"non-linear-regression"
] |
groupby apply Pandas not yielding desired output | 39,282,070 | <p>Below is a small sample of my dataframe : I am trying a groupby.apply which is not giving me the desired result.</p>
<pre><code> In [204]: df1
Out[204]:
Location_ID Terminal Time
0 10000001405702 *WhF 2016-07-01 13:56:00
1 10000001405702 @W1n 2016-07-01 09:14:39
2 10000001405702 *Wu3 2016-07-01 11:54:52
3 10000001405702 @WJo 2016-07-01 11:30:57
4 10000001405702 @WCg 2016-07-01 11:06:24
5 10000001405702 *WL2 2016-07-01 10:04:20
6 10000001201132 A24O 2016-07-01 14:28:39
7 10000000564967 2JT1 2016-07-01 03:46:31
8 10000000615068 A125 2016-07-01 21:58:33
9 10000000552415 5MTH 2016-07-01 05:51:39
10 10000001405702 *WqW 2016-07-01 00:09:06
11 10000000250413 FF41 2016-07-01 02:59:43
12 10000001125037 WQ2I 2016-06-30 14:03:57
13 10000000174015 H5NM 2016-06-30 05:56:09
14 10000001856529 AR7K 2016-06-30 18:53:05
</code></pre>
<p>By doing the below groupby.apply, I am losing the Location_ID and Terminal information, but I need that.</p>
<pre><code>In [206]: df1.groupby(['Location_ID','Terminal'])['Time'].apply(lambda x : x.diff()<=dt.timedelta(seconds=60))
Out[206]:
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
11 False
12 False
13 False
14 False
15 False
16 False
17 False
</code></pre>
<p>I need an output of below format such that The boolean info can be known for Location_IDs and Terminal.</p>
<pre><code>In [211]: df3
Out[211]:
Time
Location_ID Terminal
10000000000081 3ZR1 False
CDE1 True
CDE4 False
GIG2 True
L43L False
L43W False
W9YE True
YIW1 False
YIW4 True
ZYI7 True
ZYJN False
10000000000086 A1E6 False
A4DG True
</code></pre>
<p>Still trying to find my grip in pandas. Thanks in advance.</p>
| 0 | 2016-09-01T23:01:21Z | 39,282,139 | <p>The result of your operation is a pandas Series. If you want it to be a column in a DataFrame you need to assign it to one.</p>
<p>Make <code>df3</code> a copy of <code>df1</code> and change your call to:</p>
<pre><code>df3['Time'] = df1.groupby(['Location_ID','Terminal'])['Time'].apply(lambda x : x.diff()<=dt.timedelta(seconds=60))
</code></pre>
<p>Also, you apparently want 'Location_ID' and 'Terminal' as the index of the DataFrame.</p>
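If the per-(Location_ID, Terminal) boolean table shown in the question is the goal, one hedged sketch on toy data (values are made up, not the question's real frame): reduce each group to a single boolean, so the group keys become the index.

```python
import datetime as dt
import pandas as pd

df = pd.DataFrame({
    'Location_ID': [1, 1, 1, 2],
    'Terminal':    ['A', 'A', 'B', 'A'],
    'Time': pd.to_datetime(['2016-07-01 09:00:00',
                            '2016-07-01 09:00:30',
                            '2016-07-01 10:00:00',
                            '2016-07-01 11:00:00']),
})

# Returning a scalar per group keeps ('Location_ID', 'Terminal') as the
# resulting index, which matches the shape of the desired output.
out = (df.sort_values('Time')
         .groupby(['Location_ID', 'Terminal'])['Time']
         .apply(lambda x: (x.diff() <= dt.timedelta(seconds=60)).any()))
print(out)
```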
| 0 | 2016-09-01T23:10:53Z | [
"python",
"pandas"
] |
Advance filtering with list of elements using django tastypie | 39,282,094 | <p>I am using <a href="https://pypi.python.org/pypi/django-multiselectfield" rel="nofollow">MultiSelectField</a> to select multiple choices within my django admin it creates an array for fields in the backend of all the choices I select. I then use <code>django tastypie's</code> List Field to make sure its a list of elements the api returns.</p>
<p>My problem is as I am building the filter when I put <code>/api/?brand_category=Clothing&q=athletic,bohemian</code> in the browser it does not return anything but an empty list. So I want to know if I am doing something wrong? Or not building my filters correctly?</p>
<p><strong>models.py</strong></p>
<pre><code>class Brand(models.Model):
# category
brand_category = MultiSelectField(max_length=100, blank=True, choices=categories))
# style
brand_style = MultiSelectField(max_length=100, choices=styles, blank=True)
</code></pre>
<p><strong>api.py</strong></p>
<pre><code>class LabelResource(ModelResource):
brand_category = fields.ListField(attribute='brand_category')
brand_style = fields.ListField(attribute='brand_style')
class Meta:
filtering = {
"brand_category": ALL,
"brand_style": ALL,
"q": ['exact', 'startswith', 'endswith', 'contains', 'in'],
}
def build_filters(self, filters=None):
if filters is None:
filters = {}
orm_filters = super(LabelResource, self).build_filters(filters)
if('q' in filters):
query = filters['q']
qset = (
Q(brand_style__in=query)
)
orm_filters.update({'custom': qset})
return orm_filters
def apply_filters(self, request, applicable_filters):
if 'custom' in applicable_filters:
custom = applicable_filters.pop('custom')
else:
custom = None
semi_filtered = super(LabelResource, self).apply_filters(request, applicable_filters)
return semi_filtered.filter(custom) if custom else semi_filtered
</code></pre>
<p><strong>JSON Response</strong></p>
<pre><code>{
"brand_category": [
"Clothing"
],
"brand_style": [
"athletic",
"bohemian",
"casual"
]
}
</code></pre>
| 0 | 2016-09-01T23:03:59Z | 39,289,015 | <p><code>filters['q']</code> is the string <code>athletic,bohemian</code>. An <code>__in</code> lookup needs a list or tuple.</p>
<pre><code>query = filters['q'].split(',')
</code></pre>
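A tiny sanity check of the difference (an <code>__in</code> lookup iterates its argument, and iterating the raw string yields single characters instead of the two style names):

```python
q = 'athletic,bohemian'

# Iterating the unsplit string gives characters, not values:
print(list(q)[:4])   # ['a', 't', 'h', 'l']

# Splitting gives the values the filter actually needs:
print(q.split(','))  # ['athletic', 'bohemian']
```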
| 0 | 2016-09-02T09:29:37Z | [
"python",
"django",
"tastypie"
] |
python regex to split list entry's name into list name and index | 39,282,120 | <p>How can I use regular expressions to parse <code><listname>[#]</code> into <code>[listname, #]</code>?</p>
<p>Here's what I've tried:</p>
<pre><code>> s = 'li[10]'
> re.split(s,r'[%d]')
['[%d]']
> re.findall(s,r'[%d]')
[]
> s.split(r'[%d]')
['li[0]']
</code></pre>
<p>the desired output is <code>li</code> and <code>10</code></p>
| 1 | 2016-09-01T23:07:06Z | 39,282,149 | <p>First, you should pass your pattern as the first argument to <code>findall</code>, i.e. <code>findall(pattern, string)</code>. Also note that <code>%d</code> is not a regex placeholder: the character class <code>[%d]</code> matches a literal <code>%</code> or <code>d</code>, not a digit. To match one or more digits you can use:</p>
<pre><code> re.findall(r'\[(\d+)\]', s)
</code></pre>
<p>This will return just the digits, and not the square brackets. To also capture the name in front of the brackets you can use:</p>
<pre><code>re.findall(r'(\w+)\[(\d+)\]', s)
</code></pre>
| 1 | 2016-09-01T23:12:11Z | [
"python",
"regex"
] |
python regex to split list entry's name into list name and index | 39,282,120 | <p>How can I use regular expressions to parse <code><listname>[#]</code> into <code>[listname, #]</code>?</p>
<p>Here's what I've tried:</p>
<pre><code>> s = 'li[10]'
> re.split(s,r'[%d]')
['[%d]']
> re.findall(s,r'[%d]')
[]
> s.split(r'[%d]')
['li[0]']
</code></pre>
<p>the desired output is <code>li</code> and <code>10</code></p>
| 1 | 2016-09-01T23:07:06Z | 39,282,150 | <p>How about <code>(.*)\[(.*)\]</code>?</p>
<pre><code>re.findall("(.*)\[(.*)\]", s)
# [('li', '10')]
</code></pre>
| 1 | 2016-09-01T23:12:14Z | [
"python",
"regex"
] |
python regex to split list entry's name into list name and index | 39,282,120 | <p>How can I use regular expressions to parse <code><listname>[#]</code> into <code>[listname, #]</code>?</p>
<p>Here's what I've tried:</p>
<pre><code>> s = 'li[10]'
> re.split(s,r'[%d]')
['[%d]']
> re.findall(s,r'[%d]')
[]
> s.split(r'[%d]')
['li[0]']
</code></pre>
<p>the desired output is <code>li</code> and <code>10</code></p>
| 1 | 2016-09-01T23:07:06Z | 39,282,452 | <p>I can also suggest a more restrictive pattern to use with <code>re.findall</code>:</p>
<pre><code>re.findall(r'(\w+)\[(\d+)]', s)
</code></pre>
<p>See the <a href="http://ideone.com/a53Sh9" rel="nofollow">Python demo</a></p>
<p>Or a variation with <code>zip</code>:</p>
<pre><code>import re
s = 'li[10] li[11]'
names, ids = zip(*re.findall(r"(\w+)\[(\d+)]", s))
print(names)
print(ids)
</code></pre>
<p>See <a href="http://ideone.com/KICcfk" rel="nofollow">another Python demo</a></p>
<p><em>Details</em>:</p>
<ul>
<li><code>(\w+)</code> - Group 1 capturing one or more letters, digits or underscores</li>
<li><code>\[</code> - a <code>[</code> literal symbol </li>
<li><code>(\d+)</code> - Group 2 capturing 1 or more digits</li>
<li><code>]</code> - a closing literal <code>]</code> symbol</li>
</ul>
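A related sketch: the same pattern with named groups, which makes the intent explicit when tuple unpacking gets harder to read (group names here are illustrative):

```python
import re

m = re.match(r'(?P<name>\w+)\[(?P<index>\d+)\]', 'li[10]')
print(m.group('name'), m.group('index'))  # li 10
print(m.groupdict())                      # {'name': 'li', 'index': '10'}
```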
| 1 | 2016-09-01T23:52:58Z | [
"python",
"regex"
] |
Optimizing Bayer16 -> RGB in Python | 39,282,179 | <p>I am reading a camera that gives me Bayer16 format (GRGB) and I wrote the following code in python to modify it from bayer16 to bayer8, and then use OpenCV to convert it to RGB:</p>
<pre><code>def _convert_GRGB_to_RGB(self, bayer16_image):
bayer8_image = bytearray()
# Convert bayer16 to bayer 8
for i in range(0, len(bayer16_image), 2):
data_byte = (bayer16_image[i] & 0xF0) >> 4
data_byte |= (bayer16_image[i+1] & 0x0F) << 4
bayer8_image.append(data_byte)
bayer8_image = numpy.frombuffer(bayer8_image, dtype=numpy.uint8).reshape((720, 1280))
# Use OpenCV to convert Bayer GRGB to RGB
return cv2.cvtColor(bayer8_image, cv2.COLOR_BayerGR2RGB)
</code></pre>
<p>After doing some timing, the for loop takes most of the running time and is extremely inefficient (although I think it does not allocate any space, unless numpy makes a copy for every edit). I am wondering how to improve this function as a whole, or the for loop in particular (as it is the slowest part of this function, by an order of magnitude).</p>
<p>Does anyone have tips and advice about how to improve this Bayer16 -> RGB conversion if I am to use Python please?</p>
<p><strong>EDIT:</strong></p>
<p>I found a solution using numpy array that makes my code pretty fast:</p>
<pre><code>def _convert_GRGB_to_RGB(self, data_bytes):
even = numpy.frombuffer(data_bytes[0::2], dtype=numpy.uint8)
odd = numpy.frombuffer(data_bytes[1::2], dtype=numpy.uint8)
# Convert bayer16 to bayer8
even = numpy.right_shift(even, 4)
odd = numpy.left_shift(odd, 4)
bayer8_image = numpy.bitwise_or(even, odd).reshape((720, 1280))
# Use OpenCV to convert Bayer GRGB to RGB
return cv2.cvtColor(bayer8_image, cv2.COLOR_BayerGR2RGB)
</code></pre>
<p>This solution satisfies my need but if anyone has any suggestion, I'm curious to hear them!</p>
| 0 | 2016-09-01T23:16:08Z | 39,285,626 | <p>You can use standard Python operators in your numpyified code. You'll also get a speedup by not slicing <code>data_bytes</code> (assuming it's <code>bytes</code> and not itself a numpy array).</p>
<pre><code>def _convert_GRGB_to_RGB(self, data_bytes):
data_bytes = numpy.frombuffer(data_bytes, dtype=numpy.uint8)
even = data_bytes[0::2]
odd = data_bytes[1::2]
# Convert bayer16 to bayer8
bayer8_image = (even >> 4) | (odd << 4)
bayer8_image = bayer8_image.reshape((720, 1280))
# Use OpenCV to convert Bayer GRGB to RGB
return cv2.cvtColor(bayer8_image, cv2.COLOR_BayerGR2RGB)
</code></pre>
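For anyone verifying the bit-packing, a tiny self-contained check on made-up sample bytes (four synthetic 16-bit samples, not real sensor data):

```python
import numpy as np

# Four hypothetical 16-bit samples as little-endian byte pairs.
data_bytes = bytes([0xAB, 0x0C, 0x34, 0x01, 0xFF, 0x0F, 0x00, 0x00])

arr = np.frombuffer(data_bytes, dtype=np.uint8)
even, odd = arr[0::2], arr[1::2]
# High nibble of the even byte + low nibble of the odd byte,
# exactly as in the original loop.
bayer8 = (even >> 4) | (odd << 4)
print([hex(v) for v in bayer8])  # ['0xca', '0x13', '0xff', '0x0']
```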
| 0 | 2016-09-02T06:27:13Z | [
"python",
"image",
"numpy"
] |
Optimizing Bayer16 -> RGB in Python | 39,282,179 | <p>I am reading a camera that gives me Bayer16 format (GRGB) and I wrote the following code in python to modify it from bayer16 to bayer8, and then use OpenCV to convert it to RGB:</p>
<pre><code>def _convert_GRGB_to_RGB(self, bayer16_image):
bayer8_image = bytearray()
# Convert bayer16 to bayer 8
for i in range(0, len(bayer16_image), 2):
data_byte = (bayer16_image[i] & 0xF0) >> 4
data_byte |= (bayer16_image[i+1] & 0x0F) << 4
bayer8_image.append(data_byte)
bayer8_image = numpy.frombuffer(bayer8_image, dtype=numpy.uint8).reshape((720, 1280))
# Use OpenCV to convert Bayer GRGB to RGB
return cv2.cvtColor(bayer8_image, cv2.COLOR_BayerGR2RGB)
</code></pre>
<p>After doing some timing, the for loop takes most of the running time and is extremely inefficient (although I think it does not allocate any space, unless numpy makes a copy for every edit). I am wondering how to improve this function as a whole, or the for loop in particular (as it is the slowest part of this function, by an order of magnitude).</p>
<p>Does anyone have tips and advice about how to improve this Bayer16 -> RGB conversion if I am to use Python please?</p>
<p><strong>EDIT:</strong></p>
<p>I found a solution using numpy array that makes my code pretty fast:</p>
<pre><code>def _convert_GRGB_to_RGB(self, data_bytes):
even = numpy.frombuffer(data_bytes[0::2], dtype=numpy.uint8)
odd = numpy.frombuffer(data_bytes[1::2], dtype=numpy.uint8)
# Convert bayer16 to bayer8
even = numpy.right_shift(even, 4)
odd = numpy.left_shift(odd, 4)
bayer8_image = numpy.bitwise_or(even, odd).reshape((720, 1280))
# Use OpenCV to convert Bayer GRGB to RGB
return cv2.cvtColor(bayer8_image, cv2.COLOR_BayerGR2RGB)
</code></pre>
<p>This solution satisfies my need but if anyone has any suggestion, I'm curious to hear them!</p>
| 0 | 2016-09-01T23:16:08Z | 39,288,483 | <p>As a guess, your color problem is as follows - your <code>GRBG</code> data comes in like this:</p>
<pre><code>G0 B1 G2 ...
R0 G1 R2
</code></pre>
<p>Where the numbers represent the uint16 index. OpenCV needs them to be numbered</p>
<pre><code>G0 B0 G1 R1 ...
R6 G6 R7 G7
</code></pre>
<p>You can fix this with some careful reshape and transposing:</p>
<pre><code>data = np.frombuffer(data_bytes, dtype=np.uint8)
data = data.reshape(height // 2, width, 2)  # a pair for each uint16
data = data.transpose((0, 2, 1))  # move the G/RB axis to be adjacent to the height axis
data = data.reshape(height, width)  # collapse it
</code></pre>
<p>Example</p>
<pre><code># manually constructed by hand
sample = ''.join([
'grbGgRbGgRbg'
'grBGGRBGGRbg'
'grBgGrBgGrbg'
])
width = height = 6
data = np.array(list(sample))
data = (data
.reshape(height / 2, width, 2)
.transpose((0, 2, 1))
.reshape(height, width)
)
# easy way to view the output
>>> data.view((np.str_,6))
array([['gbgbgb'],
['rGRGRg'],
['gBGBGb'],
['rGRGRg'],
['gBGBGb'],
['rgrgrg']],
dtype='<U6')
</code></pre>
| 0 | 2016-09-02T09:05:52Z | [
"python",
"image",
"numpy"
] |
Threading of Periodic Update to Class Member | 39,282,197 | <p>What is a method for updating a class member in Python while it is still being used by other methods in the class?</p>
<p>I want the rest of the class to continue processing using the old version of the member until it is fully updated and then switch all processing to the new version once the update is complete.</p>
<p>Here is a toy example to illustrate my use case, where <code>self.numbers</code> is the class member that needs safely threaded periodic updating using the logic in <code>updateNumbers()</code>, which I want called in a non-blocking way by <code>runCounter()</code>. </p>
<pre><code>from time import sleep, time
class SimpleUpdater(object):
def __init__(self):
self.i = 5
self.numbers = list(range(self.i))
self.lastUpdate = time()
self.updateDelta = 10
def timePast(self):
now = time()
delta = self.lastUpdate - now
return (delta > self.updateDelta)
def updateNumbers(self):
print('Starting Update', flush=True)
self.numbers = list(range(self.i))
# artificial calculation time
sleep(2)
print('Done Updating', flush=True)
def runCounter(self):
for j in self.numbers:
print(j, flush=True)
sleep(0.5)
self.i += 1
if self.timePast:
## Spin off this calculation!! (and safely transfer the new value)
self.updateNumbers()
if __name__ == '__main__':
S = SimpleUpdater()
while True:
S.runCounter()
</code></pre>
<p>The desired behavior is that if <code>self.numbers</code> is being iterated on in the loop, it should finish the loop with the old version before switching to the new version.</p>
| 1 | 2016-09-01T23:17:29Z | 39,282,364 | <p>Create a lock to control access to your list:</p>
<pre><code>import threading
def __init__(self, ...):
# We could use Lock, but RLock is somewhat more intuitive in a few ways that might
# matter when your requirements change or when you need to debug things.
self.numberLock = threading.RLock()
...
</code></pre>
<p>Whenever you need to read the list, hold the lock and save the current list to a local variable. The local variable will not be affected by updates to the instance attribute; use the local variable until you want to check for an updated value:</p>
<pre><code>with self.numberLock:
numbers = self.numbers
doStuffWith(numbers)
</code></pre>
<p>Whenever you need to update the list, hold the lock and replace the list with a new list, without mutating the old list:</p>
<pre><code>with self.numberLock:
self.numbers = newNumbers
</code></pre>
<hr>
<p>Incidentally, I've used camelcase here to match your code, but the Python convention is to use <code>lowercase_with_underscores</code> instead of <code>camelCase</code> for variable and function names.</p>
| 1 | 2016-09-01T23:41:38Z | [
"python",
"multithreading"
] |
Threading of Periodic Update to Class Member | 39,282,197 | <p>What is a method for updating a class member in Python while it is still being used by other methods in the class?</p>
<p>I want the rest of the class to continue processing using the old version of the member until it is fully updated and then switch all processing to the new version once the update is complete.</p>
<p>Here is a toy example to illustrate my use case, where <code>self.numbers</code> is the class member that needs safely threaded periodic updating using the logic in <code>updateNumbers()</code>, which I want called in a non-blocking way by <code>runCounter()</code>. </p>
<pre><code>from time import sleep, time
class SimpleUpdater(object):
def __init__(self):
self.i = 5
self.numbers = list(range(self.i))
self.lastUpdate = time()
self.updateDelta = 10
def timePast(self):
now = time()
delta = self.lastUpdate - now
return (delta > self.updateDelta)
def updateNumbers(self):
print('Starting Update', flush=True)
self.numbers = list(range(self.i))
# artificial calculation time
sleep(2)
print('Done Updating', flush=True)
def runCounter(self):
for j in self.numbers:
print(j, flush=True)
sleep(0.5)
self.i += 1
if self.timePast:
## Spin off this calculation!! (and safely transfer the new value)
self.updateNumbers()
if __name__ == '__main__':
S = SimpleUpdater()
while True:
S.runCounter()
</code></pre>
<p>The desired behavior is that if <code>self.numbers</code> is being iterated on in the loop, it should finish the loop with the old version before switching to the new version.</p>
| 1 | 2016-09-01T23:17:29Z | 39,283,514 | <p>You could create a new thread for every call to <code>updateNumbers</code> but the more common way to do this type of thing is to have 1 thread running an infinite loop in the background. You should write a method or function with that infinite loop, and that method/function will serve as the target for your background thread. This kind of thread is often a daemon, but it doesn't have to be. I've modified your code to show how this might be done (I've also fixed a few small bugs in your example code).</p>
<pre><code>from time import sleep, time
import threading
class Numbers(object):
def __init__(self, numbers):
self.data = numbers
self.lastUpdate = time()
class SimpleUpdater(object):
def __init__(self):
self.i = 5
self.updateDelta = 5
self.numbers = Numbers(list(range(self.i)))
self._startUpdateThread()
def _startUpdateThread(self):
# Only call this function once
update_thread = threading.Thread(target=self._updateLoop)
update_thread.daemon = True
update_thread.start()
def _updateLoop(self):
print("Staring Update Thread")
while True:
self.updateNumbers()
sleep(.001)
def updateNumbers(self):
numbers = self.numbers
delta = time() - numbers.lastUpdate
if delta < self.updateDelta:
return
print('Starting Update')
# artificial calculation time
sleep(4)
numbers = Numbers(list(range(self.i)))
self.numbers = numbers
print('Done Updating')
def runCounter(self):
# Take self.numbers once, then only use local `numbers`.
numbers = self.numbers
for j in numbers.data:
print(j)
sleep(0.5)
# do more with numbers
self.i += 1
if __name__ == '__main__':
S = SimpleUpdater()
while True:
S.runCounter()
</code></pre>
<p>Notice that I have not used any locks :). I could get away without using any locks because I only access the <code>numbers</code> attribute of the <code>SimpleUpdater</code> class using atomic operations, in this case just simple assignments. Every assignment to <code>self.numbers</code> associated a new <code>Numbers</code> object with that attribute. Every time you access that attribute you should expect to get a different object, but if you take a local reference to that object at the beginning of a method, ie <code>numbers = self.numbers</code>, <code>numbers</code> will always refer to the same object (till it goes out of scope at the end of the method), even if the background thread updates <code>self.numbers</code>.</p>
<p>The reason I've created a <code>Numbers</code> class is so that I can get and set all "volatile" members in a single (atomic) assignment (the numbers list and lastUpdated value). I know this is a lot to keep track of, and honestly it might be smarter and safer to just use locks :), but I wanted to show you this option as well.</p>
| 1 | 2016-09-02T02:35:41Z | [
"python",
"multithreading"
] |
Joining an aliased selectable in Sqlalchemy Core | 39,282,201 | <p>I am attempting to make a left join on two tables: invoices and vendors. The typical problem is that I have multiple entries in the right table (vendors) which leads to duplicate results:</p>
<pre><code> Vendors Invoices
vend_id name vend_id line_amt
001 Lowes 001 5.95
001 lowes 002 17
001 Lowes_ca 002 25
002 Bills 002 40
002 Bill's 003 4.35
003 Two Alphas 003 3.75
004 Apple Cartz 003 10
004 23
004 56
004 80
</code></pre>
<p>I'm looking for this:</p>
<pre><code>Desired Result:
vend_id line_amt name
001 5.95 Lowes
002 17 Bills
002 25 Bills
002 40 Bills
003 4.35 Two Alphas
003 3.75 Two Alphas
003 10 Two Alphas
004 23 Apple Cartz
004 56 Apple Cartz
004 80 Apple Cartz
</code></pre>
<p>But I am getting this:</p>
<pre><code>vend_id line_amt name
001 5.95 Lowes
001 5.95 lowes
001 5.95 Lowes_ca
002 17 Bills
002 17 Bill's
002 25 Bills
002 25 Bill's
002 40 Bills
002 40 Bill's
003 4.35 Two Alphas
003 3.75 Two Alphas
003 10 Two Alphas
004 23 Apple Cartz
004 56 Apple Cartz
004 80 Apple Cartz
</code></pre>
<p>So I'm trying the code below to join on a selectable in sqlalchemy core, but I am getting a Not an executable clause error. I can't use the ORM because of the way the db is set up. Is there a way to alter this code or a better solution that I am not thinking of?</p>
<pre><code>conn = engine.connect()
a = select([vendors.c.vend_id.label('vend_id'),
func.min(vendors.c.name).label('name')]).group_by(vendors.c.vend_id).alias('a')
s = select([
invoices.c.vend_id.label('vendor'),
invoices.c.line_amt.label('amount'),
]).join(a, a.c.vend_id == invoices.c.vend_id)
p = conn.execute(s)
</code></pre>
| 0 | 2016-09-01T23:18:10Z | 39,298,116 | <p>First joining the invoices table to the aliased table will work. I needed to use <code>.select_from</code> in order to complete the join. This is the code that works:</p>
<pre><code>conn = engine.connect()
a = select([vendors.c.vend_id.label('vend_id'),
            func.min(vendors.c.name).label('name')]).group_by(vendors.c.vend_id).alias('a')
j = invoices.join(a, a.c.vend_id == invoices.c.vend_id)
s = select([
    invoices.c.vend_id.label('vendor'),
    a.c.name.label('name'),
    invoices.c.line_amt.label('amount'),
]).select_from(j)
p = conn.execute(s)
</code></pre>
| 0 | 2016-09-02T17:45:20Z | [
"python",
"join",
"sqlalchemy",
"core"
] |
Simplest solution to use Python to drive an Applescript script | 39,282,231 | <p>Since Apple has already integrated a full UI automation tool like Applescript (ancient, yes...ugly, yes; but can't find anything better), I was wondering if I can run a Applescript script, inside Python unit test class.</p>
<p>I did find an abandoned project that was integrating AS and Python, but I would like to use something that is stable and reliable; and most of all, easy to implement.</p>
<p>I did look at pyObjC and it is quite a pain to deal with (I know basically nothing about Objective-C), so the last resort that I did try, is to use AS, but I have no way to get results about actions, unless I use something like Python. Unless there is an easier way, that I do not know</p>
| 0 | 2016-09-01T23:21:09Z | 39,302,610 | <p><a href="https://pypi.python.org/pypi/py-applescript" rel="nofollow">py-applescript</a> provides an easy-to-use wrapper around PyObjC and <code>NSAppleScript</code> for calling AS handlers and converting Python arguments and results to and from their AS equivalents automatically. (Strictly speaking I no longer support py-appscript either - as with py-appscript I'm no longer willing to throw good time after bad - but the py-applescript code is simple enough that any Python user can figure out how to fix or improve it for herself if necessary.) </p>
<p>Alternatively, you can avoid the extra module dependency by using both PyObjC and AppleScriptObjC to <a href="http://appscript.sourceforge.net/asoc.html" rel="nofollow">bridge all the way from Python to AppleScript</a>, though it does require some basic understanding of both ObjC bridges to do it.</p>
<p>The only other option for doing Apple event automation in Python is to use OS X's crappy Scripting Bridge framework via PyObjC, but since SB is riddled with a ton of incomprehensible bugs, defects, and omissions of its own, that's about the last thing you want to inject into your test scripts.</p>
<p>...</p>
<p>OTOH, if you're <em>only</em> doing graphical UI testing, then calling OS X's low-level Accessibility APIs via System Events.app via AppleScript via Python is arguably overkill anyway, and you may want to look around for existing Python GUI automation libraries that just wrap OS X's Accessibility APIs directly. e.g. <a href="https://gist.github.com/diyan/f3c24653e63c24c99137" rel="nofollow">Here's a useful-looking list</a>, although I can't vouch for its accuracy or completeness, or the quality of the libraries it links to.</p>
| 0 | 2016-09-03T02:07:53Z | [
"python",
"applescript"
] |
dictionary comprehension derived from existing lists | 39,282,241 | <p>As part of a bigger exercise, I am trying to construct a dictionary based on inputs from smaller lists, but am struggling with an embedded iteration. Suppose I have the following illustrative example:</p>
<pre><code>cities = ['newyork','london','tokyo','paris']
citypairs = [i for i in it.combinations(cities,2)]
airlines = ['delta', 'united']
</code></pre>
<p>I want to construct a dictionary of dictionaries, "info", whose keys are the combinations of cities above + each airline (so 12 total keys), and each of those keys contains a "city1" and a "city2" key that is populated using the citypairs list. I am trying something like:</p>
<pre><code>info = {
'{city1}/{city2} {airline}'.format(city1=city1, city2=city2, airline=airline): {
"city1": city1, "city2": city2
for city1, city2 in citypairs
}
for city1, city2 in citypairs
for airline in airlines
}
</code></pre>
<p>but am receiving an invalid syntax error. Just to more clearly illustrate what I am after, if instead of the above attempt I do:</p>
<pre><code>info = {
'{city1}/{city2} {airline}'.format(city1=city1, city2=city2, airline=airline): {
"city1": "whatever", "city2": "whatever"
}
for city1, city2 in citypairs
for airline in airlines
}
</code></pre>
<p>then this will run and simply create dummy values of 'whatever' for city1 and city2 for each key in "info".</p>
<p>This example might seem silly or overly complicated, but the heart of my question is how I can iterate through a list of tuples to populate city1 and city2 in this example - I am after this because the real-life project I am working on would need this sort of flexibility.</p>
| 0 | 2016-09-01T23:22:25Z | 39,282,490 | <p>My belief is that the solution is simpler than you're making it:</p>
<pre><code>from itertools import combinations
cities = ['newyork','london','tokyo','paris']
citypairs = combinations(cities, 2)
airlines = ['delta', 'united']
info = {'{city1}/{city2} {airline}'.format(city1=city1, city2=city2, airline=airline): {"city1": city1, "city2": city2}
for city1, city2 in citypairs
for airline in airlines
}
print(info)
print()
print(info["newyork/london delta"]["city1"])
</code></pre>
<p><strong>OUTPUT</strong></p>
<pre><code>{'london/tokyo delta': {'city1': 'london', 'city2': 'tokyo'},
 'newyork/london delta': {'city1': 'newyork', 'city2': 'london'},
 'london/paris delta': {'city1': 'london', 'city2': 'paris'},
 'london/tokyo united': {'city1': 'london', 'city2': 'tokyo'},
 'tokyo/paris united': {'city1': 'tokyo', 'city2': 'paris'},
 'newyork/paris delta': {'city1': 'newyork', 'city2': 'paris'},
 'tokyo/paris delta': {'city1': 'tokyo', 'city2': 'paris'},
 'newyork/paris united': {'city1': 'newyork', 'city2': 'paris'},
 'london/paris united': {'city1': 'london', 'city2': 'paris'},
 'newyork/london united': {'city1': 'newyork', 'city2': 'london'},
 'newyork/tokyo delta': {'city1': 'newyork', 'city2': 'tokyo'},
 'newyork/tokyo united': {'city1': 'newyork', 'city2': 'tokyo'}}

newyork
</code></pre>
| 1 | 2016-09-01T23:57:19Z | [
"python",
"list",
"python-2.7",
"dictionary"
] |
dictionary comprehension derived from existing lists | 39,282,241 | <p>As part of a bigger exercise, I am trying to construct a dictionary based on inputs from smaller lists, but am struggling with an embedded iteration. Suppose I have the following illustrative example:</p>
<pre><code>cities = ['newyork','london','tokyo','paris']
citypairs = [i for i in it.combinations(cities,2)]
airlines = ['delta', 'united']
</code></pre>
<p>I want to construct a dictionary of dictionaries, "info", whose keys are the combinations of cities above + each airline (so 12 total keys), and each of those keys contains a "city1" and a "city2" key that is populated using the citypairs list. I am trying something like:</p>
<pre><code>info = {
'{city1}/{city2} {airline}'.format(city1=city1, city2=city2, airline=airline): {
"city1": city1, "city2": city2
for city1, city2 in citypairs
}
for city1, city2 in citypairs
for airline in airlines
}
</code></pre>
<p>but am receiving an invalid syntax error. Just to more clearly illustrate what I am after, if instead of the above attempt I do:</p>
<pre><code>info = {
'{city1}/{city2} {airline}'.format(city1=city1, city2=city2, airline=airline): {
"city1": "whatever", "city2": "whatever"
}
for city1, city2 in citypairs
for airline in airlines
}
</code></pre>
<p>then this will run and simply create dummy values of 'whatever' for city1 and city2 for each key in "info".</p>
<p>This example might seem silly or overly complicated, but the heart of my question is how I can iterate through a list of tuples to populate city1 and city2 in this example - I am after this because the real-life project I am working on would need this sort of flexibility.</p>
| 0 | 2016-09-01T23:22:25Z | 39,282,505 | <pre class="lang-py prettyprint-override"><code>info = {
'{city1}/{city2} {airline}'.format(**vars()) :
{ "city1": city1, "city2": city2 }
for city1, city2 in citypairs
for airline in airlines
}
</code></pre>
<p>yields:</p>
<pre><code>{'london/paris delta': {'city1': 'london', 'city2': 'paris'},
'london/paris united': {'city1': 'london', 'city2': 'paris'},
'london/tokyo delta': {'city1': 'london', 'city2': 'tokyo'},
'london/tokyo united': {'city1': 'london', 'city2': 'tokyo'},
'newyork/london delta': {'city1': 'newyork', 'city2': 'london'},
'newyork/london united': {'city1': 'newyork', 'city2': 'london'},
'newyork/paris delta': {'city1': 'newyork', 'city2': 'paris'},
'newyork/paris united': {'city1': 'newyork', 'city2': 'paris'},
'newyork/tokyo delta': {'city1': 'newyork', 'city2': 'tokyo'},
'newyork/tokyo united': {'city1': 'newyork', 'city2': 'tokyo'},
'tokyo/paris delta': {'city1': 'tokyo', 'city2': 'paris'},
'tokyo/paris united': {'city1': 'tokyo', 'city2': 'paris'}}
</code></pre>
<p>Here I've used a terser form of <code>format()</code> that looks at local variables to find content for the format strings.</p>
<p>You were very close to what you seemed to be asking for. Not sure why you filled in the values for your second-level <code>dict</code>s with <code>"whatever"</code>, since you had already generated the <code>city1</code> and <code>city2</code> variables in the comprehension. Just using those completes the circuit.</p>
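<p>For completeness, here is a sketch of the fix applied directly to the question's version: the syntax error came from the extra <code>for</code> clause nested inside the inner dict literal. The loop variables are already bound by the outer comprehension, so the inner clause can simply be removed:</p>

```python
import itertools as it

cities = ['newyork', 'london', 'tokyo', 'paris']
citypairs = list(it.combinations(cities, 2))
airlines = ['delta', 'united']

# The inner dict is a plain literal; both `for` clauses belong to the
# outer comprehension, so city1/city2/airline are in scope for the value.
info = {
    '{}/{} {}'.format(city1, city2, airline): {'city1': city1, 'city2': city2}
    for city1, city2 in citypairs
    for airline in airlines
}

print(len(info))  # 12: 6 city pairs x 2 airlines
```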
| 1 | 2016-09-01T23:59:06Z | [
"python",
"list",
"python-2.7",
"dictionary"
] |
Couldn't import Django error when I try to startapp | 39,282,309 | <p>I usually work on PC's but started to work on projects on my mac. I run Python 3 and when I started a new project I did the following:</p>
<p>1) In main project folder, installed virtualenv and activated it.</p>
<p>2) Install Django and Gunicorn</p>
<p>3) Did startproject</p>
<p>When I try to python3 manage.py startapp www I get an error that Django could not be imported. Below is what was in the terminal:</p>
<pre><code>(venv) AB:directory AB$ pip freeze
Django==1.10
gunicorn==19.6.0
(venv) AB:directory AB$ ls
directory manage.py
(venv) AB:directory AB$ python3 manage.py startpap www
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named 'django'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 14, in <module>
import django
ImportError: No module named 'django'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 17, in <module>
"Couldn't import Django. Are you sure it's installed and "
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
</code></pre>
| 0 | 2016-09-01T23:33:07Z | 39,943,854 | <p>I had the same problem. Make sure you have activated the virtualenv, since once you close the terminal it is no longer activated:</p>
<p>On macOS/Linux (as in the question): <code>source venv/bin/activate</code>. On Windows: <code>env\Scripts\activate</code> in cmd.</p>
<p>The prompt should then be prefixed with the environment name, e.g. <code>(venv) AB:directory AB$</code></p>
<p>Now you can type: <code>python manage.py runserver</code></p>
| 0 | 2016-10-09T12:54:38Z | [
"python",
"django",
"python-3.x"
] |
Formatting Django TimeField in a Django view | 39,282,352 | <p>How do you format a TimeField in a Django view?</p>
<p>Currently in my django html template I can easily do something like: <code>{{movie.start_time|time:"g:iA"|lower}}</code></p>
<p>How can I do the equivalent of the above in a Django view?</p>
| 1 | 2016-09-01T23:39:49Z | 39,282,424 | <p>Use the Python <code>strftime()</code> function. <a href="https://docs.python.org/2/library/time.html#time.strftime" rel="nofollow">https://docs.python.org/2/library/time.html#time.strftime</a></p>
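<p>A short sketch of what that looks like for the template filter in the question (a fixed <code>datetime.time</code> stands in for <code>movie.start_time</code>; <code>lower()</code> and <code>lstrip('0')</code> approximate the <code>g:iA|lower</code> formatting):</p>

```python
import datetime

start_time = datetime.time(19, 30)  # stand-in for movie.start_time

# "%I:%M%p" -> "07:30PM"; lower() and lstrip('0') give "7:30pm",
# matching the g:iA|lower template output.
formatted = start_time.strftime("%I:%M%p").lower().lstrip("0")
print(formatted)  # 7:30pm
```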
| 2 | 2016-09-01T23:49:36Z | [
"python",
"django"
] |
Formatting Django TimeField in a Django view | 39,282,352 | <p>How do you format a TimeField in a Django view?</p>
<p>Currently in my django html template I can easily do something like: <code>{{movie.start_time|time:"g:iA"|lower}}</code></p>
<p>How can I do the equivalent of the above in a Django view?</p>
| 1 | 2016-09-01T23:39:49Z | 39,299,659 | <p>Below is an example of how to achieve this. It isn't ideal, but this is what I came up with.</p>
<pre><code>import datetime
now = datetime.datetime.now()
time = now.time()
t = time.strftime("%I:%M %p")
t = t.lower()
t = list(t)
if t[0] == '0':
t.pop(0)
t = ''.join(t)
print t
</code></pre>
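<p>A terser alternative sketch: the manual list/pop/join dance above can be replaced with <code>lstrip('0')</code>, which drops the leading zero from the hour:</p>

```python
import datetime

now = datetime.datetime.now()
# lstrip('0') removes the zero-padding that %I adds for hours 1-9.
t = now.strftime("%I:%M %p").lower().lstrip("0")
print(t)  # e.g. "7:05 pm"
```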
| 0 | 2016-09-02T19:40:33Z | [
"python",
"django"
] |
Pandas Group to Divide by Max | 39,282,355 | <p>I'm trying to normalize a user count by dividing by the max users in each group. I'm able to get the results to calculate (commented out the print that works), but I'm having trouble getting the results to save back to the original table. The code below doesn't throw an error, but it also doesn't add any data to weeklyPerson:</p>
<pre><code>weeklyPersonGroups=weeklyPerson.groupby('Person')
PersonMax=weeklyPersonGroups['users'].max()
for name, group in weeklyPersonGroups:
#print(weeklyPerson[weeklyPerson['Person']==name]['users']/PersonMax[name])
weeklyPerson[weeklyPerson['Person']==name]['usersNorm']=weeklyPerson[weeklyPerson['Person']==name]['users']/PersonMax[name]
</code></pre>
| 1 | 2016-09-01T23:40:16Z | 39,282,362 | <p>Use <code>groupby</code> and <code>transform</code></p>
<pre><code>weeklyPerson.groupby('Person').users.transform(lambda x: x / x.max())
</code></pre>
<p>Per @Jeff's suggestion</p>
<pre><code>weeklyPerson.users / weeklyPerson.groupby('Person').users.transform(np.max)
</code></pre>
<p>This avoids using <code>lambda</code> when it isn't necessary.</p>
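<p>To actually save the normalized values back onto the frame (the part the question was stuck on), assign the transformed column directly instead of mutating a filtered copy. A sketch with made-up data, assuming the column names from the question:</p>

```python
import pandas as pd

weeklyPerson = pd.DataFrame({
    'Person': ['alice', 'alice', 'bob', 'bob'],
    'users':  [2, 4, 3, 6],
})

# Assigning a whole column avoids the chained-indexing problem in the
# original loop, which wrote to a temporary copy of the DataFrame.
weeklyPerson['usersNorm'] = (
    weeklyPerson['users']
    / weeklyPerson.groupby('Person')['users'].transform('max')
)

print(weeklyPerson['usersNorm'].tolist())  # [0.5, 1.0, 0.5, 1.0]
```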
| 1 | 2016-09-01T23:41:24Z | [
"python",
"pandas"
] |
How to install polyglot on mac? | 39,282,400 | <p>When following the <a href="http://polyglot.readthedocs.io/en/latest/Installation.html" rel="nofollow">instructions</a> I get the following error message:</p>
<pre><code>Failed building wheel for PyICU
</code></pre>
<p>One of the dependencies is missing. However the module <code>PyICU</code> cannot be installed with homebrew (i.e. <code>brew install PyICU</code>).</p>
| 0 | 2016-09-01T23:46:32Z | 39,282,401 | <p>You can install the underlying <code>icu4c</code> C library with Homebrew first; <code>PyICU</code> then builds against it.</p>
<p>Follow these steps: <a href="http://stackoverflow.com/a/33352241/1053612">http://stackoverflow.com/a/33352241/1053612</a>
(May require installing python between steps 1 and 2, i.e. <code>brew install python</code>).</p>
| 0 | 2016-09-01T23:46:32Z | [
"python"
] |
Confused about looping through list of people in Python | 39,282,409 | <p>I'm having issues with an exercise that is asking me to loop through my list of people and print everything that I know about each person while using a dictionary for each person. I'm trying to start out by just getting Python to accept and loop through my dictionaries, but whenever I try to run my code, I get an error message stating: "Value Error: too many values to unpack. (expected 2)"</p>
<pre><code>dictionaries_v = {
'first_name': 'victor',
'last_name': 'croc',
'age': 21,
'city': 'new york',
}
dictionaries_c = {
'first_name': 'charmy',
'last_name': 'bee',
'age': 8,
'city': 'new york',
}
dictionaries_e = {
'first_name': 'espio',
'last_name': 'armadilo',
'age': 15,
'city': 'new york',
}
people = ['dictionaries_v', 'dictionaries_c', 'dictionaries_e']
for key, value in people:
print( "\n" + key + ": " + value)
</code></pre>
<p>Thank you for your time.</p>
| 0 | 2016-09-01T23:47:28Z | 39,282,438 | <p><code>people</code> is a list. You can only get one value at a time from a list. However, its <em>elements</em> are dictionaries. So, the following loop would work:</p>
<pre><code>for element in people:
    for key, value in element.items():
        print("\n" + key + ": " + str(value))
</code></pre>
<p>The outer loop goes through each dictionary in <code>people</code>, and sets <code>element</code> to that dictionary. The inner loop goes through that dictionary's <code>items()</code>, setting <code>key</code> to each key and <code>value</code> to its value. Note that iterating a dictionary directly yields only its keys, and the integer <code>age</code> must be converted with <code>str()</code> before string concatenation.</p>
<p>Also, you set <code>people</code> to <code>['dictionaries_v', 'dictionaries_c', 'dictionaries_e']</code>. Instead of setting each of its elements to a string, set it to the variables themselves: <code>[dictionaries_v, dictionaries_c, dictionaries_e]</code>.</p>
| 0 | 2016-09-01T23:51:32Z | [
"python"
] |
Confused about looping through list of people in Python | 39,282,409 | <p>I'm having issues with an exercise that is asking me to loop through my list of people and print everything that I know about each person while using a dictionary for each person. I'm trying to start out by just getting Python to accept and loop through my dictionaries, but whenever I try to run my code, I get an error message stating: "Value Error: too many values to unpack. (expected 2)"</p>
<pre><code>dictionaries_v = {
'first_name': 'victor',
'last_name': 'croc',
'age': 21,
'city': 'new york',
}
dictionaries_c = {
'first_name': 'charmy',
'last_name': 'bee',
'age': 8,
'city': 'new york',
}
dictionaries_e = {
'first_name': 'espio',
'last_name': 'armadilo',
'age': 15,
'city': 'new york',
}
people = ['dictionaries_v', 'dictionaries_c', 'dictionaries_e']
for key, value in people:
print( "\n" + key + ": " + value)
</code></pre>
<p>Thank you for your time.</p>
| 0 | 2016-09-01T23:47:28Z | 39,282,444 | <p>You are adding strings to the list. Use the variable names themselves:</p>
<pre><code>people = [dictionaries_v, dictionaries_c, dictionaries_e]
</code></pre>
<p>Edit:
Adding more code that fixes the problem of unpacking below.</p>
<pre><code>for person in people:
    for key, value in person.items():
        print key, ":", value
</code></pre>
| 0 | 2016-09-01T23:52:25Z | [
"python"
] |
Confused about looping through list of people in Python | 39,282,409 | <p>I'm having issues with an exercise that is asking me to loop through my list of people and print everything that I know about each person while using a dictionary for each person. I'm trying to start out by just getting Python to accept and loop through my dictionaries, but whenever I try to run my code, I get an error message stating: "Value Error: too many values to unpack. (expected 2)"</p>
<pre><code>dictionaries_v = {
'first_name': 'victor',
'last_name': 'croc',
'age': 21,
'city': 'new york',
}
dictionaries_c = {
'first_name': 'charmy',
'last_name': 'bee',
'age': 8,
'city': 'new york',
}
dictionaries_e = {
'first_name': 'espio',
'last_name': 'armadilo',
'age': 15,
'city': 'new york',
}
people = ['dictionaries_v', 'dictionaries_c', 'dictionaries_e']
for key, value in people:
print( "\n" + key + ": " + value)
</code></pre>
<p>Thank you for your time.</p>
| 0 | 2016-09-01T23:47:28Z | 39,282,447 | <p>Your first problem is that your list is holding three <em>strings</em>. Notice how you wrapped quotes around it. So when you iterate over your list, you will most definitely not be getting the dictionary you expect. </p>
<p>Secondly, <code>people</code> is a <code>list</code>. So, when you iterate your list, your iterator will hold the dictionary through each iteration. </p>
<p>Knowing this, you simply need to iterate over your list, like any other list: </p>
<pre><code>people = [dictionaries_v, dictionaries_c, dictionaries_e]
for d in people:
print(d)
</code></pre>
<p>Your output will look like:</p>
<pre><code>{'last_name': 'croc', 'age': 21, 'first_name': 'victor', 'city': 'new york'}
{'last_name': 'bee', 'age': 8, 'first_name': 'charmy', 'city': 'new york'}
{'last_name': 'armadilo', 'age': 15, 'first_name': 'espio', 'city': 'new york'}
</code></pre>
<p>To get specific information from your dictionary, you just use the key in each iteration. Simple example:</p>
<pre><code>for d in people:
print(d['last_name'])
</code></pre>
| 3 | 2016-09-01T23:52:38Z | [
"python"
] |
Equal Average Partition DP using Python 2 VS Python 3 | 39,282,416 | <p>I was using memoization to try to solve the <a href="https://www.interviewbit.com/problems/equal-average-partition/" rel="nofollow">Equal Average Partition Problem</a>; somehow the solution that I came up with takes a long time to solve the problem in <a href="https://repl.it/DJdT" rel="nofollow">Python 2.x</a> but is relatively fast in <a href="https://repl.it/DJci" rel="nofollow">Python 3.x</a>.
I'm wondering whether anyone has encountered something similar, and what the reasons behind it are. Thanks </p>
<pre><code>def avgset(A):
if len(A) <= 1: return []
A.sort()
A = tuple(A)
idx = 0
curSum = 0
curSize = 0
dic = {}
length = len(A)
avg = sum(A)/float(length)
minAry = sorted(recursive(A, idx, curSum, curSize, avg, dic), key=lambda x:len(x))[0]
A = list(A)
for itm in minAry: A.remove(itm)
return [minAry, A]
def recursive(A, idx, curSum, curSize, avg, dic):
if idx > len(A)-1: return None
if (idx, curSum, curSize) in dic.keys(): return dic[(idx, curSum, curSize)]
if (curSum+A[idx])/ float(curSize+1) == avg:
return [[A[idx]]]
res1 = recursive(A, idx+1, curSum+A[idx], curSize+1, avg, dic) or []
res2 = recursive(A, idx+1, curSum, curSize, avg, dic) or []
res3 = []
for itm in res1:
tmp = [A[idx]]+itm
if tmp not in res3:
res3.append(tmp)
for itm in res2:
if itm not in res3:
res3.append(itm)
dic[(idx, curSum, curSize)] = res3
return dic[(idx, curSum, curSize)]
A = [ 28, 10, 2, 44, 33, 31, 39, 46, 1, 24, 32, 31, 28, 9, 13, 40, 46, 1, 16, 18, 39, 13, 48, 5 ]
print (avgset(A))
</code></pre>
| 0 | 2016-09-01T23:48:08Z | 39,319,913 | <p>There is only one difference between python 2 and 3 that you are using. In line</p>
<pre><code>if (idx, curSum, curSize) in dic.keys(): return dic[(idx, curSum, curSize)]
</code></pre>
<p>In Python 2, <code>keys()</code> returns a list of the dict's keys, so a membership test scans that list in O(n); in Python 3 it returns a set-like view object (the closest Python 2 equivalent was <code>viewkeys()</code>). Note that calling <code>keys()</code> is not needed at all, since the <code>in</code> operator applied to the dict itself already tests key membership. So the code </p>
<pre><code>if (idx, curSum, curSize) in dic: return dic[(idx, curSum, curSize)]
</code></pre>
<p>works with same speed in 2 and 3.</p>
<p>So the speedup in Python 3 is not a general optimization of iteration: membership tests on a keys view are hash lookups, just as for a <code>dict</code> or <code>set</code>, rather than a linear scan through a list.</p>
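<p>A quick sketch that makes the cost difference concrete by simulating Python 2's behaviour (materializing the keys into a list) against a direct dict membership test:</p>

```python
import timeit

d = {(i, i, i): i for i in range(20000)}
missing = (-1, -1, -1)

# list(d.keys()) mimics Python 2's keys(): build a list, then scan it.
t_list = timeit.timeit(lambda: missing in list(d.keys()), number=100)
# Membership on the dict itself is a single hash lookup.
t_dict = timeit.timeit(lambda: missing in d, number=100)

print(t_list > t_dict)  # True: the linear scan is far slower
```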
| 1 | 2016-09-04T18:00:45Z | [
"python",
"dynamic-programming"
] |
Equal Average Partition DP using Python 2 VS Python 3 | 39,282,416 | <p>I was using memoization to try to solve the <a href="https://www.interviewbit.com/problems/equal-average-partition/" rel="nofollow">Equal Average Partition Problem</a>; somehow the solution that I came up with takes a long time to solve the problem in <a href="https://repl.it/DJdT" rel="nofollow">Python 2.x</a> but is relatively fast in <a href="https://repl.it/DJci" rel="nofollow">Python 3.x</a>.
I'm wondering whether anyone has encountered something similar, and what the reasons behind it are. Thanks </p>
<pre><code>def avgset(A):
if len(A) <= 1: return []
A.sort()
A = tuple(A)
idx = 0
curSum = 0
curSize = 0
dic = {}
length = len(A)
avg = sum(A)/float(length)
minAry = sorted(recursive(A, idx, curSum, curSize, avg, dic), key=lambda x:len(x))[0]
A = list(A)
for itm in minAry: A.remove(itm)
return [minAry, A]
def recursive(A, idx, curSum, curSize, avg, dic):
if idx > len(A)-1: return None
if (idx, curSum, curSize) in dic.keys(): return dic[(idx, curSum, curSize)]
if (curSum+A[idx])/ float(curSize+1) == avg:
return [[A[idx]]]
res1 = recursive(A, idx+1, curSum+A[idx], curSize+1, avg, dic) or []
res2 = recursive(A, idx+1, curSum, curSize, avg, dic) or []
res3 = []
for itm in res1:
tmp = [A[idx]]+itm
if tmp not in res3:
res3.append(tmp)
for itm in res2:
if itm not in res3:
res3.append(itm)
dic[(idx, curSum, curSize)] = res3
return dic[(idx, curSum, curSize)]
A = [ 28, 10, 2, 44, 33, 31, 39, 46, 1, 24, 32, 31, 28, 9, 13, 40, 46, 1, 16, 18, 39, 13, 48, 5 ]
print (avgset(A))
</code></pre>
| 0 | 2016-09-01T23:48:08Z | 39,320,016 | <p><code>something in dic.keys()</code> will be O(n) in Python 2 (membership test on a list) and O(1) in Python 3 (membership test on a set-like view). This accounts for the observed difference in performance.</p>
<p>On <a href="https://docs.python.org/3/library/stdtypes.html#dictionary-view-objects" rel="nofollow">dictionary view objects</a>:</p>
<blockquote>
<p>Keys views are set-like since their entries are unique and hashable.
If all values are hashable, so that (key, value) pairs are unique and
hashable, then the items view is also set-like. (Values views are not
treated as set-like since the entries are generally not unique.) For
set-like views, all of the operations defined for the abstract base
class collections.abc.Set are available (for example, ==, <, or ^).</p>
</blockquote>
<p>Consider using <code>something in dic</code>, which is O(1) in both Python 2.x and 3.x and is basically equivalent (unless you modify dict on the fly and want to freeze state of dict keys before modifying it)</p>
| 0 | 2016-09-04T18:10:54Z | [
"python",
"dynamic-programming"
] |
Is there a way to get the argument of argparse in the order in which they were defined? | 39,282,429 | <p>I'd like to print all options of the program and they are grouped for readability. However when accessing the arguments via <code>vars(args)</code>, the order is random.</p>
| 0 | 2016-09-01T23:50:25Z | 39,282,787 | <p><code>argparse</code> parses the list of arguments in <code>sys.argv[1:]</code> (<code>sys.argv[0]</code> is used as the <code>prog</code> value in <code>usage</code>).</p>
<p><code>args=parser.parse_args()</code> returns an <code>argparse.Namespace</code> object. <code>vars(args)</code> returns a dictionary based on this object (<code>args.__dict__</code>). Keys of a dictionary are unordered. <code>print(args)</code> also uses this dictionary order.</p>
<p>The parser keeps a record of seen-actions for its own bookkeeping purposes. But it is not exposed to the user, and is an unordered <code>set</code>. I can imagine defining a custom <code>Action</code> subclass that recorded the order in which its instances were used.</p>
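<p>For instance, such a subclass might look like this (a hypothetical sketch, not part of the <code>argparse</code> API; the <code>_order</code> attribute name is made up):</p>

```python
import argparse

class OrderedAction(argparse.Action):
    """Store the value and record the order options appear on the command line."""
    def __call__(self, parser, namespace, values, option_string=None):
        order = getattr(namespace, '_order', None)
        if order is None:
            order = []
            setattr(namespace, '_order', order)
        order.append(self.dest)
        setattr(namespace, self.dest, values)

parser = argparse.ArgumentParser()
parser.add_argument('--bar', action=OrderedAction)
parser.add_argument('--baz', action=OrderedAction)

args = parser.parse_args(['--baz', 'two', '--bar', 'one'])
print(args._order)  # ['baz', 'bar']
```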
<p>====================</p>
<p>It is possible to retrieve arguments in the order in which they were defined when creating the parser. That's because the <code>parser</code> has a <code>_actions</code> list of all the <code>Actions</code>. It's not part of the public API, but a basic attribute and unlikely to ever disappear.</p>
<p>To illustrate:</p>
<pre><code>In [622]: parser=argparse.ArgumentParser()
In [623]: parser.add_argument('foo')
In [624]: parser.add_argument('--bar')
In [625]: parser.add_argument('--baz')
In [626]: parser.print_help()
usage: ipython3 [-h] [--bar BAR] [--baz BAZ] foo
positional arguments:
foo
optional arguments:
-h, --help show this help message and exit
--bar BAR
--baz BAZ
</code></pre>
<p>The usage and help listings show the arguments in the order that they are defined, except that <code>positionals</code> and <code>optionals</code> are separated.</p>
<pre><code>In [627]: args=parser.parse_args(['--bar','one','foobar'])
In [628]: args
Out[628]: Namespace(bar='one', baz=None, foo='foobar')
In [629]: vars(args)
Out[629]: {'bar': 'one', 'baz': None, 'foo': 'foobar'}
In [631]: [(action.dest, getattr(args,action.dest, '***')) for action in parser._actions]
Out[631]: [('help', '***'), ('foo', 'foobar'), ('bar', 'one'), ('baz', None)]
</code></pre>
<p>Here I iterate on the <code>_actions</code> list, get the <code>dest</code> for each <code>Action</code>, and fetch that value from the <code>args</code> namespace. I could have fetched it from the <code>vars(args)</code> dictionary just as well.</p>
<p>I had to give <code>getattr</code> a default <code>***</code>, because the <code>help</code> action does not appear in the namespace. I could have filtered that sort of action out of the display.</p>
| 2 | 2016-09-02T00:44:08Z | [
"python",
"argparse"
] |
How do I format a string to use for a Method in Python? | 39,282,464 | <p>When trying to run the listPersons() command, every Person/Instance should call the sayHello() method. But as the names are str, it would raise an AttributeError (see below).</p>
<p>How do I format the names so I can use the methods on them?</p>
<pre><code>class person:
def __init__ (self, name):
self.name = name
def sayHello(self):
print("Hello World, I'm", self.name)
def listPersons():
print ("There are", len(names), "persons here, please everybody say hello to the world!")
for name in names:
print(name.sayHello())
names = ["Tobias", "Lukas", "Alex", "Hannah"]
for name in names:
globals()[name] = person(name)
</code></pre>
<p>AttributeError:</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#97>", line 1, in <module>
listPersons()
File "/Users/user/Desktop/test.py", line 12, in listPersons
print(name.sayHello())
AttributeError: 'str' object has no attribute 'sayHello'
</code></pre>
<p>Thank you so much for your help! :-)</p>
| 0 | 2016-09-01T23:54:10Z | 39,282,666 | <p>You're getting this error because the <code>names</code> list is a list of strings, not the person objects you create. Since you are using <code>globals()</code>, each person is being assigned to a variable in the global scope. Rather than using <code>globals()</code>, I would suggest keeping a list of people. </p>
<p>Another problem you will run into is that you are trying to print the output of person.sayHello, but that does not return anything. You could just call the function instead. </p>
<p>Both of these changes together:</p>
<pre><code>class person:
def __init__ (self, name):
self.name = name
def sayHello(self):
print("Hello World, I'm", self.name)
def listPersons():
print ("There are", len(people), "persons here, please everybody say hello to the world!")
    for p in people:
        p.sayHello()
names = ["Tobias", "Lukas", "Alex", "Hannah"]
people = []
for name in names:
people.append(person(name))
</code></pre>
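<p>As a side note, the append loop can be written as a list comprehension; a condensed sketch of the corrected flow:</p>

```python
class person:
    def __init__(self, name):
        self.name = name

    def sayHello(self):
        print("Hello World, I'm", self.name)

names = ["Tobias", "Lukas", "Alex", "Hannah"]
# Build the list of person objects in one expression.
people = [person(n) for n in names]

for p in people:
    p.sayHello()
```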
| 2 | 2016-09-02T00:24:48Z | [
"python",
"methods",
"format"
] |
How to use css styling in html with multiple directories? | 39,282,466 | <p>I'm trying to make a simple webapp using Google Appengine with Python, HTML, and CSS. I know that to put a .css styling from separate file into html one should use a link tag, however I can't seem to get it to work. Here is the general directory config:</p>
<p>Main<br>
├── app.yaml<br>
├── files.py<br>
├── Folder<br>
│   ├── files.py<br>
│   ├── Templates<br>
│   │   ├── form.html<br>
│   ├── Static<br>
│   │   ├── style.css<br></p>
<p>"form.html" contains the layout and "style.css" contains styling. I tried putting in the code from "style.css" directly into "form.html" with a style tag and it worked, however when I use the link tag in the head section of the html file it doesn't work. Here is what link tags I tried so far:<br></p>
<pre><code><head>
<link type="text/css" rel="stylesheet" href="/Static/style.css">
<title> ... </title>
</head>
</code></pre>
<p>or<br></p>
<pre><code><head>
<link type="text/css" rel="stylesheet" href="/Folder/Static/style.css">
<title> ... </title>
</head>
</code></pre>
<p>or<br></p>
<pre><code><head>
<link type="text/css" rel="stylesheet" href="/Main/Folder/Static/style.css">
<title> ... </title>
</head>
</code></pre>
<p>None of these work, what could be a solution?</p>
| 0 | 2016-09-01T23:54:16Z | 39,282,504 | <p>I'm not 100% sure from your file diagram, but I think you're not referencing the CSS file properly in the link tag.</p>
<p>If the two files are in these spots:</p>
<p>Main/Folder/Templates/form.html</p>
<p>Main/Folder/Static/style.css</p>
<p>Then in the HTML, your link tag will need to be</p>
<pre><code><link type="text/css" rel="stylesheet" href="../Static/style.css">
</code></pre>
<p>Because relative to the form.html file, you need to back up a directory to the 'Folder' dir before you add the Static path to the end there...</p>
<p>In any case, generally you don't need a slash at the start of the path containing the CSS unless you are entering the absolute path all the way from the root</p>
| 1 | 2016-09-01T23:58:57Z | [
"python",
"html",
"css",
"google-app-engine"
] |