title (string) | question_id (int64) | question_body (string) | question_score (int64) | question_date (string) | answer_id (int64) | answer_body (string) | answer_score (int64) | answer_date (string) | tags (list)
|---|---|---|---|---|---|---|---|---|---|
convert string to other type in python
| 39,417,539
|
<p>Hi everyone, I have a simple problem but I can't find the solution. I have a function that returns something like this:</p>
<pre><code>[[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01'], [115, 'adr', 0, 0, 1, '2016-04-05T14:00:01'], [226, 'adr', 0, 0, 1, '2016-04-05T15:00:01'], [337, 'adr', 0, 0, 1, '2016-04-05T16:00:01']]
</code></pre>
<p>When I check the type of this variable with <code>type(data)</code>, it says it is a string (<code>&lt;type 'str'&gt;</code>).
I want to create a loop to get each element like this:</p>
<p>item 1 <code>[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01']</code></p>
<p>item 2 <code>[115, 'adr', 0, 0, 1, '2016-04-05T14:00:01']</code></p>
<p>I tried to convert the string to a list, a tuple... but nothing works. Any idea how to change the string to a type that I can loop over to get the items?</p>
<p>When I try to convert it to a tuple or list I get this result:</p>
<pre><code>('[', '[', '4', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '3', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '1', '1', '5', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '4', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '2', '2', '6', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '5', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '3', '3', '7', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '6', ':', '0', '0', ':', '0', '1', "'", ']', ']')
</code></pre>
| 0
|
2016-09-09T18:17:38Z
| 39,417,609
|
<p>You might consider using <a href="https://docs.python.org/2/library/ast.html#ast.literal_eval" rel="nofollow">literal_eval from the ast</a> module.</p>
<pre><code>In [8]: from ast import literal_eval
In [9]: a = "[[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01'], [115, 'adr', 0, 0, 1, '2016-04-05T14:00:01'], [226, 'adr', 0, 0, 1, '2016-04-05T15:00:01'], [337, 'adr', 0, 0, 1, '2016-04-05T16:00:01']]"
In [10]: type(a)
Out[10]: str
In [11]: b = literal_eval(a)
In [12]: type(b)
Out[12]: list
In [13]: b
Out[13]:
[[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01'],
[115, 'adr', 0, 0, 1, '2016-04-05T14:00:01'],
[226, 'adr', 0, 0, 1, '2016-04-05T15:00:01'],
[337, 'adr', 0, 0, 1, '2016-04-05T16:00:01']]
</code></pre>
<p>Then you have a proper list and can easily iterate over it to get its elements.</p>
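<p>For example, once the string is parsed you can loop over it directly (a minimal sketch using the first two rows from the question):</p>

```python
# Parse the string once with literal_eval, then iterate over the rows
from ast import literal_eval

data = "[[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01'], [115, 'adr', 0, 0, 1, '2016-04-05T14:00:01']]"
rows = literal_eval(data)
for i, row in enumerate(rows, 1):
    print('item', i, row)
```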
| 2
|
2016-09-09T18:22:26Z
|
[
"python",
"python-2.7"
] |
convert string to other type in python
| 39,417,539
|
<p>Hi everyone, I have a simple problem but I can't find the solution. I have a function that returns something like this:</p>
<pre><code>[[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01'], [115, 'adr', 0, 0, 1, '2016-04-05T14:00:01'], [226, 'adr', 0, 0, 1, '2016-04-05T15:00:01'], [337, 'adr', 0, 0, 1, '2016-04-05T16:00:01']]
</code></pre>
<p>When I check the type of this variable with <code>type(data)</code>, it says it is a string (<code>&lt;type 'str'&gt;</code>).
I want to create a loop to get each element like this:</p>
<p>item 1 <code>[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01']</code></p>
<p>item 2 <code>[115, 'adr', 0, 0, 1, '2016-04-05T14:00:01']</code></p>
<p>I tried to convert the string to a list, a tuple... but nothing works. Any idea how to change the string to a type that I can loop over to get the items?</p>
<p>When I try to convert it to a tuple or list I get this result:</p>
<pre><code>('[', '[', '4', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '3', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '1', '1', '5', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '4', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '2', '2', '6', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '5', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '3', '3', '7', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '6', ':', '0', '0', ':', '0', '1', "'", ']', ']')
</code></pre>
| 0
|
2016-09-09T18:17:38Z
| 39,417,648
|
<p>You could use <a href="https://docs.python.org/3/library/ast.html#ast.literal%5Feval" rel="nofollow">ast.literal_eval</a>:</p>
<pre><code>import ast
myteststr = "[[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01'], [115, 'adr', 0, 0, 1, '2016-04-05T14:00:01'], [226, 'adr', 0, 0, 1, '2016-04-05T15:00:01'], [337, 'adr', 0, 0, 1, '2016-04-05T16:00:01']]"
pyobj = ast.literal_eval(myteststr)
print(pyobj)
</code></pre>
| 1
|
2016-09-09T18:24:51Z
|
[
"python",
"python-2.7"
] |
convert string to other type in python
| 39,417,539
|
<p>Hi everyone, I have a simple problem but I can't find the solution. I have a function that returns something like this:</p>
<pre><code>[[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01'], [115, 'adr', 0, 0, 1, '2016-04-05T14:00:01'], [226, 'adr', 0, 0, 1, '2016-04-05T15:00:01'], [337, 'adr', 0, 0, 1, '2016-04-05T16:00:01']]
</code></pre>
<p>When I check the type of this variable with <code>type(data)</code>, it says it is a string (<code>&lt;type 'str'&gt;</code>).
I want to create a loop to get each element like this:</p>
<p>item 1 <code>[4, 'adr', 0, 0, 1, '2016-04-05T13:00:01']</code></p>
<p>item 2 <code>[115, 'adr', 0, 0, 1, '2016-04-05T14:00:01']</code></p>
<p>I tried to convert the string to a list, a tuple... but nothing works. Any idea how to change the string to a type that I can loop over to get the items?</p>
<p>When I try to convert it to a tuple or list I get this result:</p>
<pre><code>('[', '[', '4', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '3', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '1', '1', '5', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '4', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '2', '2', '6', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '5', ':', '0', '0', ':', '0', '1', "'", ']', ',', ' ', '[', '3', '3', '7', ',', ' ', "'", 'a', 'd', 'r', "'", ',', ' ', '0', ',', ' ', '0', ',', ' ', '1', ',', ' ', "'", '2', '0', '1', '6', '-', '0', '4', '-', '0', '5', 'T', '1', '6', ':', '0', '0', ':', '0', '1', "'", ']', ']')
</code></pre>
| 0
|
2016-09-09T18:17:38Z
| 39,475,738
|
<p>Hi everyone, I finally resolved the problem like this:</p>
<pre><code>import dateutil.parser as dp  # assumed: dp was not defined in the original snippet

data = {'names': []}
for item in project_name:
    data['names'].append(item)
    data.update({item: {}})
    jobs_running = []
    jobs_pending = []
    for row in all_rows:
        if str(item) == row[1]:
            parsed_t = dp.parse(str(row[5]))
            t_in_seconds = parsed_t.strftime('%s')
            jobs_running.append([t_in_seconds, row[3]])
            jobs_pending.append([t_in_seconds, row[4]])
    data[item].update({'jobs_running': jobs_running})
    data[item].update({'jobs_pending': jobs_pending})
</code></pre>
<p>So my data structure is like this <a href="http://i.stack.imgur.com/aPANZ.png" rel="nofollow">see image</a></p>
| 0
|
2016-09-13T17:23:45Z
|
[
"python",
"python-2.7"
] |
Python Flask WTForms-Components PhoneNumberField Import error
| 39,417,571
|
<p>I am trying to use the PhoneNumberField from WTForms-Components. The official docs are here: <a href="https://wtforms-components.readthedocs.io/en/latest/#phonenumberfield" rel="nofollow">https://wtforms-components.readthedocs.io/en/latest/#phonenumberfield</a></p>
<p>This is what I am trying:</p>
<pre><code>from wtforms import Form
from sqlalchemy_utils import PhoneNumber
from wtforms_components import PhoneNumberField

class UserForm(Form):
    phone_number = PhoneNumberField(
        country_code='FI',
        display_format='national'
    )
</code></pre>
<p>What I have done so far is:</p>
<pre><code>sudo pip install Flask-Wtf --upgrade
sudo pip install Flask-Wtforms --upgrade
sudo pip install sqlalchemy-utils --upgrade
sudo pip install WTForms-Components --upgrade
</code></pre>
<p>Does this library even still work?
I get this error:</p>
<pre><code>from wtforms_components import PhoneNumberField
ImportError: cannot import name PhoneNumberField
</code></pre>
| -1
|
2016-09-09T18:19:58Z
| 39,676,032
|
<p>It looks as if <code>PhoneNumberField</code> was moved in WTForms-Components 0.10.0 to WTForms-Alchemy 0.15.0. Both packages have the same author. <a href="https://github.com/kvesteri/wtforms-components/issues/39#issuecomment-176649583" rel="nofollow">Here</a> is a GitHub issue that does a better job of explaining why it broke.</p>
<p>In short, change your import to this:</p>
<pre><code>from wtforms_alchemy import PhoneNumberField
</code></pre>
| 0
|
2016-09-24T11:52:51Z
|
[
"python",
"flask",
"wtforms",
"flask-wtforms"
] |
Reason and solution of "*** Error in `python': free(): corrupted unsorted chunks: 0x0000000000ff2460 ***" in Python
| 39,417,641
|
<p>I created a service in Python that connects to SQL Azure using pymssql and only runs SELECT queries. After a day (or a bit more), the connection begins to fail on queries and finally the service dies with the error <strong>*** Error in `python': free(): corrupted unsorted chunks: 0x0000000000ff2460 ***</strong>. I am not sure whether it is one error or several (maybe the first error provokes the others).</p>
<p>The connection code is here:</p>
<pre><code> connectionDb = pymssql.connect(host=self.HOST_DATA_BASE, user=self.USER_DATA_BASE, password=self.PASSWORD_DATA_BASE, database=self.DATA_BASE_NAME)
</code></pre>
<p>and I execute the query as follows:</p>
<pre><code>cursor = connectionDb.cursor()
cursor.execute("select * from vehicles")
rows = cursor.fetchall()
if rows is not None:
    return rows
</code></pre>
<p>Initially the connection is OK and works fine; the problem appears after a period of inactivity.</p>
<p>I tried to simplify the queries, but I don't believe that is the cause of the error.</p>
<p>Could this possibly be a bug in pymssql?</p>
| 0
|
2016-09-09T18:24:28Z
| 39,465,603
|
<p>@APRocha, it does not seem to be a bug in pymssql; it is an error from glibc when Python frees some malloc'd memory.</p>
<p>Did you close the connection after finishing the SQL operation for each request? If not, I suggest you do, or use SQLAlchemy with pymssql to manage the connections in a pool.</p>
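<p>The "close after every operation" pattern looks roughly like this (a sketch using the standard library's <code>sqlite3</code> as a stand-in driver; the same DB-API pattern applies to a pymssql connection):</p>

```python
from contextlib import closing
import sqlite3  # stand-in DB-API driver; swap in pymssql.connect(...) in the real service

def fetch_rows(conn_factory, query):
    # Open a fresh connection per request and guarantee it is closed afterwards
    with closing(conn_factory()) as conn:
        with closing(conn.cursor()) as cursor:
            cursor.execute(query)
            return cursor.fetchall()

rows = fetch_rows(lambda: sqlite3.connect(':memory:'), 'select 1, 2')
print(rows)  # [(1, 2)]
```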
<p>Otherwise, I think you can try calling <code>gc.collect()</code> at intervals to release unreferenced memory; see the <a href="https://docs.python.org/3/library/gc.html" rel="nofollow">gc documentation</a> for details.</p>
<p>Hope it helps.</p>
| 1
|
2016-09-13T08:37:33Z
|
[
"python",
"sql",
"azure"
] |
Complex pivot and resample
| 39,417,686
|
<p>I'm not sure where to start with this so apologies for my lack of an attempt.</p>
<p>This is the initial shape of my data:</p>
<pre><code>df = pd.DataFrame({
    'Year-Mth': ['1900-01', '1901-02', '1903-02', '1903-03',
                 '1903-04', '1911-08', '1911-09'],
    'Category': ['A', 'A', 'B', 'B', 'B', 'B', 'B'],
    'SubCategory': ['X', 'Y', 'Y', 'Y', 'Z', 'Q', 'Y'],
    'counter': [1, 1, 1, 1, 1, 1, 1]
})
df
</code></pre>
<p>This is the result I'd like to get to - the Mth-Year in the below has been resampled to 4 year buckets:</p>
<p><a href="http://i.stack.imgur.com/sWkYR.png" rel="nofollow"><img src="http://i.stack.imgur.com/sWkYR.png" alt="enter image description here"></a></p>
<p>If possible I'd like to do this via a process that makes 'Year-Mth' resamplable - so I can easily switch to different buckets.</p>
| 4
|
2016-09-09T18:27:09Z
| 39,417,957
|
<p>Here's my attempt:</p>
<pre><code>df['Year'] = pd.cut(df['Year-Mth'].str[:4].astype(int),
                    bins=np.arange(1900, 1920, 5), right=False)
df.pivot_table(index=['SubCategory', 'Year'], columns='Category',
               values='counter', aggfunc='sum').dropna(how='all').fillna(0)
Out:
Category                   A    B
SubCategory Year
Q           [1910, 1915)  0.0  1.0
X           [1900, 1905)  1.0  0.0
Y           [1900, 1905)  1.0  2.0
            [1910, 1915)  0.0  1.0
Z           [1900, 1905)  0.0  1.0
</code></pre>
<p>The year column is not parameterized as pandas (or numpy) does not offer a cut option with step size, as far as I know. But I think it can be done with a little arithmetic on minimums/maximums. Something like:</p>
<pre><code>df['Year'] = pd.to_datetime(df['Year-Mth']).dt.year
df['Year'] = pd.cut(df['Year'], bins=np.arange(df['Year'].min(),
df['Year'].max() + 5, 5), right=False)
</code></pre>
<p>This wouldn't create nice bins like Excel does, though. </p>
| 5
|
2016-09-09T18:45:31Z
|
[
"python",
"pandas"
] |
Complex pivot and resample
| 39,417,686
|
<p>I'm not sure where to start with this so apologies for my lack of an attempt.</p>
<p>This is the initial shape of my data:</p>
<pre><code>df = pd.DataFrame({
    'Year-Mth': ['1900-01', '1901-02', '1903-02', '1903-03',
                 '1903-04', '1911-08', '1911-09'],
    'Category': ['A', 'A', 'B', 'B', 'B', 'B', 'B'],
    'SubCategory': ['X', 'Y', 'Y', 'Y', 'Z', 'Q', 'Y'],
    'counter': [1, 1, 1, 1, 1, 1, 1]
})
df
</code></pre>
<p>This is the result I'd like to get to - the Mth-Year in the below has been resampled to 4 year buckets:</p>
<p><a href="http://i.stack.imgur.com/sWkYR.png" rel="nofollow"><img src="http://i.stack.imgur.com/sWkYR.png" alt="enter image description here"></a></p>
<p>If possible I'd like to do this via a process that makes 'Year-Mth' resamplable - so I can easily switch to different buckets.</p>
| 4
|
2016-09-09T18:27:09Z
| 39,418,053
|
<pre><code>cols = [df.SubCategory, pd.to_datetime(df['Year-Mth']), df.Category]
df1 = df.set_index(cols).counter
df1.unstack('Year-Mth').T.resample('60M', how='sum').stack(0).swaplevel(0, 1).sort_index().fillna('')
</code></pre>
<p><a href="http://i.stack.imgur.com/7trRO.png" rel="nofollow"><img src="http://i.stack.imgur.com/7trRO.png" alt="enter image description here"></a></p>
| 3
|
2016-09-09T18:52:40Z
|
[
"python",
"pandas"
] |
Print SparkSession Config Options
| 39,417,743
|
<p>When I start pyspark, a SparkSession is automatically generated and available as 'spark'. I would like to print/view the details of the spark session but am having a lot of difficulty accessing these parameters. </p>
<p>Pyspark auto creates a SparkSession. This can be created manually using the following code:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("PythonSQL")\
    .config("spark.some.config.option", "some-value")\
    .getOrCreate()
</code></pre>
<p>I would like to view/print the appname and config options. The reason I would like to see these is as a result of another issue that I am experiencing which this may shed light on.</p>
| 0
|
2016-09-09T18:30:51Z
| 39,418,914
|
<p>Application name can be accessed using <code>SparkContext</code>:</p>
<pre><code>spark.sparkContext.appName
</code></pre>
<p>Configuration is accessible using <code>RuntimeConfig</code>:</p>
<pre><code>from py4j.protocol import Py4JError

try:
    spark.conf.get("some.conf")
except Py4JError as e:
    pass
</code></pre>
| 0
|
2016-09-09T19:57:56Z
|
[
"python",
"apache-spark",
"pyspark"
] |
pyqtSignal() connects bool object instead of str
| 39,417,857
|
<p>Let's say I have this snippet of code:</p>
<pre><code>from PyQt5.QtWidgets import QMainWindow
from PyQt5.QtWidgets import QApplication
from PyQt5.QtWidgets import QDialog
from PyQt5.QtCore import pyqtSignal
from ui_helloworld import Ui_MainWindow
from ui_hellodialog import Ui_Hi
from sys import argv
from sys import exit

class MainWindow(QMainWindow):
    update = pyqtSignal(str)

    def __init__(self):
        super(MainWindow, self).__init__()
        self.ui = Ui_MainWindow()
        self.ui.setupUi(self)
        self.h = HelloDialog()
        self.ui.pushButton.clicked.connect(self.update_label)
        self.ui.doIt.clicked.connect(self.h.update_label)

    def update_label(self):
        self.h.show()

    def update_label_hello(self, msg):
        self.update.emit(msg)

class HelloDialog(QDialog):
    def __init__(self):
        super(HelloDialog, self).__init__()
        self.ui = Ui_Hi()
        self.ui.setupUi(self)

    def update_label(self, msg):
        print msg
        # Crashes the program:
        # TypeError: setText(self, str): argument 1 has unexpected type 'bool'
        # >> self.ui.label.setText(msg)
        self.ui.label.setText("Hello world!")

def main():
    app = QApplication(argv)
    window = MainWindow()
    window.show()
    exit(app.exec_())

if __name__ == "__main__":
    main()
</code></pre>
<p>It's fairly simple. It's 2 windows, one is a QMainWindow and the other is a QDialog. The MainWindow has 2 buttons, pushButton and doIt:</p>
<ul>
<li><code>pushButton</code> opens the <code>HelloDialog</code></li>
<li><code>doIt</code> emit the <code>update</code> signal</li>
</ul>
<p>The problem is that the slot in <code>HelloDialog</code> is receiving a boolean from the <code>update</code> signal in <code>MainWindow</code>, even though I declared the signal with a str argument.</p>
<p>Why does the <code>update_label</code> slot receive a <code>bool</code> and not a <code>str</code> object?</p>
<pre><code>localhost :: Documents/Python/qt » python main.py
{ push `doIt` object }
False
</code></pre>
<p>The <code>Ui_MainWidow</code> and <code>Ui_Hi</code> classes are <code>pyuic5</code> generated.</p>
| 0
|
2016-09-09T18:39:30Z
| 39,418,398
|
<p>I did not need to connect to <code>self.h.update_label</code> directly. I had to connect <code>doIt</code> to the method inside MainWindow called <code>update_label_hello</code>, and then connect the <code>pyqtSignal</code> to the slot in <code>HelloDialog</code>.</p>
<p>So, the final result is this:</p>
<p>init of <code>MainWindow</code>:</p>
<pre><code>def __init__(self):
    super(MainWindow, self).__init__()
    self.ui = Ui_MainWindow()
    self.ui.setupUi(self)
    self.h = HelloDialog()
    self.ui.pushButton.clicked.connect(self.update_label)
    self.ui.doIt.clicked.connect(self.update_label_hello)
    self.update.connect(self.h.update_label)
</code></pre>
| 0
|
2016-09-09T19:20:33Z
|
[
"python",
"qt",
"pyqt5"
] |
Issues with Shape inheritance for base classes triangles and squares
| 39,417,867
|
<p>I keep getting an error. I want the program to display the area of my triangle class. Here is my code:</p>
<pre><code># Parent class: Shape
# Child classes: Triangle and Square
class Shape:
    def __init__(self, base, height):
        self.base = base
        self.height = height

    def triangle_area(self):
        return .5 * self.base * self.height

    def square_area(self):
        return self.base * self.height

class Triangle(Shape):
    def triangle_area(self):
        return .5 * self.base * self.height

class Square(Shape):
    def square_area(self):
        return self.base * self.height

triangle_one = Triangle()
triangle_one.base = 9
triangle_one.height = 12
print("Area of triangle is", triangle_one.triangle_area())
</code></pre>
<p>And here is my error:</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:/Users/Pentazoid/Desktop/PythonPrograms/inheritanceshape.py", line 31, in &lt;module&gt;
triangle_one=Triangle()
TypeError: __init__() missing 2 required positional arguments: 'base' and 'height'</p>
</blockquote>
<p>What am I doing wrong?</p>
| 0
|
2016-09-09T18:40:35Z
| 39,417,927
|
<p>You need to pass base and height in the constructor call like this:</p>
<pre><code>base=9
height=12
triangle_one=Triangle(base, height)
</code></pre>
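<p>As a side note (not part of the original question), a cleaner structure is to give each subclass its own <code>area</code> method instead of keeping <code>triangle_area</code> and <code>square_area</code> on the parent; a sketch:</p>

```python
class Shape(object):
    def __init__(self, base, height):
        self.base = base
        self.height = height

class Triangle(Shape):
    def area(self):
        return 0.5 * self.base * self.height

class Square(Shape):
    def area(self):
        return self.base * self.height

print("Area of triangle is", Triangle(9, 12).area())  # Area of triangle is 54.0
```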
| 1
|
2016-09-09T18:44:11Z
|
[
"python",
"inheritance"
] |
Human readable output in bits
| 39,417,944
|
<p>I have looked at the modules <em>humanize</em> and <em>humanfriendly</em>, and neither can convert a large bit value to human readable bit output (e.g. Mbits, Gbits, Tbits, ..etc). Has anyone come across such a module? Example:</p>
<pre><code>mbits = 1000000
gbits = 1000000000
</code></pre>
<p>Then </p>
<pre><code>print(human.bits(mbits)) # would output "1 Mbit"
print(human.bits(gbits)) # would output "1 Gbit"
</code></pre>
<p>...etc, up to exabit.</p>
| -3
|
2016-09-09T18:45:04Z
| 39,418,077
|
<p>You can try <code>hurry.filesize</code></p>
<pre><code>>>> from hurry.filesize import size
>>> size(11000)
'10K'
</code></pre>
<p>There is another library <code>bitmath</code></p>
<pre><code>>>> from bitmath import *
>>> small_number = MiB(10000)
>>> print small_number.best_prefix()
9.765625 GiB
</code></pre>
| 0
|
2016-09-09T18:54:30Z
|
[
"python"
] |
Python Selenium is redirecting the URL to Sign up page
| 39,417,959
|
<p>I have the following code:</p>
<pre><code>from selenium import webdriver
browser = webdriver.Chrome(executable_path=r"C:\Users\ABC\AppData\Local\Programs\Python\Python35-32\Lib\site-packages\selenium\webdriver\common\chromedriver.exe")
browser.get('http://www.linkedin.com/pub/dir/?first=jatin&last=wadhwa&trk=prof-samename-search-submit')
print (browser.page_source)
</code></pre>
<p>What is happening: instead of opening
<a href="http://www.linkedin.com/pub/dir/?first=jatin&last=wadhwa&trk=prof-samename-search-submit" rel="nofollow">http://www.linkedin.com/pub/dir/?first=jatin&last=wadhwa&trk=prof-samename-search-submit</a>,</p>
<p>it goes to</p>
<p><a href="https://www.linkedin.com/start/join?session_redirect=http%3A%2F%2Fwww.linkedin.com%2Fpub%2Fdir%2F%3Ffirst%3Djatin%26last%3Dwadhwa%26trk%3Dprof-samename-search-submit&source=sentinel_org_block&trk=login_reg_redirect" rel="nofollow">https://www.linkedin.com/start/join?session_redirect=http%3A%2F%2Fwww.linkedin.com%2Fpub%2Fdir%2F%3Ffirst%3Djatin%26last%3Dwadhwa%26trk%3Dprof-samename-search-submit&source=sentinel_org_block&trk=login_reg_redirect</a></p>
<p>Any solution so that it opens the desired link rather than the redirected one?</p>
| 0
|
2016-09-09T18:45:35Z
| 39,418,131
|
<p>LinkedIn does not allow the search page without login. First log in to LinkedIn; then you can scrape data.</p>
<pre><code>browser.get('https://www.linkedin.com/')
elem = browser.find_element_by_name('session_key')
elem.clear()
elem.send_keys(email_id) # enter your email id or phone number
elem = browser.find_element_by_name('session_password')
elem.clear()
elem.send_keys(password) # enter your linkedin password
submit = browser.find_element_by_xpath('//*[@id="pagekey-uno-reg-guest-home"]/div[1]/div/form/input[6]')
actions = ActionChains(browser)
actions.click(submit)
actions.perform() # after this you will be login
# Now you can open url without redirecting
browser.get('http://www.linkedin.com/pub/dir/?first=jatin&last=wadhwa&trk=prof-samename-search-submit')
</code></pre>
| 0
|
2016-09-09T18:57:29Z
|
[
"python",
"session",
"selenium"
] |
Python Selenium is redirecting the URL to Sign up page
| 39,417,959
|
<p>I have the following code:</p>
<pre><code>from selenium import webdriver
browser = webdriver.Chrome(executable_path=r"C:\Users\ABC\AppData\Local\Programs\Python\Python35-32\Lib\site-packages\selenium\webdriver\common\chromedriver.exe")
browser.get('http://www.linkedin.com/pub/dir/?first=jatin&last=wadhwa&trk=prof-samename-search-submit')
print (browser.page_source)
</code></pre>
<p>What is happening: instead of opening
<a href="http://www.linkedin.com/pub/dir/?first=jatin&last=wadhwa&trk=prof-samename-search-submit" rel="nofollow">http://www.linkedin.com/pub/dir/?first=jatin&last=wadhwa&trk=prof-samename-search-submit</a>,</p>
<p>it goes to</p>
<p><a href="https://www.linkedin.com/start/join?session_redirect=http%3A%2F%2Fwww.linkedin.com%2Fpub%2Fdir%2F%3Ffirst%3Djatin%26last%3Dwadhwa%26trk%3Dprof-samename-search-submit&source=sentinel_org_block&trk=login_reg_redirect" rel="nofollow">https://www.linkedin.com/start/join?session_redirect=http%3A%2F%2Fwww.linkedin.com%2Fpub%2Fdir%2F%3Ffirst%3Djatin%26last%3Dwadhwa%26trk%3Dprof-samename-search-submit&source=sentinel_org_block&trk=login_reg_redirect</a></p>
<p>Any solution so that it opens the desired link rather than the redirected one?</p>
| 0
|
2016-09-09T18:45:35Z
| 39,418,145
|
<p>LinkedIn redirects you to the signup page if you are crawling too fast.</p>
<p>I recommend waiting a random amount of time between each HTTP request.</p>
<p>Deleting your cookies wouldn't hurt either.</p>
| 0
|
2016-09-09T18:59:10Z
|
[
"python",
"session",
"selenium"
] |
How to shift list indexes by a certain value in Python
| 39,418,008
|
<p>I need to create a Python function that right-shifts the values in a list by a given amount.</p>
<p>For example, if the list is [1,2,3,4] and the shift is 2, it will become [2,3,4,1]. The shift value must be a non-negative integer, and I can only use the <code>len</code> and <code>range</code> functions.</p>
<p>This is what I have so far</p>
<pre><code>def shift(array, value):
    if value < 0:
        return
    for i in range(len(array)):
        arr[i] = arr[(i + shift_amount) % len(arr)]
</code></pre>
| -2
|
2016-09-09T18:48:55Z
| 39,418,037
|
<p>You could consider a <a href="https://docs.python.org/3/library/collections.html#collections.deque" rel="nofollow"><code>deque</code></a> instead:</p>
<pre><code>>>> from collections import deque
>>> d = deque([1,2,3,4])
>>> d.rotate(-1)
>>> d
deque([2, 3, 4, 1])
</code></pre>
| 0
|
2016-09-09T18:51:25Z
|
[
"python",
"indexing"
] |
How to shift list indexes by a certain value in Python
| 39,418,008
|
<p>I need to create a Python function that right-shifts the values in a list by a given amount.</p>
<p>For example, if the list is [1,2,3,4] and the shift is 2, it will become [2,3,4,1]. The shift value must be a non-negative integer, and I can only use the <code>len</code> and <code>range</code> functions.</p>
<p>This is what I have so far</p>
<pre><code>def shift(array, value):
    if value < 0:
        return
    for i in range(len(array)):
        arr[i] = arr[(i + shift_amount) % len(arr)]
</code></pre>
| -2
|
2016-09-09T18:48:55Z
| 39,418,067
|
<p>Usually you can do this with slicing</p>
<pre><code>arr = arr[shift:] + arr[:shift]
</code></pre>
<p>Your shifted list is only <code>shift = 1</code>, not 2. You can't get your output by shifting 2 positions.</p>
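<p>A quick check of the slicing approach with the list from the question (note that a shift of 1 produces the output shown there):</p>

```python
arr = [1, 2, 3, 4]
shift = 1
arr = arr[shift:] + arr[:shift]  # rotate left by `shift` positions
print(arr)  # [2, 3, 4, 1]
```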
| 2
|
2016-09-09T18:53:40Z
|
[
"python",
"indexing"
] |
How to shift list indexes by a certain value in Python
| 39,418,008
|
<p>I need to create a Python function that right-shifts the values in a list by a given amount.</p>
<p>For example, if the list is [1,2,3,4] and the shift is 2, it will become [2,3,4,1]. The shift value must be a non-negative integer, and I can only use the <code>len</code> and <code>range</code> functions.</p>
<p>This is what I have so far</p>
<pre><code>def shift(array, value):
    if value < 0:
        return
    for i in range(len(array)):
        arr[i] = arr[(i + shift_amount) % len(arr)]
</code></pre>
| -2
|
2016-09-09T18:48:55Z
| 39,430,009
|
<p>I made some modifications to your code (if you have to use the <code>len</code> and <code>range</code> functions):</p>
<pre><code>def shift(array, shift_amount):
    if shift_amount < 0:
        return
    ans = []
    for i in range(len(array)):
        ans.append(array[(i + shift_amount) % len(array)])
    print ans

shift([1,2,3,4],2)
</code></pre>
<p>Output:</p>
<pre><code>[3, 4, 1, 2]
</code></pre>
<blockquote>
<p><strong>Note:</strong></p>
<ul>
<li>Your logic is correct, but you were overwriting values in the same array, so I created another list and appended the values to it.</li>
<li>If the shift value is 1 the output will be <code>[2, 3, 4, 1]</code>; for a value of 2 there are two shifts, which is why the output should be <code>[3, 4, 1, 2]</code>.</li>
<li><code>value</code> and <code>shift_amount</code> were two different variables in your code, so I use a single variable.</li>
</ul>
</blockquote>
<p>You can also use a <em>list comprehension</em> (to learn more about <em>list comprehensions</em>, see the article <a href="http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/" rel="nofollow">Python List Comprehensions: Explained Visually</a>):</p>
<pre><code>def shift(array, shift_amount):
    if shift_amount < 0:
        return
    length = len(array)
    print [array[(i + shift_amount) % length] for i in range(length)]

shift([1,2,3,4],0)
</code></pre>
| 0
|
2016-09-10T19:50:52Z
|
[
"python",
"indexing"
] |
Unexpected number when reading PLC using pymodbus
| 39,418,049
|
<p><a href="http://i.stack.imgur.com/rYMVB.png" rel="nofollow"><img src="http://i.stack.imgur.com/rYMVB.png" alt="enter image description here"></a>I am using pymodbus to read a register on a Wago 750-881 PLC. I am also reading the same register on the Modbus Poll utility, as well as, an HMI. The Modbus Poll and HMI are reading correctly, but the pymodbus program is not. </p>
<p>Here is the code:</p>
<pre><code>from pymodbus.client.sync import ModbusTcpClient
c = ModbusTcpClient(host="192.168.1.20")
chk = c.read_holding_registers(257, 1, unit = 1)
response = c.execute(chk)
print response.getRegister(0)
</code></pre>
<p>Here is the response from running the code:</p>
<pre><code>>>> runfile('C:/Users/Mike/modbustest2.py', wdir='C:/Users/Mike')
18283
</code></pre>
<p>The correct output should be 2043. It also reads the same number "18283" on the other registers. I know the problem must be code related since I can read the register from other programs/devices. Any help is appreciated. </p>
| 0
|
2016-09-09T18:52:13Z
| 39,419,045
|
<p>You may be reading the wrong register, or from the wrong unit ID, or some combination of both.</p>
<p>If you use Wireshark to capture what the 3rd party software and your own software is doing you should be able to spot the difference pretty quickly.</p>
| 0
|
2016-09-09T20:09:24Z
|
[
"python",
"python-2.7",
"modbus",
"modbus-tcp"
] |
How does Elastic Beanstalk work behind the scenes for Django?
| 39,418,059
|
<p>For running django applications locally I can do </p>
<pre><code>django-admin startproject djangotest
python djangotest/manage.py runserver
</code></pre>
<p>and the sample webpage shows up at <a href="http://127.0.0.1:8000/" rel="nofollow">http://127.0.0.1:8000/</a></p>
<p>However, when I deploy this to EB with </p>
<pre><code>eb deploy
</code></pre>
<p>It just magically works. My question is: does EB run the command <code>python djangotest/manage.py runserver</code> on the EC2 server after <code>eb deploy</code> by default? What is the list of commands EB executes to get the webpage working? What if I want to run it with different flags, like <code>python djangotest/manage.py runserver --nostatic</code>; is that possible?</p>
| 0
|
2016-09-09T18:53:07Z
| 39,419,282
|
<p>It doesn't just magically work. You have to configure Django for Elastic Beanstalk, as described in <a href="http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-django.html#python-django-configure-for-eb" rel="nofollow">the EB documentation</a>: you provide a configuration file which points to the WSGI module.</p>
<p>In any case, it wouldn't use runserver, as the development server is absolutely not for production use.</p>
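<p>For reference, that configuration is a short YAML file under <code>.ebextensions/</code>. The exact WSGI path depends on your project layout; the one below is an illustrative guess for the <code>djangotest</code> sample project (pre-Amazon-Linux-2 Python platform):</p>

```yaml
# .ebextensions/django.config -- tells EB which WSGI module to serve
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: djangotest/wsgi.py
```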
| 1
|
2016-09-09T20:30:29Z
|
[
"python",
"django",
"amazon-web-services",
"amazon-ec2",
"elastic-beanstalk"
] |
how to get field value in django admin form save_model
| 39,418,062
|
<p>I have the following model:</p>
<pre><code>class Guest(models.Model):
    first_name = models.CharField(max_length=128)
    last_name = models.CharField(max_length=128)
    category = models.ForeignKey(Category)
    player_achievement = models.ForeignKey(PlayerAchievement, related_name="guest_achievements")
    player = models.ForeignKey(Player)
</code></pre>
<p>I want to get the player selected in the form to use in my save_model method:</p>
<pre><code>def save_model(self, request, obj, form, change):
    player = ...  # how do I get this from the admin form?
    obj.player = player
    obj.save()
</code></pre>
| 0
|
2016-09-09T18:53:25Z
| 39,420,418
|
<p>It depends on how you declared your form. If you inherited from <a href="https://docs.djangoproject.com/en/1.10/topics/forms/modelforms/" rel="nofollow">ModelForm</a>, then in most cases you should use a <a href="https://docs.djangoproject.com/en/1.10/ref/forms/fields/#django.forms.ModelChoiceField" rel="nofollow">ModelChoiceField</a> to select the <code>Player</code> associated with your <code>Guest</code> object, and you get the selected value by calling <code>form.cleaned_data.get('player')</code>.</p>
| 1
|
2016-09-09T22:13:32Z
|
[
"python",
"django",
"django-models",
"django-forms",
"django-admin"
] |
DataError: (1406, "Data too long for column 'name' at row 1")
| 39,418,063
|
<p>I've read nearly all other posts with the same error and can't seem to find a proper solution. </p>
<p>In my models.py file I have this:</p>
<pre><code>class LetsSayCups(models.Model):
name = models.CharField(max_length=65535)
def __str__(self):
return str(self.name)
</code></pre>
<p>I get this error when I try to load aws mysql data into my local mysql server. I had the issue occur for another part in my models.py file, and the way I was able to work around it was by going into the my.cnf.bak file and changing the sql_mode from:</p>
<pre><code>sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
</code></pre>
<p>to:</p>
<pre><code>sql_mode=''
</code></pre>
<p>And it worked!!! Until later on I find another error. The specific error is something like this:</p>
<pre><code>...
File "/Users/im_the_user/Desktop/my_company/my_project/load_items.py", line 122, in load_the_items
existing_cups = Cups.objects.get_or_create(name=cups)
...
django.db.utils.DataError: (1406, "Data too long for column 'name' at row 1")
</code></pre>
<p>The above ... means things came before/after that I left out in this. </p>
<p>Updating my my.cnf.bak file wasn't enough, nor was setting the CharField max_length to 65535. What else can I try?</p>
| 0
|
2016-09-09T18:53:37Z
| 39,420,389
|
<p>You need to use a <code>TextField</code>. The <code>max_length</code> of a <code>CharField</code> should be set to 255 or less to avoid issues with DBs that store it as <code>VARCHAR</code>.</p>
<p><a href="https://docs.djangoproject.com/en/1.10/ref/databases/#character-fields" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/databases/#character-fields</a></p>
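<p>A minimal sketch of the change (model name taken from the question; this fragment is not runnable outside a Django project):</p>

```python
class LetsSayCups(models.Model):
    # TextField maps to LONGTEXT on MySQL and takes no max_length
    name = models.TextField()

    def __str__(self):
        return str(self.name)
```

<p>Remember to generate and run a migration after changing the field type.</p>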
| 0
|
2016-09-09T22:11:23Z
|
[
"python",
"mysql",
"django",
"pycharm"
] |
DataError: (1406, "Data too long for column 'name' at row 1")
| 39,418,063
|
<p>I've read nearly all other posts with the same error and can't seem to find a proper solution. </p>
<p>In my models.py file I have this:</p>
<pre><code>class LetsSayCups(models.Model):
name = models.CharField(max_length=65535)
def __str__(self):
return str(self.name)
</code></pre>
<p>I get this error when I try to load aws mysql data into my local mysql server. I had the issue occur for another part in my models.py file, and the way I was able to work around it was by going into the my.cnf.bak file and changing the sql_mode from:</p>
<pre><code>sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
</code></pre>
<p>to:</p>
<pre><code>sql_mode=''
</code></pre>
<p>And it worked!!! Until later on I find another error. The specific error is something like this:</p>
<pre><code>...
File "/Users/im_the_user/Desktop/my_company/my_project/load_items.py", line 122, in load_the_items
existing_cups = Cups.objects.get_or_create(name=cups)
...
django.db.utils.DataError: (1406, "Data too long for column 'name' at row 1")
</code></pre>
<p>The above ... means things came before/after that I left out in this. </p>
<p>Updating my my.cnf.bak file wasn't enough, nor was setting the CharField max_length to 65535. What else can I try?</p>
| 0
|
2016-09-09T18:53:37Z
| 39,451,226
|
<p>I found out that my.cnf.bak is only a backup file. I'm not sure how it worked for the first issue, but when I renamed the file to my.cnf my problem was resolved.</p>
| 0
|
2016-09-12T13:15:03Z
|
[
"python",
"mysql",
"django",
"pycharm"
] |
Plotting a simple 2D vector
| 39,418,165
|
<p>New to Python and just trying to accomplish what I think must be the simplest of tasks: plotting a basic 2D vector. However my online search has gotten me nowhere so I turn to stackoverflow with my very first question.</p>
<p>I Just want to plot a single 2D vector, let's call it my_vector. my_vector goes from (0,0) to (3,11).</p>
<p>What I have done is this:</p>
<pre><code>from __future__ import print_function
import numpy as np
import pylab as pl
%pylab inline
x_cords = np.arange(4)
y_cords = np.linspace(0, 11, 4)
my_vector = vstack([x_cords, y_cords])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(my_vector)
plt.show()
</code></pre>
<p>Which gives the following image (and totally not what I am after):</p>
<p><a href="http://i.stack.imgur.com/39KLk.png" rel="nofollow">a very wrong plot</a></p>
<p>However I have found that </p>
<pre><code>ax.plot(x_cords, y_cords)
</code></pre>
<p>instead of</p>
<pre><code>ax.plot(my_vector)
</code></pre>
<p>gives me the plot I am looking for but then I don't have that single vector I am after.</p>
<p>So how does one correctly plot a basic 2D vector? Thank you and sorry if this has indeed been posted somewhere else...</p>
| 0
|
2016-09-09T19:00:24Z
| 39,418,224
|
<p>You can also unpack your 2D vector</p>
<pre><code>pl.plot(*my_vector)
</code></pre>
<p>Which is effectively just doing</p>
<pre><code>pl.plot(x_cords, y_cords)
</code></pre>
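<p>The <code>*</code> is ordinary Python argument unpacking, nothing matplotlib-specific: a two-row sequence is split into its rows and passed as two positional arguments. A stdlib-only check of the mechanics (the fake plot function is just a stand-in so this runs without matplotlib):</p>

```python
def fake_plot(x, y):
    # stand-in for pl.plot: just records the two positional arguments
    return x, y

my_vector = [[0, 1, 2, 3], [0.0, 11 / 3, 22 / 3, 11.0]]
# same as fake_plot(my_vector[0], my_vector[1])
x_cords, y_cords = fake_plot(*my_vector)
print(x_cords)
```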
| 0
|
2016-09-09T19:05:22Z
|
[
"python",
"matplotlib",
"vector"
] |
Django store shell history
| 39,418,167
|
<p>I lost all my history when I log in to the shell. In order to view the history, I installed ipython and tried to use it.</p>
<p>Now, I get an error when I try this command - </p>
<pre><code>ipython manage.py shell_plus --print-sql
[TerminalIPythonApp] CRITICAL | Bad config encountered during
initialization:
[TerminalIPythonApp] CRITICAL | Unrecognized flag: '--print-sql'
</code></pre>
<p>Similarly even bpython also does not work.</p>
| 0
|
2016-09-09T19:00:28Z
| 39,418,904
|
<p>You shouldn't use <code>ipython</code> to start the shell. Just use <code>python manage.py...</code> as normal; Django will use IPython as the shell if it's installed.</p>
| 0
|
2016-09-09T19:57:17Z
|
[
"python",
"django",
"shell"
] |
Reading repeated information from the file in different order in Python
| 39,418,191
|
<p>I tried to search for similar questions, but I couldn't find. Please mark as a duplicate if there is similar questions available.</p>
<p>I'm trying to figure out a way to read and gather multiple information from single file. Here in the file Block-A,B & C are repeated in random order and Block-C has more than one information to capture. Every block end with 'END' text. Here is the input file:</p>
<pre><code>Block-A:
(info1)
END
Block-B:
(info2)
END
Block-C:
(info3)
(info4)
END
Block-C:
(info7)
(info8)
END
Block-A:
(info5)
END
Block-B:
(info6)
END
</code></pre>
<p>Here is my code:</p>
<pre><code>import re
out1 = out2 = out3 = ""
a = b = c = False
array=[]
with open('test.txt', 'r') as f:
for line in f:
if line.startswith('Block-A'):
line = next(f)
out1 = line
a = True
if line.startswith('Block-B'):
line=next(f)
out2 = line
b = True
if line.startswith('Block-C'):
c = True
if c:
line=next(f)
if not line.startswith('END\n'):
out3 = line
array.append(out3.strip())
if a == b == c == True:
print(out1.rstrip() +', ' + out2.rstrip() + ', ' + str(array))
a = b = c = False
array=[]
</code></pre>
<p>Thank you in advance for your valuable inputs.</p>
| 0
|
2016-09-09T19:02:17Z
| 39,418,305
|
<p>Use a dictionary for the data from each block. When you read the line that starts a block, set a variable to that name, and use it as the key into the dictionary.</p>
<pre><code>out = {}
blockname = None
with open('test.txt', 'r') as f:
    for line in f:
        line = line.strip()
        if line.endswith(':'):
            blockname = line[:-1]
            if blockname not in out:
                out[blockname] = ''
        elif line == 'END':
            blockname = None
        elif blockname:
            out[blockname] += line + '\n'
print(out)
</code></pre>
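<p>The same idea in a self-contained form that can be run directly, parsing from a string instead of a file and collecting each block's lines in a list:</p>

```python
text = """Block-A:
(info1)
END
Block-C:
(info3)
(info4)
END"""

out = {}
blockname = None
for line in text.splitlines():
    line = line.strip()
    if line.endswith(':'):          # a "Block-X:" header starts a block
        blockname = line[:-1]
        out.setdefault(blockname, [])
    elif line == 'END':             # END closes the current block
        blockname = None
    elif blockname:                 # anything else belongs to the open block
        out[blockname].append(line)

print(out)
```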
| 1
|
2016-09-09T19:12:07Z
|
[
"python",
"readfile"
] |
Reading repeated information from the file in different order in Python
| 39,418,191
|
<p>I tried to search for similar questions, but I couldn't find. Please mark as a duplicate if there is similar questions available.</p>
<p>I'm trying to figure out a way to read and gather multiple information from single file. Here in the file Block-A,B & C are repeated in random order and Block-C has more than one information to capture. Every block end with 'END' text. Here is the input file:</p>
<pre><code>Block-A:
(info1)
END
Block-B:
(info2)
END
Block-C:
(info3)
(info4)
END
Block-C:
(info7)
(info8)
END
Block-A:
(info5)
END
Block-B:
(info6)
END
</code></pre>
<p>Here is my code:</p>
<pre><code>import re
out1 = out2 = out3 = ""
a = b = c = False
array=[]
with open('test.txt', 'r') as f:
for line in f:
if line.startswith('Block-A'):
line = next(f)
out1 = line
a = True
if line.startswith('Block-B'):
line=next(f)
out2 = line
b = True
if line.startswith('Block-C'):
c = True
if c:
line=next(f)
if not line.startswith('END\n'):
out3 = line
array.append(out3.strip())
if a == b == c == True:
print(out1.rstrip() +', ' + out2.rstrip() + ', ' + str(array))
a = b = c = False
array=[]
</code></pre>
<p>Thank you in advance for your valuable inputs.</p>
| 0
|
2016-09-09T19:02:17Z
| 39,418,547
|
<p>If you don't want the Block-X lines to print, uncomment the elif statement:</p>
<pre><code>import os
data = r'/home/x/Desktop/test'
txt = open(data, 'r')
for line in txt.readlines():
line = line[:-1]
    if line == 'END':
pass
#elif line.startswith('Block'):
# pass
else:
print line
>>>>
Block-A:
(info1)
Block-B:
(info2)
Block-C:
(info3)
(info4)
Block-C:
(info7)
(info8)
Block-A:
(info5)
Block-B:
(info6)
</code></pre>
| 0
|
2016-09-09T19:30:22Z
|
[
"python",
"readfile"
] |
ImportError: Import by filename is not supported. (WSGI)
| 39,418,376
|
<p>I don't now why I seem to be getting the following errors from the Apache24 error log: </p>
<pre><code> mod_wsgi (pid=9036): Exception occurred processing WSGI script 'C:/Apache24/htdocs/tools/ixg_dashboard/ixg_dashboard.wsgi'.
Traceback (most recent call last):
File "C:/Apache24/htdocs/tools/ixg_dashboard/ixg_dashboard.wsgi", line 242, in application
env = Environment(loader=PackageLoader('C:\\htdocs\\tools\\ixg_dashboard\\ixg_dashboard', 'templates'))
File "C:\\Python27\\lib\\site-packages\\jinja2\\loaders.py", line 224, in __init__
provider = get_provider(package_name)
File "C:\\Python27\\lib\\site-packages\\pkg_resources\\__init__.py", line 419, in get_provider
__import__(moduleOrReq)
ImportError: Import by filename is not supported.
</code></pre>
<p>The .wsgi file is pretty long but I'll give the relevant parts of code. The imports are as follows: </p>
<pre><code> import cgi, urlparse, jinja2, os
from pymongo import MongoClient
from sets import Set
from jinja2 import Environment, PackageLoader
</code></pre>
<p>and the actual code where I believe the issue may lie is:</p>
<pre><code> env = Environment(loader=PackageLoader('C:\htdocs\tools\ixg_dashboard\ixg_dashboard', 'templates'))
table_template = env.get_template('table.html')
print table_template.render()
</code></pre>
<p>The code was created by someone who was previously here and never got around to getting it fully working on the server but was able to get it to run locally which is what I'm trying to do. Is it possible that the issue lies in the httpd.config file for Apache and the code itself. I tried looking around and couldn't find anything that worked. It could possibly be jinja as well but im not sure.</p>
| 0
|
2016-09-09T19:18:35Z
| 39,419,868
|
<p><code>PackageLoader</code> is defined as:</p>
<blockquote>
<p>class jinja2.PackageLoader(package_name, package_path='templates', encoding='utf-8')</p>
</blockquote>
<p>So the first argument should be a package name, not a path.</p>
<p>Check the Jinja2 documentation to better understand what you are supposed to supply for the package name.</p>
<ul>
<li><a href="http://jinja.pocoo.org/docs/dev/api/#jinja2.PackageLoader" rel="nofollow">http://jinja.pocoo.org/docs/dev/api/#jinja2.PackageLoader</a></li>
</ul>
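<p>If the templates live in a plain directory rather than inside an installed Python package, <code>FileSystemLoader</code> is the better fit, since it takes a path directly. A sketch (the directory below is guessed from the question's layout):</p>

```python
from jinja2 import Environment, FileSystemLoader

# FileSystemLoader accepts a directory path, unlike PackageLoader
env = Environment(loader=FileSystemLoader(
    'C:/Apache24/htdocs/tools/ixg_dashboard/ixg_dashboard/templates'))
table_template = env.get_template('table.html')
```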
| 0
|
2016-09-09T21:20:50Z
|
[
"python",
"jinja2",
"mod-wsgi"
] |
Histogram with equal number of points in each bin
| 39,418,380
|
<p>I have a sorted vector <code>points</code> with 100 points. I now want to create two histograms: the first histogram should have 10 bins having equal width. The second should also have 10 bins, but not necessarily of equal width. In the second, I just want the histogram to have the same number of points in each bin. So for example, the first bar might be very short and wide, while the second bar in the histogram might be very tall and narrow. I have code that creates the first histogram using <code>matplotlib</code>, but now I'm not sure how to go about creating the second one.</p>
<pre><code>import matplotlib.pyplot as plt
points = [1,2,3,4,5,6, ..., 99]
n, bins, patches = plt.hist(points, 10)
</code></pre>
<p><strong>Edit:</strong></p>
<p>Trying the solution below, I'm a bit puzzled as to why the heights of all of the bars in my histogram are the same.</p>
<p><a href="http://i.stack.imgur.com/Sr9tk.png" rel="nofollow"><img src="http://i.stack.imgur.com/Sr9tk.png" alt="enter image description here"></a></p>
| 3
|
2016-09-09T19:18:56Z
| 39,418,480
|
<p>provide bins to histogram:</p>
<p><code>bins = points[0::len(points)//10] + [points[-1]]</code></p>
<p>and then</p>
<p><code>n, bins, patches = plt.hist(points, bins=bins)</code></p>
<p>(provided points is sorted)</p>
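<p>A quick stdlib-only check of this with 100 sorted points: the slice picks every 10th value as a lower bin edge, and appending the last point closes the final bin, giving 11 edges and therefore 10 bins of 10 points each (counting the top edge as inclusive, as <code>plt.hist</code> does):</p>

```python
points = list(range(100))  # sorted, as the answer assumes
bins = points[0::len(points) // 10] + [points[-1]]  # 11 edges -> 10 bins

counts = [sum(lo <= p < hi for p in points) for lo, hi in zip(bins, bins[1:])]
counts[-1] += points.count(bins[-1])  # top edge inclusive, matching plt.hist
print(counts)
```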
| 0
|
2016-09-09T19:26:12Z
|
[
"python",
"matplotlib",
"histogram"
] |
Histogram with equal number of points in each bin
| 39,418,380
|
<p>I have a sorted vector <code>points</code> with 100 points. I now want to create two histograms: the first histogram should have 10 bins having equal width. The second should also have 10 bins, but not necessarily of equal width. In the second, I just want the histogram to have the same number of points in each bin. So for example, the first bar might be very short and wide, while the second bar in the histogram might be very tall and narrow. I have code that creates the first histogram using <code>matplotlib</code>, but now I'm not sure how to go about creating the second one.</p>
<pre><code>import matplotlib.pyplot as plt
points = [1,2,3,4,5,6, ..., 99]
n, bins, patches = plt.hist(points, 10)
</code></pre>
<p><strong>Edit:</strong></p>
<p>Trying the solution below, I'm a bit puzzled as to why the heights of all of the bars in my histogram are the same.</p>
<p><a href="http://i.stack.imgur.com/Sr9tk.png" rel="nofollow"><img src="http://i.stack.imgur.com/Sr9tk.png" alt="enter image description here"></a></p>
| 3
|
2016-09-09T19:18:56Z
| 39,419,049
|
<p>This question is <a href="http://stackoverflow.com/questions/37649342/matplotlib-how-to-make-a-histogram-with-bins-of-equal-area/37667480">similar to one</a> that I wrote an answer to a while back, but sufficiently different to warrant it's own question. The solution, it turns out, uses basically the same code from my other answer.</p>
<pre><code>def histedges_equalN(x, nbin):
npt = len(x)
return np.interp(np.linspace(0, npt, nbin + 1),
np.arange(npt),
np.sort(x))
x = np.random.randn(100)
n, bins, patches = plt.hist(x, histedges_equalN(x, 10))
</code></pre>
<p>This solution gives a histogram with equal height bins, because---by definition---a histogram is a count of the number of points in each bin.</p>
<p>To get a pdf (i.e. density function) use the <code>normed=True</code> kwarg to plt.hist. As described in my <a href="http://stackoverflow.com/questions/37649342/matplotlib-how-to-make-a-histogram-with-bins-of-equal-area/37667480">other answer</a>. </p>
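<p>For reference, the <code>np.interp</code> call above is just computing empirical quantiles; the same edges can be had from the standard library alone with <code>statistics.quantiles</code> (Python 3.8+):</p>

```python
import statistics

x = list(range(100))
# 9 interior cut points + the two extremes = 11 edges -> 10 equal-count bins
edges = [min(x)] + statistics.quantiles(x, n=10) + [max(x)]
print(len(edges))
```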
| 2
|
2016-09-09T20:10:13Z
|
[
"python",
"matplotlib",
"histogram"
] |
Histogram with equal number of points in each bin
| 39,418,380
|
<p>I have a sorted vector <code>points</code> with 100 points. I now want to create two histograms: the first histogram should have 10 bins having equal width. The second should also have 10 bins, but not necessarily of equal width. In the second, I just want the histogram to have the same number of points in each bin. So for example, the first bar might be very short and wide, while the second bar in the histogram might be very tall and narrow. I have code that creates the first histogram using <code>matplotlib</code>, but now I'm not sure how to go about creating the second one.</p>
<pre><code>import matplotlib.pyplot as plt
points = [1,2,3,4,5,6, ..., 99]
n, bins, patches = plt.hist(points, 10)
</code></pre>
<p><strong>Edit:</strong></p>
<p>Trying the solution below, I'm a bit puzzled as to why the heights of all of the bars in my histogram are the same.</p>
<p><a href="http://i.stack.imgur.com/Sr9tk.png" rel="nofollow"><img src="http://i.stack.imgur.com/Sr9tk.png" alt="enter image description here"></a></p>
| 3
|
2016-09-09T19:18:56Z
| 39,437,454
|
<p>Here I wrote an example on how you could get the result. My approach uses the data points to get the bins that will be passed to <code>np.histogram</code> to construct the histogram. Hence the need to sort the data using <code>np.argsort(x)</code>. The number of points per bin can be controlled with <code>npoints</code>. As an example, I construct two histograms using this method. One where the weights of all points is the same, so that the height of the histogram is always constant (and equal to <code>npoints</code>). The other where the "weight" of each point is drawn from a uniform random distribution (see <code>mass</code> array). As expected, the boxes of the histogram are not equal anymore. However, the Poisson error per bin is the same.</p>
<pre><code>x = np.random.rand(1000)
mass = np.random.rand(1000)
npoints = 200
ksort = np.argsort(x)
#Here I get the bins from the data set.
#Note that data need to be sorted
bins=x[ksort[0::npoints]]
bins=np.append(bins,x[ksort[-1]])
fig = plt.figure(1,figsize=(10,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
#Histogram where each data
yhist, xhist = np.histogram(x, bins, weights=None)
ax1.plot(0.5*(xhist[1:]+xhist[:-1]), yhist, linestyle='steps-mid', lw=2, color='k')
yhist, xhist = np.histogram(x, bins, weights=mass)
ax2.plot(0.5*(xhist[1:]+xhist[:-1]), yhist, linestyle='steps-mid', lw=2, color='k')
ax1.set_xlabel('x', size=15)
ax1.set_ylabel('Number of points per bin', size=15)
ax2.set_xlabel('x', size=15)
ax2.set_ylabel('Mass per bin', size=15)
</code></pre>
<p><a href="http://i.stack.imgur.com/2EQSs.png" rel="nofollow"><img src="http://i.stack.imgur.com/2EQSs.png" alt="enter image description here"></a></p>
| 0
|
2016-09-11T15:05:57Z
|
[
"python",
"matplotlib",
"histogram"
] |
Proper way to split Python list into list of lists on matching list item delimeter
| 39,418,384
|
<pre><code>list = ['a', 'b', 'c', '', 'd', 'e', 'f', '', 'g','h','i']
def chunk(list, delim):
ct = list.count(delim)
chunks = [[]] * (ct+1)
for iter in range(ct):
idx = list.index(delim)
chunks[iter] = list[:idx]
list = list[idx+1:]
chunks[ct] = list
return chunks
print chunk(list, '')
</code></pre>
<p>Produces:</p>
<pre><code>[['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
</code></pre>
<p>Which is what I want, but I feel like this is the 'c' way to accomplish this. Is there a more pythonistic construct I should know? </p>
| 1
|
2016-09-09T19:19:19Z
| 39,418,430
|
<p>Here is one way to do it <a href="https://docs.python.org/dev/library/itertools.html#itertools.groupby" rel="nofollow">using <code>itertools.groupby</code></a></p>
<pre><code>[list(v) for k, v in groupby(l, key=lambda x: x!= '') if k]
</code></pre>
<p>Demo:</p>
<pre><code>>>> from itertools import groupby
>>> l = ['a', 'b', 'c', '', 'd', 'e', 'f', '', 'g','h','i']
>>> ch = [list(v) for k, v in groupby(l, key=lambda x: x!= '') if k]
>>> ch
[['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
</code></pre>
| 3
|
2016-09-09T19:23:11Z
|
[
"python"
] |
Proper way to split Python list into list of lists on matching list item delimeter
| 39,418,384
|
<pre><code>list = ['a', 'b', 'c', '', 'd', 'e', 'f', '', 'g','h','i']
def chunk(list, delim):
ct = list.count(delim)
chunks = [[]] * (ct+1)
for iter in range(ct):
idx = list.index(delim)
chunks[iter] = list[:idx]
list = list[idx+1:]
chunks[ct] = list
return chunks
print chunk(list, '')
</code></pre>
<p>Produces:</p>
<pre><code>[['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
</code></pre>
<p>Which is what I want, but I feel like this is the 'c' way to accomplish this. Is there a more pythonistic construct I should know? </p>
| 1
|
2016-09-09T19:19:19Z
| 39,418,454
|
<p>Focusing on the iterability of lists in Python:</p>
<pre class="lang-py prettyprint-override"><code>list = ['a', 'b', 'c', '', 'd', 'e', 'f', '', 'g','h','i']
def chunk(list, delim):
aggregate_list = []
current_list = []
for item in list:
if item == delim:
aggregate_list.append(current_list)
current_list = [];
# You could also do checking here if delim is starting
# index or ending index and act accordingly
else:
current_list.append(item)
return aggregate_list
print chunk(list, '')
</code></pre>
| 1
|
2016-09-09T19:24:40Z
|
[
"python"
] |
Proper way to split Python list into list of lists on matching list item delimeter
| 39,418,384
|
<pre><code>list = ['a', 'b', 'c', '', 'd', 'e', 'f', '', 'g','h','i']
def chunk(list, delim):
ct = list.count(delim)
chunks = [[]] * (ct+1)
for iter in range(ct):
idx = list.index(delim)
chunks[iter] = list[:idx]
list = list[idx+1:]
chunks[ct] = list
return chunks
print chunk(list, '')
</code></pre>
<p>Produces:</p>
<pre><code>[['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
</code></pre>
<p>Which is what I want, but I feel like this is the 'c' way to accomplish this. Is there a more pythonistic construct I should know? </p>
| 1
|
2016-09-09T19:19:19Z
| 39,418,561
|
<p>A neat one-liner:</p>
<pre><code>L = ['a', 'b', 'c', '', 'd', 'e', 'f', '', 'g','h','i']
new_l = [x.split("_") for x in "_".join(L).split("__")]
</code></pre>
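<p>Step by step: joining everything on <code>_</code> turns each empty delimiter into a double <code>__</code>, and splitting twice recovers the groups. A runnable check:</p>

```python
L = ['a', 'b', 'c', '', 'd', 'e', 'f', '', 'g', 'h', 'i']

joined = "_".join(L)         # 'a_b_c__d_e_f__g_h_i'
groups = joined.split("__")  # ['a_b_c', 'd_e_f', 'g_h_i']
new_l = [x.split("_") for x in groups]
print(new_l)
```

<p>Note this trick assumes the items themselves never contain <code>_</code>; pick a join character that cannot occur in the data.</p>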
| 1
|
2016-09-09T19:31:31Z
|
[
"python"
] |
Proper way to split Python list into list of lists on matching list item delimeter
| 39,418,384
|
<pre><code>list = ['a', 'b', 'c', '', 'd', 'e', 'f', '', 'g','h','i']
def chunk(list, delim):
ct = list.count(delim)
chunks = [[]] * (ct+1)
for iter in range(ct):
idx = list.index(delim)
chunks[iter] = list[:idx]
list = list[idx+1:]
chunks[ct] = list
return chunks
print chunk(list, '')
</code></pre>
<p>Produces:</p>
<pre><code>[['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
</code></pre>
<p>Which is what I want, but I feel like this is the 'c' way to accomplish this. Is there a more pythonistic construct I should know? </p>
| 1
|
2016-09-09T19:19:19Z
| 39,418,640
|
<pre><code>lists = ['a','b','c','','d','e','f','','g','h','i']
new_list = [[]]
delim = ''
for i in range(len(lists)):
if lists[i] == delim:
new_list.append([])
else:
new_list[-1].append(lists[i])
new_list
[['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']]
</code></pre>
| -1
|
2016-09-09T19:36:55Z
|
[
"python"
] |
Python: Auto response to the console output
| 39,418,424
|
<p>I am running an OS command in a Python script, and after running for some time the command asks whether I want to continue; it does that many times while it runs. I want to see if there is a way in Python to answer yes every time it waits for user confirmation.</p>
<p>I didn't find much help.</p>
<p>Could someone paste an example?</p>
| 1
|
2016-09-09T19:22:46Z
| 39,419,539
|
<p>Finally I figured it out.</p>
<pre><code>#!/usr/bin/python
import subprocess

with open('test', 'r') as f:
    for i in f.readlines():
        cmd = '/usr/sbin/nsrmm -d -S ' + i
        cmd1 = cmd.split()
        print cmd1
        p = subprocess.Popen(cmd1, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE, stdin=subprocess.PIPE)
        p.communicate('y\n')  # send the answer with a newline and close stdin
</code></pre>
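<p>One caveat with writing to <code>stdin</code> directly: prompts usually expect a trailing newline, and a full pipe can deadlock, which is what <code>communicate()</code> avoids. A self-contained sketch (the child process below is a stand-in for the interactive command in the question):</p>

```python
import subprocess
import sys

# a stand-in child process that asks for confirmation on stdin
child = subprocess.Popen(
    [sys.executable, '-c', 'print(input("continue? "))'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
out, _ = child.communicate('y\n')  # sends the answer and closes stdin
print(out.strip())
```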
| 0
|
2016-09-09T20:52:43Z
|
[
"python"
] |
Counting matching substrings in a string
| 39,418,432
|
<pre><code>s = "abobabobabob"
total = 0
for i in range(len(s)):
if s[i-1 : i+2] == 'bob':
total += 1
print ('times bob occurs is:' + str(total))
</code></pre>
<p>Is there a simpler way to change the <code>if</code> statement? Also, could someone tell me what <code>i-1 : i+2</code> does?</p>
<p>I wrote this code to find the occurrences of "bob" and I am stuck for a while.</p>
| 0
|
2016-09-09T19:23:16Z
| 39,418,619
|
<p>The following code searches for the beginning index of each <code>'bob'</code> substring, and adds that value to an array. The <code>total</code> variable just returns the count of the values in that array. </p>
<p><strong>As a one-liner:</strong></p>
<pre><code>total = 0
s = "abobabobabob"
total = len([i for i in range(len(s)) if s.find('bob', i) == i])
print('times bob occurs is: ' + str(total))
</code></pre>
<p><strong>Prints:</strong></p>
<pre><code>times bob occurs is: 3
</code></pre>
<p><strong><em>--- Here is an alternative if you want a modification to your <code>for</code> loop:</em></strong></p>
<pre><code>total = 0
s = "abobabobabob"
for i in range(len(s)):
if (s.find('bob', i) == i):
total += 1
print('times bob occurs is: ' + str(total))
</code></pre>
<p><strong>Prints:</strong></p>
<pre><code>times bob occurs is: 3
</code></pre>
| 1
|
2016-09-09T19:35:18Z
|
[
"python"
] |
Counting matching substrings in a string
| 39,418,432
|
<pre><code>s = "abobabobabob"
total = 0
for i in range(len(s)):
if s[i-1 : i+2] == 'bob':
total += 1
print ('times bob occurs is:' + str(total))
</code></pre>
<p>Is there a simpler way to change the <code>if</code> statement? Also, could someone tell me what <code>i-1 : i+2</code> does?</p>
<p>I wrote this code to find the occurrences of "bob" and I am stuck for a while.</p>
| 0
|
2016-09-09T19:23:16Z
| 39,418,705
|
<p><code>s[i-1 : i+2] == 'bob'</code> checks whether the slice from the previous index through the current index + 1 spells 'bob'. This can cause an issue, since <code>i</code> begins at 0, and index <code>i-1</code> then refers to the last element of the string<br></p>
<p>try:</p>
<pre><code>s = "abobabobabob"
total = 0
for i in range(1,len(s)):
if s[i-1 : i+2] == 'bob':
total += 1
</code></pre>
<p>There is a better way, in two lines:</p>
<pre><code>s = "abobabobabob"
print sum(s[i:i+3] == 'bob' for i in range(len(s) - 2))
# prints: 3
</code></pre>
| 1
|
2016-09-09T19:41:44Z
|
[
"python"
] |
Counting matching substrings in a string
| 39,418,432
|
<pre><code>s = "abobabobabob"
total = 0
for i in range(len(s)):
if s[i-1 : i+2] == 'bob':
total += 1
print ('times bob occurs is:' + str(total))
</code></pre>
<p>Is there a simpler way to change the <code>if</code> statement? Also, could someone tell me what <code>i-1 : i+2</code> does?</p>
<p>I wrote this code to find the occurrences of "bob" and I am stuck for a while.</p>
| 0
|
2016-09-09T19:23:16Z
| 39,418,763
|
<p>Your if-statement is looking at a subset of s. To sort of answer your other question, here's a simpler approach that changes more than the if-statement:</p>
<p>This is python's regular expression library</p>
<pre><code>import re
</code></pre>
<p>The embedded statement searches for all, non-overlapping instances of 'bob' and returns a list with each match as an element; the outer statement just counts the number of elements in the list</p>
<pre><code>len(re.findall('bob',s))
</code></pre>
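<p>A runnable check, plus the one caveat worth knowing: <code>findall</code> counts non-overlapping matches, so a pattern like <code>bob</code> inside <code>bobob</code> is counted once, where a slice-based scan counts it twice. A zero-width lookahead recovers the overlapping count:</p>

```python
import re

s = "abobabobabob"
print(len(re.findall('bob', s)))  # non-overlapping matches

# overlapping occurrences need a lookahead: each match consumes no characters
print(len(re.findall('(?=bob)', 'bobob')))
```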
| 2
|
2016-09-09T19:46:04Z
|
[
"python"
] |
Counting matching substrings in a string
| 39,418,432
|
<pre><code>s = "abobabobabob"
total = 0
for i in range(len(s)):
if s[i-1 : i+2] == 'bob':
total += 1
print ('times bob occurs is:' + str(total))
</code></pre>
<p>Is there a simpler way to change the <code>if</code> statement? Also, could someone tell me what <code>i-1 : i+2</code> does?</p>
<p>I wrote this code to find the occurrences of "bob" and I am stuck for a while.</p>
| 0
|
2016-09-09T19:23:16Z
| 39,418,927
|
<p>Using <code>enumerate()</code></p>
<pre><code>s = "abobabobabob"
n = len([i for i, w in enumerate(s) if s[i:i+3] == "bob"])
print ('times bob occurs is:', n)
</code></pre>
<p>This checks for every character in string <code>s</code>(by index <code>i</code>) if the character + two characters on the right equals "bob".</p>
<hr>
<p>About your secundary question: </p>
<p>In your example:</p>
<pre><code>s[i-1 : i+2]
</code></pre>
<p>refers to the index (<code>i</code>) of the characters in the string, where <code>s[0]</code> is the first character, while <code>s[i-1 : i+2]</code> is a <em>slice</em> of the string <code>s</code>, from the current character <code>-1</code> (the one on the left) to the current character <code>+2</code> (the second character on the right).</p>
| 1
|
2016-09-09T19:58:24Z
|
[
"python"
] |
Can't open csv file via command prompt on windows
| 39,418,504
|
<p>I'm having trouble with the first argument in the pd.read_table function used for Python via Pandas. If I hardcode the file path of the csv file I want to open and use as a data frame, it works. However, when I receive the file path via a command-line argument and save it into a variable, pd.read_table won't accept the variable. Any idea why?</p>
<p>I'm using anaconda 2.0.1 on Windows</p>
| -1
|
2016-09-09T19:27:49Z
| 39,418,661
|
<p>I tried the same and it worked for me (I am using Ubuntu, but that should not matter). I did the following; please cross-check:</p>
<p>test.py</p>
<pre><code>import sys
import pandas as pd
pd.read_table(sys.argv[1])
</code></pre>
<p>Then called the function like :</p>
<pre><code>test.py /home/user/test.csv
</code></pre>
<p>Hope this was helpful; if you are doing something different, post a snippet to be more clear.</p>
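<p>One Windows-specific gotcha worth ruling out: a path containing spaces must be quoted on the command line, or <code>sys.argv[1]</code> only receives the first token (the path below is illustrative):</p>

```shell
:: without the quotes, sys.argv[1] would be just "C:\Users\me\My"
python test.py "C:\Users\me\My Documents\test.csv"
```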
| 3
|
2016-09-09T19:38:21Z
|
[
"python",
"windows",
"pandas",
"command-line"
] |
Getting attribute value using ansible filters
| 39,418,600
|
<p>I am new to ansible. I am trying to get my attribute value from JSON data. Here is my JSON data:</p>
<pre><code>{
"user1": "{\n \"data\":[\n {\n \"secure\": [\n {\n \"key\": \"-----BEGIN KEY-----\nMIIEowIBAAKCAQEAgOh+Afb0oQEnvHifHuzBwhCP3\n-----END KEY-----\"\n }\n ],\n \"owner\": \"shallake\",\n \"position\": \"gl\"\n }\n ]\n}"
}
</code></pre>
<p>I had invalid JSON data, so I converted it into valid JSON using <code>to_json</code> and <code>from_json</code>. The above JSON data is the result that I get.</p>
<p>code:</p>
<pre><code> - set_fact:
user: "{{ lookup('file','filepath/myfile.json') | to_json }}"
- set_fact:
user1: "{{ user | from_json}}"
- set_fact:
user3: "{{ item }}"
with_items: "{{ user1['data'] | map(attribute='position') | list }}"
</code></pre>
<p>In a third set_fact, I am trying to get position attribute value. But it shows an error like this:</p>
<blockquote>
<p>the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'item' is undefined</p>
</blockquote>
<p>"\"{\n \\"data\\":[\n {\n \\"secure\\": [\n {\n \\"key\\": \\"-----BEGINPRIVATE KEY-----\nMIIEowIBAAKCAQEAgOh+Afb0oQEnvHifHuzBwl+Tiu8LXoJXb/ii/eh\ngYEP3\n-----END PRIVATE KEY-----\\"\n }\n ],\n \\"owner\\": \\"shalloke\\",\n \\"position\\": \\"gl\\"\n }\n ]\n}\n\n\""</p>
<p>So how can I get a position value from the above JSON data result using ansible loop?</p>
| 1
|
2016-09-09T19:34:24Z
| 39,423,604
|
<p>Would you please check if the key in <code>filepath/myfile.json</code> spans multiple lines? We need to escape line breaks in JSON. Use <code>\n</code> to replace line breaks.</p>
<p><code>user1</code> is supposed to be a map parsed from JSON. However, the JSON in <code>filepath/myfile.json</code> might be invalid, so the second <code>set_fact</code> could not parse it. Thus <code>user1</code> is just a string, and it doesn't contain an entry <code>data</code>.</p>
<pre><code>{
"data":[
{
"secure": [
{
"key": "-----BEGIN KEY-----\nMIIEowIBAAKCAQEAgOh+Afb0oQEnvHifHuzBwhCP3\n-----END KEY-----"
}
],
"owner": "shallake",
"position": "gl"
}
]
}
</code></pre>
<p><a href="http://stackoverflow.com/questions/42068/how-do-i-handle-newlines-in-json">How do I handle newlines in JSON?</a></p>
| 0
|
2016-09-10T07:14:33Z
|
[
"python",
"json",
"ansible",
"ansible-playbook",
"ansible-2.x"
] |
Extracting text from multiple powerpoint files using python
| 39,418,620
|
<p>I am trying to find a way to look in a folder and search the contents of all of the powerpoint documents within that folder for specific strings, preferably using Python. When those strings are found, I want to report out the text after that string as well as what document it was found in. I would like to compile the information and report it in a CSV file. </p>
<p>So far I've only come across the olefil package, <a href="https://bitbucket.org/decalage/olefileio_pl/wiki/Home" rel="nofollow">https://bitbucket.org/decalage/olefileio_pl/wiki/Home</a>. This provides all of the text contained in a specific document, which is not what I am looking to do. Please help.</p>
| -1
|
2016-09-09T19:35:27Z
| 39,430,554
|
<p><code>python-pptx</code> can be used to do what you propose. Just at a high level, you would do something like this (not fully working code, just an idea of the overall approach):</p>
<pre><code>import glob
from pptx import Presentation

for pptx_filename in glob.glob('path/to/folder/*.pptx'):
    prs = Presentation(pptx_filename)
    for slide in prs.slides:
        for shape in slide.shapes:
            if shape.has_text_frame:
                print(shape.text)
</code></pre>
<p>You'd need to add the bits about searching shape text for key strings and adding them to a CSV file or whatever, but this general approach should work just fine. I'll leave it to you to work out the finer points :)</p>
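<p>The searching-and-reporting half doesn't need <code>python-pptx</code> at all. Here is a stdlib-only sketch of that part, assuming the extracted text has already been collected as (filename, text) pairs; the <code>search_snippets</code> name and the sample data are mine, not from any library:</p>

```python
import csv
import io

def search_snippets(extracted, key):
    """Yield (filename, text-after-key) for every text that contains key."""
    for filename, text in extracted:
        idx = text.find(key)
        if idx != -1:
            yield filename, text[idx + len(key):].strip()

# Hypothetical shape texts standing in for real python-pptx output.
extracted = [
    ("deck1.pptx", "Owner: Alice"),
    ("deck2.pptx", "Quarterly results"),
    ("deck3.pptx", "Owner: Bob"),
]

buf = io.StringIO()  # use open('report.csv', 'w', newline='') for a real file
writer = csv.writer(buf)
writer.writerow(["file", "value"])
for row in search_snippets(extracted, "Owner:"):
    writer.writerow(row)
print(buf.getvalue())
```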
| 0
|
2016-09-10T21:04:27Z
|
[
"python",
"python-2.7",
"powerpoint"
] |
Using HTML to implement Python User Interface
| 39,418,639
|
<p>I've just started learning a bit of Python and I'm currently trying to implement a Python UI through HTML. Is there anything in vanilla Python that would allow for this, similar to how you can create UI's with Java and XML with JFX or will I have to use a framework such as Django? </p>
<p>I'm reluctant to use Django as there are many features that I do not need</p>
<p>Thanks,<br>
Michael</p>
| 0
|
2016-09-09T19:36:54Z
| 39,419,544
|
<p>In vanilla Python, <a href="https://docs.python.org/3/library/wsgiref.html" rel="nofollow">wsgiref</a> is very helpful for building the server side of web applications (possibly together with <a href="https://docs.python.org/3/library/stdtypes.html#str.format" rel="nofollow">str.format</a> or <a href="https://docs.python.org/3/library/string.html#template-strings" rel="nofollow">string.Template</a> and/or <a href="https://docs.python.org/3/library/json.html" rel="nofollow">json</a>), but if you want more direct communication I would suggest <a href="https://docs.python.org/3/library/xmlrpc.html" rel="nofollow">XML-RPC</a> (there are good JS clients out there).</p>
<p>It is also possible to execute Python scripts right in your website with <a href="http://www.brython.info/" rel="nofollow">Brython</a> or (with strong constraints) <a href="http://ironpython.net/" rel="nofollow">IronPython</a>.</p>
<p>For Windows you can build <a href="https://en.wikipedia.org/wiki/HTML_Application" rel="nofollow">HTML Applications</a> with <a href="http://ironpython.net/" rel="nofollow">IronPython</a> (I have not tried this, but in theory it should work) or <a href="http://www.brython.info/" rel="nofollow">Brython</a> (if you don't want to require the user to have IronPython installed).</p>
<p>You can also use <a href="http://pyjs.org/" rel="nofollow">Pyjs</a> to build applications, though while it compiles to HTML and JavaScript, you don't see much of either directly.</p>
<p>There are HTML widgets in some UI libraries, for example in <a href="https://wxpython.org/" rel="nofollow">wxPython</a> (I am quite sure you will find them in many other libraries too).</p>
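<p>As a sketch of the <code>wsgiref</code> route: a WSGI application is just a callable that returns the page body, so the UI can be plain HTML produced by vanilla Python (the page content here is made up):</p>

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # A real UI would dispatch on environ['PATH_INFO'].
    body = b"<html><body><h1>Hello from vanilla Python</h1></body></html>"
    start_response("200 OK", [("Content-Type", "text/html; charset=utf-8"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app directly, without starting a server.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

result = b"".join(app(environ, start_response))
print(captured["status"])  # 200 OK
print(b"Hello" in result)  # True
```

<p>To actually serve it: <code>from wsgiref.simple_server import make_server; make_server('', 8000, app).serve_forever()</code>.</p>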
| 0
|
2016-09-09T20:53:26Z
|
[
"python",
"html",
"user-interface"
] |
Regex to find continuous characters in the word and remove the word
| 39,418,648
|
<p>I want to find whether a particular character occurs consecutively in a word of the string, or whether a word contains only numbers, and remove those words as well. For example,</p>
<pre><code>df
All aaaaaab the best 8965
US issssss is 123 good
qqqq qwerty 1 poiks
lkjh ggggqwe 1234 aqwe iphone5224s
</code></pre>
<p>I want to check for two conditions: the first checks for a character repeating more than 3 times consecutively, and the second checks whether a word contains only numbers. I want to remove a word only when it contains nothing but numbers, or when a character occurs more than 3 times consecutively in it.</p>
<p>The following should be the output:</p>
<pre><code>df
All the best
US is good
qwerty poiks
lkjh aqwe iphone5224s
</code></pre>
<p>The following are my attempts:</p>
<p><code>re.sub(r'\w[0-9]\w*', '', df[i])</code> for numbers, but this does not remove single-character numbers. For the repeated characters I tried <code>re.sub(r'\w[a-z A-Z]+[a-z A-Z]+[a-z A-Z]+[a-z A-Z]\w*', '', df[i])</code>, but this removes every word instead of only those with repeated letters.</p>
<p>Can anybody help me in solving these problems?</p>
| 1
|
2016-09-09T19:37:38Z
| 39,418,787
|
<p>I would suggest</p>
<pre><code>\s*\b(?=[a-zA-Z\d]*([a-zA-Z\d])\1{3}|\d+\b)[a-zA-Z\d]+
</code></pre>
<p>See the <a href="https://regex101.com/r/qA0aS0/1" rel="nofollow">regex demo</a></p>
<p>Only alphanumeric words are matched with this pattern:</p>
<ul>
<li><code>\s*</code> - zero or more whitespaces</li>
<li><code>\b</code> - word boundary</li>
<li><code>(?=[a-zA-Z\d]*([a-zA-Z\d])\1{3}|\d+\b)</code> - there must be at least 4 repeated consecutive letters or digits in the word OR the whole word must consist of only digits</li>
<li><code>[a-zA-Z\d]+</code> - a word with 1+ letters or digits.</li>
</ul>
<p><a href="http://ideone.com/5OiEtS" rel="nofollow">Python demo:</a></p>
<pre><code>import re
p = re.compile(r'\s*\b(?=[a-z\d]*([a-z\d])\1{3}|\d+\b)[a-z\d]+', re.IGNORECASE)
s = "df\nAll aaaaaab the best 8965\nUS issssss is 123 good \nqqqq qwerty 1 poiks\nlkjh ggggqwe 1234 aqwe iphone5224s"
strs = s.split("\n") # Split to test lines individually
print([p.sub("", x).strip() for x in strs])
# => ['df', 'All the best', 'US is good', 'qwerty poiks', 'lkjh aqwe iphone5224s']
</code></pre>
<p>Note that <code>strip()</code> will remove remaining whitespaces at the start of the string.</p>
<p>A similar solution in R with a TRE regex:</p>
<pre><code>x <- c("df", "All aaaaaab the best 8965", "US issssss is 123 good ", "qqqq qwerty 1 poiks", "lkjh ggggqwe 1234 aqwe iphone5224s")
p <- " *\\b(?:[[:alnum:]]*([[:alnum:]])\\1{3}[[:alnum:]]*|[0-9]+)\\b"
gsub(p, "", x)
</code></pre>
<p>See a <a href="http://ideone.com/k51nyu" rel="nofollow">demo</a></p>
<p><em>Pattern details</em> and <a href="https://regex101.com/r/sL0wE8/1" rel="nofollow">demo</a>:</p>
<ul>
<li><code>\s*</code> - 0+ whitespaces</li>
<li><code>\b</code> - a leading word boundary</li>
<li><code>(?:[[:alnum:]]*([[:alnum:]])\1{3}[[:alnum:]]*|[0-9]+)</code> - either of the 2 alternatives:
<ul>
<li><code>[[:alnum:]]*([[:alnum:]])\1{3}[[:alnum:]]*</code> - 0+ alphanumerics followed with the same 4 alphanumeric chars, followed with 0+ alphanumerics</li>
<li><code>|</code> - or</li>
<li><code>[0-9]+</code> - 1 or more digits</li>
</ul></li>
<li><code>\b</code> - a trailing word boundary</li>
</ul>
<p>UPDATE:</p>
<p>To also add an option to remove 1-letter words you may use</p>
<ol>
<li><strong>R</strong> (add <code>[[:alpha:]]|</code> to the alternation group): <code>\s*\b(?:[[:alpha:]]|[[:alnum:]]*([[:alnum:]])\1{3}[[:alnum:]]*|[0-9]+)\b</code> (see <a href="https://regex101.com/r/sL0wE8/3" rel="nofollow">demo</a>)</li>
<li><strong>Python</strong> lookaround based regex (<a href="https://regex101.com/r/qA0aS0/2" rel="nofollow">add</a> <code>[a-zA-Z]\b|</code> to the lookahead group): <code>\s*\b(?=[a-zA-Z]\b|\d+\b|[a-zA-Z\d]*([a-zA-Z\d])\1{3})[a-zA-Z\d]+</code></li>
</ol>
| 0
|
2016-09-09T19:48:15Z
|
[
"python",
"regex",
"python-2.7",
"python-3.x"
] |
Regex to find continuous characters in the word and remove the word
| 39,418,648
|
<p>I want to find whether a particular character occurs consecutively in a word of the string, or whether a word contains only numbers, and remove those words as well. For example,</p>
<pre><code>df
All aaaaaab the best 8965
US issssss is 123 good
qqqq qwerty 1 poiks
lkjh ggggqwe 1234 aqwe iphone5224s
</code></pre>
<p>I want to check for two conditions: the first checks for a character repeating more than 3 times consecutively, and the second checks whether a word contains only numbers. I want to remove a word only when it contains nothing but numbers, or when a character occurs more than 3 times consecutively in it.</p>
<p>The following should be the output:</p>
<pre><code>df
All the best
US is good
qwerty poiks
lkjh aqwe iphone5224s
</code></pre>
<p>The following are my attempts:</p>
<p><code>re.sub(r'\w[0-9]\w*', '', df[i])</code> for numbers, but this does not remove single-character numbers. For the repeated characters I tried <code>re.sub(r'\w[a-z A-Z]+[a-z A-Z]+[a-z A-Z]+[a-z A-Z]\w*', '', df[i])</code>, but this removes every word instead of only those with repeated letters.</p>
<p>Can anybody help me in solving these problems?</p>
| 1
|
2016-09-09T19:37:38Z
| 39,418,828
|
<p>Numbers are easy:</p>
<pre><code>re.sub(r'\d+', '', s)
</code></pre>
<p>If you want to remove words where the same letter appears twice in a row, you can use capturing groups (see <a href="https://docs.python.org/3/library/re.html" rel="nofollow">https://docs.python.org/3/library/re.html</a>); for the question's "more than 3 times in a row", use <code>\w*(\w)\1{3}\w*</code> instead:</p>
<pre><code>re.sub(r'\w*(\w)\1\w*', '', s)
</code></pre>
<p>Putting those together:</p>
<pre><code>re.sub(r'\d+|\w*(\w)\1\w*', '', s)
</code></pre>
<p>For example:</p>
<pre><code>>>> re.sub(r'\d+|\w*(\w)\1\w*', '', 'abc abbc 123 a1')
'abc   a'
</code></pre>
<p>You may need to clean up spaces afterwards with something like this:</p>
<pre><code>>>> re.sub(r' +', ' ', 'abc   a')
'abc a'
</code></pre>
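<p>For reference, here is a sketch that adapts the pattern to the question's "more than 3 times in a row" rule and applies both substitutions plus the space cleanup in one function:</p>

```python
import re

def clean(line):
    # Drop all-digit words and words containing the same character
    # 4+ times in a row, then collapse the leftover runs of spaces.
    line = re.sub(r'\b\w*(\w)\1{3}\w*\b|\b\d+\b', '', line)
    return re.sub(r' +', ' ', line).strip()

print(clean('All aaaaaab the best 8965'))           # All the best
print(clean('lkjh ggggqwe 1234 aqwe iphone5224s'))  # lkjh aqwe iphone5224s
```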
| 0
|
2016-09-09T19:51:17Z
|
[
"python",
"regex",
"python-2.7",
"python-3.x"
] |
Bulk insert into Vertica using Python using Uber's vertica-python package
| 39,418,808
|
<p><strong>Question 1 of 2</strong></p>
<p>I'm trying to import data from CSV file to Vertica using Python, using Uber's vertica-python package. The problem is that whitespace-only data elements are being loaded into Vertica as NULLs; I want only empty data elements to be loaded in as NULLs, and non-empty whitespace data elements to be loaded in as whitespace instead.</p>
<p>For example, the following two rows of a CSV file are both loaded into the database as ('1','abc',NULL,NULL), whereas I want the second one to be loaded as ('1','abc',' ',NULL).</p>
<pre><code>1,abc,,^M
1,abc, ,^M
</code></pre>
<p>Here is the code:</p>
<pre><code># import vertica-python package by Uber
# source: https://github.com/uber/vertica-python
import vertica_python
# write CSV file
filename = 'temp.csv'
data = <list of lists, e.g. [[1,'abc',None,'def'],[2,'b','c','d']]>
with open(filename, 'w', newline='', encoding='utf-8') as f:
writer = csv.writer(f, escapechar='\\', doublequote=False)
writer.writerows(data)
# define query
q = "copy <table_name> (<column_names>) from stdin "\
"delimiter ',' "\
"enclosed by '\"' "\
"record terminator E'\\r' "
# copy data
conn = vertica_python.connect( host=<host>,
port=<port>,
user=<user>,
password=<password>,
database=<database>,
charset='utf8' )
cur = conn.cursor()
with open(filename, 'rb') as f:
cur.copy(q, f)
conn.close()
</code></pre>
<p><strong>Question 2 of 2</strong></p>
<p>Are there any other issues (e.g. character encoding) I have to watch out for using this method of loading data into Vertica? Are there any other mistakes in the code? I'm not 100% convinced it will work on all platforms (currently running on Linux; there may be record terminator issues on other platforms, for example). Any recommendations to make this code more robust would be greatly appreciated.</p>
<p>In addition, are there alternative methods of bulk inserting data into Vertica from Python, such as loading objects directly from Python instead of having to write them to CSV files first, without sacrificing speed? The data volume is large and the insert job as is takes a couple of hours to run.</p>
<p>Thank you in advance for any help you can provide!</p>
| 1
|
2016-09-09T19:49:15Z
| 39,418,934
|
<p>The copy statement you have should perform the way you want with regards to the spaces. I tested it using a very similar <code>COPY</code>. </p>
<p>Edit: I missed what you were really asking with the copy, I'll leave this part in because it might still be useful for some people: </p>
<p>To fix the whitespace, you can change your copy statement: </p>
<pre><code>copy <table_name> (FIELD1, FIELD2, MYFIELD3 AS FILLER VARCHAR(50), FIELD4, FIELD3 AS NVL(MYFIELD3,'') ) from stdin
</code></pre>
<p>By using filler, it will parse that into something like a variable which you can then assign to your actual table field using <code>AS</code> later in the copy.</p>
<p>As for any gotchas... I do what you have here on Solaris often. The one thing I noticed is that you are setting the record terminator; I'm not sure if that is really something you need to do, depending on the environment. I've never had to set it switching between Linux, Windows and Solaris. </p>
<p>Also, one hint, this will return a resultset that will tell you how many rows were loaded. Do a <code>fetchone()</code> and print it out and you'll see it. </p>
<p>The only other thing I can recommend might be to use reject tables in case any rows reject. </p>
<p>You mentioned that it is a large job. You may need to increase your read timeout by adding <code>'read_timeout': 7200,</code> to your connection or more. I'm not sure if None would disable the read timeout or not.</p>
<p>As for a faster way... if the file is accessible directly on the vertica node itself, you could just reference the file directly in the copy instead of doing a <code>copy from stdin</code> and have the daemon load it directly. It's much faster and has a number of optimizations that you can do. You could then use apportioned load, and if you have multiple files to load you can just reference them all together in a list of files. </p>
<p>It's kind of a long topic, though. If you have any specific questions let me know.</p>
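<p>For what it's worth, the Python side already distinguishes the two cases; it's the <code>COPY</code> parsing that conflates them. A quick stdlib check of what <code>csv.writer</code> emits for <code>None</code> versus a single space, using the question's writer settings:</p>

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, escapechar='\\', doublequote=False)
writer.writerow([1, 'abc', None, 'def'])  # None serializes as an empty field
writer.writerow([1, 'abc', ' ', 'def'])   # a single space survives as-is
print(buf.getvalue() == '1,abc,,def\r\n1,abc, ,def\r\n')  # True
```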
| 1
|
2016-09-09T19:59:05Z
|
[
"python",
"vertica"
] |
How to make a reddit bot invoke a username using PRAW
| 39,418,819
|
<p>I have been playing with PRAW to build reddit bots. While it is easy to build a bot that auto responds generic messages to triggered keywords, I want to build something that is a bit more interactive.</p>
<p>I am trying to build a reddit bot that invokes the username of the redditor it is replying to. E.g. if redditor /u/ironman666 posts "good morning", I want the bot to auto-respond "good morning to you too! /u/ironman666". How can I make this work? Thanks!</p>
<p>Sample code: where and how do I invoke the triggering user's name?</p>
<pre><code>import praw
import time
from praw.helpers import comment_stream
r = praw.Reddit("response agent")
r.login()
target_text = "Good Morning!"
response_text = "Good Morning to you too! #redditor name go here "
processed = []
while True:
for c in comment_stream(r, 'all'):
if target_text == c.body.lower() and c.id not in processed:
print('Wiseau bot activated! :@')
c.reply(response_text)
processed.append(c.id) #then push the response
time.sleep(2)
</code></pre>
| -1
|
2016-09-09T19:50:10Z
| 39,419,165
|
<p>If you <a href="http://praw.readthedocs.io/en/stable/pages/comment_parsing.html?highlight=author#deleted-comments" rel="nofollow">read the docs</a> you'll see that the comment has an <code>author</code> attribute (unless it was deleted), so you should be able to do:</p>
<pre><code>response_text = 'Good morning to you too, {}!'
...
c.reply(response_text.format(c.author))
</code></pre>
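<p><code>str.format</code> calls <code>str()</code> on its argument, so passing the comment's <code>author</code> object straight in works as long as its string form is the username. A stand-alone sketch with a made-up stand-in class (PRAW is not needed to see the mechanism):</p>

```python
response_text = 'Good morning to you too, /u/{}!'

class FakeAuthor:
    """Stand-in for a PRAW author object, whose str() is the username."""
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

author = FakeAuthor('ironman666')
print(response_text.format(author))  # Good morning to you too, /u/ironman666!
```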
| 0
|
2016-09-09T20:20:50Z
|
[
"python",
"bots",
"reddit",
"praw"
] |
Querying SQLAlchemy User model by password never matches user
| 39,418,832
|
<p>I want users in my Flask app to be able to change their email by providing a new email and their current password. However, when I try to look up the user by the password they entered with <code>User.query.filter_by(password=form.password.data)</code>, it never finds the user. How can I query the user so I can change the email while verifying the password?</p>
<pre><code>@app.route('/changeemail', methods=['GET', 'POST'])
def change_email():
form = ChangeEmailForm(request.form)
if form.validate_on_submit():
user = User.query.filter_by(password=form.password.data).first()
if user and bcrypt.check_password_hash(user.password,form.password.data):
user.email = form.email.data
db.session.commit()
return redirect(url_for("user.confirmed"))
return render_template('user/changeemail.html',form=form)
class ChangeEmailForm(Form):
email = TextField('email', validators=[DataRequired()])
password = PasswordField('password', validators=[DataRequired()])
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String, unique=True, nullable=False)
password = db.Column(db.String, nullable=False)
</code></pre>
| 1
|
2016-09-09T19:51:45Z
| 39,419,149
|
<p>The <em>whole point</em> of storing the hashed password is so that you <em>never</em> store the raw password. Query for the user that you're editing, then verify the password.</p>
<pre><code>@app.route('/<int:id>/change-email', methods=['GET', 'POST'])
def change_email(id):
user = User.query.get_or_404(id)
if user.check_password(form.password.data):
...
</code></pre>
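<p>The same store-the-hash/verify pattern can be sketched with nothing but the standard library (<code>hashlib</code>'s PBKDF2 standing in for Flask-Bcrypt here; all names are illustrative, not Flask API):</p>

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return salt + digest; store this, never the raw password."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
    return salt + digest

def check_password(stored, password):
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
    return hmac.compare_digest(digest, candidate)  # constant-time compare

stored = hash_password('s3cret')
print(check_password(stored, 's3cret'))  # True
print(check_password(stored, 'wrong'))   # False
```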
| 4
|
2016-09-09T20:19:53Z
|
[
"python",
"flask",
"sqlalchemy"
] |
Python 3 import hooks
| 39,418,845
|
<p>I'm trying to implement an "import hook" in Python 3. The hook is supposed to add an attribute to every class that is imported. (Not really <em>every</em> class, but for the sake of simplifying the question, let's assume so.)</p>
<p>I have a loader defined as follows:</p>
<pre><code>import sys
class ConfigurableImports(object):
def find_module(self, fullname, path):
return self
def create_module(self, spec):
# ???
def exec_module(self, module):
# ???
sys.meta_path = [ConfigurableImports()]
</code></pre>
<p>The documentation states that as of 3.6, loaders will have to implement both <code>create_module</code> and <code>exec_module</code>. However, the documentation also has little indication what one should do to implement these, and no examples. My use case is very simple because I'm only loading Python modules and the behavior of the loader is supposed to be almost exactly the same as the default behavior.</p>
<p>If I could, I'd just use <code>importlib.import_module</code> and then modify the module contents accordingly; however, since importlib leverages the import hook, I get an infinite recursion.</p>
<p>EDIT: I've also tried using the <code>imp</code> module's <code>load_module</code>, but this is deprecated.</p>
<p>Is there any easy way to implement this functionality with import hooks, or am I going about this the wrong way?</p>
| 2
|
2016-09-09T19:52:34Z
| 39,421,393
|
<p>Imho, if you only need to alter the module, that is, play with it after it has been found and loaded, there's no need to actually create a full hook that finds, loads and returns the module; just patch <code>__import__</code>.</p>
<p>This can easily be done in a few lines:</p>
<pre><code>import builtins
from inspect import getmembers, isclass
old_imp = builtins.__import__
def add_attr(mod):
for name, val in getmembers(mod):
if isclass(val):
setattr(val, 'a', 10)
def custom_import(*args, **kwargs):
m = old_imp(*args, **kwargs)
add_attr(m)
return m
builtins.__import__ = custom_import
</code></pre>
<p>Here, <code>__import__</code> is replaced by your custom import that calls the original <code>__import__</code> to get the loaded module and then calls a function <code>add_attr</code> that does the actual modification of the classes in a module (with <code>getmembers</code> and <code>isclass</code> from <code>inspect</code>) before returning the module.</p>
<p>Of course, this is set up so that the patch takes effect as soon as you <code>import</code> the script. </p>
<p>You can, and probably should, create auxiliary functions that restore the original import and apply the custom one again as needed, i.e. things like:</p>
<pre><code>def revert(): builtins.__import__ = old_imp
def apply(): builtins.__import__ = custom_import
</code></pre>
<p>A context manager might also be a cool addition.</p>
<p>So you can call them whenever you need to switch the custom import behavior on or off.</p>
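<p>A sketch of that context manager, reusing the <code>custom_import</code> name from the snippet above:</p>

```python
import builtins
from contextlib import contextmanager

@contextmanager
def patched_import(new_import):
    """Temporarily swap builtins.__import__, restoring it on exit."""
    original = builtins.__import__
    builtins.__import__ = new_import
    try:
        yield
    finally:
        builtins.__import__ = original

# Usage with the custom_import defined above:
#     with patched_import(custom_import):
#         import some_module   # classes get the extra attribute
#     # builtins.__import__ is back to normal here
```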
| 1
|
2016-09-10T00:35:32Z
|
[
"python",
"python-3.x"
] |
Failed execute_child with while running google backend for speech_recognition
| 39,418,846
|
<p>I have a problem executing the Speech Recognition example in Python. After I executed the command <strong>python -m speech_recognition</strong>, I got the following result:</p>
<hr>
<pre><code>A moment of silence, please...
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.surround71
ALSA lib setup.c:548:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib setup.c:548:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib setup.c:548:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
Set minimum energy threshold to 50.1102959507
Say something!
ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.surround71
ALSA lib setup.c:548:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib setup.c:548:(add_elem) Cannot obtain info for CTL elem (MIXER,'IEC958 Playback Default',0,0,0): No such file or directory
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.hdmi
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.modem
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline
Got it! Now to recognize it...
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/local/lib/python2.7/dist-packages/speech_recognition /__main__.py", line 16, in <module>
value = r.recognize_google(audio)
File "/usr/local/lib/python2.7/dist-packages/speech_recognition/__init__.py", line 642, in recognize_google
convert_width = 2 # audio samples must be 16-bit
File "/usr/local/lib/python2.7/dist-packages/speech_recognition/__init__.py", line 385, in get_flac_data
], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
</code></pre>
<hr>
<p>I am working with Debian Jessie, and I previously installed the following programs:</p>
<pre><code>-Python 2.7.9.
-PyAudio 1.9.
-Jack audio connection kit.
</code></pre>
<p>Kind regards.</p>
| 0
|
2016-09-09T19:52:45Z
| 39,420,370
|
<p>You need to install flac:</p>
<blockquote>
<p>FLAC encoder (required only if the system is not x86-based
Windows/Linux/OS X)</p>
</blockquote>
<p>See the documentation <a href="https://github.com/Uberi/speech_recognition#flac-for-some-systems" rel="nofollow">https://github.com/Uberi/speech_recognition#flac-for-some-systems</a></p>
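<p>You can also fail fast with a readable message by checking for the binary up front (the helper name here is mine; on Python 2.7 use <code>distutils.spawn.find_executable</code> instead of <code>shutil.which</code>, which is 3.3+):</p>

```python
import shutil

def require_command(name):
    """Return the full path of `name`, or raise a readable error."""
    path = shutil.which(name)
    if path is None:
        raise RuntimeError(
            "%r not found on PATH -- install it first "
            "(e.g. `apt-get install flac`)" % name)
    return path

try:
    require_command('flac')
    print('flac found')
except RuntimeError as exc:
    print(exc)
```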
| 0
|
2016-09-09T22:09:16Z
|
[
"python",
"speech-recognition"
] |
Django - how do I clone object without applying clone changes to source object
| 39,418,865
|
<p>Best described by example:</p>
<p>View:</p>
<pre><code>def my_view(request):
obj_old = Inventories.objects.get(id = source_id)
obj_new = obj_old
obj_old.some_field = 0
obj_old.save()
obj_new.some_field = 1
obj_new.id = None
obj_new.save()
</code></pre>
<p>The problem is that the changes I make to obj_new are also applied to <code>obj_old</code> so that the value of <code>some_field</code> is 1 for both <code>obj_old</code> and <code>obj_new</code>. Any ideas how to fix this ?</p>
| 0
|
2016-09-09T19:53:44Z
| 39,418,930
|
<p>You should make a <em>copy</em> of your object, and not make them equal. </p>
<p>To make a copy you can use the copy module</p>
<pre><code>import copy
obj_new = copy.deepcopy(obj_old)
</code></pre>
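<p>The underlying issue is plain Python name binding rather than anything Django-specific: <code>=</code> never copies an object. A minimal stand-alone sketch (the <code>Inventories</code> class here is just a stand-in for the model):</p>

```python
import copy

class Inventories:
    def __init__(self, some_field):
        self.some_field = some_field

obj_old = Inventories(some_field=0)

alias = obj_old            # same object under two names
alias.some_field = 1
print(obj_old.some_field)  # 1 -- the "copy" was only an alias

obj_new = copy.deepcopy(obj_old)  # a genuinely independent object
obj_new.some_field = 2
print(obj_old.some_field)  # still 1
```

<p>(In Django specifically, setting <code>pk</code>/<code>id</code> to <code>None</code> on the copy before <code>save()</code>, as the question already does, then makes it insert as a new row.)</p>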
| 2
|
2016-09-09T19:58:29Z
|
[
"python",
"django"
] |
How to recode and count efficiently
| 39,418,883
|
<p>I have a large csv with three strings per row in this form:</p>
<pre><code>a,c,d
c,a,e
f,g,f
a,c,b
c,a,d
b,f,s
c,a,c
</code></pre>
<p>I read in the first two columns, recode the strings to integers, and then remove duplicates, counting how many copies of each row there were, as follows:</p>
<pre><code>import pandas as pd
df = pd.read_csv("test.csv", usecols=[0,1], prefix="ID_", header=None)
letters = set(df.values.flat)
df.replace(to_replace=letters, value=range(len(letters)), inplace=True)
df1 = df.groupby(['ID_0', 'ID_1']).size().rename('count').reset_index()
print df1
</code></pre>
<p>This gives:</p>
<pre><code> ID_0 ID_1 count
0 0 1 2
1 1 0 3
2 2 4 1
3 4 3 1
</code></pre>
<p>which is exactly what I need.</p>
<p>However as my data is large I would like to make two improvements.</p>
<ul>
<li>How can I do the groupby first and then recode, instead of the other way round? The problem is that I can't do <code>df1[['ID_0','ID_1']].replace(to_replace=letters, value=range(len(letters)), inplace=True)</code>. This gives the error "A value is trying to be set on a copy of a slice from a DataFrame".</li>
<li>How can I avoid creating <code>df1</code>? That is, do the whole thing in place.</li>
</ul>
| 0
|
2016-09-09T19:55:20Z
| 39,418,952
|
<p><strong><em>New Answer</em></strong></p>
<pre><code>unq = np.unique(df)
mapping = pd.Series(np.arange(unq.size), unq)
df.stack().map(mapping).unstack() \
.groupby(df.columns.tolist()).size().reset_index(name='count')
</code></pre>
<p><a href="http://i.stack.imgur.com/VSSrX.png" rel="nofollow"><img src="http://i.stack.imgur.com/VSSrX.png" alt="enter image description here"></a></p>
<p><strong><em>Old Answer</em></strong></p>
<pre><code>df.stack().rank(method='dense').astype(int).unstack() \
.groupby(df.columns.tolist()).size().reset_index(name='count')
</code></pre>
<p><a href="http://i.stack.imgur.com/5TDXZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/5TDXZ.png" alt="enter image description here"></a></p>
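<p>For intuition (and for checking expected output on the question's sample), the same recode-then-count idea can be sketched without pandas, using only the stdlib:</p>

```python
from collections import Counter

rows = [('a', 'c'), ('c', 'a'), ('f', 'g'), ('a', 'c'),
        ('c', 'a'), ('b', 'f'), ('c', 'a')]

# Dense recode: map each distinct value to its rank among the sorted uniques.
uniq = sorted({v for row in rows for v in row})
code = {v: i for i, v in enumerate(uniq)}

counts = Counter((code[a], code[b]) for a, b in rows)
for (a, b), n in sorted(counts.items()):
    print(a, b, n)
# 0 2 2
# 1 3 1
# 2 0 3
# 3 4 1
```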
| 2
|
2016-09-09T20:00:30Z
|
[
"python",
"pandas"
] |
How to recode and count efficiently
| 39,418,883
|
<p>I have a large csv with three strings per row in this form:</p>
<pre><code>a,c,d
c,a,e
f,g,f
a,c,b
c,a,d
b,f,s
c,a,c
</code></pre>
<p>I read in the first two columns, recode the strings to integers, and then remove duplicates, counting how many copies of each row there were, as follows:</p>
<pre><code>import pandas as pd
df = pd.read_csv("test.csv", usecols=[0,1], prefix="ID_", header=None)
letters = set(df.values.flat)
df.replace(to_replace=letters, value=range(len(letters)), inplace=True)
df1 = df.groupby(['ID_0', 'ID_1']).size().rename('count').reset_index()
print df1
</code></pre>
<p>This gives:</p>
<pre><code> ID_0 ID_1 count
0 0 1 2
1 1 0 3
2 2 4 1
3 4 3 1
</code></pre>
<p>which is exactly what I need.</p>
<p>However as my data is large I would like to make two improvements.</p>
<ul>
<li>How can I do the groupby first and then recode, instead of the other way round? The problem is that I can't do <code>df1[['ID_0','ID_1']].replace(to_replace=letters, value=range(len(letters)), inplace=True)</code>. This gives the error "A value is trying to be set on a copy of a slice from a DataFrame".</li>
<li>How can I avoid creating <code>df1</code>? That is, do the whole thing in place.</li>
</ul>
| 0
|
2016-09-09T19:55:20Z
| 39,419,342
|
<p>I like to use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html" rel="nofollow"><code>sklearn.preprocessing.LabelEncoder</code></a> to do the letter to digit conversion:</p>
<pre><code>from sklearn.preprocessing import LabelEncoder
# Perform the groupby (before converting letters to digits).
df = df.groupby(['ID_0', 'ID_1']).size().rename('count').reset_index()
# Initialize the LabelEncoder.
le = LabelEncoder()
le.fit(df[['ID_0', 'ID_1']].values.flat)
# Convert to digits.
df[['ID_0', 'ID_1']] = df[['ID_0', 'ID_1']].apply(le.transform)
</code></pre>
<p>The resulting output:</p>
<pre><code> ID_0 ID_1 count
0 0 2 2
1 1 3 1
2 2 0 3
3 3 4 1
</code></pre>
<p>If you want to convert back to letters at a later point in time, you can use <code>le.inverse_transform</code>:</p>
<pre><code>df[['ID_0', 'ID_1']] = df[['ID_0', 'ID_1']].apply(le.inverse_transform)
</code></pre>
<p>Which maps back as expected:</p>
<pre><code> ID_0 ID_1 count
0 a c 2
1 b f 1
2 c a 3
3 f g 1
</code></pre>
<p>If you just want to know which digit corresponds to which letter, you can look at the <code>le.classes_</code> attribute. This will give you an array of letters, which is indexed by the digit it encodes to:</p>
<pre><code>le.classes_
['a' 'b' 'c' 'f' 'g']
</code></pre>
<p>For a more visual representation, you can cast as a Series:</p>
<pre><code>pd.Series(le.classes_)
0 a
1 b
2 c
3 f
4 g
</code></pre>
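<p>If sklearn is not available, the fit/transform/inverse_transform round trip is easy to emulate with a sorted list and a dict; a stdlib sketch:</p>

```python
values = ['a', 'c', 'c', 'a', 'b', 'f', 'f', 'g']

classes = sorted(set(values))            # plays the role of le.classes_
to_code = {v: i for i, v in enumerate(classes)}

encoded = [to_code[v] for v in values]   # like le.transform
decoded = [classes[i] for i in encoded]  # like le.inverse_transform

print(classes)            # ['a', 'b', 'c', 'f', 'g']
print(encoded)            # [0, 2, 2, 0, 1, 3, 3, 4]
print(decoded == values)  # True
```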
<p><strong>Timings</strong></p>
<p>Using a larger version of the sample data and the following setup:</p>
<pre><code>df2 = pd.concat([df]*10**5, ignore_index=True)
def root(df):
df = df.groupby(['ID_0', 'ID_1']).size().rename('count').reset_index()
le = LabelEncoder()
le.fit(df[['ID_0', 'ID_1']].values.flat)
df[['ID_0', 'ID_1']] = df[['ID_0', 'ID_1']].apply(le.transform)
return df
def pir2(df):
unq = np.unique(df)
mapping = pd.Series(np.arange(unq.size), unq)
return df.stack().map(mapping).unstack() \
.groupby(df.columns.tolist()).size().reset_index(name='count')
</code></pre>
<p>I get the following timings:</p>
<pre><code>%timeit root(df2)
10 loops, best of 3: 101 ms per loop
%timeit pir2(df2)
1 loops, best of 3: 1.69 s per loop
</code></pre>
| 3
|
2016-09-09T20:36:30Z
|
[
"python",
"pandas"
] |
Modify output from series.rolling to 2 decimal points
| 39,418,892
|
<p>Using the following data:</p>
<pre><code> Open High Low Last Volume
Timestamp
2016-06-10 16:10:00 2088.00 2088.0 2087.75 2087.75 1418
2016-06-10 16:11:00 2088.00 2088.0 2087.75 2088.00 450
2016-06-10 16:12:00 2088.00 2088.0 2087.25 2087.25 2898
</code></pre>
<p>I am looking to use a rolling moving average as follows:</p>
<pre><code>data["sma_9_volume"] = data.Volume.rolling(window=9,center=False).mean()
</code></pre>
<p>and this gives me this output:</p>
<pre><code> Open High Low Last Volume candle_range sma_9_close sma_9_volume
Timestamp
2014-03-04 09:38:00 1785.50 1785.50 1784.75 1785.25 24 0.75 1785.416667 48.000000
2014-03-04 09:39:00 1785.50 1786.00 1785.25 1785.25 13 0.75 1785.500000 30.444444
2014-03-04 09:40:00 1786.00 1786.25 1783.50 1783.75 28 2.75 1785.333333 30.444444
2014-03-04 09:41:00 1784.00 1785.00 1784.00 1784.25 12 1.00 1785.083333 22.777778
2014-03-04 09:42:00 1784.25 1784.75 1784.00 1784.25 18 0.75 1784.972222 20.222222
2014-03-04 09:43:00 1784.75 1785.00 1784.50 1784.50 10 0.50 1784.888889 20.111111
2014-03-04 09:44:00 1784.25 1784.25 1783.75 1784.00 32 0.50 1784.694444 18.222222
</code></pre>
<p>what is the best way to take the output from:</p>
<pre><code>data["sma_9_volume"] = data.Volume.rolling(window=9,center=False).mean()
</code></pre>
<p>and have the output only return 2 decimal points i.e. <code>48.00</code> instead of <code>48.000000</code></p>
| 1
|
2016-09-09T19:56:23Z
| 39,418,996
|
<p>you can use pandas' <code>round</code> function</p>
<p><code>data["sma_9_volume"]=data["sma_9_volume"].round(decimals=2)</code></p>
<p>or directly:</p>
<p><code>data["sma_9_volume"] = data.Volume.rolling(window=9,center=False).mean().round(decimals=2)</code></p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.round.html" rel="nofollow">documentation</a></p>
| 2
|
2016-09-09T20:04:18Z
|
[
"python",
"pandas"
] |
Downloading a list files with python requests on a cert authenticated resource
| 39,419,035
|
<p>I've been on here all day long looking for a usable answer to this question, but haven't found something that works for my use case.</p>
<p>I am trying to download a bunch of files from a server that checks a client cert for authentication. I also have a list array of specific files I want to download in an automated way. I am using python 2.7. What I'd like to do is to wait for FileOne.zip to download before looping back to start downloading FileTwo.zip, and so on. Here's the code:</p>
<pre><code>import requests
import shutil
dlList = ["FileOne.zip", "FileTwo.zip", "FileThree.zip"]
cCert = r'C:\Temp\client_cert.pem'
cKey = r'C:\Temp\client_key.pem'
for i in dlList:
    url = ("https://my.server.com/files/" + i)
    r = requests.get(url, cert=(cCert, cKey), stream=True)
    with open(i, "wb") as f:
        r.raw.decode_content = True
        shutil.copyfileobj(r.raw, f)
</code></pre>
<p>The cert is working fine; I'm getting a 200 response.</p>
<p>But when run, the script creates 3 files in the directory called FileOne.zip, FileTwo.zip, etc, but they are each only 2K and the files themselves are a couple hundred MB each.</p>
<p>What I'd like to do is get one file completed and then move on to the next. Once that's working I can figure out how to multi-thread it. But right now I just want to get the files down correctly...</p>
| 0
|
2016-09-09T20:08:31Z
| 39,419,083
|
<p>Ok, here is what I did to fix it:</p>
<p>instead of</p>
<pre><code>r.raw.decode_content = True
shutil.copyfileobj(r.raw, f)
</code></pre>
<p>I chunked it using:</p>
<pre><code>with open(i, "wb") as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:
            f.write(chunk)
</code></pre>
<p>I honestly don't know how efficient this is, but it seems to work. I would still be happy to get advice on how to tighten this up or move to concurrent downloads.</p>
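<p>For the concurrent-downloads part, one hedged sketch (an editorial addition, not part of the original answer) is to push each download into a thread pool with <code>concurrent.futures</code> — in the standard library on Python 3, and available for Python 2.7 via the pip-installable <code>futures</code> backport. The <code>fetch</code> body below is a placeholder for the chunked <code>requests.get</code> logic from the answer:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(name):
    # placeholder: in the real script this would call
    # requests.get(url, cert=(cCert, cKey), stream=True)
    # and write the response out in chunks, as shown in the answer
    return "downloaded " + name

dlList = ["FileOne.zip", "FileTwo.zip", "FileThree.zip"]
with ThreadPoolExecutor(max_workers=3) as pool:
    # map() runs fetch() concurrently and yields results in input order
    results = list(pool.map(fetch, dlList))
```

<p>Because <code>Executor.map</code> preserves input order, <code>results[0]</code> always corresponds to <code>FileOne.zip</code> even if it finishes last.</p>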
| 0
|
2016-09-09T20:14:20Z
|
[
"python",
"ssl",
"python-multithreading",
"downloading"
] |
scipy.misc save image with transparency
| 39,419,055
|
<p>I am trying to load an image and turn some of its pixels transparent by setting the alpha channel, using the scipy.misc module, e.g.:</p>
<pre><code>import scipy.misc as sm
im = sm.imread("tmp.png", mode = "RGBA")
im[0, 0, :] = [0,0,0,0]
</code></pre>
<p>When I try to save it:</p>
<pre><code>sm.imsave("out.png", im)
</code></pre>
<p>The RGB setting for that pixel has been changed (to black in this case), but the transparency setting does not manifest. How could I fix this?</p>
| 0
|
2016-09-09T20:10:46Z
| 39,519,205
|
<p>It turns out this works. I just wasn't aware that the JPG format does not support transparency; if you save the image as PNG, things work.</p>
| 0
|
2016-09-15T19:46:49Z
|
[
"python",
"image-processing",
"scipy"
] |
KeyError: '_OrderedDict__root'?
| 39,419,098
|
<p>Hi, I have the following code snippet, which gives a KeyError. I have checked other links specifying <code>make __init__ call to Ordered Dict</code>, which I have done, but still no luck.</p>
<pre><code>from collections import OrderedDict
class BaseExcelNode(OrderedDict):
    def __init__(self):
        super(BaseExcelNode, self).__init__()
        self.start_row = -1
        self.end_row = -1
        self.col_no = -1
    def __getattr__(self, name):
        return self[name]
    def __setattr__(self, name, value):
        self[name] = value
BaseExcelNode()
</code></pre>
<p><code>Error</code>:</p>
<pre><code>Traceback (most recent call last):
File "CIMParser.py", line 29, in <module>
BaseExcelNode()
File "CIMParser.py", line 9, in __init__
super(BaseExcelNode, self).__init__()
File "C:\Python27\lib\collections.py", line 64, in __init__
self.__root
File "CIMParser.py", line 15, in __getattr__
return self[name]
KeyError: '_OrderedDict__root'
</code></pre>
| 3
|
2016-09-09T20:15:15Z
| 39,419,358
|
<p><code>OrderedDict</code> is implemented under the assumption that attribute access works by the default mechanisms, and in particular, that attribute access is not equivalent to indexing.</p>
<p>When you subclass it and change how attribute access works, you break one of the deepest assumptions of the <code>OrderedDict</code> implementation, and everything goes to hell.</p>
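<p>A common workaround (an editorial sketch, not part of this answer) is to exempt underscore-prefixed names from the custom attribute handling, so that <code>OrderedDict</code>'s internal bookkeeping attributes such as <code>_OrderedDict__root</code> still go through the normal mechanism:</p>

```python
from collections import OrderedDict

class BaseExcelNode(OrderedDict):
    def __getattr__(self, name):
        if name.startswith('_'):
            # let OrderedDict's internals fail normally instead of
            # hitting the dict, which raised the confusing KeyError
            raise AttributeError(name)
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        if name.startswith('_'):
            # internal attributes go on the instance, not into the dict
            super(BaseExcelNode, self).__setattr__(name, value)
        else:
            self[name] = value

node = BaseExcelNode()
node.start_row = -1   # stored as node['start_row']
```

<p>With this guard, plain attribute names behave like dict keys while the implementation's private attributes are untouched.</p>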
| 2
|
2016-09-09T20:37:25Z
|
[
"python",
"ordereddictionary"
] |
KeyError: '_OrderedDict__root'?
| 39,419,098
|
<p>Hi, I have the following code snippet, which gives a KeyError. I have checked other links specifying <code>make __init__ call to Ordered Dict</code>, which I have done, but still no luck.</p>
<pre><code>from collections import OrderedDict
class BaseExcelNode(OrderedDict):
    def __init__(self):
        super(BaseExcelNode, self).__init__()
        self.start_row = -1
        self.end_row = -1
        self.col_no = -1
    def __getattr__(self, name):
        return self[name]
    def __setattr__(self, name, value):
        self[name] = value
BaseExcelNode()
</code></pre>
<p><code>Error</code>:</p>
<pre><code>Traceback (most recent call last):
File "CIMParser.py", line 29, in <module>
BaseExcelNode()
File "CIMParser.py", line 9, in __init__
super(BaseExcelNode, self).__init__()
File "C:\Python27\lib\collections.py", line 64, in __init__
self.__root
File "CIMParser.py", line 15, in __getattr__
return self[name]
KeyError: '_OrderedDict__root'
</code></pre>
| 3
|
2016-09-09T20:15:15Z
| 39,419,527
|
<p>You can leave private (underscore-prefixed) names to the default attribute machinery and route everything else through the dict:</p>
<pre><code>from collections import OrderedDict
class BaseExcelNode(OrderedDict):
    def __init__(self):
        super(BaseExcelNode, self).__init__()
        self.start_row = -1
        self.end_row = -1
        self.col_no = -1
    def __getattr__(self, name):
        if not name.startswith('_'):
            return self[name]
        # object defines no __getattr__ to delegate to,
        # so just fail normally for missing private names
        raise AttributeError(name)
    def __setattr__(self, name, value):
        if not name.startswith('_'):
            self[name] = value
        else:
            super(BaseExcelNode, self).__setattr__(name, value)

b = BaseExcelNode()
</code></pre>
| 1
|
2016-09-09T20:51:39Z
|
[
"python",
"ordereddictionary"
] |
How to remove clutter from PyInstaller one-folder build?
| 39,419,108
|
<p>Alright, so I managed to use PyInstaller to build a homework assignment I made with Pygame. Cool. The executable works fine and everything.</p>
<p>Problem is, alongside the executable there is so much clutter: many files, such as pyds and dlls, accompany the exe in the same directory, making it look ugly.</p>
<p>Now, I know that these files are important; the modules I used, such as Pygame, need them to work. Still, how do I make PyInstaller build my game, so that it puts the clutter into its own folder? I could just manually make a folder and move the files in there, but it stops the exe from working.</p>
<p>In case this info helps: I used Python 3.4.3 and am on Windows.</p>
| 0
|
2016-09-09T20:15:58Z
| 39,569,588
|
<p>Apparently this is an <a href="https://github.com/pyinstaller/pyinstaller/issues/1048" rel="nofollow">open request for <code>pyinstaller</code></a>, but hasn't happened in the past two years.</p>
<p>My workaround for this one was to create a shortcut one folder higher than the <code>.exe</code> folder with all the files.</p>
<p>The difficult part here is to set up the shortcut to work in all PCs. <a href="http://superuser.com/questions/644407/using-relative-paths-for-windows-shortcuts">I did two things in the shortcut properties</a>. </p>
<ol>
<li>Delete the "Starts in:" path</li>
<li>Set as the "Target": <code>"%windir%\system32\cmd.exe" /c start "" "%CD%\YourFolder\YourEXE.exe"</code></li>
</ol>
<p>The second one calls a command line and launches your exe with a relative path. I have only tested it with windows 7. The downside is that this becomes a shortcut to the command line and you get a console window.</p>
<p>A different option is to create a batch file in the folder one level above the <code>.exe</code> and call it. This briefly shows the console window, but won't allow you to set your own icon. A sample that launches your code:</p>
<pre><code>@echo off
setlocal enabledelayedexpansion enableextensions
set CDir=%~dp0
set EXEF=%CDir%MyEXEFolder\
cd %EXEF%
start "MyCode" "MyCode.exe"
exit
</code></pre>
<p>Just open a notepad, add the code and save it as a <code>.bat</code> file.</p>
<p><a href="http://stackoverflow.com/a/36112722/3837382">This answer</a> also describes a workaround with <code>py2exe</code>, but a similar approach can be used in <code>pyinstaller</code>. However, I find this quite "ugly" and I am not sure if it's that easy to collect all dependencies in one folder.</p>
<p>There is also <a href="http://www.csparks.com/Relative/" rel="nofollow">Relative</a>, but I didn't want to use another program.</p>
| 0
|
2016-09-19T09:22:44Z
|
[
"python",
"build",
"pygame",
"pyinstaller"
] |
assigning variables in a list based on its value in relation to the other values in the list
| 39,419,114
|
<p>This function takes, as an argument, a positive integer n and generates 3 random numbers between 200 and 625, the smallest random number will be called minValue, the middle random number will be called myTarget, and the largest random number will be called maxValue.</p>
<pre><code>def usingFunctionsGreater(n):
    #create random numbers
    aList=[]
    for i in range(3):
        aList.append(random.randrange(200,625,1))
    #assign minValue, myTarget, and maxValue
</code></pre>
<p>The comments should help explain the program that I want to write, but I have no clue how to assign the variables to the elements in the generated list.</p>
| 1
|
2016-09-09T20:16:32Z
| 39,419,192
|
<p>How about the following:</p>
<pre><code>>>> minValue, myTarget, maxValue = sorted([random.randrange(200, 625, 1) for i in range(3)])
>>> minValue, myTarget, maxValue
(255, 381, 539)
>>>
</code></pre>
| 4
|
2016-09-09T20:22:46Z
|
[
"python",
"list",
"random",
"variable-assignment"
] |
I am getting the output for the program but not in the required way?
| 39,419,190
|
<p>Write a Python function that accepts a string as an input.
The function must return the sum of the digits 0-9 that appear in the string, ignoring all other characters. Return 0 if there are no digits in the string.</p>
<p>my code:</p>
<pre><code>user_string = raw_input("enter the string: ")
new_user_string = list(user_string)
addition_list = []
for s in new_user_string:
    if ( not s.isdigit()):
        combine_string = "".join(new_user_string)
        print ( combine_string)
    else:
        if ( s.isdigit()):
            addition_list.append(s)
            test = "".join(addition_list)
            output = sum(map(int,test))
            print ( output )
</code></pre>
<p>the output should be:</p>
<pre><code>Enter a string: aa11b33
8
</code></pre>
<p>my output:</p>
<pre><code>enter the string: aa11b33
aa11b33
aa11b33
1
2
aa11b33
5
8
</code></pre>
| -1
|
2016-09-09T20:22:43Z
| 39,419,246
|
<p>This looks suspiciously like homework...</p>
<pre><code>getsum = lambda word: sum(int(n) for n in word if n.isdigit())
getsum('aa11b33')
Out[3]: 8
getsum('aa')
Out[4]: 0
</code></pre>
<p>An explanation of how this works piece-by-piece:</p>
<ol>
<li>The function <code>n.isdigit()</code> returns <code>True</code> if <code>n</code> is composed only of one or more digits, false otherwise. (<a href="https://docs.python.org/3/library/stdtypes.html#str.isdigit" rel="nofollow">Documentation</a>)</li>
<li>The syntax <code>for n in word</code> will loop over each item in the iterable <code>word</code>. Since <code>word</code> is a string, python considers each character to be an individual item.</li>
<li>The operation <code>sum(int(n) for n in word...)</code> casts each character to an <code>int</code> and takes the sum of all of them, while the suffix <code>if n.isdigit()</code> filters out all non-digit characters. Thus the end result will just take the sum of all the individual digit characters in the string <code>word</code>.</li>
<li>The syntax <code>lambda x: some expression using x</code> constructs an anonymous function which takes some value <code>x</code> as its parameter, and returns the value of the expression after the colon. To give this function a name we can just put it on the right-hand-side of an assignment statement. Then we can call it like a normal function. Usually it's better to use a normal <code>def getsum(x)</code> kind of function definition, however <code>lambda</code> expressions are sometimes useful for if you have a one-off kind of function you just need to use as a parameter to a function. In general in python it's better if you can find a way to avoid them, as they're not considered very readable.</li>
</ol>
<p>Here is a complete example:</p>
<pre><code>def sumword(word):
    return sum( int(n) for n in word if n.isdigit() )
word = raw_input("word please: ")
print(sumword(word))
</code></pre>
| 4
|
2016-09-09T20:27:17Z
|
[
"python",
"python-2.7",
"python-3.x",
"ipython"
] |
I am getting the output for the program but not in the required way?
| 39,419,190
|
<p>Write a Python function that accepts a string as an input.
The function must return the sum of the digits 0-9 that appear in the string, ignoring all other characters. Return 0 if there are no digits in the string.</p>
<p>my code:</p>
<pre><code>user_string = raw_input("enter the string: ")
new_user_string = list(user_string)
addition_list = []
for s in new_user_string:
    if ( not s.isdigit()):
        combine_string = "".join(new_user_string)
        print ( combine_string)
    else:
        if ( s.isdigit()):
            addition_list.append(s)
            test = "".join(addition_list)
            output = sum(map(int,test))
            print ( output )
</code></pre>
<p>the output should be:</p>
<pre><code>Enter a string: aa11b33
8
</code></pre>
<p>my output:</p>
<pre><code>enter the string: aa11b33
aa11b33
aa11b33
1
2
aa11b33
5
8
</code></pre>
| -1
|
2016-09-09T20:22:43Z
| 39,419,264
|
<p>Python cares about indentation:</p>
<pre><code>user_string = raw_input("enter the string: ")
new_user_string = list(user_string)
addition_list = []
for s in new_user_string:
    if ( not s.isdigit()):
        combine_string = "".join(new_user_string)
        #print ( combine_string)
    else:
        if ( s.isdigit()):
            addition_list.append(s)
            test = "".join(addition_list)
            output = sum(map(int,test))
print ( output ) #<------ change here
</code></pre>
<p>I also commented out your inner print statement, since it's not part of the required output.</p>
| 0
|
2016-09-09T20:28:45Z
|
[
"python",
"python-2.7",
"python-3.x",
"ipython"
] |
I am getting the output for the program but not in the required way?
| 39,419,190
|
<p>Write a Python function that accepts a string as an input.
The function must return the sum of the digits 0-9 that appear in the string, ignoring all other characters. Return 0 if there are no digits in the string.</p>
<p>my code:</p>
<pre><code>user_string = raw_input("enter the string: ")
new_user_string = list(user_string)
addition_list = []
for s in new_user_string:
if ( not s.isdigit()):
combine_string = "".join(new_user_string)
print ( combine_string)
else:
if ( s.isdigit()):
addition_list.append(s)
test = "".join(addition_list)
output = sum(map(int,test))
print ( output )
</code></pre>
<p>the output should be:</p>
<pre><code>Enter a string: aa11b33
8
</code></pre>
<p>my output:</p>
<pre><code>enter the string: aa11b33
aa11b33
aa11b33
1
2
aa11b33
5
8
</code></pre>
| -1
|
2016-09-09T20:22:43Z
| 39,419,294
|
<p>It should be:</p>
<pre><code>user_string = raw_input("enter the string: ")
new_user_string = list(user_string)
addition_list = []
for s in new_user_string:
    if ( not s.isdigit()):
        combine_string = "".join(new_user_string)
    else:
        if ( s.isdigit()):
            addition_list.append(s)
output = sum(map(int,addition_list))
print output
</code></pre>
<p>You were getting the output you did for two reasons:</p>
<ol>
<li>In the if statement, you were telling it to print the originally-entered string whenever it came across a non-digit. You can see this in your output: the string is printed for the first a, again for the second a, not printed for the two 1s, and printed one last time for the b.</li>
<li>You were printing the output as the for loop incremented through the list, meaning it printed the total each time. Moving the output variable and print statement outside the for loop fixed the problem.</li>
</ol>
| 1
|
2016-09-09T20:31:41Z
|
[
"python",
"python-2.7",
"python-3.x",
"ipython"
] |
launch selenium with python on osx
| 39,419,293
|
<p>I have the following script</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
opts = Options()
opts.binary_location = "/Applications/Chrome.app/Contents/MacOS/Google\ Chrome"
browser = webdriver.Chrome(chrome_options=opts)
browser.get('0.0.0.0:3500')
assert 'Django' in browser.title
</code></pre>
<p>I get the following error after interrupting the program</p>
<pre><code>$ python3 functional_tests.py
Traceback (most recent call last):
File "functional_tests.py", line 6, in <module>
browser = webdriver.Chrome(chrome_options=opts)
File "/usr/local/lib/python3.5/site-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__
desired_capabilities=desired_capabilities)
File "/usr/local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 90, in __init__
self.start_session(desired_capabilities, browser_profile)
File "/usr/local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 177, in start_session
response = self.execute(Command.NEW_SESSION, capabilities)
File "/usr/local/lib/python3.5/site-packages/selenium/webdriver/remote/webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.5/site-packages/selenium/webdriver/remote/errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: no chrome binary at /Applications/Chrome.app/Contents/MacOS/Google\ Chrome
(Driver info: chromedriver=2.23.409710 (0c4084804897ac45b5ff65a690ec6583b97225c0),platform=Mac OS X 10.11.6 x86_64)
</code></pre>
<p>I have chrome installed on osx. And I know the path to the chrome binary in the script is correct. What may be wrong?</p>
| 1
|
2016-09-09T20:31:38Z
| 39,419,740
|
<p>Try adding the path to your <code>chromedriver</code> binary when you instantiate <code>webdriver.Chrome()</code></p>
<pre><code>browser = webdriver.Chrome('path/to/my/chromedriver', chrome_options=opts)
</code></pre>
<p>The <a href="https://sites.google.com/a/chromium.org/chromedriver/getting-started" rel="nofollow">official documentation</a> suggests that you "include the path to ChromeDriver when instantiating webdriver.Chrome" <em>in addition to</em> including the <code>chromedriver</code> location in your <code>PATH</code> variable when you are using it in Python. </p>
<p>If you do not know the location of <code>chromedriver</code>, you can execute <code>brew info chromedriver</code> to see the path in addition to other information.</p>
| 1
|
2016-09-09T21:10:07Z
|
[
"python",
"osx",
"selenium"
] |
shortest distance from plane to origin using a plane equation
| 39,419,343
|
<p>Suppose I have a plane equation ax+by+cz=d; how can I go about finding the shortest distance from the plane to the origin?</p>
<p>I am going in reverse of this post. In this post, they start out with a point P0 and the normal. In my case, I only have the plane equation
<a href="http://stackoverflow.com/questions/8540531/distance-from-origin-to-plane-shortest">Distance from origin to plane (shortest)</a></p>
<p>Here is what I have so far. </p>
<pre><code> #calculate the distance from plane to origin
dist=math.sqrt(new_a**2+new_b**2+new_c**2)
x_dist=(a*d)/dist
y_dist=(b*d)/dist
z_dist=(c*d)/dist
</code></pre>
| 0
|
2016-09-09T20:36:37Z
| 39,473,784
|
<p>The normal of your plane is <code>[a,b,c]</code>. The shortest distance from the plane to the origin is <code>|d|</code> divided by the length of that normal, i.e. <code>abs(d) / sqrt(a**2 + b**2 + c**2)</code>. This should give you what you need.</p>
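<p>A minimal sketch of the standard point-to-plane formula (an editorial addition, assuming the plane is given as <code>ax+by+cz=d</code>): the shortest distance from the origin is <code>|d|</code> over the norm of <code>[a,b,c]</code>.</p>

```python
import math

def plane_distance_to_origin(a, b, c, d):
    # shortest distance from the origin to the plane a*x + b*y + c*z = d
    return abs(d) / math.sqrt(a*a + b*b + c*c)

print(plane_distance_to_origin(0, 0, 1, 5))  # the plane z = 5 sits 5 units from the origin
```

<p>Note that this collapses to <code>|d|</code> when the normal is already a unit vector, which is the special case where "multiply the normal by d and take the length" also happens to work.</p>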
| 1
|
2016-09-13T15:27:46Z
|
[
"python",
"3d",
"plane"
] |
Writing values to excel csv in python
| 39,419,363
|
<p>I have the below code where I'm trying to write values to an excel file, but my output adds one letter in every single column, instead of the whole word, like so
<a href="http://i.stack.imgur.com/nQsvd.png" rel="nofollow"><img src="http://i.stack.imgur.com/nQsvd.png" alt="enter image description here"></a></p>
<p>I want the whole word to be in one column. I'm currently passing in an array that has the words <code>[u'Date / Time', u'City', u'State', u'Shape', u'Duration', u'Summary']</code> into my <code>writer</code>. How can I make it so that I get the whole word in one column?</p>
<pre><code>import requests
import csv
from bs4 import BeautifulSoup
r = requests.get('http://www.nuforc.org/webreports/ndxlAK.html')
soup = BeautifulSoup(r.text, 'html.parser')
csv.register_dialect('excel')
f = open('ufo.csv', 'wb')
writer = csv.writer(f)
headers = soup.find_all('th')
header_text = []
header_count = 1
for header in headers:
    if header_count == len(headers):
        print "value being written: " + str(header_text)
        writer.writerows(header_text)
    else:
        header_text.append(header.text)
        header_count += 1
f.close()
</code></pre>
| 0
|
2016-09-09T20:37:40Z
| 39,419,489
|
<p>You're extracting the text of a single column via:</p>
<blockquote>
<p>for header in headers</p>
</blockquote>
<p>For each single column, you're writing it out like a row of columns via:</p>
<blockquote>
<p>writer.writerows(header_text)</p>
</blockquote>
<p>The <a href="https://docs.python.org/2/library/csv.html#csv.csvwriter.writerows" rel="nofollow">writerows</a> method expects a list of columns to write, so it iterates over it. You've passed it a single string, so it iterates over that and writes one character per column.</p>
<p>So either:</p>
<pre><code>writer.writerows([header_text]) # turn this single column into a list
# or
writer.writerow(header_text) # just write out as a single item
</code></pre>
<p>should work.</p>
| 2
|
2016-09-09T20:48:27Z
|
[
"python",
"excel",
"csv"
] |
understanding recursive function python
| 39,419,369
|
<p>I am trying to understand what happens when this recursive function is called. The code is supposed to be a trace </p>
<pre><code>def mysum(lower, upper, margin):
    blanks = ' ' * margin
    print blanks, lower, upper
    if lower > upper:
        print blanks, 0
        return 0
    else:
        result = lower + mysum(lower + 1, upper, margin + 4)
        print blanks, result, lower, margin
        return result

if __name__ == "__main__":
    mysum(1, 4, 0)
</code></pre>
<p>the output reads </p>
<pre><code> 1 4
     2 4
         3 4
             4 4
                 5 4
                 0
             4 4 12
         7 3 8
     9 2 4
 10 1 0
</code></pre>
<p>I don't understand why the function begins to unwind after it returns 0. Can you help me follow through what happens</p>
| 0
|
2016-09-09T20:38:11Z
| 39,419,450
|
<p>In brief, you always make a recursive call before you reach a <code>return</code> statement, until you reach the base case. Then, you always reach a <code>return</code> statement before you reach another recursive call (trivially so, since there is only one recursive call).</p>
| 0
|
2016-09-09T20:45:39Z
|
[
"python",
"recursion"
] |
understanding recursive function python
| 39,419,369
|
<p>I am trying to understand what happens when this recursive function is called. The code is supposed to be a trace </p>
<pre><code>def mysum(lower, upper, margin):
    blanks = ' ' * margin
    print blanks, lower, upper
    if lower > upper:
        print blanks, 0
        return 0
    else:
        result = lower + mysum(lower + 1, upper, margin + 4)
        print blanks, result, lower, margin
        return result

if __name__ == "__main__":
    mysum(1, 4, 0)
</code></pre>
<p>the output reads </p>
<pre><code> 1 4
     2 4
         3 4
             4 4
                 5 4
                 0
             4 4 12
         7 3 8
     9 2 4
 10 1 0
</code></pre>
<p>I don't understand why the function begins to unwind after it returns 0. Can you help me follow through what happens</p>
| 0
|
2016-09-09T20:38:11Z
| 39,419,491
|
<p>When one call of the function returns 0 ("bottoms out"), there might be many other calls to the function on the stack, waiting to proceed. When the recursion bottoms out, control returns to the last-but-one function on the stack. It finishes its work and returns, and control returns to the next earlier function on the stack. This continues until all the calls to <code>mysum</code> have been removed from the stack, in reverse order to the order on which they were put on the stack.</p>
<p>Maybe you understood all that already :) If so, please clarify what you mean by "why the function begins to unwind."</p>
| 0
|
2016-09-09T20:48:34Z
|
[
"python",
"recursion"
] |
understanding recursive function python
| 39,419,369
|
<p>I am trying to understand what happens when this recursive function is called. The code is supposed to be a trace </p>
<pre><code>def mysum(lower, upper, margin):
    blanks = ' ' * margin
    print blanks, lower, upper
    if lower > upper:
        print blanks, 0
        return 0
    else:
        result = lower + mysum(lower + 1, upper, margin + 4)
        print blanks, result, lower, margin
        return result

if __name__ == "__main__":
    mysum(1, 4, 0)
</code></pre>
<p>the output reads </p>
<pre><code> 1 4
     2 4
         3 4
             4 4
                 5 4
                 0
             4 4 12
         7 3 8
     9 2 4
 10 1 0
</code></pre>
<p>I don't understand why the function begins to unwind after it returns 0. Can you help me follow through what happens</p>
| 0
|
2016-09-09T20:38:11Z
| 39,419,506
|
<p>I think a useful observation here is that the first five lines are printed before any of the nested calls returns. This all happens in the first part of the function body:</p>
<p><code>print</code> - check condition - go to <code>else</code> - and go to beginning again, one level deeper.</p>
<p>When <code>0</code> is printed, the deepest call returns, so the second-deepest <code>result</code> gets calculated. Then the <code>print</code> from the next line occurs - and it's the first line with 3 numbers in it. Then you hit another <code>return</code>, so another <code>result</code> is calculated, etc. The consecutive returns correspond to earlier calls - thus they have less <code>blanks</code>.</p>
| 0
|
2016-09-09T20:50:03Z
|
[
"python",
"recursion"
] |
understanding recursive function python
| 39,419,369
|
<p>I am trying to understand what happens when this recursive function is called. The code is supposed to be a trace </p>
<pre><code>def mysum(lower, upper, margin):
    blanks = ' ' * margin
    print blanks, lower, upper
    if lower > upper:
        print blanks, 0
        return 0
    else:
        result = lower + mysum(lower + 1, upper, margin + 4)
        print blanks, result, lower, margin
        return result

if __name__ == "__main__":
    mysum(1, 4, 0)
</code></pre>
<p>the output reads </p>
<pre><code> 1 4
     2 4
         3 4
             4 4
                 5 4
                 0
             4 4 12
         7 3 8
     9 2 4
 10 1 0
</code></pre>
<p>I don't understand why the function begins to unwind after it returns 0. Can you help me follow through what happens</p>
| 0
|
2016-09-09T20:38:11Z
| 39,419,995
|
<p>Here is the code with comments helping you to begin to understand how the recursive function works.</p>
<pre><code>def mysum(lower, upper, margin):
    blanks = ' ' * margin          # First time: margin = 0
                                   # 2nd time: margin = 4
    print blanks, lower, upper     # first time: lower = 1, upper = 4
                                   # 2nd time: lower = 2, upper = 4
    if lower > upper:              # first time: go to else (and 2nd time, ... until lower = 5)
        print blanks, 0
        return 0
    else:
        result = lower + mysum(lower + 1, upper, margin + 4) # result is not directly calculated;
                                                             # first the recursive call has to run —
                                                             # this is the second call
        print blanks, result, lower, margin
        return result

if __name__ == "__main__":
    mysum(1, 4, 0)                 # First call of your function
</code></pre>
<p>When lower reaches 5, there is no further call to mysum and it returns 0.
So you unwind just one step: lower is 4, and you were in the "else" part. You still have to finish it:</p>
<pre><code>result = lower + mysum(lower + 1, upper, margin + 4)
print blanks, result, lower, margin
return result
</code></pre>
<p>with lower = 4, where the call you just made returned 0, so the result is 4.
Then you unwind another step: lower is 3, the call just before returned 4, so the new result is 7. This value is returned.</p>
<p>The same happens for lower = 2 and lower = 1.</p>
<p>You can see that 1 + 2 + 3 + 4 = 10, and that is the result of your function.
I hope that helped; tell me if you don't understand and I will try to explain it another way.</p>
| 0
|
2016-09-09T21:30:46Z
|
[
"python",
"recursion"
] |
Python - How to remotely execute processes in parallel and retrieve their output
| 39,419,398
|
<p>I'm trying to remotely execute a command on an unknown number of hosts (could be anywhere from one host to hundreds) in a Python script. The simple way of doing this is the following, but obviously it can get ridiculously time-consuming with many hosts:</p>
<pre><code>listOfOutputs = []
for host in listOfHosts:
    output = subprocess.Popen(shlex.split("ssh %s '<command>'" % host), stdout = subprocess.PIPE).communicate()[0]
    listOfOutputs.append(output)
</code></pre>
<p>Is there a way to do this same thing, but have the commands remotely execute in parallel so it doesn't take as long?</p>
| 0
|
2016-09-09T20:40:12Z
| 39,419,584
|
<p>You have to run your <code>subprocess.Popen</code> calls each in a separate thread so you can launch as many as you want without blocking your main program.</p>
<p>I made a small example creating as many threads as there are hosts. That is no big deal here, since the threads mostly wait for the host's reply (otherwise, a thread pool would have been the better choice).</p>
<p>In my example, I have 3 hosts, and I perform a <code>ping</code> on each one. Outputs are stored in a thread-safe list of outputs and printed in the end:</p>
<pre><code>import threading
import subprocess

listOfOutputs=[]
lock = threading.Lock()

def run_command(args):
    # "-n" is the Windows ping count flag; use "-c" on Unix
    p = subprocess.Popen(["ping","-n","1",args],stdout = subprocess.PIPE)
    output,err = p.communicate()
    lock.acquire() # listOfOutputs is thread-safe now
    listOfOutputs.append(args+": "+output.decode())  # decode bytes for Python 3
    lock.release()

threads=[]
listOfHosts = ['host1','host2','host3']
for host in listOfHosts:
    t = threading.Thread(target=run_command,args=[host])
    t.start() # start in background
    threads.append(t) # store thread object for future join operation

[t.join() for t in threads] # wait for all threads to finish

# print results
for o in listOfOutputs:
    print(o)
    print("-"*50)
</code></pre>
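<p>An alternative sketch (an editorial addition, not part of the original answer) that avoids the manual lock is <code>multiprocessing.dummy.Pool</code> — a thread-based pool in the standard library of both Python 2 and 3 whose <code>map()</code> collects results for you, in input order. The <code>run_command</code> body below is a placeholder for the real <code>subprocess</code> call:</p>

```python
from multiprocessing.dummy import Pool  # threads, despite the module name

def run_command(host):
    # placeholder for:
    # subprocess.Popen(["ssh", host, "<command>"], stdout=subprocess.PIPE).communicate()[0]
    return host + ": ok"

listOfHosts = ["host1", "host2", "host3"]
pool = Pool(len(listOfHosts))
listOfOutputs = pool.map(run_command, listOfHosts)  # runs concurrently
pool.close()
pool.join()
```

<p>Because <code>Pool.map</code> returns results in the order of its input, no lock or manual bookkeeping is needed to match outputs to hosts.</p>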
| 0
|
2016-09-09T20:57:06Z
|
[
"python",
"ssh",
"subprocess",
"popen"
] |
How to get last 3 digits after comma?
| 39,419,408
|
<p>For a number that is 32,146 ...how do I find only 146? Is this able to be done?</p>
<pre><code>findnum = '32,146'
return findnum
</code></pre>
| -2
|
2016-09-09T20:41:11Z
| 39,419,421
|
<p>Work with <code>split</code></p>
<pre><code>>>> findnum = '32,146'
>>> findnum.split(',')
['32', '146']
</code></pre>
| 4
|
2016-09-09T20:42:53Z
|
[
"python",
"regex"
] |
How to get last 3 digits after comma?
| 39,419,408
|
<p>For a number that is 32,146 ...how do I find only 146? Is this able to be done?</p>
<pre><code>findnum = '32,146'
return findnum
</code></pre>
| -2
|
2016-09-09T20:41:11Z
| 39,419,438
|
<p>If you want the number you can do:</p>
<pre><code># get number by ignoring commas
number = int(findnum.replace(',',''))
# get last three digits
last_three = number % 1000
</code></pre>
<p>This will result in <code>146</code> (int) and not <code>'146'</code> (string)</p>
<p>Example:</p>
<pre><code>>>> findnum = '32,146'
>>> number = int(findnum.replace(',',''))
>>> number % 1000
146
</code></pre>
| 1
|
2016-09-09T20:44:21Z
|
[
"python",
"regex"
] |
How to get last 3 digits after comma?
| 39,419,408
|
<p>For a number that is 32,146 ...how do I find only 146? Is this able to be done?</p>
<pre><code>findnum = '32,146'
return findnum
</code></pre>
| -2
|
2016-09-09T20:41:11Z
| 39,419,440
|
<p>Since it's a string and not a number, you can slice from just after the <code>','</code> to the end:</p>
<pre><code>findnum[findnum.index(',')+1:]
</code></pre>
| -1
|
2016-09-09T20:44:36Z
|
[
"python",
"regex"
] |
Adjacency List Implementation in Python
| 39,419,461
|
<p>I'm a newbie to Python (and computer science in general), so bear with me.</p>
<p>I'm having trouble implementing an adjacency list in Python. I have learned how to implement it through a dictionary (I learned how through here lol), but I need to know how to do it using only basic lists (list of lists)</p>
<p>This is my code:</p>
<pre><code>with open("graph1.txt") as infile:
vertices = []
for line in infile:
line = line.split()
line = [int(i) for i in line]
vertices.append(line)
adj = dict()
for edge in vertices:
x, y = int(edge[0]), int(edge[1])
if x not in adj: adj[x] = set()
if y not in adj: adj[y] = set()
adj[x].add(y)
adj[y].add(x)
print(adj)
</code></pre>
<p>Any help is appreciated.
Cheers</p>
| 0
|
2016-09-09T20:46:31Z
| 39,419,548
|
<p>You'll still be thinking in terms of a set of vertices, each with a set of adjacent vertices, but you will be implementing the sets as lists rather than with a more sophisticated <code>set</code> data structure. You will need a fast way to index into the top-level set, and the only way to do that is with integer indexes. So you want to assign a (natural or arbitrary) integer k to each vertex, then put that vertex's adjacency set (implemented as a list) into slot k of the top-level list. It is less important to provide for efficient indexing into the second-level lists, since you typically iterate over them rather than selecting a particular one, but since Python has a built-in list that happens to provide efficient indexing with integers, why not use it?</p>
<p>I agree with the comment that a variable called <code>vertices</code> shouldn't hold edges. Everyone in the world but you will be confused, and later you also will be confused. I suggest you use the name <code>vertices</code> for the top-level list I described above. Then the second-level lists can just hold indices into the top-level <code>vertices</code>. (Well, maybe you need edge objects, too -- I don't know what you are using this for -- but a purist's adjacency-list representation has no room for edge objects.)</p>
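<p>A minimal sketch of that layout, assuming the vertices are numbered 0..n-1 (the <code>edges</code> sample here is illustrative, not from your input file):</p>

```python
# Edges as (x, y) pairs; vertices assumed to be numbered 0..n-1.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

n = max(max(x, y) for x, y in edges) + 1
adj = [[] for _ in range(n)]      # slot k holds vertex k's neighbours

for x, y in edges:
    adj[x].append(y)
    adj[y].append(x)              # undirected graph: add both directions

print(adj)  # [[1, 2], [0, 2], [0, 1, 3], [2]]
```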
| 0
|
2016-09-09T20:53:44Z
|
[
"python",
"adjacency-list"
] |
Running python lines from Atom on Windows
| 39,419,463
|
<p>I am trying to set up Atom to be my python IDE. I have seen many Mac users be able to pipe a line into a python shell using <code>ctrl + enter</code> command, but I have been unsuccessful in figuring out how to set this up.</p>
<p>I have seen packages like script that execute the entire program but am looking for something that can also run python in the shell.</p>
<p>Wondering if anyone is using something like this or has seen something that would do the trick. </p>
| 0
|
2016-09-09T20:46:42Z
| 39,464,538
|
<p>Not sure if I get your question right, but there are two packages which might do (part of?) the job you're looking for:</p>
<ul>
<li><a href="https://atom.io/packages/terminal-panel" rel="nofollow">terminal-panel</a></li>
<li><a href="https://atom.io/packages/atom-terminal-panel" rel="nofollow">atom-terminal-panel</a></li>
</ul>
<p>You can then run a console from within Atom and by customizing keyboard short-cuts, you can probably add the desired functionality.</p>
| 0
|
2016-09-13T07:35:01Z
|
[
"python",
"windows",
"atom-editor"
] |
Python tool to decode a 2d datamatrix barcode using python
| 39,419,510
|
<p>I have a project which requires me to decode a 2D data matrix barcode from an image using python. </p>
<p>Example (which I believe is a 2d Data Matrix, but hey I could be wrong):</p>
<p><a href="https://imgur.com/a/Xbr1I" rel="nofollow">https://imgur.com/a/Xbr1I</a></p>
<p>I'm having trouble finding tools to do this? As I understand it, Zbar and opencv don't have data matrix support?</p>
<p>Is there anything out there than can help?</p>
<p>Thanks for your time </p>
| 0
|
2016-09-09T20:50:09Z
| 40,102,925
|
<p>I think you can use ZXing to decode the Data Matrix.<br>
Please refer to <a href="https://github.com/oostendo/python-zxing" rel="nofollow">https://github.com/oostendo/python-zxing</a> for more detail; it supported Data Matrix as well in my test.</p>
<p>Since ZBar does not support Data Matrix in Python yet, ZXing is a good choice.</p>
| 0
|
2016-10-18T08:11:25Z
|
[
"python",
"opencv",
"barcode-scanner",
"zbar"
] |
Python: Calculating difference of values in a nested list by using a While Loop
| 39,419,635
|
<p>I have a list that is composed of nested lists, each nested list contains two values - a float value (file creation date), and a string (a name of the file). </p>
<p>For example: </p>
<pre><code>n_List = [[201609070736L, 'GOPR5478.MP4'], [201609070753L, 'GP015478.MP4'],[201609070811L, 'GP025478.MP4']]
</code></pre>
<p>The nested list is already sorted in order of ascending values (creation dates). I am trying to use a While loop to calculate the difference between each sequential float value. </p>
<p>For Example: 201609070753 - 201609070736 = 17</p>
<p>The goal is to use the time difference values as the basis for grouping the files. </p>
<p>The problem I am having is that when the count reaches the last value for <code>len(n_List)</code> it throws an <code>IndexError</code> because <code>count+1</code> is out of range. </p>
<pre><code>IndexError: list index out of range
</code></pre>
<p>I can't figure out how to work around this error. No matter what I try, the count is always out of range when it reaches the last value in the list. </p>
<p>Here is the While loop I've been using. </p>
<pre><code>count = 0
while count <= len(n_List):
full_path = source_folder + "/" + n_List[count][1]
time_dif = n_List[count+1][0] - n_List[count][0]
if time_dif < 100:
f_List.write(full_path + "\n")
count = count + 1
else:
f_List.write(full_path + "\n")
f_List.close()
f_List = open(source_folder + 'GoPro' + '_' + str(count) + '.txt', 'w')
f_List.write(full_path + "\n")
count = count + 1
</code></pre>
<p>PS. The only workaround I can think of is to assume that the last value will always be appended to the final group of files. So, when the count reaches <code>len(n_List) - 1</code>, I skip the time-difference calculation and just automatically add that final value to the last group. While this will probably work most of the time, I can see edge cases where the final value in the list may need to go in a separate group. </p>
| 0
|
2016-09-09T21:00:31Z
| 39,419,721
|
<p><code>n_List[len(n_List)]</code> will always raise an index-out-of-range error, because valid indices run from 0 to <code>len(n_List) - 1</code>. And since the loop body also reads <code>n_List[count + 1]</code>, the condition needs to stop one element earlier:</p>
<pre><code>while count < len(n_List) - 1:
</code></pre>
<p>The last file then has to be handled after the loop, since it has no successor to compare against.</p>
| 0
|
2016-09-09T21:08:55Z
|
[
"python",
"list",
"while-loop",
"nested"
] |
Python: Calculating difference of values in a nested list by using a While Loop
| 39,419,635
|
<p>I have a list that is composed of nested lists, each nested list contains two values - a float value (file creation date), and a string (a name of the file). </p>
<p>For example: </p>
<pre><code>n_List = [[201609070736L, 'GOPR5478.MP4'], [201609070753L, 'GP015478.MP4'],[201609070811L, 'GP025478.MP4']]
</code></pre>
<p>The nested list is already sorted in order of ascending values (creation dates). I am trying to use a While loop to calculate the difference between each sequential float value. </p>
<p>For Example: 201609070753 - 201609070736 = 17</p>
<p>The goal is to use the time difference values as the basis for grouping the files. </p>
<p>The problem I am having is that when the count reaches the last value for <code>len(n_List)</code> it throws an <code>IndexError</code> because <code>count+1</code> is out of range. </p>
<pre><code>IndexError: list index out of range
</code></pre>
<p>I can't figure out how to work around this error. No matter what I try, the count is always out of range when it reaches the last value in the list. </p>
<p>Here is the While loop I've been using. </p>
<pre><code>count = 0
while count <= len(n_List):
full_path = source_folder + "/" + n_List[count][1]
time_dif = n_List[count+1][0] - n_List[count][0]
if time_dif < 100:
f_List.write(full_path + "\n")
count = count + 1
else:
f_List.write(full_path + "\n")
f_List.close()
f_List = open(source_folder + 'GoPro' + '_' + str(count) + '.txt', 'w')
f_List.write(full_path + "\n")
count = count + 1
</code></pre>
<p>PS. The only workaround I can think of is to assume that the last value will always be appended to the final group of files. So, when the count reaches <code>len(n_List) - 1</code>, I skip the time-difference calculation and just automatically add that final value to the last group. While this will probably work most of the time, I can see edge cases where the final value in the list may need to go in a separate group. </p>
| 0
|
2016-09-09T21:00:31Z
| 39,419,895
|
<p>I think using <code>zip</code> makes it easier to compute the differences. </p>
<pre><code>res1,res2 = [],[]
for i,j in zip(n_List,n_List[1:]):
target = res1 if j[0]-i[0] < 100 else res2
target.append(i[1])
</code></pre>
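<p>With the question's sample data (dropping the Python 2 <code>L</code> suffix), the pairwise differences from the zip approach come out as:</p>

```python
n_List = [[201609070736, 'GOPR5478.MP4'],
          [201609070753, 'GP015478.MP4'],
          [201609070811, 'GP025478.MP4']]

# zip pairs each element with its successor; the slice drops the first item.
diffs = [b[0] - a[0] for a, b in zip(n_List, n_List[1:])]
print(diffs)  # [17, 58]
```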
| 0
|
2016-09-09T21:22:49Z
|
[
"python",
"list",
"while-loop",
"nested"
] |
Python: Calculating difference of values in a nested list by using a While Loop
| 39,419,635
|
<p>I have a list that is composed of nested lists, each nested list contains two values - a float value (file creation date), and a string (a name of the file). </p>
<p>For example: </p>
<pre><code>n_List = [[201609070736L, 'GOPR5478.MP4'], [201609070753L, 'GP015478.MP4'],[201609070811L, 'GP025478.MP4']]
</code></pre>
<p>The nested list is already sorted in order of ascending values (creation dates). I am trying to use a While loop to calculate the difference between each sequential float value. </p>
<p>For Example: 201609070753 - 201609070736 = 17</p>
<p>The goal is to use the time difference values as the basis for grouping the files. </p>
<p>The problem I am having is that when the count reaches the last value for <code>len(n_List)</code> it throws an <code>IndexError</code> because <code>count+1</code> is out of range. </p>
<pre><code>IndexError: list index out of range
</code></pre>
<p>I can't figure out how to work around this error. No matter what I try, the count is always out of range when it reaches the last value in the list. </p>
<p>Here is the While loop I've been using. </p>
<pre><code>count = 0
while count <= len(n_List):
full_path = source_folder + "/" + n_List[count][1]
time_dif = n_List[count+1][0] - n_List[count][0]
if time_dif < 100:
f_List.write(full_path + "\n")
count = count + 1
else:
f_List.write(full_path + "\n")
f_List.close()
f_List = open(source_folder + 'GoPro' + '_' + str(count) + '.txt', 'w')
f_List.write(full_path + "\n")
count = count + 1
</code></pre>
<p>PS. The only workaround I can think of is to assume that the last value will always be appended to the final group of files. So, when the count reaches <code>len(n_List) - 1</code>, I skip the time-difference calculation and just automatically add that final value to the last group. While this will probably work most of the time, I can see edge cases where the final value in the list may need to go in a separate group. </p>
| 0
|
2016-09-09T21:00:31Z
| 39,514,499
|
<p>FYI, here is the solution I used, thanks to @galaxyman for the help.</p>
<p>I handled the issue of the last value in the nested list by simply
adding that value after the loop completes. I don't know if that's the most
elegant way to do it, but it works. </p>
<p>(note: i'm only posting the function related to the zip method suggested in the previous posts). </p>
<pre><code> def list_zip(gp_List):
ffmpeg_list = open(output_path + '\\' + gp_List[0][1][0:8] + '.txt', 'a')
for a,b in zip(gp_List,gp_List[1:]):
full_path = gopro_folder + '\\' + a[1]
time_dif = b[0]-a[0]
if time_dif < 100:
ffmpeg_list.write("file " + full_path + "\n")
else:
ffmpeg_list.write("file " + full_path + "\n")
ffmpeg_list.close()
ffmpeg_list = open(output_path + '\\' + b[1][0:8] + '.txt', 'a')
last_val = gp_List[-1][1]
ffmpeg_list.write("file " + gopro_folder + '\\' + last_val + "\n")
ffmpeg_list.close()
</code></pre>
| 0
|
2016-09-15T15:06:39Z
|
[
"python",
"list",
"while-loop",
"nested"
] |
How to return the first highest value that is placed in a dictionary
| 39,419,653
|
<pre><code>dic = {}
count = 0
i = 0
str_in = str_in.replace(' ','')
while i < len(str_in):
count = str_in.count(str_in[i])
dic[str_in[i]] = count
i += 1
for key in dic:
if key == max(dic, key=dic.get):
return key
break
</code></pre>
<p>The dictionary that is made in this program is</p>
<pre><code>{'i': 1, 'h': 2, 'j': 2, 'o': 2, 'n': 1, 's': 2}
</code></pre>
<p>from the input <code>'joshin josh'</code></p>
<p>I am pretty sure the dictionary's max value returns h because it is the first value in the dictionary with the highest value, even if tied.
I want it to return j because that is the first letter I put into the dictionary. It seems like the dictionary automatically sorts the letters alphabetically, which I also don't understand. I want this to work with any string and all tied letters, not just this particular string. Correct me if I am wrong on anything, I am a noob.</p>
| 1
|
2016-09-09T21:01:54Z
| 39,419,786
|
<p>You can use an <code>OrderedDict</code></p>
<p>Just a simple change would resolve your problem </p>
<pre><code>ord_dic = OrderedDict(sorted(dic.items(), key=lambda t: str_in.index(t[0])))
</code></pre>
<p>So you could use something like </p>
<pre><code>dic = {}
count = 0
i = 0
str_in = str_in.replace(' ','')
while i < len(str_in):
count = str_in.count(str_in[i])
dic[str_in[i]] = count
i += 1
ord_dic = OrderedDict(sorted(dic.items(), key=lambda t: str_in.index(t[0])))
for key in ord_dic:
if key == max(ord_dic, key=ord_dic.get):
return key
break
</code></pre>
<p>Do not forget to import <code>OrderedDict</code> <br/>
Add this line at the top of your code:</p>
<pre><code>from collections import OrderedDict
</code></pre>
| 0
|
2016-09-09T21:13:44Z
|
[
"python",
"string",
"dictionary"
] |
How to return the first highest value that is placed in a dictionary
| 39,419,653
|
<pre><code>dic = {}
count = 0
i = 0
str_in = str_in.replace(' ','')
while i < len(str_in):
count = str_in.count(str_in[i])
dic[str_in[i]] = count
i += 1
for key in dic:
if key == max(dic, key=dic.get):
return key
break
</code></pre>
<p>The dictionary that is made in this program is</p>
<pre><code>{'i': 1, 'h': 2, 'j': 2, 'o': 2, 'n': 1, 's': 2}
</code></pre>
<p>from the input <code>'joshin josh'</code></p>
<p>I am pretty sure the dictionary's max value returns h because it is the first value in the dictionary with the highest value, even if tied.
I want it to return j because that is the first letter I put into the dictionary. It seems like the dictionary automatically sorts the letters alphabetically, which I also don't understand. I want this to work with any string and all tied letters, not just this particular string. Correct me if I am wrong on anything, I am a noob.</p>
| 1
|
2016-09-09T21:01:54Z
| 39,419,795
|
<p>If you don't want to manually count, you could use Python's <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow"><code>Counter</code></a> to count letter occurrences, find the max, then return the first letter in your string that matches that count:</p>
<pre><code>from collections import Counter
string = 'joshin josh'
counts = Counter(string)
max_count = max(counts.values())
print(next(c for c in string if counts[c] == max_count)) # j
</code></pre>
<p><code>next(c for c in string if counts[c] == max_count)</code> returns the first letter in the given string for which the count is the max count.</p>
<p><strong>A more optimal approach</strong>:</p>
<p>The previous approach would have you traverse the string in the worst case three times. For a one-pass approach, it would be most efficient to just keep track of the max count and letter corresponding to it as you iterate through the string:</p>
<pre><code>from collections import defaultdict
counts = defaultdict(int)
max_letter = None
max_count = 0
string = 'joshin josh'
for c in string:
counts[c] += 1
if counts[c] > max_count:
max_letter = c
max_count = counts[c]
print(max_letter) # j
</code></pre>
| 1
|
2016-09-09T21:14:45Z
|
[
"python",
"string",
"dictionary"
] |
How to return the first highest value that is placed in a dictionary
| 39,419,653
|
<pre><code>dic = {}
count = 0
i = 0
str_in = str_in.replace(' ','')
while i < len(str_in):
count = str_in.count(str_in[i])
dic[str_in[i]] = count
i += 1
for key in dic:
if key == max(dic, key=dic.get):
return key
break
</code></pre>
<p>The dictionary that is made in this program is</p>
<pre><code>{'i': 1, 'h': 2, 'j': 2, 'o': 2, 'n': 1, 's': 2}
</code></pre>
<p>from the input <code>'joshin josh'</code></p>
<p>I am pretty sure the dictionary's max value returns h because it is the first value in the dictionary with the highest value, even if tied.
I want it to return j because that is the first letter I put into the dictionary. It seems like the dictionary automatically sorts the letters alphabetically, which I also don't understand. I want this to work with any string and all tied letters, not just this particular string. Correct me if I am wrong on anything, I am a noob.</p>
| 1
|
2016-09-09T21:01:54Z
| 39,425,017
|
<p>Another approach: using <code>OrderedDict</code> as a mixin class allows an ordered <code>Counter</code> class to be easily composed. You can then use the first entry of the list returned from the <code>most_common</code> method.</p>
<p>e.g.</p>
<pre><code>from collections import Counter, OrderedDict
class OrderedCounter(Counter, OrderedDict):
pass
oc = OrderedCounter('joshin josh')
print(oc.most_common()[0])
</code></pre>
<p>Will give you back a tuple of the letter and the number of occurrences. e.g.</p>
<blockquote>
<p>('j', 2)</p>
</blockquote>
<p>You'd need to protect the <code>[0]</code> in the case where you were passing an empty string, where nothing is most common.</p>
<p>To better understand why this works I recommend watching Raymond Hettinger's "Super Considered Super" talk from PyCon 2015. YouTube link <a href="https://www.youtube.com/watch?v=EiOglTERPEo" rel="nofollow">here</a></p>
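<p>A sketch of that empty-string guard (the helper name <code>first_most_common</code> is illustrative, not from the question):</p>

```python
from collections import Counter, OrderedDict

class OrderedCounter(Counter, OrderedDict):
    pass

def first_most_common(s):
    common = OrderedCounter(s).most_common(1)
    return common[0] if common else None   # guard against empty input

print(first_most_common('joshin josh'))  # ('j', 2)
print(first_most_common(''))             # None
```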
| 2
|
2016-09-10T10:13:28Z
|
[
"python",
"string",
"dictionary"
] |
Python pyspark error
| 39,419,712
|
<p>I am using Pyspark to create a dataframe but come up against an error from the get go.</p>
<p>I am using the following code to create the dataframe using data from the examples folder:</p>
<pre><code>df = spark.read.load("c:/spark/examples/src/main/resources/users.parquet")
</code></pre>
<p>This generates the following extensive error message:</p>
<pre><code>SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
16/09/09 15:41:51 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
16/09/09 15:41:51 WARN Hive: Failed to access metastore. This class should not accessed in runtime.
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hiv
e.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)
at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:171)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog(HiveSharedState.scala:45)
at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:50)
at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
at org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:63)
at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClien
t
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1234)
... 36 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 42 more
Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:c:/Spark/
bin/spark-warehouse
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.<init>(Path.java:171)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:600)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 47 more
Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:c:/Spark/bin/spark-warehouse
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.<init>(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
... 58 more
16/09/09 15:41:51 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\Spark\python\pyspark\sql\readwriter.py", line 147, in load
return self._df(self._jreader.load(path))
File "c:\Spark\python\lib\py4j-0.10.1-src.zip\py4j\java_gateway.py", line 933, in __call__
File "c:\Spark\python\pyspark\sql\utils.py", line 63, in deco
return f(*a, **kw)
File "c:\Spark\python\lib\py4j-0.10.1-src.zip\py4j\protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o27.load.
: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.Sessio
nHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:171)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog(HiveSharedState.scala:45)
at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:50)
at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
at org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:63)
at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
at org.apache.spark.sql.SparkSession.baseRelationToDataFrame(SparkSession.scala:382)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:143)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:132)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClien
t
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
... 33 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 39 more
Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: file:c:/Spark/
bin/spark-warehouse
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.<init>(Path.java:171)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:600)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 44 more
Caused by: java.net.URISyntaxException: Relative path in absolute URI: file:c:/Spark/bin/spark-warehouse
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.<init>(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
... 55 more
</code></pre>
<p>I think a cause may be this line:</p>
<pre><code>java.net.URISyntaxException: Relative path in absolute URI: file:c:/Spark/bin/spark-warehouse
</code></pre>
<p>I'm not confident how to address this, so any assistance is greatly appreciated.</p>
| 0
|
2016-09-09T21:07:42Z
| 39,421,403
|
<p>This was an issue with the Spark installation. I installed locally. I created RDDs and everything went fine until I wanted to create a Spark DataFrame from the RDDs ... big errors.</p>
<p>The issue was with the pre-built Spark version: spark-2.0.0-bin-hadoop2.7</p>
<p>I removed spark-2.0.0-bin-hadoop2.7, then downloaded and installed spark-1.6.2-bin-hadoop2.6.
I used <code>pip install py4j</code> rather than unzipping and using the version shipped with the pre-built Spark.</p>
<p>I can now create the DataFrame</p>
<p>The outcome I think is two-fold:
1. Use spark-1.6.2-bin-hadoop2.6 if installing on Windows 7 and you want to use Spark DataFrames.
2. SparkSession is not available (it only came out with Spark 2), so you have to use SQLContext ... oh well!</p>
<p>regards</p>
| 0
|
2016-09-10T00:37:28Z
|
[
"python",
"pyspark"
] |
Python: C++ extension returning multiple values
| 39,419,727
|
<p>I am writing a C++ extension for python script and want to return multiple values like what we can do in python function.</p>
<p>Simple Example in python:</p>
<pre><code>def test():
return 0,0
</code></pre>
<p><strong>tuple</strong> seems to be the closest answer</p>
<pre><code>#include <tuple>
std::tuple<int,int> test(void){
return std::make_tuple(0,0);
}
</code></pre>
<p>But when I compile it, it complains that </p>
<pre><code>TypeError: No to_python (by-value) converter found for C++ type: std::tuple<int, int>
</code></pre>
<p>Anyone knows if I could return multiple values using C++?</p>
<p><strong>EDIT:</strong></p>
<p>This is my setup.py file.</p>
<pre><code>#!/usr/bin/env python
from distutils.core import setup
from distutils.extension import Extension
setup(name="PackageName",
ext_modules=[
Extension("foo", ["foo.cpp"],
libraries = ["boost_python"],
extra_compile_args = ["-std=c++11"]
)
])
</code></pre>
| 1
|
2016-09-09T21:09:17Z
| 39,421,899
|
<p>It seems you're using boost-python. Then you should use <code>boost::python::tuple</code>, not <code>std::tuple</code>. See the examples on <a href="http://www.boost.org/doc/libs/1_61_0/libs/python/doc/html/reference/object_wrappers/boost_python_tuple_hpp.html#object_wrappers.boost_python_tuple_hpp.introduction" rel="nofollow">this page</a>.</p>
| 1
|
2016-09-10T02:21:35Z
|
[
"python",
"c++",
"python-c-api",
"python-c-extension"
] |
python - to read json file
| 39,419,753
|
<p>I'm new to JSON and Python, any help on this would be greatly appreciated.
Below is my JSON file format:</p>
<pre><code>{
"header": {
"platform":"atm"
"version":"2.0"
}
"details":[
{
"abc":"3"
"def":"4"
},
{
"abc":"5"
"def":"6"
},
{
"abc":"7"
"def":"8"
}
]
}
</code></pre>
<p>My requirement is to read the values of all <code>"abc"</code> <code>"def"</code> in details and add this is to a new list like this <code>[(1,2),(3,4),(5,6),(7,8)]</code>. The new list will be used to create a spark data frame.</p>
| -3
|
2016-09-09T21:10:44Z
| 39,419,847
|
<p>Open the file, and get a filehandle:</p>
<p><a href="https://docs.python.org/2/library/functions.html#open" rel="nofollow">https://docs.python.org/2/library/functions.html#open</a></p>
<p>Then, pass the file handle into json.load():</p>
<p><a href="https://docs.python.org/2/library/json.html#json.load" rel="nofollow">https://docs.python.org/2/library/json.html#json.load</a></p>
<p>From there, you can easily deal with a python dictionary that represents your json-encoded data.</p>
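<p>Putting those two steps together, here is a minimal sketch; it uses a corrected (valid, comma-separated) version of the JSON from the question, with <code>json.loads</code> on a string standing in for <code>json.load</code> on a file handle:</p>

```python
import json

# A corrected, valid version of the JSON from the question (commas added).
# With a real file you would use: data = json.load(open("input.json"))
raw = """
{
  "header": {"platform": "atm", "version": "2.0"},
  "details": [
    {"abc": "3", "def": "4"},
    {"abc": "5", "def": "6"},
    {"abc": "7", "def": "8"}
  ]
}
"""

data = json.loads(raw)
pairs = [(int(d["abc"]), int(d["def"])) for d in data["details"]]
print(pairs)  # [(3, 4), (5, 6), (7, 8)]
```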
| 0
|
2016-09-09T21:19:05Z
|
[
"python",
"json",
"spark-dataframe"
] |
python - to read json file
| 39,419,753
|
<p>I'm new to JSON and Python, any help on this would be greatly appreciated.
Below is my JSON file format:</p>
<pre><code>{
"header": {
"platform":"atm"
"version":"2.0"
}
"details":[
{
"abc":"3"
"def":"4"
},
{
"abc":"5"
"def":"6"
},
{
"abc":"7"
"def":"8"
}
]
}
</code></pre>
<p>My requirement is to read the values of all <code>"abc"</code> <code>"def"</code> in details and add this is to a new list like this <code>[(1,2),(3,4),(5,6),(7,8)]</code>. The new list will be used to create a spark data frame.</p>
| -3
|
2016-09-09T21:10:44Z
| 39,419,927
|
<p>I'm trying to understand your question as best as I can, but it looks like it was formatted poorly.</p>
<p>First off your json blob is not valid json, it is missing quite a few commas. This is probably what you are looking for:</p>
<pre><code>{
"header": {
"platform": "atm",
"version": "2.0"
},
"details": [
{
"abc": "3",
"def": "4"
},
{
"abc": "5",
"def": "6"
},
{
"abc": "7",
"def": "8"
}
]
}
</code></pre>
<p>Now assuming you are trying to parse this in python you will have to do the following.</p>
<pre><code>import json
json_blob = '{"header": {"platform": "atm","version": "2.0"},"details": [{"abc": "3","def": "4"},{"abc": "5","def": "6"},{"abc": "7","def": "8"}]}'
json_obj = json.loads(json_blob)
final_list = []
for single in json_obj['details']:
final_list.append((int(single['abc']), int(single['def'])))
print(final_list)
</code></pre>
<p>This will print the following: [(3, 4), (5, 6), (7, 8)]</p>
| 1
|
2016-09-09T21:25:57Z
|
[
"python",
"json",
"spark-dataframe"
] |
Set foreign key to nullable=false?
| 39,419,762
|
<p>Do you set your foreign keys as nullable=false if always expect a foreign key on that column in the database? </p>
<p>I'm using sqlalchemy and have set my models with required foreign keys. This sometimes causes me to run session.commit() more often, since I need the parent model to have an id and be fully created in order to build a child object in the ORM. What is considered best practice? My models are below:</p>
<pre><code>class Location(Base):
__tablename__ = 'locations'
id = Column(Integer, primary_key=True)
city = Column(String(50), nullable=False, unique=True)
hotels = relationship('Hotel', back_populates='location')
class Hotel(Base):
__tablename__ = 'hotels'
id = Column(Integer, primary_key=True)
name = Column(String(100), nullable=False, unique=True)
phone_number = Column(String(20))
parking_fee = Column(String(10))
location_id = Column(Integer, ForeignKey('locations.id'), nullable=False)
location = relationship('Location', back_populates='hotels')
</code></pre>
| 0
|
2016-09-09T21:11:36Z
| 39,419,880
|
<p>You don't need to do <code>session.commit()</code> to get an ID; <code>session.flush()</code> will do.</p>
<p>Even better, you don't need to get an ID at all if you set the relationship because SQLalchemy will figure out the order to do the <code>INSERT</code>s in. You can simply do:</p>
<pre><code>loc = Location(city="NYC", hotels=[Hotel(name="Hilton")])
session.add(loc)
session.commit()
</code></pre>
<p>and it will work fine.</p>
| 2
|
2016-09-09T21:21:45Z
|
[
"python",
"sqlalchemy"
] |
Set foreign key to nullable=false?
| 39,419,762
|
<p>Do you set your foreign keys as nullable=false if always expect a foreign key on that column in the database? </p>
<p>I'm using sqlalchemy and have set my models with required foreign keys. This sometimes causes me to run session.commit() more often, since I need the parent model to have an id and be fully created in order to build a child object in the ORM. What is considered best practice? My models are below:</p>
<pre><code>class Location(Base):
__tablename__ = 'locations'
id = Column(Integer, primary_key=True)
city = Column(String(50), nullable=False, unique=True)
hotels = relationship('Hotel', back_populates='location')
class Hotel(Base):
__tablename__ = 'hotels'
id = Column(Integer, primary_key=True)
name = Column(String(100), nullable=False, unique=True)
phone_number = Column(String(20))
parking_fee = Column(String(10))
location_id = Column(Integer, ForeignKey('locations.id'), nullable=False)
location = relationship('Location', back_populates='hotels')
</code></pre>
| 0
|
2016-09-09T21:11:36Z
| 39,420,590
|
<p>I would suggest that you not set nullable=False. Making a foreign key nullable is very reasonable in many situations. In your scenario, for example, if you want to insert a hotel whose location is currently undetermined, you cannot accomplish this when the foreign key is NOT NULL. So in such cases the best practice is to leave the foreign key nullable.</p>
<p>See necessary nullable foreign key <a href="http://stackoverflow.com/questions/925203/any-example-of-a-necessary-nullable-foreign-key">Any example of a necessary nullable foreign key?</a> </p>
| 0
|
2016-09-09T22:36:31Z
|
[
"python",
"sqlalchemy"
] |
Broadcasting in Python with permutations
| 39,419,823
|
<p>I understand that <code>transpose</code> on an <code>ndarray</code> is intended to be the equivalent of matlab's <code>permute</code> function however I have a specific usecase that doesn't work simply. In matlab I have the following:</p>
<pre><code>C = @bsxfun(@times, permute(A,[4,2,5,1,3]), permute(B, [1,6,2,7,3,4,5])
</code></pre>
<p>where A is a 3D tensor of shape NxNxM and B is a 5D tensor of shape NxNxMxPxP. The above function is meant to vectorize looped kronecker products. I'm assuming that Matlab is adding 2 singleton dimensions for both A and B which is why it's able to rearrange them. I'm looking to port this code over to Python <strike>but I don't think it has the capability of adding these extra dimensions.</strike>. I found <a href="http://stackoverflow.com/questions/17835121/is-this-the-best-way-to-add-an-extra-dimension-to-a-numpy-array-in-one-line-of-c">this</a> which successfully adds the extra dimensions however the broadcasting is not working the same matlab's <code>bsxfun</code>. I have attempted the obvious translation (yes I am using numpy for these <code>ndarray</code>'s and functions):</p>
<pre><code>A = A[...,None,None]
B = B[...,None,None]
C = transpose(A,[3,1,4,0,2])*transpose(B,[0,5,1,6,2,3,4])
</code></pre>
<p><strike>and I get the following error:</p>
<pre><code>return transpose(axes)
ValueError: axes don't match array
</code></pre>
<p>My first guess is to do a <code>reshape</code> on A and B to add in those singleton dimensions?</strike></p>
<p>I now get the following error:</p>
<pre><code>mults = transpose(rho_in,[3,1,4,0,2])*transpose(proj,[0,5,1,6,2,3,4])
ValueError: operands could not be broadcast together with shapes (1,9,1,9,8) (9,1,9,1,8,40,40)
</code></pre>
<p>EDIT: Amended my question to be less about adding singleton dimensions but more about correctly broadcasting this matlab multiplication in python.</p>
| 4
|
2016-09-09T21:16:32Z
| 39,420,827
|
<p>The huge difference between MATLAB and numpy is that the former uses column-major format for its arrays, while the latter row-major. The corollary is that implicit singleton dimensions are handled differently.</p>
<p>Specifically, MATLAB explicitly ignores trailing singleton dimensions: <code>rand(3,3,1,1,1,1,1)</code> is actually a <code>3x3</code> matrix. Along these lines, you can use <code>bsxfun</code> to operate on two arrays if their <em>leading</em> dimensions match: <code>NxNxM</code> is implicitly <code>NxNxMx1x1</code> which is compatible with <code>NxNxMxPxP</code>.</p>
<p>Numpy, on the other hand, <a href="http://docs.scipy.org/doc/numpy-1.10.1/user/basics.broadcasting.html#general-broadcasting-rules" rel="nofollow">allows implicit singletons <em>up front</em></a>. You need to <code>permute</code> your arrays in a way that their <em>trailing</em> dimensions match up, for instance shape <code>(40,40,9,1,9,1,8)</code> with shape <code>(1,9,1,9,8)</code>, and the result should be of shape <code>(40,40,9,9,9,9,8)</code>.</p>
<p>Dummy example:</p>
<pre><code>>>> import numpy as np
>>> (np.random.rand(40,40,9,1,9,1,8)+np.random.rand(1,9,1,9,8)).shape
(40, 40, 9, 9, 9, 9, 8)
</code></pre>
<p>Note that what you're trying to do can probably be done using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>numpy.einsum</code></a>. I suggest taking a closer look at that. An example of what I mean: from your question I gathered that you want to perform this: take elements <code>A[1:N,1:N,1:M]</code> and <code>B[1:N,1:N,1:M,1:P,1:P]</code> and construct a new array <code>C[1:N,1:N,1:N,1:N,1:M,1:P,1:P]</code> such that</p>
<pre><code>C[i1,i2,i3,i4,i5,i6,i7] = A[i2,i4,i5]*B[i1,i3,i5,i6,i7]
</code></pre>
<p>(your specific index order might vary). If this is correct, you can indeed use <code>numpy.einsum()</code>:</p>
<pre><code>>>> a = np.random.rand(3,3,2)
>>> b = np.random.rand(3,3,2,4,4)
>>> np.einsum('ijk,lmkno->limjkno',a,b).shape
(3, 3, 3, 3, 2, 4, 4)
</code></pre>
<p>Two things should be noted, though. Firstly, the above operation will be very memory-intense, which should be expected for cases of vectorization (where you usually win CPU time at the expense of memory need). Secondly, you should seriously consider rearranging the data model when you port your code. The reason that broadcasting works differently in the two languages is intricately connected to the column-major/row-major difference. This also implies that in MATLAB you should work with <em>leading</em> indices first, since <code>A(:,i2,i3)</code> corresponds to a contiguous block of memory, while <code>A(i1,i2,:)</code> does not. Conversely, in numpy <code>A[i1,i2,:]</code> is contiguous, while <code>A[:,i2,i3]</code> is not.</p>
<p>These considerations suggest that you should set up the logistics of your data such that vectorized operations preferably work with leading indices in MATLAB and trailing indices in numpy. You could still use <code>numpy.einsum</code> to perform the operation itself, however your dimensions should be in a different (possibly reverse) order compared to MATLAB, at least if we assume that both versions of the code use an optimal setup.</p>
| 6
|
2016-09-09T23:03:11Z
|
[
"python",
"matlab",
"vectorization",
"broadcasting",
"permute"
] |
Broadcasting in Python with permutations
| 39,419,823
|
<p>I understand that <code>transpose</code> on an <code>ndarray</code> is intended to be the equivalent of matlab's <code>permute</code> function however I have a specific usecase that doesn't work simply. In matlab I have the following:</p>
<pre><code>C = @bsxfun(@times, permute(A,[4,2,5,1,3]), permute(B, [1,6,2,7,3,4,5])
</code></pre>
<p>where A is a 3D tensor of shape NxNxM and B is a 5D tensor of shape NxNxMxPxP. The above function is meant to vectorize looped kronecker products. I'm assuming that Matlab is adding 2 singleton dimensions for both A and B which is why it's able to rearrange them. I'm looking to port this code over to Python <strike>but I don't think it has the capability of adding these extra dimensions.</strike>. I found <a href="http://stackoverflow.com/questions/17835121/is-this-the-best-way-to-add-an-extra-dimension-to-a-numpy-array-in-one-line-of-c">this</a> which successfully adds the extra dimensions however the broadcasting is not working the same matlab's <code>bsxfun</code>. I have attempted the obvious translation (yes I am using numpy for these <code>ndarray</code>'s and functions):</p>
<pre><code>A = A[...,None,None]
B = B[...,None,None]
C = transpose(A,[3,1,4,0,2])*transpose(B,[0,5,1,6,2,3,4])
</code></pre>
<p><strike>and I get the following error:</p>
<pre><code>return transpose(axes)
ValueError: axes don't match array
</code></pre>
<p>My first guess is to do a <code>reshape</code> on A and B to add in those singleton dimensions?</strike></p>
<p>I now get the following error:</p>
<pre><code>mults = transpose(rho_in,[3,1,4,0,2])*transpose(proj,[0,5,1,6,2,3,4])
ValueError: operands could not be broadcast together with shapes (1,9,1,9,8) (9,1,9,1,8,40,40)
</code></pre>
<p>EDIT: Amended my question to be less about adding singleton dimensions but more about correctly broadcasting this matlab multiplication in python.</p>
| 4
|
2016-09-09T21:16:32Z
| 39,423,815
|
<p>Looking at your MATLAB code, you have -</p>
<pre><code>C = bsxfun(@times, permute(A,[4,2,5,1,3]), permute(B, [1,6,2,7,3,4,5])
</code></pre>
<p>So, in essence -</p>
<pre><code>B : 1 , 6 , 2 , 7 , 3 , 4 , 5
A : 4 , 2 , 5 , 1 , 3
</code></pre>
<p>Now, in MATLAB we had to borrow singleton dimensions from higher ones, that's why all that trouble of bringing in dims <code>6</code>, <code>7</code> for <code>B</code> and dims <code>4</code> <code>5</code> for <code>A</code>.</p>
<p>In NumPy, we bring in those explicitly with <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/arrays.indexing.html#numpy.newaxis" rel="nofollow"><code>np.newaxis</code>/None</a>. Thus, for NumPy, we could put it like so -</p>
<pre><code>B : 1 , N , 2 , N , 3 , 4 , 5
A : N , 2 , N , 1 , 3 , N , N
</code></pre>
<p>, where <code>N</code> represents new axis. Please notice that we were need to put in new axes at the end for <code>A</code> to push forward other dimension for alignment. In constrast, this happens in MATLAB by default.</p>
<p>Making that <code>B</code> looks easy enough with as the dimensions seem to be in order and we just need to add in new axes at appropriate places - <code>B[:,None,:,None,:,:,:]</code>.</p>
<p>Creating such <code>A</code> doesn't look straight-forward. Ignoring <code>N's</code> in <code>A</code>, we would have - <code>A : 2 , 1 , 3</code>. So, the starting point would be permuting dimensions and then add in those two new axes that were ignored - <code>A.transpose(1,0,2)[None,:,None,:,:,None,None]</code>.</p>
<p>Thus far, we have -</p>
<pre><code>B (new): B[:,None,:,None,:,:,:]
A (new): A.transpose(1,0,2)[None,:,None,:,:,None,None]
</code></pre>
<p>In NumPy, we can skip the leading new axes and trailing non-singleton dims. So, we could simplify like so -</p>
<pre><code>B (new): B[:,None,:,None]
A (new): A.transpose(1,0,2)[:,None,...,None,None]
</code></pre>
<p>The final output would be multiplication between these two extended versions -</p>
<pre><code>C = A.transpose(1,0,2)[:,None,...,None,None]*B[:,None,:,None]
</code></pre>
<p><strong>Runtime test</strong></p>
<p>I believe @Andras's post meant the equivalent <code>np.einsum</code> implementation to be something like : <code>np.einsum('ijk,lmkno->ljmikno',A,B)</code>.</p>
<pre><code>In [24]: A = np.random.randint(0,9,(10,10,10))
...: B = np.random.randint(0,9,(10,10,10,10,10))
...:
In [25]: C1 = np.einsum('ijk,lmkno->ljmikno',A,B)
In [26]: C2 = A.transpose(1,0,2)[:,None,...,None,None]*B[:,None,:,None]
In [27]: np.allclose(C1,C2)
Out[27]: True
In [28]: %timeit np.einsum('ijk,lmkno->ljmikno',A,B)
10 loops, best of 3: 102 ms per loop
In [29]: %timeit A.transpose(1,0,2)[:,None,...,None,None]*B[:,None,:,None]
10 loops, best of 3: 78.4 ms per loop
In [30]: A = np.random.randint(0,9,(15,15,15))
...: B = np.random.randint(0,9,(15,15,15,15,15))
...:
In [31]: %timeit np.einsum('ijk,lmkno->ljmikno',A,B)
1 loop, best of 3: 1.76 s per loop
In [32]: %timeit A.transpose(1,0,2)[:,None,...,None,None]*B[:,None,:,None]
1 loop, best of 3: 1.36 s per loop
</code></pre>
| 2
|
2016-09-10T07:40:14Z
|
[
"python",
"matlab",
"vectorization",
"broadcasting",
"permute"
] |
Syntax error in print statement
| 39,419,846
|
<p>This is the end of my script. I'm getting the error : </p>
<pre><code>print "[%s]\t %s " % (item.sharing['access'], item.title)
^
SyntaxError: invalid syntax
</code></pre>
<p>Code:</p>
<pre><code>#List titles and sharing status for items in users'home folder
for item in currentUser.items:
print "[%s]\t %s " % (item.sharing['access'], item.title)
</code></pre>
<p><strong>Questions</strong></p>
<p>How can I correct this syntax error?</p>
<p>Where can I access resources to avoid mistakes such as this?</p>
| -4
|
2016-09-09T21:18:54Z
| 39,419,889
|
<p>Either you're using Python 3, in which <code>print</code> is a function and thus requires surrounding parentheses (i.e. you have to say <code>print(x)</code> instead of <code>print x</code>), or you have an earlier line of code that wasn't terminated properly.</p>
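<p>For illustration, a Python 3 version of the failing line; <code>item_access</code> and <code>item_title</code> are hypothetical stand-ins for <code>item.sharing['access']</code> and <code>item.title</code>:</p>

```python
# Hypothetical stand-ins for item.sharing['access'] and item.title
item_access = "private"
item_title = "My Map"

# Python 3: print is a function, so the argument needs parentheses
line = "[%s]\t %s " % (item_access, item_title)
print(line)
```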
| 0
|
2016-09-09T21:22:28Z
|
[
"python"
] |
Obtain documentation for a module implementing the NSGA-II algorithm
| 39,419,848
|
<p>I am attempting to use the implementation of the NSGA-II algorithm in this module <a href="https://github.com/wreszelewski/nsga2" rel="nofollow">https://github.com/wreszelewski/nsga2</a></p>
<p><strong>Question</strong></p>
<p>Where can I find documentation for this module?</p>
| 0
|
2016-09-09T21:19:20Z
| 39,421,438
|
<p>The documentation seems to be limited to the examples provided by the author. <a href="https://github.com/wreszelewski/nsga2/tree/master/examples" rel="nofollow">https://github.com/wreszelewski/nsga2/tree/master/examples</a> </p>
<p><strong>Collecting metrics during evolution, then plotting hvr metric per generation</strong></p>
<pre><code>from metrics.problems.zdt import ZDT3Metrics
from nsga2.evolution import Evolution
from nsga2.problems.zdt import ZDT
from nsga2.problems.zdt.zdt3_definitions import ZDT3Definitions
from plotter import Plotter
def print_generation(population, generation_num):
print("Generation: {}".format(generation_num))
def print_metrics(population, generation_num):
pareto_front = population.fronts[0]
metrics = ZDT3Metrics()
hv = metrics.HV(pareto_front)
hvr = metrics.HVR(pareto_front)
print("HV: {}".format(hv))
print("HVR: {}".format(hvr))
collected_metrics = {}
def collect_metrics(population, generation_num):
pareto_front = population.fronts[0]
metrics = ZDT3Metrics()
hv = metrics.HV(pareto_front)
hvr = metrics.HVR(pareto_front)
collected_metrics[generation_num] = hv, hvr
zdt_definitions = ZDT3Definitions()
plotter = Plotter(zdt_definitions)
problem = ZDT(zdt_definitions)
evolution = Evolution(problem, 200, 200)
evolution.register_on_new_generation(plotter.plot_population_best_front)
evolution.register_on_new_generation(print_generation)
evolution.register_on_new_generation(print_metrics)
evolution.register_on_new_generation(collect_metrics)
pareto_front = evolution.evolve()
plotter.plot_x_y(collected_metrics.keys(), map(lambda (hv, hvr): hvr, collected_metrics.values()), 'generation', 'HVR', 'HVR metric for ZDT3 problem', 'hvr-zdt3')
</code></pre>
<p><strong>The <code>Plotter</code> class used by the example above (also from the repository's examples)</strong></p>
<pre><code>import os
import matplotlib.pyplot as pyplot
class Plotter():
def __init__(self, problem):
self.directory = 'plots'
self.problem = problem
def plot_population_best_front(self, population, generation_number):
if generation_number % 10 == 0:
filename = "{}/generation{}.png".format(self.directory, str(generation_number))
self.__create_directory_if_not_exists()
computed_pareto_front = population.fronts[0]
self.__plot_front(computed_pareto_front, filename)
def plot_x_y(self, x, y, x_label, y_label, title, filename):
filename = "{}/{}.png".format(self.directory, filename)
self.__create_directory_if_not_exists()
figure = pyplot.figure()
axes = figure.add_subplot(111)
axes.plot(x, y, 'r')
axes.set_xlabel(x_label)
axes.set_ylabel(y_label)
axes.set_title(title)
pyplot.savefig(filename)
pyplot.close(figure)
def __create_directory_if_not_exists(self):
if not os.path.exists(self.directory):
os.makedirs(self.directory)
def __plot_front(self, front, filename):
figure = pyplot.figure()
axes = figure.add_subplot(111)
computed_f1 = map(lambda individual: individual.objectives[0], front)
computed_f2 = map(lambda individual: individual.objectives[1], front)
axes.plot(computed_f1, computed_f2, 'g.')
perfect_pareto_front_f1, perfect_pareto_front_f2 = self.problem.perfect_pareto_front()
axes.plot(perfect_pareto_front_f1, perfect_pareto_front_f2, 'r.')
axes.set_xlabel('f1')
axes.set_ylabel('f2')
axes.set_title('Computed Pareto front')
pyplot.savefig(filename)
pyplot.close(figure)
</code></pre>
<p>Do they help you? </p>
| 0
|
2016-09-10T00:45:08Z
|
[
"python",
"module"
] |
Compare a datetime with a datetime range
| 39,419,869
|
<p>I am fairly new to coding with Python and have so far managed to google my way out of most issues thanks to the great resources available on this site.</p>
<p>I am writing a program which takes multiple .csv files, strips out the data from each of the initial files into different types of log files and writes these different types of logs into their own .csv.</p>
<p>Now I have stripped the files back, I need to roll through file A and, for each row, take the datetime and search file B for the same datetime, then copy the relevant data into a new column alongside the initial data. This is nice and easy with an <code>if A == B</code> for loop, HOWEVER... each of these logs is written by a different computer whose real-time clock drifts over time.
So what I actually want is to take the time from file A and search for a corresponding time in file B +/- 30 seconds, which is where I am stuck and have been going around in circles for the last 3/4 hours!</p>
<p>Currently when I run the below code extract I get the following:</p>
<p>---> 35 if (timecode - margin) <= datetime.datetime.date(ssptime) <= (timecode + margin):</p>
<p>TypeError: can't compare datetime.datetime to datetime.date </p>
<p>Thanks in advance!</p>
<pre><code> import matplotlib.pyplot as plt # external function need from inside Canopy
import os # external functions
import re # external functions
import matplotlib.patches as mpatches
import csv
import pandas as pd
import datetime
addrbase = "23"
csvnum = [1,2,3,4,5] # CSV number
csvnum2 = [1,2,3,4,5]
senstyp = ['BSL'] #Sensor Type
Beacons = 5
outfile = open('C:\Users\xxx\Canopy\2303_AVG2.csv', 'w') #File to write to
outcsv = csv.writer(outfile, lineterminator='\n')
with open('C:\Users\xxx\Canopy\2303_AVG.csv', 'r') as f: #File read vairable f
csvread = csv.reader(f, delimiter=',') #Stores the data from file location f in csvread using the delimiter of','
for row in csvread: #sets up a for loop using the data in csvread
timecode = datetime.datetime.strptime(row[1],'%Y/%m/%d %H:%M:%S')#BSL time to datetime.datetime
margin = datetime.timedelta(seconds = 30)
with open('C:\Users\xxx\Canopy\2301_SSP_AVG.csv', 'r') as f: #File read vairable f
csvreadssp = csv.reader(f, delimiter=',')
for line in csvreadssp:
ssptime = datetime.datetime.strptime(row[2],'%Y/%m/%d %H:%M:%S')#
print ssptime
if (timecode - margin) <= datetime.datetime.date(ssptime) <= (timecode + margin):
relssp = line[6]
print "Time: " + str(timecode) + " SSP: " + str(relssp)
#try:
row.append(relssp) #Calculates the one way travel time of the range and adds a new column with the data
outcsv.writerow(row) # Writes file
#except ValueError: #handles errors from header files
# row.append(0) #handles errors from header files
outfile.flush()
outfile.close()
print "done"
</code></pre>
| 1
|
2016-09-09T21:20:53Z
| 39,420,022
|
<p>You can't compare a <code>datetime</code> representing a specific point in time to a <code>date</code>, which represents a whole day. What time of day should the date represent?</p>
<p><code>ssptime</code> is already a <code>datetime</code> (because that's what <code>strptime</code> returns) - why are you calling <code>date</code> on it? This should work:</p>
<pre><code>if (timecode - margin) <= ssptime <= (timecode + margin):
</code></pre>
<p>Since your times are all down to second precision, you could also do this:</p>
<pre><code>if abs((ssptime - timecode).total_seconds()) < margin.total_seconds():
</code></pre>
<p>I'm not sure which is clearer - I'd probably lean towards the second.</p>
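<p>A quick self-contained check of both forms (timestamps invented for illustration); note that when comparing via seconds, <code>margin</code> must also be converted with <code>total_seconds()</code>:</p>

```python
import datetime

timecode = datetime.datetime(2016, 4, 5, 13, 0, 1)
ssptime = datetime.datetime(2016, 4, 5, 13, 0, 20)  # 19 seconds later
margin = datetime.timedelta(seconds=30)

# Form 1: chained datetime comparison
within_range = (timecode - margin) <= ssptime <= (timecode + margin)

# Form 2: compare the absolute difference in seconds
within_seconds = abs((ssptime - timecode).total_seconds()) <= margin.total_seconds()

print(within_range, within_seconds)  # True True
```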
| 2
|
2016-09-09T21:33:04Z
|
[
"python",
"csv",
"datetime"
] |
Given a DAG, remove nodes exclusively present in paths shorter than length 3
| 39,419,947
|
<p>I'm working with directed acyclic graphs in networkx. I have graphs that look like the figure below.
<a href="http://i.stack.imgur.com/OcdZN.png" rel="nofollow"><img src="http://i.stack.imgur.com/OcdZN.png" alt="enter image description here"></a></p>
<p>What I essentially want to do is remove all the nodes from this graph that are exclusively connected to paths with length less than 3. For example, in the graph above I would delete all the blue nodes and keep only the red ones.</p>
<p>What would be a best algorithm for this keeping in mind that these graphs can grow very large (upto 10K nodes)?</p>
<p>A similar question <a href="http://stackoverflow.com/questions/19306914/remove-all-nodes-in-a-binary-three-which-don-t-lie-in-any-path-with-sum-k">here</a> focuses on binary trees only and will not be applicable to my case. I'd prefer to achieve this on Python (networkx).</p>
<p>Thanks!</p>
| 0
|
2016-09-09T21:27:48Z
| 39,421,934
|
<ol>
<li>generate the height map (a dictionary from node to height)</li>
<li>generate the inverse graph if needed (all edges are reversed)</li>
<li>generate the depth map (which is just the height map of the inverse graph)</li>
<li>nodes = [ n for n in nodes if hmap[n] + dmap[n] >= 3 ]</li>
</ol>
<p>That's O(nodes+edges)</p>
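<p>A sketch of those four steps in plain Python (adjacency dicts instead of networkx objects, so it stays self-contained); the height and depth maps are memoized longest-path lengths measured in edges:</p>

```python
from collections import defaultdict

def filter_short_path_nodes(nodes, edges, min_len=3):
    """Keep only nodes lying on some path of at least min_len edges."""
    succ = defaultdict(list)   # forward graph
    pred = defaultdict(list)   # inverse graph (step 2)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)

    def longest(adj):
        # Memoized longest path length (in edges) starting from each node
        memo = {}
        def dfs(n):
            if n not in memo:
                memo[n] = 1 + max((dfs(m) for m in adj[n]), default=-1)
            return memo[n]
        return {n: dfs(n) for n in nodes}

    hmap = longest(succ)   # step 1: height map
    dmap = longest(pred)   # step 3: depth map
    return [n for n in nodes if hmap[n] + dmap[n] >= min_len]  # step 4

# Chain a->b->c->d lies on a length-3 path; e->f does not
nodes = ["a", "b", "c", "d", "e", "f"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("e", "f")]
print(filter_short_path_nodes(nodes, edges))  # ['a', 'b', 'c', 'd']
```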
| 0
|
2016-09-10T02:27:38Z
|
[
"python",
"graph",
"graph-algorithm",
"networkx",
"graph-traversal"
] |
How do I concat corresponding values of two Pandas series?
| 39,419,965
|
<p>I have a PD DF with three columns: lon, lat, and count:</p>
<pre><code>lon lat count
123 456 3
789 012 4
345 678 5
</code></pre>
<p>And I'd like to concat lon and lat to make a fourth column so that the df looks like:</p>
<pre><code>lon lat count text
123 456 3 '123, 456'
789 012 4 '789, 012'
345 678 5 '345, 678'
</code></pre>
<p>I tried:</p>
<pre><code>grouped['text'] = pd.Series([str(grouped['lon'])]).str.cat([str(grouped['lat'])], sep=', ')
</code></pre>
<p>But it appears to return:</p>
<pre><code>lon lat text
123 456 '123, 456', '345, 678', '789, 012'
789 012
345 678
</code></pre>
<p>What am I missing here?</p>
| 1
|
2016-09-09T21:28:48Z
| 39,420,004
|
<pre><code>df[['lon', 'lat']].astype(str).apply(lambda r: ', '.join(r), axis=1)
</code></pre>
<p>Turn your integers into strings so that you can use them in <code>join</code>, then apply the join function for each row (i.e. across each row cell thus <code>axis=1</code>)</p>
<p>Another way:</p>
<pre><code>df.astype(str).lon+', '+df.astype(str).lat # astype optional if df already as strings
</code></pre>
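<p>A runnable sketch of the second form on data like the question's (note that integer columns drop leading zeros, so 012 becomes 12):</p>

```python
import pandas as pd

df = pd.DataFrame({"lon": [123, 789, 345],
                   "lat": [456, 12, 678],
                   "count": [3, 4, 5]})

# Cast to string once, then concatenate with the ', ' separator
df["text"] = df["lon"].astype(str) + ", " + df["lat"].astype(str)
print(df["text"].tolist())  # ['123, 456', '789, 12', '345, 678']
```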
| 2
|
2016-09-09T21:31:20Z
|
[
"python",
"pandas",
"pyspark",
"concat"
] |
How to copy and convert parquet files to csv
| 39,419,975
|
<p>I have access to a hdfs file system and can see parquet files with</p>
<pre><code>hadoop fs -ls /user/foo
</code></pre>
<p>How can I copy those parquet files to my local system and convert them to csv so I can use them? The files should be simple text files with a number of fields per row.</p>
| 0
|
2016-09-09T21:29:27Z
| 39,428,561
|
<p>If there is a table defined over those parquet files in Hive (or if you define such a table yourself), you can run a Hive query on that and save the results into a CSV file. Try something along the lines of:</p>
<pre>
insert overwrite local directory <i>dirname</i>
row format delimited fields terminated by ','
select * from <i>tablename</i>;
</pre>
<p>Substitute <em><code>dirname</code></em> and <em><code>tablename</code></em> with actual values. Be aware that any existing content in the specified directory gets deleted. See <a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Writingdataintothefilesystemfromqueries" rel="nofollow">Writing data into the filesystem from queries</a> for details.</p>
| 0
|
2016-09-10T17:09:10Z
|
[
"python",
"hadoop",
"apache-spark",
"pyspark",
"parquet"
] |
How to copy and convert parquet files to csv
| 39,419,975
|
<p>I have access to a hdfs file system and can see parquet files with</p>
<pre><code>hadoop fs -ls /user/foo
</code></pre>
<p>How can I copy those parquet files to my local system and convert them to csv so I can use them? The files should be simple text files with a number of fields per row.</p>
| 0
|
2016-09-09T21:29:27Z
| 39,433,883
|
<p>Try</p>
<pre><code>df = spark.read.parquet("infile.parquet")
df.write.csv("outfile.csv")
</code></pre>
<p>Relevant API documentation:</p>
<ul>
<li><a href="http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameReader.parquet" rel="nofollow">pyspark.sql.DataFrameReader.parquet</a></li>
<li><a href="http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameWriter.csv" rel="nofollow">pyspark.sql.DataFrameWriter.csv</a></li>
</ul>
| 1
|
2016-09-11T07:36:56Z
|
[
"python",
"hadoop",
"apache-spark",
"pyspark",
"parquet"
] |
Error: object of type 'int' has no len()
| 39,420,047
|
<p>I have a function that is supposed to take an array and sort it from the smallest string size to the largest and then print it by using injection sort.</p>
<p>On the line: <code>while (j >=0) and (len(b[j]) > key):</code></p>
<p>I get this error: </p>
<blockquote>
<p>object of type 'int' has no len()</p>
</blockquote>
<p>Code:</p>
<pre><code>def list_sort(b):
for i in range(1, len(b)):
key = len(b[i])
j = i - 1
while (j >=0) and (len(b[j]) > key):
b[j+1] = b[j]
j = j - 1
b[j+1] = key
for i in range( len(b) ):
print (b[i])
list_sort(name)
</code></pre>
| 1
|
2016-09-09T21:35:53Z
| 39,420,096
|
<p>You can use NumPy's <code>atleast_1d()</code> or <code>tolist()</code> to make sure that objects will have a <code>len()</code> when dealing with a potential mixture of scalars and arrays/lists.</p>
| 1
|
2016-09-09T21:40:59Z
|
[
"python",
"arrays",
"sorting"
] |
Error: object of type 'int' has no len()
| 39,420,047
|
<p>I have a function that is supposed to take an array and sort it from the smallest string size to the largest and then print it by using injection sort.</p>
<p>On the line: <code>while (j >=0) and (len(b[j]) > key):</code></p>
<p>I get this error: </p>
<blockquote>
<p>object of type 'int' has no len()</p>
</blockquote>
<p>Code:</p>
<pre><code>def list_sort(b):
    for i in range(1, len(b)):
        key = len(b[i])
        j = i - 1
        while (j >= 0) and (len(b[j]) > key):
            b[j+1] = b[j]
            j = j - 1
        b[j+1] = key
    for i in range(len(b)):
        print(b[i])

list_sort(name)
</code></pre>
| 1
|
2016-09-09T21:35:53Z
| 39,420,260
|
<p>I am not sure how you generated the error, since the input <code>["alice", "bob", "sally"]</code> didn't crash for me, but the function does leave numbers in the list.</p>
<p>That's because you assign <code>b[j+1] = key</code>, where <code>key = len(b[i])</code> is an int. If a later pass hits that element and calls <code>len()</code> on it, that's exactly the error you are seeing.</p>
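<p>For reference, the smallest change that fixes the question's own function is to keep the element itself as the key and compare lengths only in the loop condition (a sketch, not from the original answer):</p>
<pre><code>def list_sort(b):
    for i in range(1, len(b)):
        key = b[i]          # keep the element, not its length
        j = i - 1
        while j >= 0 and len(b[j]) > len(key):
            b[j+1] = b[j]   # shift longer strings right
            j = j - 1
        b[j+1] = key        # reinsert the element itself

names = ["alice", "bob", "sally"]
list_sort(names)            # names is now ['bob', 'alice', 'sally']
</code></pre>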
<p>Now, re-implementing this from the <a href="https://en.wikipedia.org/wiki/Insertion_sort#Algorithm_for_insertion_sort" rel="nofollow">Wikipedia pseudo-code</a></p>
<pre><code>def insertion_sort(A):
    for i in range(1, len(A)):
        j = i
        # bubble A[j] left while its neighbour is longer;
        # the comparison must be re-evaluated on every iteration,
        # not cached before the loop
        while j > 0 and len(A[j-1]) > len(A[j]):
            # swap A[j] and A[j-1]
            A[j-1], A[j] = A[j], A[j-1]
            j = j - 1

name_list = ["alice", "bob", "sally"]
insertion_sort(name_list)
for name in name_list:
    print(name)
</code></pre>
<p>Prints out </p>
<pre><code>bob
alice
sally
</code></pre>
| 2
|
2016-09-09T21:57:24Z
|
[
"python",
"arrays",
"sorting"
] |
Error: object of type 'int' has no len()
| 39,420,047
|
<p>I have a function that is supposed to take a list of strings and sort it from the smallest string size to the largest, and then print it, using insertion sort.</p>
<p>On the line: <code>while (j >=0) and (len(b[j]) > key):</code></p>
<p>I get this error: </p>
<blockquote>
<p>object of type 'int' has no len()</p>
</blockquote>
<p>Code:</p>
<pre><code>def list_sort(b):
    for i in range(1, len(b)):
        key = len(b[i])
        j = i - 1
        while (j >= 0) and (len(b[j]) > key):
            b[j+1] = b[j]
            j = j - 1
        b[j+1] = key
    for i in range(len(b)):
        print(b[i])

list_sort(name)
</code></pre>
| 1
|
2016-09-09T21:35:53Z
| 39,420,311
|
<p>It sounds like you have a mixed list of integers and strings and want to sort by length of the strings. If that is the case, convert the entire list to like types -- strings -- before sorting. </p>
<p>Given:</p>
<pre><code>>>> li=[1,999,'a','bbbb',0,-3]
</code></pre>
<p>You can do:</p>
<pre><code>>>> sorted(map(str, li), key=len)
['1', 'a', '0', '-3', '999', 'bbbb']
</code></pre>
<p>If you want a two-key sort, by length and then ASCIIbetically, you can do:</p>
<pre><code>>>> sorted(map(str, li), key=lambda e: (len(e), e))
['0', '1', 'a', '-3', '999', 'bbbb']
</code></pre>
<p>You can sort without changing the object type by adding <code>str</code> to the key function:</p>
<pre><code>>>> sorted(li, key=lambda e: (len(str(e)), str(e)))
[0, 1, 'a', -3, 999, 'bbbb']
</code></pre>
| 1
|
2016-09-09T22:01:56Z
|
[
"python",
"arrays",
"sorting"
] |