title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Python BeautifulSoup scrape Yahoo Finance value | 39,197,977 | <p>I am attempting to scrape the 'Full Time Employees' value of 110,000 from the Yahoo finance website. </p>
<p>The URL is: <code>http://finance.yahoo.com/quote/AAPL/profile?p=AAPL</code></p>
<p>I have tried using Beautiful soup, but I can't find the value on the page. When I look in the DOM explorer in IE, I can see it. It has a tag with a parent tag which has a parent <p> which has a parent . The actual value is in a custom class of <code>data-react-id</code>.</p>
<p>code I have tried:</p>
<pre><code>from bs4 import BeautifulSoup as bs
import requests

html = 'http://finance.yahoo.com/quote/AAPL/profile?p=AAPL'
r = requests.get(html).content
soup = bs(r)
</code></pre>
<p>Not sure where to go.</p>
| 0 | 2016-08-29T03:08:11Z | 39,198,005 | <p>The problem is in the "requests"-related part - <em>the page you download with <code>requests</code> is not the same as the one you see in the browser</em>. The browser executes all of the JavaScript and makes the multiple asynchronous requests needed to load the page. And this particular page is quite <em>dynamic</em> itself; there is a lot happening on the "client-side".</p>
<p>What you can do is to load this page in a real browser automated by <a href="http://selenium-python.readthedocs.io/" rel="nofollow"><code>selenium</code></a>. Working example:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
driver = webdriver.Chrome()
driver.maximize_window()
driver.get("http://finance.yahoo.com/quote/AAPL/profile?p=AAPL")
# wait for the Full Time Employees to be visible
wait = WebDriverWait(driver, 10)
employees = wait.until(EC.visibility_of_element_located((By.XPATH, "//span[. = 'Full Time Employees']/following-sibling::strong")))
print(employees.text)
driver.close()
</code></pre>
<p>Prints <code>110,000</code>.</p>
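<p>When the value is rendered client-side, an alternative to driving a real browser is to look for the JSON blob such pages embed in a <code>script</code> tag and parse it directly. A toy sketch of that idea on a synthetic snippet - the <code>root.App.main</code> variable name and the JSON layout are illustrative stand-ins for whatever the real page embeds, which changes over time:</p>

```python
import json
import re

# Hypothetical stand-in for the real page source: Yahoo pages of that era
# embedded their data as a JSON assignment inside a <script> tag.
html = '<script>root.App.main = {"profile": {"fullTimeEmployees": 110000}};</script>'

# Pull the JSON object out of the script tag and parse it.
match = re.search(r"root\.App\.main = (\{.*?\});", html)
data = json.loads(match.group(1))

employees = data["profile"]["fullTimeEmployees"]
print(employees)  # 110000
```

<p>On the real page you would fetch the HTML with <code>requests</code> first and inspect the source to find the actual variable name and key path.</p>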
| 2 | 2016-08-29T03:14:24Z | [
"python",
"beautifulsoup"
] |
Python BeautifulSoup scrape Yahoo Finance value | 39,197,977 | <p>I am attempting to scrape the 'Full Time Employees' value of 110,000 from the Yahoo finance website. </p>
<p>The URL is: <code>http://finance.yahoo.com/quote/AAPL/profile?p=AAPL</code></p>
<p>I have tried using Beautiful soup, but I can't find the value on the page. When I look in the DOM explorer in IE, I can see it. It has a tag with a parent tag which has a parent <p> which has a parent . The actual value is in a custom class of <code>data-react-id</code>.</p>
<p>code I have tried:</p>
<pre><code>from bs4 import BeautifulSoup as bs
html=`http://finance.yahoo.com/quote/AAPL/profile?p=AAPL`
r = requests.get(html).content
soup = bs(r)
</code></pre>
<p>Not sure where to go.</p>
| 0 | 2016-08-29T03:08:11Z | 39,274,886 | <p>There are so many ways to download financial data, or any kind of data, from the web. The script below downloads stock prices and saves everything to a CSV file.</p>
<pre><code>import urllib2

listOfStocks = ["AAPL", "MSFT", "GOOG", "FB", "AMZN"]

urls = []
for company in listOfStocks:
    urls.append('http://real-chart.finance.yahoo.com/table.csv?s=' + company + '&d=6&e=28&f=2015&g=m&a=11&b=12&c=1980&ignore=.csv')

Output_File = open('C:/Users/your_path/Historical_Prices.csv','w')
New_Format_Data = ''
for counter in range(0, len(urls)):
    Original_Data = urllib2.urlopen(urls[counter]).read()
    if counter == 0:
        New_Format_Data = "Company," + urllib2.urlopen(urls[counter]).readline()
    rows = Original_Data.splitlines(1)
    for row in range(1, len(rows)):
        New_Format_Data = New_Format_Data + listOfStocks[counter] + ',' + rows[row]

Output_File.write(New_Format_Data)
Output_File.close()
</code></pre>
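<p>Note that <code>urllib2</code> and the bare <code>print</code> statement make this a Python 2 script, and the <code>real-chart</code> Yahoo endpoint used here has since been retired. A rough Python 3 sketch of the URL-building part, with the download call left commented out because the endpoint no longer responds:</p>

```python
from urllib.request import urlopen  # Python 3 replacement for urllib2

def build_url(ticker):
    # Same query string as the original script; the endpoint itself is
    # historical and is no longer served by Yahoo.
    return ('http://real-chart.finance.yahoo.com/table.csv?s=' + ticker +
            '&d=6&e=28&f=2015&g=m&a=11&b=12&c=1980&ignore=.csv')

urls = [build_url(t) for t in ["AAPL", "MSFT", "GOOG", "FB", "AMZN"]]
# data = urlopen(urls[0]).read()  # would fetch the CSV if the endpoint still existed
```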
<p>The script below will download multiple stock tickers into one folder.</p>
<pre><code>import urllib
import re
import json

symbolslist = open("C:/Users/rshuell001/Desktop/symbols/tickers.txt").read()
symbolslist = symbolslist.split("\n")

for symbol in symbolslist:
    myfile = open("C:/Users/your_path/Desktop/symbols/" +symbol +".txt", "w+")
    myfile.close()

    htmltext = urllib.urlopen("http://www.bloomberg.com/markets/chart/data/1D/"+ symbol+ ":US")
    data = json.load(htmltext)
    datapoints = data["data_values"]

    myfile = open("C:/Users/rshuell001/Desktop/symbols/" +symbol +".txt", "a")
    for point in datapoints:
        myfile.write(str(symbol+","+str(point[0])+","+str(point[1])+"\n"))
    myfile.close()
</code></pre>
<p>Finally...this will download prices for multiple stock tickers...</p>
<pre><code>import urllib
import re

symbolfile = open("C:/Users/your_path/Desktop/symbols/amex.txt")
symbollist = symbolfile.read()
newsymbolslist = symbollist.split("\n")

i = 0
while i < len(newsymbolslist):
    url = "http://finance.yahoo.com/q?s=" + newsymbolslist[i] + "&ql=1"
    htmlfile = urllib.urlopen(url)
    htmltext = htmlfile.read()
    regex = '<span id="yfs_l84_' + newsymbolslist[i] + '">(.+?)</span>'
    pattern = re.compile(regex)
    price = re.findall(pattern, htmltext)
    print "the price of ", newsymbolslist[i], "is", price[0]
    i += 1

# Make sure you place the 'amex.txt' file in 'C:\Python27\'
</code></pre>
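<p>The regex approach above only works while the page keeps the exact <code>yfs_l84_*</code> span markup, which is long gone from Yahoo's pages. The extraction pattern itself can still be exercised on a synthetic snippet, here translated to Python 3 (the HTML and the price are made-up stand-ins):</p>

```python
import re

# Hypothetical stand-in for the old Yahoo quote-page source.
htmltext = '<span id="yfs_l84_AAPL">106.82</span>'

symbol = "AAPL"
regex = '<span id="yfs_l84_' + symbol + '">(.+?)</span>'
price = re.findall(regex, htmltext)  # non-greedy group captures the span text
print("the price of", symbol, "is", price[0])
```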
<p>I wrote a book about these kinds of things, and lots of other stuff. You can find it using the URL below.</p>
<p><a href="http://rads.stackoverflow.com/amzn/click/B01DJJKVZC" rel="nofollow">https://www.amazon.com/Automating-Business-Processes-Reducing-Increasing-ebook/dp/B01DJJKVZC/ref=sr_1_1</a>?</p>
| 0 | 2016-09-01T14:56:29Z | [
"python",
"beautifulsoup"
] |
Executing a Python file from within Python, feeding it input and retrieving output | 39,197,999 | <p>I'm writing a web application where users may attempt to solve various programming problems. The user uploads an executable .py file and the application feeds it some pre-determined input and checks whether the output is correct. (similar to how <a href="http://www.codeforces.com/problemset" rel="nofollow">Codeforces</a> works)</p>
<p>Assuming I have uploaded the user's script, how do I execute the user's script from within my own Python script, feeding the user's script input that would be captured by a normal <code>input()</code> function, and retrieving the output from <code>print()</code> functions so that I can verify its correctness? </p>
| 0 | 2016-08-29T03:12:52Z | 39,198,190 | <p>Figured it out.</p>
<p><em>Note</em>: If you are going to use this in a production environment, make sure you place restrictions on what the user can execute.</p>
<p><strong>executor.py</strong></p>
<pre><code>import subprocess
# create process
process = subprocess.Popen(
    ['python3.4', 'script_to_execute.py'],  # shell command
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE
)
# encode string into bytes, since communicate function only accepts bytes
input_data = "Captured string".encode(encoding='UTF-8')
# send the input to the process, retrieve output of print() function
stdoutdata, stderrdata = process.communicate(input=input_data)
# decode output back into string
output_data = stdoutdata.decode(encoding='UTF-8')
print(output_data)
</code></pre>
<p><strong>script_to_execute.py</strong></p>
<pre><code>input_data = input()
print("Input received!\nData: " + input_data)
</code></pre>
<p><strong>Output of executor.py</strong></p>
<pre><code>Input received!
Data: Captured string
</code></pre>
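<p>For a grading system it is also worth guarding against user scripts that never terminate. A sketch of the same idea with a timeout, using <code>sys.executable</code> so the child runs under the same interpreter; the function name and inline submission are illustrative:</p>

```python
import subprocess
import sys

def run_submission(code, stdin_text, timeout=5):
    # Run the user's code in a separate interpreter process.
    process = subprocess.Popen(
        [sys.executable, '-c', code],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True  # exchange str with the child instead of bytes
    )
    try:
        stdout, stderr = process.communicate(input=stdin_text, timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()          # make sure a runaway script does not linger
        process.communicate()   # reap the killed process
        return None, 'timed out'
    return stdout, stderr

out, err = run_submission("print('Data: ' + input())", 'Captured string\n')
print(out)  # Data: Captured string
```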
| 1 | 2016-08-29T03:42:51Z | [
"python",
"python-3.x",
"execution"
] |
Preserving order of occurrence with size() function | 39,198,023 | <p>I would like to preserve the order of my DataFrame when using the .size() function. My first DataFrame is created by choosing a subset of a larger one:</p>
<pre><code>df_South = df[df['REGION_NAME'] == 'South']
</code></pre>
<p>Here is an example of what the DataFrame looks like:</p>
<p><a href="http://i.stack.imgur.com/PwW8q.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/PwW8q.jpg" alt="enter image description here"></a></p>
<p>With this DataFrame I count the occurrences of each unique 'TEMPBIN_CONS' variable.</p>
<pre><code>South_Count = df_South.groupby('TEMPBIN_CONS').size()
</code></pre>
<p>I would like to maintain the order that exists using the SORT column. I created this column based on the order I would like my 'TEMPBIN_CONS' variable to appear in after counting. I can't seem to get it to appear in the proper order, though. I've tried using .sort_index() on South_Count and it does not change the order that groupby() creates.</p>
<p>Ultimately, this is my solution for fixing the axis ordering of a bar plot I am creating of South_Count. As it is, the ordering is very difficult to read, and I would like it to appear in a logical order.</p>
<p>For reference South_Count, and subsequently the axis of my bar plot appears in
this order:</p>
<p><a href="http://i.stack.imgur.com/IWBqv.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/IWBqv.jpg" alt="enter image description here"></a></p>
| 1 | 2016-08-29T03:16:11Z | 39,198,282 | <p>Try this: </p>
<pre><code>South_Count = df_South.groupby('TEMPBIN_CONS', sort=False).size()
</code></pre>
<p>It looks as though your data is being sorted as strings.</p>
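<p>The effect of <code>sort=False</code> can be mimicked in plain Python: <code>collections.Counter</code> counts occurrences while keeping keys in first-seen order (guaranteed on Python 3.7+, since <code>Counter</code> is a <code>dict</code> subclass), which is essentially what <code>groupby('TEMPBIN_CONS', sort=False).size()</code> does here. The bin labels below are made-up sample data:</p>

```python
from collections import Counter

# 'Cold' appears first in the data, so it comes first in the result.
tempbins = ['Cold', 'Hot', 'Cold', 'Mild', 'Hot', 'Cold']
counts = Counter(tempbins)

print(list(counts.items()))  # [('Cold', 3), ('Hot', 2), ('Mild', 1)]
```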
| 2 | 2016-08-29T03:57:30Z | [
"python",
"pandas",
"numpy",
"plot",
"dataframe"
] |
How to change which class variable is being called depending on user input? | 39,198,038 | <p>I've created a class called <code>Earthquake</code> which stores relevant earthquake data (its magnitude, time it hit, etc.) in class variables. I want to ask the user which data they'd like to see, then print out that data for them like so:</p>
<pre><code>earthquake1 = Earthquake()
answer = input("What data would you like to see?")
print(earthquake1.answer)
</code></pre>
<p>In the example above, if the user answers <code>magnitude</code>, I want to print out the class variable <code>earthquake1.magnitude</code>.</p>
<p>Is there some way of doing this?</p>
| 0 | 2016-08-29T03:19:08Z | 39,198,055 | <p>You could use <code>getattr</code> which is equivalent to <code>earthquake.attr_name</code>. The difference is that <code>getattr</code> allows you to use a variable to access the name and supply a default value if that is not found.</p>
<p>In short:</p>
<pre><code>print(getattr(earthquake1, answer))
</code></pre>
<p>Or, if you need to use a default value instead of raising an <code>AttributeError</code>:</p>
<pre><code>print(getattr(earthquake1, answer, "Instance variable {} not found".format(answer)))
</code></pre>
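<p>Since the attribute name comes straight from user input, it can also be worth validating it against a whitelist before the lookup, so users cannot probe arbitrary attributes. A sketch with a made-up minimal <code>Earthquake</code> class (the field names and helper are illustrative):</p>

```python
class Earthquake:
    def __init__(self):
        self.magnitude = 6.5
        self.time = '03:19'

ALLOWED = {'magnitude', 'time'}  # fields the user may ask about

def lookup(quake, name):
    # Reject anything outside the whitelist before touching getattr.
    if name not in ALLOWED:
        return 'Unknown field: {}'.format(name)
    return getattr(quake, name)

print(lookup(Earthquake(), 'magnitude'))   # 6.5
print(lookup(Earthquake(), '__class__'))   # Unknown field: __class__
```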
| 2 | 2016-08-29T03:21:31Z | [
"python",
"class",
"python-3.x",
"class-variables"
] |
How to change which class variable is being called depending on user input? | 39,198,038 | <p>I've created a class called <code>Earthquake</code> which stores relevant earthquake data (its magnitude, time it hit, etc.) in class variables. I want to ask the user which data they'd like to see, then print out that data for them like so:</p>
<pre><code>earthquake1 = Earthquake()
answer = input("What data would you like to see?")
print(earthquake1.answer)
</code></pre>
<p>In the example above, if the user answers <code>magnitude</code>, I want to print out the class variable <code>earthquake1.magnitude</code>.</p>
<p>Is there some way of doing this?</p>
| 0 | 2016-08-29T03:19:08Z | 39,198,067 | <p>Just use <a href="http://effbot.org/zone/python-getattr.htm" rel="nofollow"><code>getattr</code></a>:</p>
<pre><code>earthquake1 = Earthquake()
answer = input("What data would you like to see?")
print(getattr(earthquake1, answer))
</code></pre>
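<p>To show the user which answers are valid in the first place, the instance's attribute dictionary can be inspected with <code>vars()</code> (equivalent to <code>earthquake1.__dict__</code>). A sketch with a made-up minimal <code>Earthquake</code> class; the prompt text and field names are illustrative:</p>

```python
class Earthquake:
    def __init__(self):
        self.magnitude = 6.5
        self.time = '03:19'

earthquake1 = Earthquake()
choices = sorted(vars(earthquake1))   # ['magnitude', 'time']
print("What data would you like to see? Options: " + ", ".join(choices))

answer = 'magnitude'                  # stand-in for input()
print(vars(earthquake1)[answer])      # 6.5
```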
| 4 | 2016-08-29T03:22:04Z | [
"python",
"class",
"python-3.x",
"class-variables"
] |
Peewee: Outer transaction does not roll back inner transaction (savepoints) | 39,198,041 | <p>Running the following python script twice throws this error:</p>
<pre><code>peewee.ProgrammingError: relation "test_table" already exists
</code></pre>
<p>because the table does not get removed on <em>.rollback()</em>. Removing the inner transaction (<em>with test_db.atomic()</em>) works. Why does the inner transaction (which is just a savepoint according to the <a href="http://docs.peewee-orm.com/en/latest/peewee/transactions.html#nesting-transactions" rel="nofollow">documentation</a>) not get rolled back?</p>
<pre><code>from datetime import datetime
from peewee import Model, DateTimeField
from playhouse.postgres_ext import PostgresqlExtDatabase
"""
CREATE ROLE test WITH LOGIN;
DROP DATABASE IF EXISTS test;
CREATE DATABASE test WITH OWNER test;
"""
CREDENTIALS = {
    "database": "test",
    "user": "test",
    "password": None,
    "host": "localhost",
    "port": 5432,
    "register_hstore": False,
}

test_db = PostgresqlExtDatabase(**CREDENTIALS)
test_db.connect()
test_db.set_autocommit(False)
test_db.begin()  # start transaction

class TestTable(Model):
    timestamp = DateTimeField(null=False, default=datetime(1970, 1, 1, 0, 0, 0))

    class Meta:
        db_table = "test_table"
        database = test_db

with test_db.atomic():
    TestTable.create_table()
    TestTable.create()
test_db.rollback() # rollback transaction
print TestTable.get().timestamp
test_db.close()
</code></pre>
<p><strong>Versions</strong></p>
<pre><code>peewee==2.8.3
psycopg2==2.6.2
PostgreSQL 9.5.1 on x86_64-apple-darwin15.3.0, compiled by Apple LLVM version 7.0.2 (clang-700.1.81), 64-bit
</code></pre>
| 0 | 2016-08-29T03:19:30Z | 39,210,809 | <p>Depending on the database you're using, DDL (schema changes) may or may not be possible to rollback.</p>
<p>Postgresql and SQLite both appear to support rolling back DDL, but MySQL does not.</p>
<p>However, the Python driver for SQLite has a bug which causes it to emit a COMMIT when you issue DDL. The first patch was submitted in 2010: <a href="http://bugs.python.org/issue10740" rel="nofollow">http://bugs.python.org/issue10740</a> .</p>
<p>Also take a look at pysqlite2, which is essentially the same as the standard-library sqlite3: </p>
<p><a href="https://github.com/ghaering/pysqlite/blob/ca5dc55b462a7e67072205bb1a9a0f62a7c6efc4/src/cursor.c#L537-L547" rel="nofollow">https://github.com/ghaering/pysqlite/blob/ca5dc55b462a7e67072205bb1a9a0f62a7c6efc4/src/cursor.c#L537-L547</a></p>
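<p>The underlying behaviour is easy to demonstrate with the standard-library <code>sqlite3</code> module by taking over transaction control yourself: <code>isolation_level=None</code> puts the connection in autocommit mode, so <code>BEGIN</code>/<code>ROLLBACK</code> can be issued by hand, sidestepping the implicit-commit bug mentioned above. A minimal sketch:</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:', isolation_level=None)  # manage transactions manually
cur = conn.cursor()

cur.execute('BEGIN')
cur.execute('CREATE TABLE test_table (timestamp TEXT)')
cur.execute("INSERT INTO test_table VALUES ('1970-01-01 00:00:00')")
cur.execute('ROLLBACK')  # SQLite supports transactional DDL

tables = [row[0] for row in
          cur.execute("SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # [] -- the CREATE TABLE was rolled back too
```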
| 0 | 2016-08-29T16:21:23Z | [
"python",
"transactions",
"peewee"
] |
pandas standalone series and from dataframe different behavior | 39,198,108 | <p>Here is my code and warning message. If I change <code>s</code> to be a standalone <code>Series</code> by using <code>s = pd.Series(np.random.randn(5))</code>, there will be no such errors. Using Python 2.7 on Windows.</p>
<p>It seems a standalone Series and a Series created from a column of a data frame have different behavior? Thanks.</p>
<p>My purpose is to change the Series values themselves, rather than change a copy.</p>
<p><strong>Source code</strong>,</p>
<pre><code>import pandas as pd
sample = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:float})
sample.columns = pd.Index(data=['c_a', 'c_b', 'c_c', 'c_d'])
sample['c_d'] = sample['c_d'].astype('int64')
s = sample['c_d']
#s = pd.Series(np.random.randn(5))
for i in range(len(s)):
    if s.iloc[i] > 0:
        s.iloc[i] = s.iloc[i] + 1
    else:
        s.iloc[i] = s.iloc[i] - 1
</code></pre>
<p><strong>Warning message</strong>,</p>
<pre><code>C:\Python27\lib\site-packages\pandas\core\indexing.py:132: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._setitem_with_indexer(indexer, value)
</code></pre>
<p><strong>Content of 123.csv</strong>,</p>
<pre><code>c_a,c_b,c_c,c_d
hello,python,numpy,0.0
hi,python,pandas,1.0
ho,c++,vector,0.0
ho,c++,std,1.0
go,c++,std,0.0
</code></pre>
<p><strong>Edit 1</strong>: the lambda solution does not seem to work; I tried printing <code>s</code> before and after, and got the same values,</p>
<pre><code>import pandas as pd
sample = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:float})
sample.columns = pd.Index(data=['c_a', 'c_b', 'c_c', 'c_d'])
sample['c_d'] = sample['c_d'].astype('int64')
s = sample['c_d']
print s
s.apply(lambda x:x+1 if x>0 else x-1)
print s
0 0
1 1
2 0
3 1
4 0
Name: c_d, dtype: int64
Backend TkAgg is interactive backend. Turning interactive mode on.
0 0
1 1
2 0
3 1
4 0
</code></pre>
<p>regards,
Lin</p>
| 0 | 2016-08-29T03:28:31Z | 39,198,426 | <p>I suggest you use apply function instead:</p>
<pre><code>s = s.apply(lambda x: x + 1 if x > 0 else x - 1)
</code></pre>
| 1 | 2016-08-29T04:17:22Z | [
"python",
"python-2.7",
"pandas",
"numpy",
"dataframe"
] |
pandas standalone series and from dataframe different behavior | 39,198,108 | <p>Here is my code and warning message. If I change <code>s</code> to be a standalone <code>Series</code> by using <code>s = pd.Series(np.random.randn(5))</code>, there will be no such errors. Using Python 2.7 on Windows.</p>
<p>It seems a standalone Series and a Series created from a column of a data frame have different behavior? Thanks.</p>
<p>My purpose is to change the Series values themselves, rather than change a copy.</p>
<p><strong>Source code</strong>,</p>
<pre><code>import pandas as pd
sample = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:float})
sample.columns = pd.Index(data=['c_a', 'c_b', 'c_c', 'c_d'])
sample['c_d'] = sample['c_d'].astype('int64')
s = sample['c_d']
#s = pd.Series(np.random.randn(5))
for i in range(len(s)):
    if s.iloc[i] > 0:
        s.iloc[i] = s.iloc[i] + 1
    else:
        s.iloc[i] = s.iloc[i] - 1
</code></pre>
<p><strong>Warning message</strong>,</p>
<pre><code>C:\Python27\lib\site-packages\pandas\core\indexing.py:132: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._setitem_with_indexer(indexer, value)
</code></pre>
<p><strong>Content of 123.csv</strong>,</p>
<pre><code>c_a,c_b,c_c,c_d
hello,python,numpy,0.0
hi,python,pandas,1.0
ho,c++,vector,0.0
ho,c++,std,1.0
go,c++,std,0.0
</code></pre>
<p><strong>Edit 1</strong>: the lambda solution does not seem to work; I tried printing <code>s</code> before and after, and got the same values,</p>
<pre><code>import pandas as pd
sample = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:float})
sample.columns = pd.Index(data=['c_a', 'c_b', 'c_c', 'c_d'])
sample['c_d'] = sample['c_d'].astype('int64')
s = sample['c_d']
print s
s.apply(lambda x:x+1 if x>0 else x-1)
print s
0 0
1 1
2 0
3 1
4 0
Name: c_d, dtype: int64
Backend TkAgg is interactive backend. Turning interactive mode on.
0 0
1 1
2 0
3 1
4 0
</code></pre>
<p>regards,
Lin</p>
| 0 | 2016-08-29T03:28:31Z | 39,199,050 | <p>By doing <code>s = sample['c_d']</code>, if you make a change to the value of <code>s</code> then your original Dataframe <code>sample</code> also changes. That's why you got the warning.</p>
<p>You can do <code>s = sample['c_d'].copy()</code> instead, so that changing the value of <code>s</code> doesn't change the value of the <code>c_d</code> column of the Dataframe <code>sample</code>.</p>
| 1 | 2016-08-29T05:31:07Z | [
"python",
"python-2.7",
"pandas",
"numpy",
"dataframe"
] |
why url cannot be generated after use blueprint | 39,198,413 | <p>My current project looks like this:</p>
<pre><code>âââ __init__.py
âââ pages
âââ settings.py
âââ static
âââ templates
â  âââ article.html
â  âââ base.html
â  âââ index.html
â  âââ posts.html
âââ view
â  âââ article.py
â  âââ home.py
â  âââ index.py
â  âââ postwall.py
</code></pre>
<p>I broke views.py (no longer in the project) up into separate files, and those files cannot generate URLs dynamically anymore. What I had in article.py is:</p>
<pre><code>from flask import Blueprint, render_template
from app import articles
article = Blueprint('article',__name__)
@article.route('/article/<path:path>/')
def page(path):
article = articles.get_or_404(path)
return render_template('article.html',page=article)
</code></pre>
<p>Meanwhile, posts.html which offers the link button to articles doesn't work anymore:</p>
<pre><code><a href="{{ url_for('page', path=page.path) }}">More -></a>
</code></pre>
<p>What is the problem? Did I miss something in the blueprint file?</p>
| 0 | 2016-08-29T04:16:11Z | 39,198,566 | <p>Solved. Change</p>
<pre><code><a href="{{ url_for('page', path=page.path) }}">More -></a>
</code></pre>
<p>to </p>
<pre><code><a href="{{ url_for('article.page', path=page.path) }}">More -></a>
</code></pre>
<p>I forgot to add the blueprint name.</p>
| 2 | 2016-08-29T04:37:12Z | [
"python",
"flask",
"jinja2"
] |
Get path of all the elements in dictionary | 39,198,489 | <p>I have a dictionary in Python and want to get the path of values param1, param2, ... to replace them with some other values. Say, to get the value of param1 I need to access it like ['OptionSettings'][2]['Value']. Can I have some generic code to do this, which will print the path of all my nodes/leaves?
Below is the dictionary:</p>
<pre><code>{
"ApplicationName": "Test",
"EnvironmentName": "ABC-Nodejs",
"CNAMEPrefix": "ABC-Neptune",
"SolutionStackName": "64bit Amazon Linux 2016.03 v2.1.1 running Node.js",
"OptionSettings": [
{
"Namespace": "aws:ec2:vpc",
"OptionName": "AssociatePublicIpAddress",
"Value": "true"
},
{
"Namespace": "aws:elasticbeanstalk:environment",
"OptionName": "EnvironmentType",
"Value": "LoadBalanced"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "Subnets",
"Value": "param1"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "SecurityGroups",
"Value": "param2"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MinSize",
"Value": "1"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MaxSize",
"Value": "4"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Availability Zones",
"Value": "Any"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Cooldown",
"Value": "360"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "IamInstanceProfile",
"Value": "NepRole"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "MonitoringInterval",
"Value": "5 minutes"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeType",
"Value": "gp2"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeSize",
"Value": "10"
},
{
"Namespace": "aws:elasticbeanstalk:sns:topics",
"OptionName": "Notification Endpoint",
"Value": "sunil.kumar2@pb.com"
},
{
"Namespace": "aws:elasticbeanstalk:hostmanager",
"OptionName": "LogPublicationControl",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "DeploymentPolicy",
"Value": "Rolling"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSizeType",
"Value": "Percentage"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSize",
"Value": "100"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "HealthCheckSuccessThreshold",
"Value": "Ok"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "IgnoreHealthCheck",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "Timeout",
"Value": "600"
},
{
"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
"OptionName": "RollingUpdateEnabled",
"Value": "false"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "ELBSubnets",
"Value": "param3"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "SecurityGroups",
"Value": "param4"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "ManagedSecurityGroup",
"Value": "param4"
}
]
}
</code></pre>
| 0 | 2016-08-29T04:26:46Z | 39,198,539 | <p>You could try implementing something akin to a breadth-first search or depth-first search.</p>
| -1 | 2016-08-29T04:34:09Z | [
"python",
"python-2.7",
"python-3.x",
"dictionary",
"boto3"
] |
Get path of all the elements in dictionary | 39,198,489 | <p>I have a dictionary in Python and want to get the path of values param1, param2, ... to replace them with some other values. Say, to get the value of param1 I need to access it like ['OptionSettings'][2]['Value']. Can I have some generic code to do this, which will print the path of all my nodes/leaves?
Below is the dictionary:</p>
<pre><code>{
"ApplicationName": "Test",
"EnvironmentName": "ABC-Nodejs",
"CNAMEPrefix": "ABC-Neptune",
"SolutionStackName": "64bit Amazon Linux 2016.03 v2.1.1 running Node.js",
"OptionSettings": [
{
"Namespace": "aws:ec2:vpc",
"OptionName": "AssociatePublicIpAddress",
"Value": "true"
},
{
"Namespace": "aws:elasticbeanstalk:environment",
"OptionName": "EnvironmentType",
"Value": "LoadBalanced"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "Subnets",
"Value": "param1"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "SecurityGroups",
"Value": "param2"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MinSize",
"Value": "1"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MaxSize",
"Value": "4"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Availability Zones",
"Value": "Any"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Cooldown",
"Value": "360"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "IamInstanceProfile",
"Value": "NepRole"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "MonitoringInterval",
"Value": "5 minutes"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeType",
"Value": "gp2"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeSize",
"Value": "10"
},
{
"Namespace": "aws:elasticbeanstalk:sns:topics",
"OptionName": "Notification Endpoint",
"Value": "sunil.kumar2@pb.com"
},
{
"Namespace": "aws:elasticbeanstalk:hostmanager",
"OptionName": "LogPublicationControl",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "DeploymentPolicy",
"Value": "Rolling"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSizeType",
"Value": "Percentage"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSize",
"Value": "100"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "HealthCheckSuccessThreshold",
"Value": "Ok"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "IgnoreHealthCheck",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "Timeout",
"Value": "600"
},
{
"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
"OptionName": "RollingUpdateEnabled",
"Value": "false"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "ELBSubnets",
"Value": "param3"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "SecurityGroups",
"Value": "param4"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "ManagedSecurityGroup",
"Value": "param4"
}
]
}
</code></pre>
| 0 | 2016-08-29T04:26:46Z | 39,198,812 | <p>This is essentially the same question as this:</p>
<p><a href="http://stackoverflow.com/questions/34836777/print-complete-key-path-for-all-the-values-of-a-python-nested-dictionary">Print complete key path for all the values of a python nested dictionary</a></p>
<p>It is essentially the necessary code. What you are going to need to add is the handling of the lists, since you are not presenting simply a key/value structure, and that can be addressed by this question here:</p>
<p><a href="http://stackoverflow.com/questions/25231989/how-to-check-if-a-variable-is-a-dictionary-in-python">How to check if a variable is a dictionary in python</a></p>
<p>Which will show you how to check for a type and modify behavior based on the data type. The easiest way to accomplish this is to set up two dummy data types, one defined as an empty list and the other as an empty map and perform "isinstance" checks of the elements against the dummy types to decide on what logic you will want to execute.</p>
<p>Between the two of those answers, you should be able to build the paths you are looking for.</p>
<p><a href="http://stackoverflow.com/users/4777872/lost">Lost</a> is right, this is not a "write it for me" site.</p>
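<p>Since the end goal is to replace the paramN placeholders, one option is to keep a reference to each parent container while walking and assign through it. A sketch of that approach; the function name, trimmed-down config, and the replacement value <code>subnet-abc123</code> are all made up for illustration:</p>

```python
def replace_params(node, mapping):
    # Walk dicts and lists; whenever a leaf matches a key in `mapping`,
    # overwrite it in place through the parent container.
    if isinstance(node, dict):
        items = node.items()
    elif isinstance(node, list):
        items = enumerate(node)
    else:
        return
    for key, value in items:
        if isinstance(value, (dict, list)):
            replace_params(value, mapping)
        elif value in mapping:
            node[key] = mapping[value]

config = {'OptionSettings': [{'OptionName': 'Subnets', 'Value': 'param1'}]}
replace_params(config, {'param1': 'subnet-abc123'})
print(config['OptionSettings'][0]['Value'])  # subnet-abc123
```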
| 1 | 2016-08-29T05:06:12Z | [
"python",
"python-2.7",
"python-3.x",
"dictionary",
"boto3"
] |
Get path of all the elements in dictionary | 39,198,489 | <p>I have a dictionary in Python and want to get the path of values param1, param2, ... to replace them with some other values. Say, to get the value of param1 I need to access it like ['OptionSettings'][2]['Value']. Can I have some generic code to do this, which will print the path of all my nodes/leaves?
Below is the dictionary:</p>
<pre><code>{
"ApplicationName": "Test",
"EnvironmentName": "ABC-Nodejs",
"CNAMEPrefix": "ABC-Neptune",
"SolutionStackName": "64bit Amazon Linux 2016.03 v2.1.1 running Node.js",
"OptionSettings": [
{
"Namespace": "aws:ec2:vpc",
"OptionName": "AssociatePublicIpAddress",
"Value": "true"
},
{
"Namespace": "aws:elasticbeanstalk:environment",
"OptionName": "EnvironmentType",
"Value": "LoadBalanced"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "Subnets",
"Value": "param1"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "SecurityGroups",
"Value": "param2"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MinSize",
"Value": "1"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "MaxSize",
"Value": "4"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Availability Zones",
"Value": "Any"
},
{
"Namespace": "aws:autoscaling:asg",
"OptionName": "Cooldown",
"Value": "360"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "IamInstanceProfile",
"Value": "NepRole"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "MonitoringInterval",
"Value": "5 minutes"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeType",
"Value": "gp2"
},
{
"Namespace": "aws:autoscaling:launchconfiguration",
"OptionName": "RootVolumeSize",
"Value": "10"
},
{
"Namespace": "aws:elasticbeanstalk:sns:topics",
"OptionName": "Notification Endpoint",
"Value": "sunil.kumar2@pb.com"
},
{
"Namespace": "aws:elasticbeanstalk:hostmanager",
"OptionName": "LogPublicationControl",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "DeploymentPolicy",
"Value": "Rolling"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSizeType",
"Value": "Percentage"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "BatchSize",
"Value": "100"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "HealthCheckSuccessThreshold",
"Value": "Ok"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "IgnoreHealthCheck",
"Value": "false"
},
{
"Namespace": "aws:elasticbeanstalk:command",
"OptionName": "Timeout",
"Value": "600"
},
{
"Namespace": "aws:autoscaling:updatepolicy:rollingupdate",
"OptionName": "RollingUpdateEnabled",
"Value": "false"
},
{
"Namespace": "aws:ec2:vpc",
"OptionName": "ELBSubnets",
"Value": "param3"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "SecurityGroups",
"Value": "param4"
},
{
"Namespace": "aws:elb:loadbalancer",
"OptionName": "ManagedSecurityGroup",
"Value": "param4"
}
]
}
</code></pre>
| 0 | 2016-08-29T04:26:46Z | 39,216,946 | <p>This worked for me</p>
<pre><code>def dict_traverse(self, obj, path):
    if isinstance(obj, dict):
        for k, v in obj.items():
            if isinstance(v, str):
                #if v.find('param')!=-1:
                print(path+"["+"'"+k+"'"+"]", v)
            elif isinstance(v, list):
                for elem in v:
                    self.myvar += 1
                    path_lst = "["+"'"+k+"'"+"]"+"["+str(self.myvar)+"]"
                    self.dict_traverse(elem, path_lst)
    elif isinstance(obj, list):
        for elem in obj:
            self.dict_traverse(elem, path)
['SolutionStackName'] 64bit Amazon Linux 2016.03 v2.1.1 running Node.js
['OptionSettings'][1]['OptionName'] AssociatePublicIpAddress
['OptionSettings'][1]['Namespace'] aws:ec2:vpc
['OptionSettings'][1]['Value'] true
['OptionSettings'][2]['OptionName'] EnvironmentType
['OptionSettings'][2]['Namespace'] aws:elasticbeanstalk:environment
['OptionSettings'][2]['Value'] LoadBalanced
['OptionSettings'][3]['OptionName'] Subnets
['OptionSettings'][3]['Namespace'] aws:ec2:vpc
['OptionSettings'][3]['Value'] param1
['OptionSettings'][4]['OptionName'] SecurityGroups
['OptionSettings'][4]['Namespace'] aws:autoscaling:launchconfiguration
['OptionSettings'][4]['Value'] param2
['OptionSettings'][5]['OptionName'] MinSize
['OptionSettings'][5]['Namespace'] aws:autoscaling:asg
['OptionSettings'][5]['Value'] 1
['OptionSettings'][6]['OptionName'] MaxSize
['OptionSettings'][6]['Namespace'] aws:autoscaling:asg
['OptionSettings'][6]['Value'] 4
['OptionSettings'][7]['OptionName'] Availability Zones
['OptionSettings'][7]['Namespace'] aws:autoscaling:asg
['OptionSettings'][7]['Value'] Any
['OptionSettings'][8]['OptionName'] Cooldown
['OptionSettings'][8]['Namespace'] aws:autoscaling:asg
['OptionSettings'][8]['Value'] 360
['OptionSettings'][9]['OptionName'] IamInstanceProfile
['OptionSettings'][9]['Namespace'] aws:autoscaling:launchconfiguration
['OptionSettings'][9]['Value'] NepRole
['OptionSettings'][10]['OptionName'] MonitoringInterval
['OptionSettings'][10]['Namespace'] aws:autoscaling:launchconfiguration
['OptionSettings'][10]['Value'] 5 minutes
['OptionSettings'][11]['OptionName'] RootVolumeType
['OptionSettings'][11]['Namespace'] aws:autoscaling:launchconfiguration
['OptionSettings'][11]['Value'] gp2
['OptionSettings'][12]['OptionName'] RootVolumeSize
['OptionSettings'][12]['Namespace'] aws:autoscaling:launchconfiguration
['OptionSettings'][12]['Value'] 10
['OptionSettings'][13]['OptionName'] Notification Endpoint
['OptionSettings'][13]['Namespace'] aws:elasticbeanstalk:sns:topics
['OptionSettings'][13]['Value'] sunil.kumar2@pb.com
['OptionSettings'][14]['OptionName'] LogPublicationControl
['OptionSettings'][14]['Namespace'] aws:elasticbeanstalk:hostmanager
['OptionSettings'][14]['Value'] false
['OptionSettings'][15]['OptionName'] DeploymentPolicy
['OptionSettings'][15]['Namespace'] aws:elasticbeanstalk:command
['OptionSettings'][15]['Value'] Rolling
['OptionSettings'][16]['OptionName'] BatchSizeType
['OptionSettings'][16]['Namespace'] aws:elasticbeanstalk:command
['OptionSettings'][16]['Value'] Percentage
['OptionSettings'][17]['OptionName'] BatchSize
['OptionSettings'][17]['Namespace'] aws:elasticbeanstalk:command
['OptionSettings'][17]['Value'] 100
['OptionSettings'][18]['OptionName'] HealthCheckSuccessThreshold
['OptionSettings'][18]['Namespace'] aws:elasticbeanstalk:command
['OptionSettings'][18]['Value'] Ok
['OptionSettings'][19]['OptionName'] IgnoreHealthCheck
['OptionSettings'][19]['Namespace'] aws:elasticbeanstalk:command
['OptionSettings'][19]['Value'] false
['OptionSettings'][20]['OptionName'] Timeout
['OptionSettings'][20]['Namespace'] aws:elasticbeanstalk:command
['OptionSettings'][20]['Value'] 600
['OptionSettings'][21]['OptionName'] RollingUpdateEnabled
['OptionSettings'][21]['Namespace'] aws:autoscaling:updatepolicy:rollingupdate
['OptionSettings'][21]['Value'] false
['OptionSettings'][22]['OptionName'] ELBSubnets
['OptionSettings'][22]['Namespace'] aws:ec2:vpc
['OptionSettings'][22]['Value'] param3
['OptionSettings'][23]['OptionName'] SecurityGroups
['OptionSettings'][23]['Namespace'] aws:elb:loadbalancer
['OptionSettings'][23]['Value'] param4
['OptionSettings'][24]['OptionName'] ManagedSecurityGroup
['OptionSettings'][24]['Namespace'] aws:elb:loadbalancer
['OptionSettings'][24]['Value'] param4
['EnvironmentName'] ABC-Nodejs
['CNAMEPrefix'] ABC-Neptune
['ApplicationName'] Test
</code></pre>
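As a hedged, self-contained alternative (an editor's sketch, not part of the original answer), the same traversal can be written as a plain generator; note it uses standard 0-based list indices, unlike the 1-based running counter in the output above:

```python
import json

def iter_string_leaves(obj, path=""):
    """Yield (access_path, value) for every string leaf in parsed JSON."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from iter_string_leaves(value, path + "['" + key + "']")
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            yield from iter_string_leaves(value, path + "[" + str(index) + "]")
    elif isinstance(obj, str):
        yield path, obj

# A tiny stand-in for the Elastic Beanstalk configuration from the question
doc = json.loads('{"EnvironmentName": "ABC-Nodejs", '
                 '"OptionSettings": [{"Namespace": "aws:ec2:vpc", "Value": "true"}]}')
leaves = list(iter_string_leaves(doc))
for leaf_path, leaf_value in leaves:
    print(leaf_path, leaf_value)
```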
| 0 | 2016-08-30T00:02:09Z | [
"python",
"python-2.7",
"python-3.x",
"dictionary",
"boto3"
] |
Which one I have to use to read image in Django, StringIO or BytesIO? | 39,198,531 | <p>I'm trying to compress the image file before uploading in my <code>django</code> application.</p>
<p>I found a nice code snippet site: <a href="https://djangosnippets.org/snippets/10460/" rel="nofollow">https://djangosnippets.org/snippets/10460/</a></p>
<p>But it doesn't work in <code>python3</code>. I think the problem is the difference between <code>str</code> and <code>bytes</code>.</p>
<p>Someone advised using <code>BytesIO</code> instead of <code>StringIO</code>.</p>
<p>So, I edited my code like this:</p>
<pre><code>from django.db import models
from django.core.urlresolvers import reverse
from django.utils import timezone
from django.utils.text import slugify
from django.core.files.uploadedfile import InMemoryUploadedFile
from PIL import Image as Img
from io import StringIO, BytesIO
def upload_location(instance, file_name):
return "{}/{}/{}/{}".format(
"album",
instance.day,
instance.week,
file_name
)
class Album(models.Model):
DAYS = (
('Sun', '일요일'),
('Mon', '월요일'),
)
name = models.CharField(max_length=50)
description = models.CharField(max_length=100, blank=True)
image = models.ImageField(upload_to=upload_location)
day = models.CharField(max_length=3, choices=DAYS)
week = models.IntegerField()
slug = models.SlugField(unique=True, allow_unicode=True)
date = models.DateField()
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
ordering = ['day', 'week']
def __str__(self):
return "{} - {}주차".format(self.get_day_display(), self.week)
def get_absolute_url(self):
return reverse(
"album:album_detail",
kwargs={
"slug": self.slug
}
)
def save(self, *args, **kwargs):
if not self.id:
self.slug = self.slug + "주차"
if self.image:
img = Img.open(BytesIO(self.image.read()))
if img.mode != 'RGB':
img = img.convert('RGB')
img.thumbnail((self.image.width/1.5,self.image.height/1.5), Img.ANTIALIAS)
output = BytesIO()
img.save(output, format='JPEG', quality=70)
output.seek(0)
self.image= InMemoryUploadedFile(
output,'ImageField',
"%s.jpg" %self.image.name.split('.')[0],
'image/jpeg',
output.len, None
)
super().save(*args, **kwargs)
</code></pre>
<p>But it raises an error: <code>'_io.BytesIO' object has no attribute 'len'</code>; the <code>output.len</code> in my code is what fails.</p>
<p>I am starting to doubt whether using <code>BytesIO</code> instead of <code>StringIO</code> is the right way.</p>
<p>And I need some help editing my code, too. Thanks.</p>
| 0 | 2016-08-29T04:33:19Z | 39,200,628 | <p>I modified the code to use the <code>with</code> statement so there is no need to close the files yourself.</p>
<pre><code>def save(self, *args, **kwargs):
if not self.id:
self.slug = self.slug + "주차"
if self.image:
with Img.open(BytesIO(self.image.read())) as img:
if img.mode != 'RGB':
img = img.convert('RGB')
img.thumbnail((self.image.width/1.5,self.image.height/1.5), Img.ANTIALIAS)
with BytesIO() as output:
img.save(output, format='JPEG', quality=70)
output.seek(0)
self.image = InMemoryUploadedFile(
output,'ImageField',
"%s.jpg" %self.image.name.split('.')[0],
'image/jpeg',
output.getbuffer().nbytes, None
)
super().save(*args, **kwargs)
</code></pre>
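For reference (an editor's sketch, not from the original answer): Python 2's <code>StringIO</code> objects exposed a <code>len</code> attribute, which is why the snippet in the question used <code>output.len</code>; <code>io.BytesIO</code> never had it. Portable ways to ask a <code>BytesIO</code> for its size:

```python
from io import BytesIO

buf = BytesIO()
buf.write(b"fake jpeg bytes")        # 15 bytes of stand-in data

size_a = buf.getbuffer().nbytes      # no copy, no seek needed
size_b = len(buf.getvalue())         # copies the bytes; fine for small buffers
buf.seek(0, 2)                       # or seek to the end...
size_c = buf.tell()                  # ...and read the position
buf.seek(0)                          # rewind before handing the buffer on

print(size_a, size_b, size_c)        # 15 15 15
```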
| 0 | 2016-08-29T07:24:40Z | [
"python",
"django",
"python-imaging-library"
] |
Color-coded syntax in terminal | 39,198,637 | <p>Currently, the terminal I'm using to write my Python code is monochrome gray on black, but <a href="http://tstringer.github.io/vscode/2016/06/06/vscode-terminal.html" rel="nofollow">this</a> and <a href="https://code.visualstudio.com/docs/editor/integrated-terminal" rel="nofollow">this</a> example show that you can have color-coded syntax as well. I can't identify anything in the default settings that indicates an ability to use color-coded syntax in the terminal. I'd like to use a terminal window as a Python interactive shell, so color-coded text is a must. How do I do so?</p>
<p>Visual Studio 2015 is so much easier in this regard, but I suppose that is because VSCode isn't an IDE. I just really like having an interactive shell to work with as I test new scripts.</p>
 | 1 | 2016-08-29T04:47:45Z | 39,198,777 | <p>The default console REPL does not have syntax highlighting. However, the IDLE IDE that ships with CPython has a syntax-highlighting REPL (interactive shell).</p>
<p><a href="https://ipython.org/install.html" rel="nofollow">IPython 5</a> also has <a href="http://stackoverflow.com/questions/25190889/syntax-highlighting-in-ipython-console">syntax highlighting in REPL</a>:</p>
<p><img src="http://blog.jupyter.org/content/images/2016/07/Screen-Shot-2016-07-07-at-11-28-25-1.png" alt="IPython screenshot"></p>
| 2 | 2016-08-29T05:02:38Z | [
"python",
"terminal"
] |
Why argument has been changed when using subprocess(python) to start application(windows)? | 39,199,060 | <p>I use the Python code below to start the debugger and the application (a simple program I wrote myself) with an argument:</p>
<pre><code>debugger=r'C:\Program Files (x86)\Windows Kits\10\Debuggers\x86\windbg.exe'
exe='test.exe'
argument='\x01\x02...\xff'#from 0x01 to 0xff
subprocess.Popen(debugger+" "+exe+" "+argument)
</code></pre>
<p>Well it worked, but when I set a breakpoint in the main function and checked the argument, <code>argv[1]</code> did not look exactly right:</p>
<p><a href="http://i.stack.imgur.com/uK3gR.png" rel="nofollow"><img src="http://i.stack.imgur.com/uK3gR.png" alt="enter image description here"></a></p>
<p>It seems that '\x09' and '\x20' are changed to '\x00' and '\x22' is gone.</p>
<p>So what's wrong? Am I doing it in a wrong way?</p>
 | 2 | 2016-08-29T05:31:49Z | 39,199,218 | <p>The command and its arguments should be passed to <code>Popen</code> as a list:</p>
<pre><code>subprocess.Popen([debugger, exe, argument])
</code></pre>
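To see why the concatenated string mangles bytes like <code>\x09</code> and <code>\x20</code>, here is a sketch using a Python child instead of windbg (an editor's addition): the whitespace-laden argument only survives when passed as a list, because the list form hands each argument through without letting the platform's command-line parser re-split it.

```python
import subprocess
import sys

tricky = "a\tb c"   # contains a tab and a space, like the bytes in the question

# The list form passes the argument through untouched; joining everything into
# one string would let the command-line parser split it on whitespace.
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; sys.stdout.write(repr(sys.argv[1]))", tricky]
)
print(out.decode())   # 'a\tb c' -- tab and space arrive intact
```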
| 0 | 2016-08-29T05:45:26Z | [
"python",
"debugging",
"parameters",
"parameter-passing"
] |
Gunicorn giving syntax error for my configuration file | 39,199,070 | <p>My config file:</p>
<pre><code>[loggers]
keys=root, gunicorn.error, gunicorn.access
[handlers]
keys=console, error_file, access_file
[formatters]
keys=generic, access
[logger_root]
level=INFO
handlers=console
[logger_gunicorn.error]
level=INFO
handlers=error_file
propagate=1
qualname=gunicorn.error
[logger_gunicorn.access]
level=INFO
handlers=access_file
propagate=0
qualname=gunicorn.access
[handler_console]
class=StreamHandler
formatter=generic
args=(sys.stdout, )
[handler_error_file]
class=logging.FileHandler
formatter=generic
args=('/tmp/gunicorn.error.log',)
[handler_access_file]
class=logging.FileHandler
formatter=access
args=('/tmp/gunicorn.access.log',)
[formatter_generic]
format=%(asctime)s [%(process)d] [%(levelname)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
class=logging.Formatter
[formatter_access]
format=%(message)s
class=logging.Formatter
</code></pre>
<p>My command to execute </p>
<pre><code>gunicorn --env DJANGO_SETTINGS_MODULE=myproject.settings myproject.wsgi --log-level debug --log-file=- -c file:gunicorn_log.conf
</code></pre>
<p>Getting this error </p>
<pre><code>Failed to read config file: gunicorn_log.conf
Traceback (most recent call last):
File "/home/jameel/django-env/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 93, in get_config_from_filename
execfile_(filename, cfg, cfg)
File "/home/jameel/django-env/local/lib/python2.7/site-packages/gunicorn/_compat.py", line 91, in execfile_
return execfile(fname, *args)
File "gunicorn_log.conf", line 27
class=StreamHandler
^
SyntaxError: invalid syntax
</code></pre>
<p>I followed their example on GitHub: <a href="https://github.com/benoitc/gunicorn/blob/master/examples/logging.conf" rel="nofollow">link</a></p>
 | 0 | 2016-08-29T05:32:21Z | 39,200,343 | <p>A Gunicorn config file looks like this: <a href="https://github.com/benoitc/gunicorn/blob/master/examples/example_config.py" rel="nofollow">https://github.com/benoitc/gunicorn/blob/master/examples/example_config.py</a></p>
<p>Your file is a logger configuration file.</p>
<p>The logger configuration is passed with the <a href="http://docs.gunicorn.org/en/latest/settings.html#logconfig" rel="nofollow">--log-config</a> parameter.</p>
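For reference, the file that <code>-c</code> expects is plain Python assigning settings; a minimal sketch (file names and values here are assumptions, not taken from the question):

```python
# gunicorn_conf.py -- used as: gunicorn -c gunicorn_conf.py myproject.wsgi
bind = "0.0.0.0:8000"
workers = 2
loglevel = "debug"

# The INI-style logger file goes through this setting (the --log-config flag),
# not through -c:
logconfig = "gunicorn_log.conf"
```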
| 1 | 2016-08-29T07:06:29Z | [
"python",
"django",
"logging",
"celery",
"gunicorn"
] |
Gunicorn giving syntax error for my configuration file | 39,199,070 | <p>My config file:</p>
<pre><code>[loggers]
keys=root, gunicorn.error, gunicorn.access
[handlers]
keys=console, error_file, access_file
[formatters]
keys=generic, access
[logger_root]
level=INFO
handlers=console
[logger_gunicorn.error]
level=INFO
handlers=error_file
propagate=1
qualname=gunicorn.error
[logger_gunicorn.access]
level=INFO
handlers=access_file
propagate=0
qualname=gunicorn.access
[handler_console]
class=StreamHandler
formatter=generic
args=(sys.stdout, )
[handler_error_file]
class=logging.FileHandler
formatter=generic
args=('/tmp/gunicorn.error.log',)
[handler_access_file]
class=logging.FileHandler
formatter=access
args=('/tmp/gunicorn.access.log',)
[formatter_generic]
format=%(asctime)s [%(process)d] [%(levelname)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
class=logging.Formatter
[formatter_access]
format=%(message)s
class=logging.Formatter
</code></pre>
<p>My command to execute </p>
<pre><code>gunicorn --env DJANGO_SETTINGS_MODULE=myproject.settings myproject.wsgi --log-level debug --log-file=- -c file:gunicorn_log.conf
</code></pre>
<p>Getting this error </p>
<pre><code>Failed to read config file: gunicorn_log.conf
Traceback (most recent call last):
File "/home/jameel/django-env/local/lib/python2.7/site-packages/gunicorn/app/base.py", line 93, in get_config_from_filename
execfile_(filename, cfg, cfg)
File "/home/jameel/django-env/local/lib/python2.7/site-packages/gunicorn/_compat.py", line 91, in execfile_
return execfile(fname, *args)
File "gunicorn_log.conf", line 27
class=StreamHandler
^
SyntaxError: invalid syntax
</code></pre>
<p>I followed their example on GitHub: <a href="https://github.com/benoitc/gunicorn/blob/master/examples/logging.conf" rel="nofollow">link</a></p>
| 0 | 2016-08-29T05:32:21Z | 39,264,863 | <pre><code>[loggers]
keys=root, logstash.error, logstash.access
[handlers]
keys=console , logstash
[formatters]
keys=generic, access, json
[logger_root]
level=INFO
handlers=console
[logger_logstash.error]
level=DEBUG
handlers=logstash
propagate=1
qualname=gunicorn.error
[logger_logstash.access]
level=DEBUG
handlers=logstash
propagate=0
qualname=gunicorn.access
[handler_console]
class=StreamHandler
formatter=generic
args=(sys.stdout, )
[handler_logstash]
class=logstash.TCPLogstashHandler
formatter=json
args=('localhost',5959)
[formatter_generic]
format=%(asctime)s [%(process)d] [%(levelname)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
class=logging.Formatter
[formatter_access]
format=%(message)s
class=logging.Formatter
[formatter_json]
class=jsonlogging.JSONFormatter
</code></pre>
<p>The above config file worked for me
to send logs to logstash which is running under localhost:5959</p>
| 0 | 2016-09-01T07:09:20Z | [
"python",
"django",
"logging",
"celery",
"gunicorn"
] |
ImportError: Cannot import name 'DurationField' | 39,199,209 | <p>I am trying to run an application that uses Django REST framework with Django 1.6.5 on Python 3.4.5. However, I am getting the <strong>ImportError "cannot import name DurationField"</strong>. How do I resolve this error?</p>
<pre><code>File "/usr/src/app/Lab/models.py", line 8, in <module>
    from Lab import logic, common
  File "/usr/src/app/Lab/logic.py", line 16, in <module>
    from Rest import viewsAppComm
  File "/usr/src/app/Rest/viewsAppComm.py", line 7, in <module>
    from rest_framework.response import Response
  File "/usr/local/lib/python3.4/site-packages/rest_framework/response.py", line 13, in <module>
    from rest_framework.serializers import Serializer
  File "/usr/local/lib/python3.4/site-packages/rest_framework/serializers.py", line 19, in <module>
    from django.db.models import DurationField as ModelDurationField
ImportError: cannot import name 'DurationField'
</code></pre>
| 2 | 2016-08-29T05:44:35Z | 39,199,565 | <p><code>DurationField</code> was <a href="https://docs.djangoproject.com/en/1.8/releases/1.8/#new-data-types" rel="nofollow">added in Django 1.8</a>. You are using Django 1.6, hence the error. </p>
<p>Your options are to upgrade (which is a good idea if you can, because Django 1.6 has reached end of life quite a while ago) or to downgrade to an older version of Django Rest Framework (the version you currently have is not compatible with Django 1.6).</p>
<p>You may also be able to install the third-party <a href="https://django-durationfield.readthedocs.io/en/latest/" rel="nofollow">django-duration-field</a> app and then import it with:</p>
<pre><code>from durationfield.db.models.fields.duration import DurationField
</code></pre>
<p>... but from the stack trace you posted it looks like it is DRF that is trying to import the model.</p>
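A version-tolerant import shim along those lines (an editor's sketch; it degrades to <code>None</code> when neither Django >= 1.8 nor django-durationfield is importable):

```python
try:
    from django.db.models import DurationField as ModelDurationField  # Django >= 1.8
except ImportError:
    try:
        # third-party fallback for older Django (assumes django-durationfield)
        from durationfield.db.models.fields.duration import DurationField as ModelDurationField
    except ImportError:
        ModelDurationField = None

print("DurationField available:", ModelDurationField is not None)
```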
| 3 | 2016-08-29T06:11:52Z | [
"python",
"django",
"python-3.x",
"django-models",
"importerror"
] |
Python 3 - How to write specific row in same file csv? | 39,199,248 | <p>I want to update a fieldname called "done" in same file csv</p>
<p>This is structure of file csv:</p>
<p>Input:</p>
<pre><code>Email Done
1@gmail.com
2@gmail.com
3@gmail.com
4@gmail.com
</code></pre>
<p>Output:</p>
<pre><code>Email Done
1@gmail.com 1
2@gmail.com 1
3@gmail.com 1
4@gmail.com 1
</code></pre>
<p>Here is what my attempt looks like:</p>
<pre><code>import csv
with open(r'E:\test.csv','r', encoding='utf-8') as f:
reader = csv.DictReader(f,delimiter='|')
for row in reader:
#do something here#
#then write "1" to filename "done"
f.close()
</code></pre>
<p>How can I do it? Thank you very much! :)</p>
 | -1 | 2016-08-29T05:49:04Z | 39,200,463 | <p>With a change like this you could use a plain pair of read and write files: read a line, then write it back out immediately with the addition.</p>
<pre><code>In [6]: with open('test.csv','r') as input:
...: with open('test1.csv','w') as output:
...: header=input.readline()
...: output.write(header)
...: for line in input:
...: output.write('%s %8d\n'%(line.strip(),1))
In [7]: cat test1.csv
Email Done
1@gmail.com 1
2@gmail.com 1
3@gmail.com 1
4@gmail.com 1
</code></pre>
<p>You could also use a csv reader and csv writer. A <code>numpy</code> user might be tempted to use the <code>loadtxt</code> and <code>savetxt</code> pair, though mixing string and number values in an array takes a bit more know-how.</p>
<p>In a new enough Python you could put both opens in one with context:</p>
<pre><code>with open('test.csv','r') as input, open('test1.csv','w') as output:
...
</code></pre>
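Since the question started from <code>csv.DictReader</code>, here is a hedged sketch of the same update using the <code>csv</code> module, with in-memory files so it is self-contained:

```python
import csv
import io

source = io.StringIO("Email|Done\n1@gmail.com|\n2@gmail.com|\n")
target = io.StringIO()

reader = csv.DictReader(source, delimiter='|')
writer = csv.DictWriter(target, fieldnames=reader.fieldnames, delimiter='|')
writer.writeheader()
for row in reader:
    row['Done'] = '1'          # mark this address as processed
    writer.writerow(row)

print(target.getvalue())
```

With real files, `source` and `target` would be two `open()` calls, and the output file would replace the input afterwards.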
| 3 | 2016-08-29T07:14:00Z | [
"python",
"python-3.x",
"csv"
] |
Plot a Data Set According to Counts of Categories of a Variable | 39,199,251 | <p>I have a dataset which has 14 columns (I had to only use 4 columns: travelling class, gender, age, and fare price) that I have split into train and test data sets. I need to create a vertical bar chart from the train data set for the distribution of the passengers by travelling class (1, 2, and 3 are the classes). I am not allowed to use NumPy, Pandas, SciPy, and SciKit-Learn.</p>
<p>I am very new to Python, and I know how to plot very simple graphs, but when it comes to more complicated graphs, I get a bit lost.</p>
<p>This is my code (I know there is a lot wrong):</p>
<pre><code>travelling_class = defaultdict(list)
for row in data:
travelling_class[row[0]]
travelling_class = {key: len(val) for key, val in travelling_class.items()}
keys = travelling_class()
vals = [travelling_class[key] for key in keys]
ind = range(min(travelling_class.keys()), max(travelling_class.keys()) + 1)
width = 0.6
plt.xticks([i + width/2 for i in ind], ind, ha='center')
plt.xlabel('Travelling Class')
plt.ylabel('Counts of Passengers')
plt.title('Number of Passengers per Travelling Class')
plt.ylim(0, 1000)
plt.bar(keys, vals, width)
plt.show()
</code></pre>
<hr>
<pre><code>import matplotlib.pyplot as plt
classes = travelling_class[1, 2, 3]
plt.hist(classes)
plt.show()
</code></pre>
<p>@TrakJohnson This is the original asker of the question - sorry I accidentally somehow deleted my profile so had to make a new one. Thank you so much for your help. The problem is that my data set is 1045 rows, so it might be difficult to list all of them. Does the above seem reasonable?</p>
| 1 | 2016-08-29T05:49:08Z | 39,201,705 | <p>Use <code>plt.hist</code>, which will plot a histogram <a href="https://plot.ly/matplotlib/histograms/" rel="nofollow">(more info here)</a></p>
<p>Example:</p>
<pre><code>import matplotlib.pyplot as plt
classes = [1, 2, 1, 1, 3, 3]
plt.hist(classes)
plt.show()
</code></pre>
<p>And this is the result:</p>
<p><a href="http://i.stack.imgur.com/cGHWC.png" rel="nofollow"><img src="http://i.stack.imgur.com/cGHWC.png" alt="Histogram"></a></p>
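Since the question rules out NumPy and pandas, the per-class counts can also be built with only the standard library and fed straight into `plt.bar(labels, values)` (an editor's sketch; the class labels below are made up):

```python
from collections import Counter

classes = [1, 2, 1, 1, 3, 3]           # travelling class of each passenger
counts = Counter(classes)
labels = sorted(counts)                 # the distinct classes, in order
values = [counts[c] for c in labels]    # one bar height per class
print(labels, values)                   # [1, 2, 3] [3, 1, 2]
```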
| 0 | 2016-08-29T08:27:20Z | [
"python",
"bar-chart",
"categories",
"training-data",
"test-data"
] |
How to combine all the queued jobs(gerrit changes) and run as a single job in jenkins | 39,199,257 | <p>My Jenkins is configured with the Gerrit plugin and builds the code for each Gerrit change once it is merged. While a build is running, more changes get merged and their jobs queue up. My question is how to combine all those queued jobs and run them as a single job once the current build completes. Alternatively, how can I get the list of all the latest merged changes and run them in a single job next time?</p>
 | -1 | 2016-08-29T05:49:26Z | 39,199,955 | <p>You can create a master job that triggers the other jobs: in the master job's Post-build Actions, add "Build other projects" and list the names of the jobs you want to run.</p>
| 0 | 2016-08-29T06:39:05Z | [
"python",
"jenkins",
"jenkins-plugins"
] |
Pandas groupby sum using two DataFrames | 39,199,343 | <p>I have two very large Pandas DataFrames and would like to use them to guide each other in a fast sum operation. The two frames look like this:</p>
<p>Frame1:</p>
<pre><code>SampleName Gene1 Gene2 Gene3
Sample1 1 2 3
Sample2 4 5 6
Sample3 7 8 9
</code></pre>
<p>(in reality, Frame1 is roughly 1,000 rows x ~300,000 columns)</p>
<p>Frame2:</p>
<pre><code>FeatureName GeneID
Feature1 Gene1
Feature1 Gene3
Feature2 Gene1
Feature2 Gene2
Feature2 Gene3
</code></pre>
<p>(in reality, <code>Frame2</code> is about ~350,000 rows x 2 columns, with ~17,000 unique features)</p>
<p>I would like to sum the columns of Frame1 by Frame2's groups of genes. For example, the output of the two above frames would be:</p>
<pre><code>SampleName Feature1 Feature2
Sample1 4 6
Sample2 10 15
Sample3 16 24
</code></pre>
<p>(in reality, the output would be ~1,000 rows x 17,000 columns)</p>
<p>Is there any way to do this with minimal memory usage?</p>
| 2 | 2016-08-29T05:55:22Z | 39,199,506 | <p>You can first create <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_tuples.html" rel="nofollow"><code>MultiIndex.from_tuples</code></a>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> columns by it and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a>:</p>
<pre><code>#create MultiIndex from df2
cols = pd.MultiIndex.from_tuples(list(zip(df2.FeatureName, df2.GeneID)),
names=('FeatureName','GeneID'))
print (cols)
MultiIndex(levels=[['Feature1', 'Feature2'], ['Gene1', 'Gene2', 'Gene3']],
labels=[[0, 0, 1, 1, 1], [0, 2, 0, 1, 2]],
names=['FeatureName', 'GeneID'])
#reindex columns by MultiIndex
df = df1.set_index('SampleName').reindex(columns=cols, level=1)
print (df)
FeatureName Feature1 Feature2
GeneID Gene1 Gene3 Gene1 Gene2 Gene3
SampleName
Sample1 1 3 1 2 3
Sample2 4 6 4 5 6
Sample3 7 9 7 8 9
#groupby by level 0 of columns and aggregate sum
print (df.groupby(level=0, axis=1).sum())
FeatureName Feature1 Feature2
SampleName
Sample1 4 6
Sample2 10 15
Sample3 16 24
</code></pre>
| 2 | 2016-08-29T06:07:52Z | [
"python",
"pandas",
"optimization",
"dataframe",
"sum"
] |
Pandas groupby sum using two DataFrames | 39,199,343 | <p>I have two very large Pandas DataFrames and would like to use them to guide each other in a fast sum operation. The two frames look like this:</p>
<p>Frame1:</p>
<pre><code>SampleName Gene1 Gene2 Gene3
Sample1 1 2 3
Sample2 4 5 6
Sample3 7 8 9
</code></pre>
<p>(in reality, Frame1 is roughly 1,000 rows x ~300,000 columns)</p>
<p>Frame2:</p>
<pre><code>FeatureName GeneID
Feature1 Gene1
Feature1 Gene3
Feature2 Gene1
Feature2 Gene2
Feature2 Gene3
</code></pre>
<p>(in reality, <code>Frame2</code> is about ~350,000 rows x 2 columns, with ~17,000 unique features)</p>
<p>I would like to sum the columns of Frame1 by Frame2's groups of genes. For example, the output of the two above frames would be:</p>
<pre><code>SampleName Feature1 Feature2
Sample1 4 6
Sample2 10 15
Sample3 16 24
</code></pre>
<p>(in reality, the output would be ~1,000 rows x 17,000 columns)</p>
<p>Is there any way to do this with minimal memory usage?</p>
| 2 | 2016-08-29T05:55:22Z | 39,199,685 | <p>If you want to decrease the memory usage, I think your best option is to iterate over the first DataFrame since it has only 1k rows. </p>
<pre><code>dfs = []
frame1 = frame1.set_index('SampleName')
for idx, row in frame1.iterrows():
dfs.append(frame2.join(row, on='GeneID').groupby('FeatureName').sum())
pd.concat(dfs, axis=1).T
</code></pre>
<p>yields</p>
<pre><code>FeatureName Feature1 Feature2
Sample1 4 6
Sample2 10 15
Sample3 16 24
</code></pre>
| 3 | 2016-08-29T06:20:45Z | [
"python",
"pandas",
"optimization",
"dataframe",
"sum"
] |
Pandas groupby sum using two DataFrames | 39,199,343 | <p>I have two very large Pandas DataFrames and would like to use them to guide each other in a fast sum operation. The two frames look like this:</p>
<p>Frame1:</p>
<pre><code>SampleName Gene1 Gene2 Gene3
Sample1 1 2 3
Sample2 4 5 6
Sample3 7 8 9
</code></pre>
<p>(in reality, Frame1 is roughly 1,000 rows x ~300,000 columns)</p>
<p>Frame2:</p>
<pre><code>FeatureName GeneID
Feature1 Gene1
Feature1 Gene3
Feature2 Gene1
Feature2 Gene2
Feature2 Gene3
</code></pre>
<p>(in reality, <code>Frame2</code> is about ~350,000 rows x 2 columns, with ~17,000 unique features)</p>
<p>I would like to sum the columns of Frame1 by Frame2's groups of genes. For example, the output of the two above frames would be:</p>
<pre><code>SampleName Feature1 Feature2
Sample1 4 6
Sample2 10 15
Sample3 16 24
</code></pre>
<p>(in reality, the output would be ~1,000 rows x 17,000 columns)</p>
<p>Is there any way to do this with minimal memory usage?</p>
| 2 | 2016-08-29T05:55:22Z | 39,199,723 | <h3>One obnoxious line</h3>
<pre><code>Frame1.set_index('SampleName') \
.rename_axis('GeneID', axis=1) \
.stack().rename('Value') \
.reset_index().merge(Frame2) \
.groupby(['SampleName', 'FeatureName']) \
.Value.sum().unstack()
</code></pre>
<p><a href="http://i.stack.imgur.com/TJsKo.png" rel="nofollow"><img src="http://i.stack.imgur.com/TJsKo.png" alt="enter image description here"></a></p>
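The same result can also be reached with `melt`, `merge`, and `pivot_table` (an editor's sketch on the toy frames from the question, not part of the original answer):

```python
import pandas as pd

frame1 = pd.DataFrame({'SampleName': ['Sample1', 'Sample2', 'Sample3'],
                       'Gene1': [1, 4, 7], 'Gene2': [2, 5, 8], 'Gene3': [3, 6, 9]})
frame2 = pd.DataFrame({'FeatureName': ['Feature1', 'Feature1', 'Feature2',
                                       'Feature2', 'Feature2'],
                       'GeneID': ['Gene1', 'Gene3', 'Gene1', 'Gene2', 'Gene3']})

# Reshape to long form, attach each gene's feature(s), and sum per feature
long_form = frame1.melt(id_vars='SampleName', var_name='GeneID', value_name='Value')
result = (long_form.merge(frame2)
                   .pivot_table(index='SampleName', columns='FeatureName',
                                values='Value', aggfunc='sum'))
print(result)
```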
| 1 | 2016-08-29T06:23:12Z | [
"python",
"pandas",
"optimization",
"dataframe",
"sum"
] |
OCR Hand written data showing error in svm.train() | 39,199,433 | <p>I am using OpenCV 3.1.0 with Python 2.7. I have implemented the OCR of hand-written data code from <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_ml/py_svm/py_svm_opencv/py_svm_opencv.html#svm-opencv" rel="nofollow">here</a>.</p>
<pre><code>responses = np.float32(np.repeat(np.arange(10),250)[:,np.newaxis])
svm.train(trainData,cv2.ml.ROW_SAMPLE, responses)
</code></pre>
<p>and am getting this error:</p>
<blockquote>
<p>svm.train(trainData,cv2.ml.ROW_SAMPLE, responses)
cv2.error: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\ml\src\svm.cpp:1618: error: (-5) in the case of classification problem the responses must be categorical; either specify varType when creating TrainData, or pass integer responses in function cv::ml::SVMImpl::train</p>
</blockquote>
<p>Note: since I am working on OpenCV 3.x, I have used <code>cv2.ml.svm</code> wherever necessary; the rest is the same.</p>
<p>And if using</p>
<pre><code>responses = np.int32(np.repeat(np.arange(10),250)[:,np.newaxis])
</code></pre>
<p>I am getting 0 accuracy.</p>
 | 1 | 2016-08-29T06:01:41Z | 39,319,483 | <p>Try using <code>pytesseract</code>. It is much better than training an SVM classifier. If you want to check it out, follow this <a href="https://pypi.python.org/pypi/pytesseract/0.1" rel="nofollow">link</a>. For a more advanced example, check out this <a href="https://realpython.com/blog/python/setting-up-a-simple-ocr-server/" rel="nofollow">site</a> as well.</p>
| 0 | 2016-09-04T17:11:53Z | [
"python",
"opencv",
"svm",
"ocr",
"categorical-data"
] |
pexpect for terminal at local computer | 39,199,451 | <p>I am developing code for user interaction using pexpect on the <em>local terminal</em> on a <em>Mac</em> (not remote SSH), instead of using subprocess. But I don't know what I did wrong in the following cases, which produce empty output:</p>
<p>1)</p>
<pre><code>child = pexpect.spawn('ls')
child.expect(pexpect.EOF)
output = child.before
print output
</code></pre>
<p>The output is empty</p>
<p>2)</p>
<pre><code>child = pexpect.spawn('ls -l')
child.expect(pexpect.EOF)
output = child.before
print output
</code></pre>
<p>It works well. The output is the list of files and folder just like we type ls -l at local terminal</p>
<p>3)</p>
<pre><code>child = pexpect.spawn('pwd')
child.expect(pexpect.EOF)
output = child.before
print output
</code></pre>
<p>The output is empty</p>
<p>The output should exist and not be empty in all 3 cases, right? Do you know why 'ls' and 'pwd' come out empty but 'ls -l' does not? What should I do to fix the empty output?</p>
<p>Best regards,
Quyen Tran</p>
 | 0 | 2016-08-29T06:02:33Z | 39,269,069 | <p>For running commands which do not require interaction, <code>spawn</code> is not the right method. It is better to use the <code>pexpect.run</code> method and get the output as the return value.</p>
<p><code>pexpect.spawn</code> is better suited to spawning a child process where you need to send commands and expect a response. Your code works fine on my terminal, but if you aren't able to get it done on yours, use the <code>run</code> method.</p>
<pre><code>>>> child = pexpect.spawn('ls')
>>> child.expect(pexpect.EOF)
0
>>> print child.before
codes  program.c
</code></pre>
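For a non-interactive command like `ls` or `pwd`, the standard library alternative looks like this (an editor's sketch using a Python child so it runs anywhere; `pexpect.run` fills the same role):

```python
import subprocess
import sys

# check_output runs the child to completion and hands back everything it
# wrote, which is all that is needed when no dialog with the process occurs.
out = subprocess.check_output([sys.executable, "-c", "print('hello from a child')"])
print(out.decode().strip())   # hello from a child
```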
| 0 | 2016-09-01T10:27:22Z | [
"python",
"output",
"pexpect"
] |
Does anyone understand this error? | 39,199,553 | <blockquote>
<p>ValueError: Error parsing datetime string "2015-01-31:00:00" at position 10</p>
</blockquote>
<p>I am trying to call <code>np.datetime64('2015-01-31:00:00')</code>. </p>
| -5 | 2016-08-29T06:11:22Z | 39,200,152 | <p>A date in <code>ms</code> gives an idea of the string format it accepts</p>
<pre><code>In [74]: np.datetime64('2015-01-31','ms')
Out[74]: numpy.datetime64('2015-01-31T00:00:00.000')
</code></pre>
<p>See also the examples on the <code>numpy</code> doc page</p>
<p><a href="http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html</a></p>
<p>Space also works</p>
<pre><code>In [79]: np.datetime64('2015-01-31 00:00:00','ms')
Out[79]: numpy.datetime64('2015-01-31T00:00:00.000')
</code></pre>
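Applied to the failing call from the question (an editor's sketch): the `:` between the date and the time just needs to become a `'T'` (or a space), giving an ISO 8601 string.

```python
import numpy as np

# '2015-01-31:00:00' fails at position 10 (the colon); ISO 8601 uses 'T' there
value = np.datetime64('2015-01-31T00:00')   # '2015-01-31 00:00' works too
print(value)                                # 2015-01-31T00:00
```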
| 3 | 2016-08-29T06:52:45Z | [
"python",
"datetime",
"numpy"
] |
What is the equivalent of Java .get in python | 39,199,568 | <p>I am trying to code in Python, with arrays, and I need to get the value of an element. In Java I would use <code>array.get(i)</code>, but I can't seem to find an equivalent in Python.</p>
| -7 | 2016-08-29T06:11:59Z | 39,199,613 | <p>In Python you would do it exactly the same as in Java: <code>array[i]</code>.</p>
<p>BTW, your <code>get(i)</code> is for <code>ArrayList</code>, not for arrays, in Java.</p>
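A few concrete counterparts of the `ArrayList` calls (an editor's sketch):

```python
values = [10, 20, 30]

print(values[1])     # 20 -- the counterpart of ArrayList.get(1)
print(values[-1])    # 30 -- negative indices count from the end
values[1] = 99       # plain assignment replaces ArrayList.set(1, 99)
print(values)        # [10, 99, 30]
```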
| 4 | 2016-08-29T06:16:10Z | [
"java",
"python"
] |
Convert a python list having a mix of unicode values and floating numbers | 39,199,596 | <p>So I am reading values from an Excel file and storing them in a list. But by default, it stores strings as unicode strings and numbers as floats. I want to convert the unicode strings into normal strings and the floats to integers. How do I do that?</p>
<p>Here's my list:</p>
<pre><code>import xlrd
wb = xlrd.open_workbook("test.xlsx")
firstsheet = wb.sheet_by_index(0)
rows =1
rowvalues = []
while(rows <= (firstsheet.nrows)-1):
rowvalues.extend(firstsheet.row_values(rows))
rows+=1
print rowvalues
</code></pre>
<p>Output:</p>
<pre><code>[121090999.0, 3454554455.0, u'Shantharam', 121090999.0, 5645.0, u'Puranik']
</code></pre>
<p>What I need:</p>
<pre><code>[ 121090999, 3454554455, 'Shantharam', 121090999, 5645, 'Puranik' ]
</code></pre>
| 0 | 2016-08-29T06:14:54Z | 39,199,999 | <p>Below is the <code>list</code> comprehension to convert items in list to:</p>
<ol>
<li><code>str</code> if it is <code>unicode</code></li>
<li><code>int</code> if it is <code>float</code></li>
<li>Else, append as it is:</li>
</ol>
<blockquote>
<p>Sample Example:</p>
</blockquote>
<pre><code>>>> my_list = [121090999.0, 3454554455.0, u'Shantharam', 121090999.0, 5645.0, u'Puranik']
>>> [str(item) if isinstance(item, unicode) else int(item) if isinstance(item,float) else item for item in my_list]
# Output: [121090999, 3454554455, 'Shantharam', 121090999, 5645, 'Puranik']
</code></pre>
<p>However, instead of converting list to desired format, you should make these checks (<em>if</em> conditions) at the time you are extending <code>rowvalues</code></p>
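<p>For example, a sketch (not from the original answer) of converting each cell while building the row, instead of post-processing the whole list. It assumes each cell is either a float or text; on Python 3 every string is already <code>str</code>, so <code>str()</code> is a no-op there:</p>

```python
# Sketch: normalize one cell at a time while extending rowvalues.
def normalize(cell):
    if isinstance(cell, float):
        return int(cell)        # 5645.0 -> 5645
    return str(cell)            # u'Puranik' -> 'Puranik' (ASCII assumed)

row = [121090999.0, u'Shantharam', 5645.0, u'Puranik']
print([normalize(c) for c in row])
```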
| 0 | 2016-08-29T06:43:03Z | [
"python",
"unicode",
"xlrd"
] |
Convert a python list having a mix of unicode values and floating numbers | 39,199,596 | <p>So I am reading values from an Excel file and storing them inside a list. But by default, strings are stored as unicode strings and numbers as floats. I want to convert the unicode strings into normal strings and the floats to integers. How do I do that?</p>
<p>Here's my list:</p>
<pre><code>import xlrd
wb = xlrd.open_workbook("test.xlsx")
firstsheet = wb.sheet_by_index(0)
rows =1
rowvalues = []
while(rows <= (firstsheet.nrows)-1):
rowvalues.extend(firstsheet.row_values(rows))
rows+=1
print rowvalues
</code></pre>
<p>Output:</p>
<pre><code>[121090999.0, 3454554455.0, u'Shantharam', 121090999.0, 5645.0, u'Puranik']
</code></pre>
<p>What I need:</p>
<pre><code>[ 121090999, 3454554455, 'Shantharam', 121090999, 5645, 'Puranik' ]
</code></pre>
| 0 | 2016-08-29T06:14:54Z | 39,200,010 | <p>You can use <a href="https://docs.python.org/2/library/functions.html#isinstance" rel="nofollow">isinstance</a> to check the type of the variable</p>
<pre><code>rows = [121090999.0, 3454554455.0, u'Shantharam', 121090999.0, 5645.0, u'Puranik']
def convert_rows(rows):
for kk,vv in enumerate(rows):
if isinstance(vv,float):
rows[kk] = int(vv)
if isinstance(vv,unicode):
rows[kk] = vv.encode('ascii')
return rows
print( convert_rows(rows) )
# prints [121090999, 3454554455L, 'Shantharam', 121090999, 5645, 'Puranik']
</code></pre>
| 0 | 2016-08-29T06:43:27Z | [
"python",
"unicode",
"xlrd"
] |
Convert a python list having a mix of unicode values and floating numbers | 39,199,596 | <p>So I am reading values from an Excel file and storing them inside a list. But by default, strings are stored as unicode strings and numbers as floats. I want to convert the unicode strings into normal strings and the floats to integers. How do I do that?</p>
<p>Here's my list:</p>
<pre><code>import xlrd
wb = xlrd.open_workbook("test.xlsx")
firstsheet = wb.sheet_by_index(0)
rows =1
rowvalues = []
while(rows <= (firstsheet.nrows)-1):
rowvalues.extend(firstsheet.row_values(rows))
rows+=1
print rowvalues
</code></pre>
<p>Output:</p>
<pre><code>[121090999.0, 3454554455.0, u'Shantharam', 121090999.0, 5645.0, u'Puranik']
</code></pre>
<p>What I need:</p>
<pre><code>[ 121090999, 3454554455, 'Shantharam', 121090999, 5645, 'Puranik' ]
</code></pre>
| 0 | 2016-08-29T06:14:54Z | 39,200,065 | <p>Here is a short script that should do the trick:</p>
<pre><code>l=[121090999.0, 3454554455.0, u'Shantharam', 121090999.0, 5645.0, u'Puranik'];
for i in range(len(l)):
if isinstance(l[i], float):
l[i]=int(l[i])
else:
l[i]=str(l[i])
print(l)
</code></pre>
| 0 | 2016-08-29T06:47:19Z | [
"python",
"unicode",
"xlrd"
] |
Convert a python list having a mix of unicode values and floating numbers | 39,199,596 | <p>So I am reading values from an Excel file and storing them inside a list. But by default, strings are stored as unicode strings and numbers as floats. I want to convert the unicode strings into normal strings and the floats to integers. How do I do that?</p>
<p>Here's my list:</p>
<pre><code>import xlrd
wb = xlrd.open_workbook("test.xlsx")
firstsheet = wb.sheet_by_index(0)
rows =1
rowvalues = []
while(rows <= (firstsheet.nrows)-1):
rowvalues.extend(firstsheet.row_values(rows))
rows+=1
print rowvalues
</code></pre>
<p>Output:</p>
<pre><code>[121090999.0, 3454554455.0, u'Shantharam', 121090999.0, 5645.0, u'Puranik']
</code></pre>
<p>What I need:</p>
<pre><code>[ 121090999, 3454554455, 'Shantharam', 121090999, 5645, 'Puranik' ]
</code></pre>
| 0 | 2016-08-29T06:14:54Z | 39,200,911 | <p>Below is the list comprehension code which does exactly what you want:</p>
<pre><code> [int(d) if isinstance(d, float) else str(d) if isinstance(d, unicode) else d for d in my_list]
</code></pre>
<p>Run this code on your <code>my_list</code> and you'll get the following response:</p>
<pre><code> new_list = [121090999, 3454554455, 'Shantharam', 121090999, 5645, 'Puranik']
</code></pre>
<p>You can verify the datatype of the elements as:</p>
<pre><code> type(new_list[-1]) # <type 'str'>
type(new_list[-2]) # <type 'int'>
</code></pre>
| 0 | 2016-08-29T07:40:13Z | [
"python",
"unicode",
"xlrd"
] |
Error with two same views in different apps in django project | 39,199,621 | <p>Working on a django project. There are two apps in my project: one is a custom-defined "admin", and the other is "home". I am using some of the same view names in both apps. For example, my admin app's urls.py file is</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^logout$', views.logout, name='logout'),
url(r'^dashboard$', views.dashboard, name='dashboard'),
url(r'^profile$', views.profile, name='profile'),
url(r'^edit-profile$', views.edit_profile, name='edit-profile'),
url(r'^check-password$', views.check_password, name='check-password'),
url(r'^testing$', views.testing_database, name='testing'),
]
</code></pre>
<p>and my home's app urls.py file is:-</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^sport$', views.sport, name='sport'),
url(r'^register$', views.signup_signin, name='register'),
url(r'^logout$', views.logout_view, name='logout'),
url(r'^dashboard$', views.dashboard, name='dashboard'),
url(r'^create-new-event$', views.create_new_event, name='create-new-event')
]
</code></pre>
<p>So you can check that some views are same in both of views.py file, That why whenever I am trying to login into the admin panel it's redirectingme to the home app's dashboard, and when I try to logout from admin panel it's redirecting me on home's app index page. I think that this is the problem of same views in both apps. I also tried to import views of particular app in respective urls.py file. for example:-</p>
<pre><code>from home import views
</code></pre>
<p>in home's urls.py file and for admin </p>
<pre><code>from admin import views
</code></pre>
<p>but its giving me error:-</p>
<pre><code>from admin import views
ImportError: cannot import name views
</code></pre>
<p>my root urls.py file is :-</p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include('admin.urls')),
url(r'^home/', include('home.urls')),
]
</code></pre>
<p>I also checked this solution on stackoverflow but did not get solution</p>
<pre><code>http://stackoverflow.com/questions/26083515/same-functions-in-different-views-django
</code></pre>
| 2 | 2016-08-29T06:16:38Z | 39,199,732 | <p>The <code>admin</code> in your root urls.py specifies the django admin module, and not your custom application. You have not included your admin app's urls in the root url at all. </p>
| 0 | 2016-08-29T06:24:04Z | [
"python",
"django"
] |
Error with two same views in different apps in django project | 39,199,621 | <p>Working on a django project. There are two apps in my project: one is a custom-defined "admin", and the other is "home". I am using some of the same view names in both apps. For example, my admin app's urls.py file is</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^logout$', views.logout, name='logout'),
url(r'^dashboard$', views.dashboard, name='dashboard'),
url(r'^profile$', views.profile, name='profile'),
url(r'^edit-profile$', views.edit_profile, name='edit-profile'),
url(r'^check-password$', views.check_password, name='check-password'),
url(r'^testing$', views.testing_database, name='testing'),
]
</code></pre>
<p>and my home's app urls.py file is:-</p>
<pre><code>from django.conf.urls import url
from . import views
urlpatterns = [
url(r'^$', views.index, name='index'),
url(r'^sport$', views.sport, name='sport'),
url(r'^register$', views.signup_signin, name='register'),
url(r'^logout$', views.logout_view, name='logout'),
url(r'^dashboard$', views.dashboard, name='dashboard'),
url(r'^create-new-event$', views.create_new_event, name='create-new-event')
]
</code></pre>
<p>So you can see that some view names are the same in both views.py files. That's why whenever I try to log in to the admin panel it redirects me to the home app's dashboard, and when I try to log out from the admin panel it redirects me to the home app's index page. I think this is a problem of having the same views in both apps. I also tried to import the views of the particular app in the respective urls.py file, for example:-</p>
<pre><code>from home import views
</code></pre>
<p>in home's urls.py file and for admin </p>
<pre><code>from admin import views
</code></pre>
<p>but its giving me error:-</p>
<pre><code>from admin import views
ImportError: cannot import name views
</code></pre>
<p>my root urls.py file is :-</p>
<pre><code>from django.conf.urls import include, url
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include('admin.urls')),
url(r'^home/', include('home.urls')),
]
</code></pre>
<p>I also checked this solution on stackoverflow but did not get solution</p>
<pre><code>http://stackoverflow.com/questions/26083515/same-functions-in-different-views-django
</code></pre>
| 2 | 2016-08-29T06:16:38Z | 39,202,550 | <p>Okay so Django has something to solve this problem of yours.</p>
<p>What you can do is specify the <code>namespace</code> for the urls in your root <strong>urls.py</strong>, and then use this <code>namespace</code> value while <strong>reversing the url</strong> in your <strong>html templates</strong> when you specify the urls to redirect to. </p>
<p>Something like this:</p>
<p><strong>root urls.py</strong> will look pretty much the same as yours, just add the <code>namespace</code> value to the <code>home.urls</code> like this:</p>
<pre><code>url(r'^home/', include('home.urls', namespace='home')),
</code></pre>
<p>So now whenever you will be using the <strong>home</strong> app urls, then you can do something like this for redirecting:</p>
<pre><code><a href="{% url 'home:logout' %}"> Logout </a>
</code></pre>
<p>So now when you click on this <code><a></code> tag of <strong>Logout</strong>, it will look in the <strong>home</strong> namespace and hence will pick the url whose <code>name</code> is <code>logout</code> from the <strong>home</strong> app and not from the <strong>admin</strong> app, irrespective of the order in which you have included the urls in your root urls.py</p>
<p>There are other solutions too, which would require changing your URL patterns:
you could change the original url values, or give the same urls different <code>name</code> values in each app. But there is little reason to do that when Django already gives you this flexibility through the power of <code>namespace</code>.</p>
<p>See this -> <a href="https://docs.djangoproject.com/en/1.10/topics/http/urls/#url-namespaces-and-included-urlconfs" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/http/urls/#url-namespaces-and-included-urlconfs</a></p>
<p>I hope the above will resolve your issue.</p>
| 1 | 2016-08-29T09:16:13Z | [
"python",
"django"
] |
Getting "TypeError: 'NoneType' object is not iterable" while doing parallel ssh | 39,199,781 | <p>I am trying to do parallel ssh on servers. While doing this, i am getting "TypeError: 'NoneType' object is not iterable" this error. Kindly help.</p>
<p>My script is below</p>
<pre><code>from pssh import ParallelSSHClient
from pssh.exceptions import AuthenticationException, UnknownHostException, ConnectionErrorException
def parallelsshjob():
client = ParallelSSHClient(['10.84.226.72','10.84.226.74'], user = 'root', password = 'XXX')
try:
output = client.run_command('racadm getsvctag', sudo=True)
print output
except (AuthenticationException, UnknownHostException, ConnectionErrorException):
pass
#print output
if __name__ == '__main__':
parallelsshjob()
</code></pre>
<p>And the Traceback is below</p>
<pre><code>Traceback (most recent call last):
File "parallelssh.py", line 17, in <module>
parallelsshjob()
File "parallelssh.py", line 10, in parallelsshjob
output = client.run_command('racadm getsvctag', sudo=True)
File "/Library/Python/2.7/site-packages/pssh/pssh_client.py", line 520, in run_command
raise ex
TypeError: 'NoneType' object is not iterable
</code></pre>
<p>Help me with the solution and also suggest me to use ssh-agent in this same script. Thanks in advance.</p>
| 0 | 2016-08-29T06:27:48Z | 39,200,109 | <p>From reading the code and debugging a bit on my laptop, I believe the issue is that you don't have a file called <code>~/.ssh/config</code>. It seems that <code>parallel-ssh</code> has a dependency on OpenSSH configuration, and this is the error you get when that file is missing.</p>
<p><code>read_openssh_config</code> returns None here: <a href="https://github.com/pkittenis/parallel-ssh/blob/master/pssh/utils.py#L79" rel="nofollow">https://github.com/pkittenis/parallel-ssh/blob/master/pssh/utils.py#L79</a></p>
<p>In turn, <code>SSHClient.__init__</code> blows up when trying to unpack the values it expects to receive: <a href="https://github.com/pkittenis/parallel-ssh/blob/master/pssh/ssh_client.py#L97" rel="nofollow">https://github.com/pkittenis/parallel-ssh/blob/master/pssh/ssh_client.py#L97</a>.</p>
<p>The fix is presumably to get some sort of OpenSSH config file in place, but I'm sorry to say I know nothing about that.</p>
<p><strong>EDIT</strong></p>
<p>After cleaning up some of <code>parallel-ssh</code>'s exception handling, here's a better stack trace for the error:</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 11, in <module>
parallelsshjob()
File "test.py", line 7, in parallelsshjob
output = client.run_command('racadm getsvctag', sudo=True)
File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/pssh/pssh_client.py", line 517, in run_command
self.get_output(cmd, output)
File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/pssh/pssh_client.py", line 601, in get_output
(channel, host, stdout, stderr, stdin) = cmd.get()
File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/gevent/greenlet.py", line 480, in get
self._raise_exception()
File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/gevent/greenlet.py", line 171, in _raise_exception
reraise(*self.exc_info)
File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/gevent/greenlet.py", line 534, in run
result = self._run(*self.args, **self.kwargs)
File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/pssh/pssh_client.py", line 559, in _exec_command
channel_timeout=self.channel_timeout)
File "/Users/smarx/test/pssh/venv/lib/python2.7/site-packages/pssh/ssh_client.py", line 98, in __init__
host, config_file=_openssh_config_file)
TypeError: 'NoneType' object is not iterable
</code></pre>
| 1 | 2016-08-29T06:50:23Z | [
"python",
"ssh",
"ssh-agent"
] |
Getting "TypeError: 'NoneType' object is not iterable" while doing parallel ssh | 39,199,781 | <p>I am trying to do parallel ssh on servers. While doing this, i am getting "TypeError: 'NoneType' object is not iterable" this error. Kindly help.</p>
<p>My script is below</p>
<pre><code>from pssh import ParallelSSHClient
from pssh.exceptions import AuthenticationException, UnknownHostException, ConnectionErrorException
def parallelsshjob():
client = ParallelSSHClient(['10.84.226.72','10.84.226.74'], user = 'root', password = 'XXX')
try:
output = client.run_command('racadm getsvctag', sudo=True)
print output
except (AuthenticationException, UnknownHostException, ConnectionErrorException):
pass
#print output
if __name__ == '__main__':
parallelsshjob()
</code></pre>
<p>And the Traceback is below</p>
<pre><code>Traceback (most recent call last):
File "parallelssh.py", line 17, in <module>
parallelsshjob()
File "parallelssh.py", line 10, in parallelsshjob
output = client.run_command('racadm getsvctag', sudo=True)
File "/Library/Python/2.7/site-packages/pssh/pssh_client.py", line 520, in run_command
raise ex
TypeError: 'NoneType' object is not iterable
</code></pre>
<p>Help me with the solution, and also suggest how to use ssh-agent in this same script. Thanks in advance.</p>
| 0 | 2016-08-29T06:27:48Z | 39,247,413 | <p>This <a href="https://github.com/pkittenis/parallel-ssh/issues/63" rel="nofollow">was seemingly a regression in the <code>0.92.0</code> version of the library</a> which is now resolved in <a href="https://github.com/pkittenis/parallel-ssh/releases/tag/0.92.1" rel="nofollow">0.92.1</a>. Previous versions also work. OpenSSH config <em>should not</em> be a dependency.</p>
<p>To answer your SSH agent question, if there is one running and enabled in the user session it gets used automatically. If you would prefer to provide a private key programmatically can do the following</p>
<pre><code>from pssh import ParallelSSHClient
from pssh.utils import load_private_key
pkey = load_private_key('my_private_key')
client = ParallelSSHClient(hosts, pkey=pkey)
</code></pre>
<p>Can also provide an agent with multiple keys programmatically, per below</p>
<pre><code>from pssh import ParallelSSHClient
from pssh.utils import load_private_key
from pssh.agent import SSHAgent
pkey = load_private_key('my_private_key')
agent = SSHAgent()
agent.add_key(pkey)
client = ParallelSSHClient(hosts, agent=agent)
</code></pre>
<p><a href="http://parallel-ssh.readthedocs.io/en/latest/pssh_client.html" rel="nofollow">See documentation</a> for more examples.</p>
| 0 | 2016-08-31T10:41:47Z | [
"python",
"ssh",
"ssh-agent"
] |
unable to scrape text | 39,200,026 | <p>I was trying to get the title of the websites.
So, I used this snippet to do this</p>
<pre><code> sys.stdout = open("test_data.txt", "w")
url2 = "https://www.google.com/"
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A'}
req = urllib2.Request(url2, None, headers)
req.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8')
html = urllib2.urlopen(req, timeout=60).read()
soup = BeautifulSoup(html)
# Extract title
list1 = soup.title.string
print list1.encode('utf-8')
</code></pre>
<p>This works perfectly and gives <strong>Google</strong> as the title and flushes the output to test_data.txt.</p>
<p>But when I try to run the same code as a web service, it doesn't work. I get an empty text file.
I am hitting this URL to run this web service on my local <a href="http://0.0.0.0:8881/get_title" rel="nofollow">http://0.0.0.0:8881/get_title</a></p>
<pre><code>from bottle import route, run, request
@route('/get_title')
def get_title():
sys.stdout = open("test_data.txt", "w")
url2 = "https://www.google.com/"
headers = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A'}
req = urllib2.Request(url2, None, headers)
req.add_header('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8')
html = urllib2.urlopen(req, timeout=60).read()
soup = BeautifulSoup(html)
# Extract title
list1 = soup.title.string
print list1.encode('utf-8')
if __name__ == "__main__":
run(host='0.0.0.0', port=8881, debug=True)
</code></pre>
<p>Another thing which has made me even more anxious is when I run the web service for msn.com, it works well for both snippets(even the web service).</p>
<p>Any help would be thankful !!</p>
| 1 | 2016-08-29T06:44:15Z | 39,200,126 | <p>Is this Flask? If so, you need to <code>return</code> the string you want to send to the user. A <code>print</code> statement writes to the web server log. You should replace the last line of your <code>get_title</code> function with this:</p>
<pre><code>return list1.encode('utf-8')
</code></pre>
| -2 | 2016-08-29T06:51:26Z | [
"python",
"web-services",
"scrape"
] |
Seaborn: Overlaying a box plot or mean with error bars on a histogram | 39,200,165 | <p>I am creating a histogram in Seaborn of my data in a pretty standard way, ie:</p>
<pre><code>rc = {'font.size': 32, 'axes.labelsize': 28.5, 'legend.fontsize': 32.0,
'axes.titlesize': 32, 'xtick.labelsize': 31, 'ytick.labelsize': 12}
sns.set(style="ticks", color_codes=True, rc = rc)
plt.figure(figsize=(25,20),dpi=300)
ax = sns.distplot(synData['SYNERGY_SCORE'])
print (np.mean(synData['SYNERGY_SCORE']), np.std(synData['SYNERGY_SCORE']))
# ax = sns.boxplot(synData['SYNERGY_SCORE'], orient = 'h')
ax.set(xlabel = 'Synergy Score', ylabel = 'Frequency', title = 'Aggregate Synergy Score Distribution')
</code></pre>
<p>This produces the following output: <img src="http://i.stack.imgur.com/QPfRW.png" alt="standard histogram."></p>
<p>I also want to visualize the mean + standard deviation of this dataset on the same plot, ideally by having a point for the mean on the x-axis (or right above the x-axis) and notched error bars showing the standard deviation. Another option is a boxplot hugging the x-axis. I tried just adding the line which is commented out (sns.boxplot()), but it looks super ugly and not at all what I'm looking for. Any suggestions?</p>
| 1 | 2016-08-29T06:53:39Z | 39,208,219 | <p>The boxplot is drawn on a categorical axis and won't coexist nicely with the density axis of the histogram, but it's possible to do it with a twin x axis plot:</p>
<pre><code>import numpy as np
import seaborn as sns
x = np.random.randn(300)
ax = sns.distplot(x)
ax2 = ax.twinx()
sns.boxplot(x=x, ax=ax2)
ax2.set(ylim=(-.5, 10))
</code></pre>
<p><a href="http://i.stack.imgur.com/pTTvg.png" rel="nofollow"><img src="http://i.stack.imgur.com/pTTvg.png" alt="enter image description here"></a></p>
| 1 | 2016-08-29T14:02:33Z | [
"python",
"matplotlib",
"histogram",
"data-visualization",
"seaborn"
] |
Input file doesn't exist even though the file is mentioned in the correct location- pyspark | 39,200,177 | <p>I'm trying to read log lines by forming key-value pairs but I get an error.
This is my code:</p>
<pre><code>logLine=sc.textFile("C:\TestLogs\testing.log").cache()
lines = logLine.flatMap(lambda x: x.split('\n'))
rx = "(\\S+)=(\\S+)"
line_collect = lines.collect()
for line in line_collect :
d = dict([(x,y) for x,y in re.findall(rx,line)])
d = str(d)
print d
</code></pre>
<p>Error: </p>
<blockquote>
<p>line_collect = lines.collect()......InvalidInputException: Input path
does not exist: file:/C:/TestLogs esting.log</p>
</blockquote>
<p>I don't know how to correct this. I'm new to python and spark. </p>
 | 1 | 2016-08-29T06:54:08Z | 39,200,260 | <p>Try replacing <code>logLine=sc.textFile("C:\TestLogs\testing.log").cache()</code> with
<code>logLine=sc.textFile("C:\\TestLogs\\testing.log").cache()</code></p>
<p>In a Python string literal a single backslash must be written as <code>"\\"</code>; a lone <code>'\'</code> starts an escape sequence instead</p>
| 0 | 2016-08-29T07:00:24Z | [
"python",
"apache-spark",
"pyspark"
] |
Input file doesn't exist even though the file is mentioned in the correct location- pyspark | 39,200,177 | <p>I'm trying to read log lines by forming key-value pairs but I get an error.
This is my code:</p>
<pre><code>logLine=sc.textFile("C:\TestLogs\testing.log").cache()
lines = logLine.flatMap(lambda x: x.split('\n'))
rx = "(\\S+)=(\\S+)"
line_collect = lines.collect()
for line in line_collect :
d = dict([(x,y) for x,y in re.findall(rx,line)])
d = str(d)
print d
</code></pre>
<p>Error: </p>
<blockquote>
<p>line_collect = lines.collect()......InvalidInputException: Input path
does not exist: file:/C:/TestLogs esting.log</p>
</blockquote>
<p>I don't know how to correct this. I'm new to python and spark. </p>
| 1 | 2016-08-29T06:54:08Z | 39,200,284 | <p>When the character sequence <code>\t</code> is found in a string, it will be replaced by a TAB character. You can actually see this in the error message.</p>
<p>I'd recommend always using the forward slash <code>/</code> as the directory separator, even on Windows. Alternatively prefix the string with an r like this: <code>r"does not replace \t with <tab>."</code>.</p>
<p>You might want to read up on string literals: <a href="https://docs.python.org/2.0/ref/strings.html" rel="nofollow">https://docs.python.org/2.0/ref/strings.html</a>.</p>
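<p>The point about <code>\t</code> can be demonstrated directly; this small sketch (not from the original answer) compares a normal string literal with a raw one:</p>

```python
# '\t' inside a normal string literal becomes a TAB character,
# while a raw string (r"...") keeps the backslashes literally.
plain = "C:\TestLogs\testing.log"   # contains a TAB where \t was written
raw = r"C:\TestLogs\testing.log"    # backslashes preserved as-is
print('\t' in plain)  # True
print('\t' in raw)    # False
```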
| 1 | 2016-08-29T07:02:13Z | [
"python",
"apache-spark",
"pyspark"
] |
Expecting value: line 1 column 1 (char 0) python | 39,200,229 | <p>I'm a newbie in python and trying to parse data in my application using these lines of code</p>
<pre><code>json_str = request.body.decode('utf-8')
py_str = json.loads(json_str)
</code></pre>
<p>But I'm getting this error on <code>json.loads</code></p>
<blockquote>
<p>Expecting value: line 1 column 1 (char 0)</p>
</blockquote>
<p>this is json formatted data that I send from angular app (Updated)</p>
<blockquote>
<p>Object { ClientTypeId: 6, ClientName: "asdasd", ClientId: 0, PhoneNo: "123", FaxNo: "123", NTN: "1238", GSTNumber: "1982", OfficialAddress: "sads", MailingAddress: "asdasd", RegStartDate: "17-Aug-2016", 15 more⦠}</p>
</blockquote>
<p>these are the values that I get in <code>json_str</code> </p>
<blockquote>
<p>ClientTypeId=5&ClientName=asdasd&ClientId=0&PhoneNo=123&FaxNo=123&NTN=123&GSTNumber=12&OfficialAddress=adkjh&MailingAddress=adjh&RegStartDate=09-Aug-2016&RegEndDate=16-Aug-2016&Status=1&CanCreateUser=true&UserQuotaFor=11&UserQuotaType=9&MaxUsers=132123&ApplyUserCharges=true&ApplyReportCharges=true&EmailInvoice=true&BillingType=1&UserCharges=132&ReportCharges=123&MonthlyCharges=123&BillingDate=16-Aug-2016&UserSessionId=324</p>
</blockquote>
<p>I don't know what's wrong with it. Can anyone point out the mistake?</p>
| -1 | 2016-08-29T06:57:07Z | 39,200,564 | <p>Your data is not <a href="https://en.wikipedia.org/wiki/JSON" rel="nofollow">JSON-formatted</a>, not even the one you included in your updated answer. Your data is a JavaScript-object, not an encoded string. Please note the "N" in JSON: Notation -- it is a format inspired from how data is written in JavaScript code, but runtime JavaScript data is not represented in JSON. The "JSON" you pasted is how your browser represents the object to you, it is not proper JSON (that would be <code>{"ClientTypeId": 6, ...}</code> -- note the quotes around the property name).</p>
<p>When sending this data to the server, you have to encode it. You think you are sending it JSON-encoded, but you aren't. You are sending it "web form encoded" (data of type <a href="https://en.wikipedia.org/wiki/Percent-encoding#The_application.2Fx-www-form-urlencoded_type" rel="nofollow"><code>application/x-www-form-urlencoded</code></a>).</p>
<p>Now either you have to learn how to send the data in JSON format from Angular, or use the correct parsing routine in Python: <a href="https://docs.python.org/3/library/urllib.parse.html#urllib.parse.parse_qs" rel="nofollow"><code>urllib.parse.parse_qs</code></a>. Depending on the library you are using, there might be a convenience method to access the data as well, as this is a common use case.</p>
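<p>A sketch of decoding the form-encoded body on the Python side with <code>parse_qs</code> (Python 3 spelling; in Python 2 the function lives in the <code>urlparse</code> module). The body string is a shortened stand-in for the one in the question:</p>

```python
# parse_qs turns "k=v&k2=v2" into {'k': ['v'], 'k2': ['v2']};
# each value is a list because a key may repeat in a query string.
from urllib.parse import parse_qs

body = "ClientTypeId=5&ClientName=asdasd&ClientId=0&PhoneNo=123"
parsed = {key: values[0] for key, values in parse_qs(body).items()}
print(parsed["ClientName"])    # asdasd
print(parsed["ClientTypeId"])  # 5 (still a string, not an int)
```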
| 2 | 2016-08-29T07:20:47Z | [
"python",
"angularjs",
"json"
] |
How do I compare 2 lists and order 1 based on the number of matches | 39,200,256 | <p>Say you had a list, for example:</p>
<pre><code>first_list = ['a', 'b', 'c']
</code></pre>
<p>And you had the following list:</p>
<pre><code>second_list = ['a', 'a b c', 'abc zyx', 'ab cc ac']
</code></pre>
<p>How would you create a function that simply re-orders the second list based on the total number of times an element from the entire first list matches any part of an individual string from the second list?</p>
<hr>
<p><strong>For further clarity:</strong></p>
<ul>
<li>in the second list the 'a' string would contain 1 match</li>
<li>the 'a b c' string would contain 3 matches</li>
<li>the second example list would essentially end up in reverse order once the function has finished</li>
</ul>
<hr>
<p><strong>My attempt:</strong></p>
<pre><code>first_list = ['a', 'b', 'c']
second_list = ['a', 'a b c', 'abc zyx', 'ab cc ac']
print second_list
i = 0
for keyword in first_list:
matches = 0
for s in second_list:
matches += s.count(keyword)
if matches > second_list[0].count(keyword):
popped = second_list.pop(i)
second_list.insert(0, popped)
print second_list
</code></pre>
| 0 | 2016-08-29T06:59:55Z | 39,200,393 | <p>Here's one unstable way to do it:</p>
<pre><code>>>> l1 = ['a', 'b', 'c']
>>> l2 = ['a', 'a b c', 'abc zyx', 'ab cc ac']
>>> [s for _, s in sorted(((sum(s2.count(s1) for s1 in l1), s2) for s2 in l2), reverse=True)]
['ab cc ac', 'abc zyx', 'a b c', 'a']
</code></pre>
<p>If stable sort is required you could leverage <code>enumerate</code>:</p>
<pre><code>>>> l1 = ['a', 'b', 'c']
>>> l2 = ['a', 'a b c', 'ccc ccc', 'bb bb bb', 'aa aa aa']
>>> [x[-1] for x in sorted(((sum(s2.count(s1) for s1 in l1), -i, s2) for i, s2 in enumerate(l2)), reverse=True)]
['ccc ccc', 'bb bb bb', 'aa aa aa', 'a b c', 'a']
</code></pre>
<p>Above generates tuples where second item is a string from <code>l2</code> and first item is the count of matches from <code>l1</code>:</p>
<pre><code>>>> tuples = [(sum(s2.count(s1) for s1 in l1), s2) for s2 in l2]
>>> tuples
[(1, 'a'), (3, 'a b c'), (3, 'abc zyx'), (6, 'ab cc ac')]
</code></pre>
<p>Then those tuples are sorted to descending order:</p>
<pre><code>>>> tuples = sorted(tuples, reverse=True)
>>> tuples
[(6, 'ab cc ac'), (3, 'abc zyx'), (3, 'a b c'), (1, 'a')]
</code></pre>
<p>And finally only the strings are taken:</p>
<pre><code>>>> [s for _, s in tuples]
['ab cc ac', 'abc zyx', 'a b c', 'a']
</code></pre>
<p>In the second version tuples have reverse index to ensure stability:</p>
<pre><code>>>> [(sum(s2.count(s1) for s1 in l1), -i, s2) for i, s2 in enumerate(l2)]
[(1, 0, 'a'), (3, -1, 'a b c'), (3, -2, 'abc zyx'), (6, -3, 'ab cc ac')]
</code></pre>
| 1 | 2016-08-29T07:09:27Z | [
"python"
] |
How do I compare 2 lists and order 1 based on the number of matches | 39,200,256 | <p>Say you had a list, for example:</p>
<pre><code>first_list = ['a', 'b', 'c']
</code></pre>
<p>And you had the following list:</p>
<pre><code>second_list = ['a', 'a b c', 'abc zyx', 'ab cc ac']
</code></pre>
<p>How would you create a function that simply re-orders the second list based on the total number of times an element from the entire first list matches any part of an individual string from the second list?</p>
<hr>
<p><strong>For further clarity:</strong></p>
<ul>
<li>in the second list the 'a' string would contain 1 match</li>
<li>the 'a b c' string would contain 3 matches</li>
<li>the second example list would essentially end up in reverse order once the function has finished</li>
</ul>
<hr>
<p><strong>My attempt:</strong></p>
<pre><code>first_list = ['a', 'b', 'c']
second_list = ['a', 'a b c', 'abc zyx', 'ab cc ac']
print second_list
i = 0
for keyword in first_list:
matches = 0
for s in second_list:
matches += s.count(keyword)
if matches > second_list[0].count(keyword):
popped = second_list.pop(i)
second_list.insert(0, popped)
print second_list
</code></pre>
| 0 | 2016-08-29T06:59:55Z | 39,200,786 | <p>Similar answer:</p>
<pre><code>first_list = ['a', 'b', 'c']
second_list = ['a', 'a b c', 'abc zyx', 'ab cc ac']
#Find occurrences
list_for_sorting = []
for string in second_list:
occurrences = 0
for item in first_list:
occurrences += string.count(item)
list_for_sorting.append((occurrences, string))
#Sort list
sorted_by_occurrence = sorted(list_for_sorting, key=lambda tup: tup[0], reverse=True)
final_list = [i[1] for i in sorted_by_occurrence]
print(final_list)
['ab cc ac', 'a b c', 'abc zyx', 'a']
</code></pre>
| 2 | 2016-08-29T07:33:12Z | [
"python"
] |
How do I compare 2 lists and order 1 based on the number of matches | 39,200,256 | <p>Say you had a list, for example:</p>
<pre><code>first_list = ['a', 'b', 'c']
</code></pre>
<p>And you had the following list:</p>
<pre><code>second_list = ['a', 'a b c', 'abc zyx', 'ab cc ac']
</code></pre>
<p>How would you create a function that simply re-orders the second list based on the total number of times an element from the entire first list matches any part of an individual string from the second list?</p>
<hr>
<p><strong>For further clarity:</strong></p>
<ul>
<li>in the second list the 'a' string would contain 1 match</li>
<li>the 'a b c' string would contain 3 matches</li>
<li>the second example list would essentially end up in reverse order once the function has finished</li>
</ul>
<hr>
<p><strong>My attempt:</strong></p>
<pre><code>first_list = ['a', 'b', 'c']
second_list = ['a', 'a b c', 'abc zyx', 'ab cc ac']
print second_list
i = 0
for keyword in first_list:
matches = 0
for s in second_list:
matches += s.count(keyword)
if matches > second_list[0].count(keyword):
popped = second_list.pop(i)
second_list.insert(0, popped)
print second_list
</code></pre>
| 0 | 2016-08-29T06:59:55Z | 39,200,876 | <p>The most straightforward approach would be to use the <a href="https://docs.python.org/3.5/library/functions.html#sorted" rel="nofollow"><code>key</code> parameter of the <code>sorted</code> built-in function</a>:</p>
<pre><code>>>> sorted(second_list, key = lambda s: sum(s.count(x) for x in first_list), reverse=True)
['ab cc ac', 'a b c', 'abc zyx', 'a']
</code></pre>
<p>The key function is called once per item in the list to be sorted. Still, this can be slow for long strings, since every <code>count</code> call scans the whole string.</p>
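<p>If (as in this example) the needles are single characters, one way to avoid the repeated scans is to count each string's characters once with <code>collections.Counter</code> — a sketch, not a drop-in replacement for multi-character needles:</p>

```python
from collections import Counter

first_list = ['a', 'b', 'c']
second_list = ['a', 'a b c', 'abc zyx', 'ab cc ac']
targets = set(first_list)

def score(s):
    # one pass over s instead of one s.count() scan per needle
    counts = Counter(s)
    return sum(counts[t] for t in targets)

print(sorted(second_list, key=score, reverse=True))
# ['ab cc ac', 'a b c', 'abc zyx', 'a']
```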
| 3 | 2016-08-29T07:38:24Z | [
"python"
] |
deduplicate records in multiple CSV files with varying columns | 39,200,444 | <p>I have multiple CSV files in a directory. Some contain more columns (which would be OK to drop).</p>
<p>Is there an elegant way to deduplicate records between these CSV files and reduce columns to a common set of columns?</p>
<p>Currently, I will use python / pandas to accomplish this. I will load all the files into a single data frame, note in an additional column where the records originated from (filename), drop the additional columns and finally have pandas deduplicate via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html</a>
In the last step, I write the deduplicated files back to disk based on the filename-identifier column.</p>
<pre><code># ASSUMPTION: files are in order, first file defines minimum common columns
path = '.'
files_in_dir = [f for f in os.listdir(path)if f.endswith('csv')]
isFirst = True
for filenames in fs.find('*.csv', path):
df = pd.read_csv(filenames, error_bad_lines=False)
df['origin'] = fs.add_suffix(filenames, '_deduplicated')
if (isFirst):
isFirst = False
bigDf = df
else:
bigDf = pd.concat(bigDf, df, axis=0, join='inner')
cols_for_dup = [col for col in bigDf.columns if col not in ['origin']]
bigDf.duplicated(subset=cols_for_dup).sum()
bigDf.duplicated().sum()
bigDf_withoutNA = bigDf.drop_duplicates(keep='first', subset= cols_for_dup)
grouped = bigDf_withoutNA.groupby('origin')
for name, group in grouped:
#filename = 'path' + name
group.to_csv(path_or_buf= name, sep=';', decimal=',')
</code></pre>
<p>Is there a simpler approach to this?</p>
| 0 | 2016-08-29T07:12:50Z | 39,201,499 | <p>I do not know how to make it much simpler. I have a similar script for some data of mine. It makes two passes: the first determines the min/max column counts across all documents, and the second rewrites the CSV files into a new folder, so the original data is preserved.</p>
<p>I am just using the csv lib from python.
<a href="https://docs.python.org/2/library/csv.html" rel="nofollow">https://docs.python.org/2/library/csv.html</a></p>
<p>There are no checks in this script, as it's just a quick and dirty script.</p>
<p>The deduplication is not done. It just cuts all data to the same length, but you can replace the last line with your deduplication code.</p>
<pre><code>import os
import csv
mincols = 0xffffffff
maxcols = 0
srcdir = '/tmp/csv/'
dstdir = '/tmp/csv2/'
for dirName, subdirList, fileList in os.walk(srcdir):
for fname in fileList:
if fname[-4:].lower() == '.csv':
with open(os.path.join(dirName, fname)) as csvfile:
reader = csv.reader(csvfile, delimiter=',', quotechar='"')
for row in reader:
if mincols > len(row):
mincols = len(row)
if maxcols < len(row):
maxcols = len(row)
print(mincols, maxcols)
for dirName, subdirList, fileList in os.walk(srcdir):
for fname in fileList:
if fname[-4:].lower() == '.csv':
fullpath = os.path.join(dirName, fname)
newfile = os.path.join(dstdir, fullpath[len(srcdir):])
if not os.path.exists(os.path.dirname(newfile)):
os.makedirs(os.path.dirname(newfile))
with open(fullpath) as csvfile:
reader = csv.reader(csvfile, delimiter=',', quotechar='"')
with open(newfile, 'w') as dstfile:
writer = csv.writer(dstfile, delimiter=',', quotechar='"',
quoting=csv.QUOTE_MINIMAL)
for row in reader:
#You can deduplicate here
writer.writerow(row[:mincols])
</code></pre>
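<p>For the deduplication itself, a minimal sketch (the function name is mine) that remembers rows it has already emitted, cut to the common column count:</p>

```python
def dedupe_rows(rows, mincols):
    """Yield each row, cut to mincols columns, only the first time it is seen."""
    seen = set()
    for row in rows:
        key = tuple(row[:mincols])  # tuples are hashable, lists are not
        if key not in seen:
            seen.add(key)
            yield row[:mincols]

rows = [['a', '1', 'x'], ['a', '1', 'y'], ['b', '2']]
print(list(dedupe_rows(rows, 2)))  # [['a', '1'], ['b', '2']]
```

<p>Inside the writer loop above, the bare <code>writer.writerow(row[:mincols])</code> would then become a call guarded by the same <code>seen</code> check.</p>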
| 1 | 2016-08-29T08:14:59Z | [
"python",
"csv",
"pandas",
"duplicates"
] |
numpy 3 dimension array middle indexing bug | 39,200,644 | <p>I seems found a bug when I'm using python 2.7 with numpy module:</p>
<pre><code>import numpy as np
x=np.arange(3*4*5).reshape(3,4,5)
x
</code></pre>
<p>Here I got the full 'x' array as follows:</p>
<pre><code>array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]],
[[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39]],
[[40, 41, 42, 43, 44],
[45, 46, 47, 48, 49],
[50, 51, 52, 53, 54],
[55, 56, 57, 58, 59]]])
</code></pre>
<p>Then I try to indexing single row values in sheet [1]:</p>
<pre><code>x[1][0][:]
</code></pre>
<p>Result:</p>
<pre><code>array([20, 21, 22, 23, 24])
</code></pre>
<p>But something wrong while I was try to indexing single column in sheet [1]:</p>
<pre><code>x[1][:][0]
</code></pre>
<p>Result still be the same as previous:</p>
<pre><code>array([20, 21, 22, 23, 24])
</code></pre>
<p>Should it be array([20, 25, 30, 35])??</p>
<p>It seems something wrong while indexing the middle index with range?</p>
| 2 | 2016-08-29T07:25:30Z | 39,200,766 | <p>No, it's not a bug.</p>
<p>When you use <code>[:]</code> you are using <a class='doc-link' href="http://stackoverflow.com/documentation/python/289/indexing-and-slicing#t=201608290734003933874">slicing</a> notation, which takes the whole list:</p>
<pre><code>l = ["a", "b", "c"]
l[:]
#output:
["a", "b", "c"]
</code></pre>
<p>and in your case:</p>
<pre><code>x[1][:]
#output:
array([[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39]])
</code></pre>
<p>What you really want is NumPy's multidimensional <code>indexing</code> notation:</p>
<pre><code>x[1, : ,0]
#output:
array([20, 25, 30, 35])
</code></pre>
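<p>A short runnable contrast of the two notations, using the same array as in the question:</p>

```python
import numpy as np

x = np.arange(3 * 4 * 5).reshape(3, 4, 5)

# chained indexing: x[1][:] is just x[1] again, so the final [0] picks a row
print(x[1][:][0])   # [20 21 22 23 24]

# tuple indexing selects along every axis at once: sheet 1, all rows, column 0
print(x[1, :, 0])   # [20 25 30 35]
```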
| 2 | 2016-08-29T07:32:09Z | [
"python",
"arrays",
"numpy"
] |
numpy 3 dimension array middle indexing bug | 39,200,644 | <p>I seems found a bug when I'm using python 2.7 with numpy module:</p>
<pre><code>import numpy as np
x=np.arange(3*4*5).reshape(3,4,5)
x
</code></pre>
<p>Here I got the full 'x' array as follows:</p>
<pre><code>array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]],
[[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39]],
[[40, 41, 42, 43, 44],
[45, 46, 47, 48, 49],
[50, 51, 52, 53, 54],
[55, 56, 57, 58, 59]]])
</code></pre>
<p>Then I try to indexing single row values in sheet [1]:</p>
<pre><code>x[1][0][:]
</code></pre>
<p>Result:</p>
<pre><code>array([20, 21, 22, 23, 24])
</code></pre>
<p>But something wrong while I was try to indexing single column in sheet [1]:</p>
<pre><code>x[1][:][0]
</code></pre>
<p>Result still be the same as previous:</p>
<pre><code>array([20, 21, 22, 23, 24])
</code></pre>
<p>Should it be array([20, 25, 30, 35])??</p>
<p>It seems something wrong while indexing the middle index with range?</p>
| 2 | 2016-08-29T07:25:30Z | 39,200,843 | <p>This is not a bug. <code>x[1][:][0]</code> is not a multiple index ("give me the elements where first dimension is 1, second is any, third is 0"). Instead, you are indexing three times, three objects.</p>
<pre><code>x1 = x[1] # x1 is the first 4x5 subarray
x2 = x1[:] # x2 is same as x1
x3 = x2[0] # x3 is the first row of x2
</code></pre>
<p>To index along multiple axes at once, do it in a single subscript:</p>
<pre><code>x[1, :, 0]
</code></pre>
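<p>One way to see that the middle <code>[:]</code> changes nothing (a sketch; <code>np.shares_memory</code> does the checking):</p>

```python
import numpy as np

x = np.arange(3 * 4 * 5).reshape(3, 4, 5)
x1 = x[1]
x2 = x1[:]  # a full slice: a view of the very same data, not a new selection

print(np.shares_memory(x1, x2))  # True
print(x[1, :, 0])                # [20 25 30 35] -- the intended column
```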
| 0 | 2016-08-29T07:36:50Z | [
"python",
"arrays",
"numpy"
] |
I can't push to Heroku | 39,200,695 | <p>I have an app on Heroku named <code>floating-river-39482</code>, and I'm trying to deploy it by running the command <code>git push heroku master</code>.
But I get an error from the terminal like this:</p>
<pre><code>remote: ! No such app as sleepy-inlet-36834.
fatal: repository 'https://git.heroku.com/sleepy-inlet-36834.git/' not found
</code></pre>
<p>In this error message the name of the app is <code>sleepy-inlet-36834</code> .But I'm trying to push the app <code>floating-river-39482</code> .
I don't have an app named <code>sleepy-inlet-36834</code>.
How can I correct this error?</p>
| 0 | 2016-08-29T07:28:29Z | 39,208,169 | <p>It looks like you've somehow got your Git repository referencing an old (or deleted) Heroku application.</p>
<p>What you can do most easily is open up your <code>.git/config</code> file in your project, and switch out the <code>https://git.heroku.com/sleepy-inlet-36834.git/</code> URL with the correct one for your Heroku application: <code>https://git.heroku.com/floating-river-39482.git</code>.</p>
<p>Then give it another go =)</p>
<p>To explain further: Heroku works by using Git to setup a 'remote'. Remotes are just places you can push (or pull) code from. Git creates remotes by listing them in your project's <code>.git/config</code> file.</p>
| 0 | 2016-08-29T14:00:23Z | [
"python",
"git",
"heroku"
] |
Django - POST get options from select | 39,200,802 | <p>I have a html form that looks like this.</p>
<pre><code>{% url 'myapp:myapp_submit' as submit %}
<form name='main' method="POST" action={{submit}}>
{% csrf_token %}
<select class='form-control' size=10 id="test" name='test' multiple>
<option>Test</option>
</select>
<input type="submit"/>
</form>
</code></pre>
<p><strong>and url.py</strong></p>
<pre><code>from . import views
app_name = 'myapp'
urlpatterns = [
url(r'^$', views.myapp, name='myapp'),
url(r'results/$', views.myapp_submit, name='myapp_submit')
]
</code></pre>
<p><strong>and views.py</strong></p>
<pre><code>def myapp_submit(request):
print request.POST
</code></pre>
<p>The only thing I get back is </p>
<pre><code><QueryDict: {u'csrfmiddlewaretoken'...]}>
</code></pre>
<p>How do I get back the options held in the select tag? I would use the model/view form here but I'm doing some very crazy things with JS to constantly update the options available. </p>
<p><strong>UPDATE</strong></p>
<p>I have used:</p>
<pre><code>request.POST.getlist('test')
</code></pre>
<p>But it will only return ['Test'] If I highlight it with my mouse. I simply want all the options under the select tag. ex.</p>
<pre><code><select class='form-control' size=10 id="test" name='test' multiple>
<option>Test1</option>
<option>Test2</option>
<option>Test3</option>
<option>Test4</option>
</select>
</code></pre>
<p>and </p>
<pre><code>###Not sure if it's still getlist method
>>request.POST.getlist('test')
['Test','Test2','Test3','Test4']
</code></pre>
| 0 | 2016-08-29T07:34:25Z | 39,200,939 | <p>Since you say you're doing unusual things with <strong>JS</strong>, first inspect the <code>POST</code> request in your browser's developer tools to see exactly which parameters are being sent. Note that browsers only submit the options that are actually selected, so unselected options will never appear in <code>request.POST</code>.</p>
| 0 | 2016-08-29T07:42:01Z | [
"python",
"django"
] |
Django - POST get options from select | 39,200,802 | <p>I have a html form that looks like this.</p>
<pre><code>{% url 'myapp:myapp_submit' as submit %}
<form name='main' method="POST" action={{submit}}>
{% csrf_token %}
<select class='form-control' size=10 id="test" name='test' multiple>
<option>Test</option>
</select>
<input type="submit"/>
</form>
</code></pre>
<p><strong>and url.py</strong></p>
<pre><code>from . import views
app_name = 'myapp'
urlpatterns = [
url(r'^$', views.myapp, name='myapp'),
url(r'results/$', views.myapp_submit, name='myapp_submit')
]
</code></pre>
<p><strong>and views.py</strong></p>
<pre><code>def myapp_submit(request):
print request.POST
</code></pre>
<p>The only thing I get back is </p>
<pre><code><QueryDict: {u'csrfmiddlewaretoken'...]}>
</code></pre>
<p>How do I get back the options held in the select tag? I would use the model/view form here but I'm doing some very crazy things with JS to constantly update the options available. </p>
<p><strong>UPDATE</strong></p>
<p>I have used:</p>
<pre><code>request.POST.getlist('test')
</code></pre>
<p>But it will only return ['Test'] If I highlight it with my mouse. I simply want all the options under the select tag. ex.</p>
<pre><code><select class='form-control' size=10 id="test" name='test' multiple>
<option>Test1</option>
<option>Test2</option>
<option>Test3</option>
<option>Test4</option>
</select>
</code></pre>
<p>and </p>
<pre><code>###Not sure if it's still getlist method
>>request.POST.getlist('test')
['Test','Test2','Test3','Test4']
</code></pre>
| 0 | 2016-08-29T07:34:25Z | 39,201,120 | <p>Try to change your <code>views.py</code>:</p>
<pre><code>def myapp_submit(request):
request.POST.getlist('test[]')
</code></pre>
| 1 | 2016-08-29T07:52:49Z | [
"python",
"django"
] |
How to print html output each on its line in python? | 39,200,873 | <p>so i'm making alittle project as i am a beginner and i'm doing some webscraping. I wanted to print the lyrics of a song each on it's line using beautifulsoup in python but instead it's printing like this:</p>
<p>I looked out this morning and the sun was goneTurned on some music to start my dayI lost myself in a familiar songI closed my eyes and I slipped awayIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk awayI see my Marianne walkin' awaySo many people have come and goneTheir faces fade as the years go byYet I still recall as I wander onAs clear as the sun in the summer skyIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk awayI see my Marianne walkin' awayWhen I'm tired and thinking coldI hide in my music, forget the dayAnd dream of a girl I used to knowI closed my eyes and she slipped awayShe slipped awayIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk away</p>
<p>This is my code:</p>
<pre><code>import urllib
from bs4 import BeautifulSoup
html = urllib.urlopen("http://www.metrolyrics.com/more-than-a-feeling-lyrics-boston.html")
bsObj = BeautifulSoup(html, "lxml")
namelist = bsObj.find_all("div", {"id": "lyrics-body-text"})
print("".join([p.get_text(strip=True) for p in namelist]))
</code></pre>
| 0 | 2016-08-29T07:38:16Z | 39,200,949 | <p>You need to remove the <code>strip=True</code> argument to <code>get_text</code>. It strips the whitespace (including the newlines) around every text segment, which produces the run-together output you see.</p>
<p>By removing it:</p>
<pre><code>print("".join([p.get_text() for p in namelist]))
</code></pre>
<p>It prints fine.</p>
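<p>Alternatively, if the line breaks come from the markup rather than the text itself, <code>get_text</code> also accepts a <code>separator</code> argument — a small sketch on an inline snippet (the HTML string here is made up):</p>

```python
from bs4 import BeautifulSoup

html = "<div id='lyrics-body-text'>line one<br>line two<br>line three</div>"
soup = BeautifulSoup(html, "html.parser")
div = soup.find("div", {"id": "lyrics-body-text"})

print(div.get_text(strip=True))        # 'line oneline twoline three' -- run together
print(div.get_text(separator="\n"))    # one lyric line per printed line
```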
| 1 | 2016-08-29T07:42:53Z | [
"python",
"html",
"python-3.x",
"web-scraping",
"beautifulsoup"
] |
How to print html output each on its line in python? | 39,200,873 | <p>so i'm making alittle project as i am a beginner and i'm doing some webscraping. I wanted to print the lyrics of a song each on it's line using beautifulsoup in python but instead it's printing like this:</p>
<p>I looked out this morning and the sun was goneTurned on some music to start my dayI lost myself in a familiar songI closed my eyes and I slipped awayIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk awayI see my Marianne walkin' awaySo many people have come and goneTheir faces fade as the years go byYet I still recall as I wander onAs clear as the sun in the summer skyIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk awayI see my Marianne walkin' awayWhen I'm tired and thinking coldI hide in my music, forget the dayAnd dream of a girl I used to knowI closed my eyes and she slipped awayShe slipped awayIt's more than a feeling (more than a feeling)When I hear that old song they used to play (more than a feeling)And I begin dreaming (more than a feeling)Till I see Marianne walk away</p>
<p>This is my code:</p>
<pre><code>import urllib
from bs4 import BeautifulSoup
html = urllib.urlopen("http://www.metrolyrics.com/more-than-a-feeling-lyrics-boston.html")
bsObj = BeautifulSoup(html, "lxml")
namelist = bsObj.find_all("div", {"id": "lyrics-body-text"})
print("".join([p.get_text(strip=True) for p in namelist]))
</code></pre>
| 0 | 2016-08-29T07:38:16Z | 39,201,027 | <p>Try writing it as a simple for loop:</p>
<pre><code>for p in namelist:
print(p.get_text(strip=True))
</code></pre>
| 0 | 2016-08-29T07:48:24Z | [
"python",
"html",
"python-3.x",
"web-scraping",
"beautifulsoup"
] |
Creating a slot machine for school | 39,200,882 | <p>I'm programming a slot machine for school and cannot get the machine to re-run once it is finished. I am relatively new and would like some honest feedback. How can I get my program to re-run? This is the code I'm trying to do this with. I've just modified my code to look like this.</p>
<pre><code>import random
import sys
print "Hi there user, welcome to the amazing poker machine simulator."
print "Your opening account has in it $1000."
print "To win a jackpot, three leprachauns must be in a row."
print "Enter yes or no when prompted to finish or continue the program."
balance = 1000
balance == int(balance)
winnings = 0
winnings == int(winnings)
Symbols = ["Leprachaun", "Goldbar", "Pyramid", "Blackcat"]
# Subroutines: Checking the Bet input and amount
def betcheck(betamount):
if betamount.isdigit() == True:
betamount == int(betamount)
rightbet = True
else:
rightbet = False
print "Please enter a whole number, no decimals and a bet on or below the balance."
return rightbet
# Limiting the bet
def betlimit(betamount):
if betamount > balance == False:
goodlimit = False
print "That bet is too high!"
else:
goodlimit = True
return goodlimit
# Checking the 'Ask' input to play the machine.
def askinputcheck(answerinput):
if answerinput == "Yes" or answerinput == "yes" or answerinput == "y" or answerinput == "No" or answerinput == "no" or answerinput == "n":
rightanswerinput = True
else:
rightanswerinput = False
print "This is an incorrect input, please type an appropriate answer in."
return rightanswerinput
# Printing and sorting symbols.
def spinning(reels):
global balance
if reelone == "Leprachaun" and reeltwo == "Leprachaun" and reelthree == "Leprachaun":
winnings = int(betamount) + int(balance) * 1000
print "You won the jackpot! Congragulations! This is how much your account contains $", winnings
elif reelone == "Goldbar" and reeltwo == "Goldbar" and reelthree == "Goldbar":
winnings = int(betamount) + int(balance) * 500
print "You won a considerable return! Awesome! Your balance and wins are $", winnings
elif reelone == "Pyramid" and reeltwo == "Pyramid" and reelthree == "Pyramid":
winnings = int(betamount) + int(balance) * 250
print "You won a good return! Its a conspiracy! This is all of your money total $", winnings
elif reelone == "Blackcat" and reeltwo == "Blackcat" and reelthree == "Blackcat":
winnings = int(balance) - int(betamount)
print "Unfortunately you didn't win anything, bad luck! You rewards are $", winnings
else:
winnings = int(balance) - int(betamount)
print "Bad luck! Maybe next time you'll win! Your remaining cash is $", winnings
print winnings
return reels
# If you have no money
def rebalance(balance):
while balance == 0 == True and startagain == True:
unbalance = True
balance = 1000
print "You ran out of money, here is $1000"
else:
unbalance = False
print "You still have money."
return unbalance
# Leads to Bet input check.
Validbet = False
while Validbet == False:
betamount = raw_input("Please enter amount you wish to bet: ")
Validbet = betcheck(betamount)
betamount == int(betamount)
# Leads to betlimit
appropriatelimit = betlimit(betamount)
# RandomSymbolGen + 3 reels
reelone = random.sample(["Leprachaun", "Goldbar", "Goldbar", "Pyramid", "Pyramid", "Pyramid", "Blackcat", "Blackcat", "Blackcat", "Blackcat"],1)
reeltwo = random.sample(["Leprachaun", "Goldbar", "Goldbar", "Pyramid", "Pyramid", "Pyramid", "Blackcat", "Blackcat", "Blackcat", "Blackcat"],1)
reelthree = random.sample(["Leprachaun", "Goldbar", "Goldbar", "Pyramid", "Pyramid", "Pyramid", "Blackcat", "Blackcat", "Blackcat", "Blackcat"],1)
reels = [reelone, reeltwo, reelthree]
slotspin = spinning(reels)
print reels
# Leads to Ask input check. (At the bottom due to program order)
validask = False
while validask == False:
answerinput = raw_input("Would you like to play again?: ")
validask = askinputcheck(answerinput)
# Leads to restart
startagain = False
while startagain == False:
startagain = answerinput
while True:
if answerinput == "Yes" or answerinput == "yes" or answerinput == "y":
startagain = True
balance = int(winnings) + int(balance)
print "You have $", balance
pass
elif answerinput == "No" or answerinput == "no" or answerinput == "n":
startagain = False
balance = winnings
print "You ended the game with", balance
break
else:
print "This is an incorrect input, please answer yes or no."
# Leads to rebalance
if answerinput == "Yes" or answerinput == "yes" or answerinput == "y" and balance == 0:
balance = rebalance(balance)
</code></pre>
| 1 | 2016-08-29T07:38:45Z | 39,232,161 | <p>Having some spare time, I have modified your code.<br>
The code you supplied is riddled with small coding and logical errors. You would do best to run the <code>diff</code> command between your original code and the code below to see where the many differences are; even then, there is no guarantee that it is bug-free now.<br>
I hope you're not planning on fleecing your fellow pupils, the <code>rebalance</code> routine is the twisted work of a future loan-shark. ;)</p>
<pre><code>import random
print "Hi there user, welcome to the amazing poker machine simulator."
print "Your opening account has in it $1000."
print "To win a jackpot, three leprachauns must be in a row."
print "Enter yes or no when prompted to finish or continue the program."
balance = 1000
Symbols = ["Leprachaun", "Goldbar", "Pyramid", "Blackcat"]
# Subroutines: Checking the Bet input and amount
def betcheck(betamount):
if betamount.isdigit() == True:
betamount == int(betamount)
rightbet = True
else:
rightbet = False
print "Please enter a whole number, no decimals and a bet on or below the balance."
return rightbet
# Limiting the bet
def betlimit(betamount):
if betamount > balance:
goodlimit = balance
print "That bet is too high! - bet adjusted to ", goodlimit
else:
goodlimit = betamount
return goodlimit
# Checking the 'Ask' input to play the machine.
def askinputcheck(answerinput):
if answerinput.lower().startswith('y') or answerinput.lower().startswith("n"):
rightanswerinput = True
else:
rightanswerinput = False
print "This is an incorrect input, please type an appropriate answer in."
return rightanswerinput
# Printing and sorting symbols.
def spinning(reels, betamount):
reelone, reeltwo, reelthree = reels[0], reels[1], reels[2]
global balance
winnings = 0
if reelone == "Leprachaun" and reeltwo == "Leprachaun" and reelthree == "Leprachaun":
winnings = int(betamount) * 10 + int(balance)
print "You won the jackpot! Congragulations! This is how much your account contains $", winnings
elif reelone == "Goldbar" and reeltwo == "Goldbar" and reelthree == "Goldbar":
winnings = int(betamount) *5 + int(balance)
print "You won a considerable return! Awesome! Your balance and wins are $", winnings
elif reelone == "Pyramid" and reeltwo == "Pyramid" and reelthree == "Pyramid":
winnings = int(betamount) *2 + int(balance)
print "You won a good return! Its a conspiracy! This is all of your money total $", winnings
elif reelone == "Blackcat" and reeltwo == "Blackcat" and reelthree == "Blackcat":
winnings = int(balance) - int(betamount)
print "Unfortunately you didn't win anything, bad luck! You rewards are $", winnings
else:
winnings = int(balance) - int(betamount)
print "Bad luck! Maybe next time you'll win! Your remaining cash is $", winnings
balance = winnings
return reels
# If you have no money
def rebalance(startagain):
global balance
if balance < 1 and startagain == True:
unbalance = True
balance = 1000
print "You ran out of money, here is $1000"
else:
unbalance = False
print "You still have money."
return unbalance
# Leads to Bet input check.
def my_mainloop():
global balance
while True:
Validbet = False
while Validbet == False:
betamount = raw_input("Please enter amount you wish to bet: ")
Validbet = betcheck(betamount)
betamount = int(betamount)
# Leads to betlimit
betamount = betlimit(betamount)
# RandomSymbolGen + 3 reels
if betamount > 0:
reelone = random.sample(["Leprachaun", "Goldbar", "Goldbar", "Pyramid", "Pyramid", "Pyramid", "Blackcat", "Blackcat", "Blackcat", "Blackcat"],1)
reeltwo = random.sample(["Leprachaun", "Goldbar", "Goldbar", "Pyramid", "Pyramid", "Pyramid", "Blackcat", "Blackcat", "Blackcat", "Blackcat"],1)
reelthree = random.sample(["Leprachaun", "Goldbar", "Goldbar", "Pyramid", "Pyramid", "Pyramid", "Blackcat", "Blackcat", "Blackcat", "Blackcat"],1)
reels = [reelone, reeltwo, reelthree]
print "\n",reels,"\n"
slotspin = spinning(reels, betamount)
# Leads to Ask input check. (At the bottom due to program order)
validask = False
while validask == False:
answerinput = raw_input("\nWould you like to play again?: ")
validask = askinputcheck(answerinput)
if answerinput.lower().startswith("y"):
startagain = True
print "You have $", balance
elif answerinput.lower().startswith("n"):
startagain = False
print "You ended the game with", balance
break
else:
print "This is an incorrect input, please answer yes or no."
# Leads to rebalance
if answerinput.lower().startswith("y") and balance < 1:
rebalance(startagain)
if __name__ == "__main__":
my_mainloop()
</code></pre>
| 0 | 2016-08-30T15:56:53Z | [
"python"
] |
Sort nested dictionaries and replace dictionary values with ordered numbers | 39,200,897 | <p>Not sure if my problem sounds a bit tricky... my requirement is like this: I have three columns of data in a txt file, as below:</p>
<pre><code>col1,col2,col3/n
11,0.95,21/n
11,0.75,22/n
11,0.85,23/n
11,0.65,24/n
12,0.63,22/n
12,0.75,24/n
12,0.45,25/n
...
</code></pre>
<p>col1 can be viewed as dict keys which repeat for <= 5 times, col3 can also viewed as nested dict keys with values in col2, i.e. each key in col1 has <= 5 pairs of (col2: col3). </p>
<p>I would like to sort the nested dictionary by col2 and replace the col2 values with highest ranking, i.e.: I don't care about the values in col2, i only care about the ranking of col3 for each col1 value:</p>
<pre><code>col1,col2,col3
11,1,21/n
11,2,23/n
11,3,22/n
11,4,24/n
12,1,24/n
12,2,22/n
12,3,25/n
...
</code></pre>
<p>I tried turning the data into nested dictionaries like: </p>
<pre><code>{col1:{col3:col2}}
{11:{21:0.95,22:0.75,23:0.85,24:0.65},12:{22:0.63,24:0.75,25:0.45}}
</code></pre>
<p>I have searched around and found some solutions like sort nested dict etc., but I cannot replace the values with rankings either...can someone please help?</p>
| 0 | 2016-08-29T07:39:35Z | 39,201,189 | <p>Your input is not defined here, so I assumed it is a list of lists, like this:</p>
<pre><code>[['col1', 'col2', 'col3'],
['11', '0.95', '21'],
['11', '0.75', '22'],
['11', '0.85', '23'],
['11', '0.65', '24'],
['12', '0.63', '22'],
['12', '0.75', '24'],
['12', '0.45', '25']]
</code></pre>
<p>Then you can do it like this:</p>
<pre><code>result = {}
for i in input_list:
if i[0] in result:
result[i[0]].update({i[2]:i[1]})
else:
result[i[0]] = {i[2]:i[1]}
</code></pre>
<p><strong>Result</strong></p>
<pre><code>{'11': {'21': '0.95', '22': '0.75', '23': '0.85', '24': '0.65'},
'12': {'22': '0.63', '24': '0.75', '25': '0.45'},
'col1': {'col3': 'col2'}}
</code></pre>
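<p>A slightly tidier variant (a sketch — it assumes the first row is the header and skips it, so the stray <code>'col1'</code> entry disappears) uses <code>collections.defaultdict</code> to drop the membership test:</p>

```python
from collections import defaultdict

input_list = [['col1', 'col2', 'col3'],
              ['11', '0.95', '21'], ['11', '0.75', '22'],
              ['11', '0.85', '23'], ['11', '0.65', '24'],
              ['12', '0.63', '22'], ['12', '0.75', '24'],
              ['12', '0.45', '25']]

result = defaultdict(dict)
for c1, c2, c3 in input_list[1:]:   # skip the header row
    result[c1][c3] = c2             # builds {col1: {col3: col2}}

print(dict(result))
# {'11': {'21': '0.95', '22': '0.75', '23': '0.85', '24': '0.65'},
#  '12': {'22': '0.63', '24': '0.75', '25': '0.45'}}
```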
| 0 | 2016-08-29T07:56:42Z | [
"python",
"sorting",
"dictionary"
] |
Sort nested dictionaries and replace dictionary values with ordered numbers | 39,200,897 | <p>Not sure if my problem sounds a bit tricky... my requirement is like this: I have three columns of data in a txt file, as below:</p>
<pre><code>col1,col2,col3/n
11,0.95,21/n
11,0.75,22/n
11,0.85,23/n
11,0.65,24/n
12,0.63,22/n
12,0.75,24/n
12,0.45,25/n
...
</code></pre>
<p>col1 can be viewed as dict keys which repeat for <= 5 times, col3 can also viewed as nested dict keys with values in col2, i.e. each key in col1 has <= 5 pairs of (col2: col3). </p>
<p>I would like to sort the nested dictionary by col2 and replace the col2 values with highest ranking, i.e.: I don't care about the values in col2, i only care about the ranking of col3 for each col1 value:</p>
<pre><code>col1,col2,col3
11,1,21/n
11,2,23/n
11,3,22/n
11,4,24/n
12,1,24/n
12,2,22/n
12,3,25/n
...
</code></pre>
<p>I tried turning the data into nested dictionaries like: </p>
<pre><code>{col1:{col3:col2}}
{11:{21:0.95,22:0.75,23:0.85,24:0.65},12:{22:0.63,24:0.75,25:0.45}}
</code></pre>
<p>I have searched around and found some solutions like sort nested dict etc., but I cannot replace the values with rankings either...can someone please help?</p>
| 0 | 2016-08-29T07:39:35Z | 39,201,624 | <p>Well, here is a way to do it in basic Python:</p>
<pre><code>In [90]: col1
Out[90]: [11, 11, 11, 11, 12, 12, 12]
In [91]: col2
Out[91]: [0.95, 0.75, 0.85, 0.65, 0.63, 0.75, 0.45]
In [92]: col3
Out[92]: [21, 22, 23, 24, 22, 24, 25]
</code></pre>
<p>Let's create <code>data</code> consisting of items from each column:</p>
<pre><code>In [163]: data = [*zip(col1, col2, col3)]

In [164]: data
Out[164]:
[(11, 0.95, 21),
(11, 0.75, 22),
(11, 0.85, 23),
(11, 0.65, 24),
(12, 0.63, 22),
(12, 0.75, 24),
(12, 0.45, 25)]
</code></pre>
<p>Let's use <code>itertools</code> module to group them up:</p>
<pre><code>In [174]: import itertools
In [175]: groups = itertools.groupby(data, key=lambda x: x[0])
</code></pre>
<p>Now, <code>groups</code> is a generator. If we want to see what it looks like<br>
we will need to iterate it:</p>
<pre><code>for a, b, in groups:
print(a, list(b))
</code></pre>
<p>and we get:</p>
<pre><code>11 [(11, 0.95, 21), (11, 0.75, 22), (11, 0.85, 23), (11, 0.65, 24)]
12 [(12, 0.63, 22), (12, 0.75, 24), (12, 0.45, 25)]
</code></pre>
<p>But we exhausted the iterator. So let's create it again, and now <br>
that we know what it contains, we can perform the desired sorting:</p>
<pre><code>In [177]: groups = itertools.groupby(data, key=lambda x: x[0])
In [178]: groups2 = [sorted(list(b), reverse=True) for a, b in groups]
In [179]: groups2
Out[179]:
[[(11, 0.95, 21), (11, 0.85, 23), (11, 0.75, 22), (11, 0.65, 24)],
[(12, 0.75, 24), (12, 0.63, 22), (12, 0.45, 25)]]
</code></pre>
<p>OK, one more thing, and I do that now in the editor:</p>
<pre><code>for i in range(len(groups2)):
    # replace col2 with the 1-based rank within each group
    groups2[i] = [(x, rank, z) for rank, (x, y, z) in enumerate(groups2[i], 1)]

for g in groups2:
    for item in g:
        print(item)
</code></pre>
<p>And we get:</p>
<pre><code>(11, 1, 21)
(11, 2, 23)
(11, 3, 22)
(11, 4, 24)
(12, 1, 24)
(12, 2, 22)
(12, 3, 25)
</code></pre>
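<p>For reference, the grouping, sorting, and ranking steps above can be condensed into a single pass over the rows. A sketch using the same sample columns (<code>result</code> and <code>rank</code> are just my names):</p>

```python
import itertools

col1 = [11, 11, 11, 11, 12, 12, 12]
col2 = [0.95, 0.75, 0.85, 0.65, 0.63, 0.75, 0.45]
col3 = [21, 22, 23, 24, 22, 24, 25]

result = []
# groupby assumes the rows are already ordered by col1, as they are here
for key, grp in itertools.groupby(zip(col1, col2, col3), key=lambda t: t[0]):
    # sort each group by col2 descending, then replace col2 with the rank
    ranked = sorted(grp, key=lambda t: t[1], reverse=True)
    result.extend((c1, rank, c3) for rank, (c1, c2, c3) in enumerate(ranked, 1))

print(result)
# [(11, 1, 21), (11, 2, 23), (11, 3, 22), (11, 4, 24), (12, 1, 24), (12, 2, 22), (12, 3, 25)]
```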
| 0 | 2016-08-29T08:22:52Z | [
"python",
"sorting",
"dictionary"
] |
reading a GML file with networkx (python) without labels for the nodes | 39,200,898 | <p>I have a long GML file (Graph Modelling Language) that I am trying to read with NetworkX in Python.
In the GML file, nodes don't have labels, like this:</p>
<pre><code>graph [
  node [
    id 1
  ]
  node [
    id 2
  ]
  edge [
    source 2
    target 1
  ]
]
</code></pre>
<p>I get an error when reading the file with
<code>G = nx.read_gml('simple_graph.gml')</code>:</p>
<pre><code>---------------------------------------------------------------------------
NetworkXError Traceback (most recent call last)
<ipython-input-39-b1b319a08668> in <module>()
----> 1 G = nx.read_gml('simple_graph.gml')
<decorator-gen-319> in read_gml(path, label, destringizer)
/usr/lib/python2.7/dist-packages/networkx/utils/decorators.pyc in _open_file(func, *args, **kwargs)
218 # Finally, we call the original function, making sure to close the fobj.
219 try:
--> 220 result = func(*new_args, **kwargs)
221 finally:
222 if close_fobj:
/usr/lib/python2.7/dist-packages/networkx/readwrite/gml.pyc in read_gml(path, label, destringizer)
208 yield line
209
--> 210 G = parse_gml_lines(filter_lines(path), label, destringizer)
211 return G
212
/usr/lib/python2.7/dist-packages/networkx/readwrite/gml.pyc in parse_gml_lines(lines, label, destringizer)
407 raise NetworkXError('node id %r is duplicated' % (id,))
408 if label != 'id':
--> 409 label = pop_attr(node, 'node', 'label', i)
410 if label in labels:
411 raise NetworkXError('node label %r is duplicated' % (label,))
/usr/lib/python2.7/dist-packages/networkx/readwrite/gml.pyc in pop_attr(dct, type, attr, i)
397 except KeyError:
398 raise NetworkXError(
--> 399 "%s #%d has no '%s' attribute" % (type, i, attr))
400
401 nodes = graph.get('node', [])
NetworkXError: node #0 has no 'label' attribute
</code></pre>
<p>I see that it complains because the nodes don't have labels. From the documentation of GML I thought that labels were not obligatory (maybe I'm wrong?). Would there be a way to read such a file without labels? Or should I change my GML file?
Thank you for your help!</p>
| 0 | 2016-08-29T07:39:41Z | 39,201,481 | <p>If you want to use <code>id</code> attribute in GML for labeling nodes, you can designate the label attribute for the <code>nx.read_gml</code> argument as follows.</p>
<pre><code>G = nx.read_gml('simple_graph.gml', label='id')
</code></pre>
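<p>A self-contained way to check this (a sketch — <code>parse_gml</code> takes the same <code>label</code> argument as <code>read_gml</code>, so no file on disk is needed):</p>

```python
import networkx as nx

gml = """graph [
  node [
    id 1
  ]
  node [
    id 2
  ]
  edge [
    source 2
    target 1
  ]
]"""

# keep the integer ids as node labels instead of requiring a 'label' attribute
G = nx.parse_gml(gml, label='id')
print(sorted(G.nodes()))    # [1, 2]
print(G.number_of_edges())  # 1
```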
| 1 | 2016-08-29T08:13:32Z | [
"python",
"networkx",
"graph-modelling-language"
] |
2d fft numpy/python confusion | 39,200,937 | <p>I have data in the form x-y-z and want to create a power spectrum along x-y. Here is a basic example I am posting to check where I might be going wrong with my actual data:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
fq = 10; N = 20
x = np.linspace(0,8,N); y = x
space = x[1] -x[0]
xx, yy = np.meshgrid(x,y)
fnc = np.sin(2*np.pi*fq*xx)
ft = np.fft.fft2(fnc)
ft = np.fft.fftshift(ft)
freq_x = np.fft.fftfreq(ft.shape[0], d=space)
freq_y = np.fft.fftfreq(ft.shape[1], d=space)
plt.imshow(
abs(ft),
aspect='auto',
extent=(freq_x.min(),freq_x.max(),freq_y.min(),freq_y.max())
)
plt.figure()
plt.imshow(fnc)
</code></pre>
<p>This results in the following <a href="http://i.stack.imgur.com/WWcN5.png" rel="nofollow">function</a> & <a href="http://i.stack.imgur.com/xooyg.png" rel="nofollow">frequency</a> figures with the incorrect frequency. Thanks.</p>
| 0 | 2016-08-29T07:41:58Z | 39,201,385 | <p>One of your problems is that matplotlib's <code>imshow</code> using a different coordinate system to what you expect. Provide a <code>origin='lower'</code> argument, and the peaks now appear at <code>y=0</code>, as expected.</p>
<p>Another problem that you have is that <code>fftfreq</code> needs to be told your timestep, which in your case is <code>8.0 / (N - 1)</code> (keep it a float; in Python 2 the integer division <code>8 / (N - 1)</code> would truncate to zero)</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
fq = 10; N = 20
x = np.linspace(0,8,N); y = x
xx, yy = np.meshgrid(x,y)
fnc = np.sin(2*np.pi*fq*xx)
ft = np.fft.fft2(fnc)
ft = np.fft.fftshift(ft)
freq_x = np.fft.fftfreq(ft.shape[0], d=8.0 / (N - 1)) # this takes an argument for the timestep (float, to avoid Python 2 integer division)
freq_y = np.fft.fftfreq(ft.shape[1], d=8.0 / (N - 1))
plt.imshow(
abs(ft),
aspect='auto',
extent=(freq_x.min(),freq_x.max(),freq_y.min(),freq_y.max()),
origin='lower' , # this fixes your problem
interpolation='nearest', # this makes it easier to see what is happening
cmap='viridis' # let's use a better color map too
)
plt.grid()
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/kv77L.png" rel="nofollow"><img src="http://i.stack.imgur.com/kv77L.png" alt="enter image description here"></a></p>
<p>You may say <em>"but the frequency is 10, not 0.5!"</em> However, if you want to sample a frequency of <code>10</code>, you need to sample a lot faster than <code>8/19</code>! Nyquist's theorem says you need to <strong>exceed</strong> a sampling rate of 20 to have any hope at all</p>
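<p>To see the Nyquist point in action, sample the same sine densely enough and the spectral peak lands where expected. A quick sketch using a 1-D FFT (the variable names are mine):</p>

```python
import numpy as np

fq = 10
N = 200                       # many more samples over the same 0..8 interval
x = np.linspace(0, 8, N)
dt = x[1] - x[0]              # timestep ~0.04 -> sampling rate ~25 > 2*fq
fnc = np.sin(2 * np.pi * fq * x)

ft = np.fft.fft(fnc)
freq = np.fft.fftfreq(N, d=dt)
peak = abs(freq[np.argmax(abs(ft))])
print(peak)                   # close to 10, the true frequency
```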
| 2 | 2016-08-29T08:08:12Z | [
"python",
"numpy",
"fft"
] |
find max square with black border | 39,201,004 | <p>For a square matrix in which each cell is either black (1) or white (0), I am trying to find the max sub-square whose border is black. Here is my code with Python 2.7; is it logically correct, and are there any possible performance improvements? Thanks.</p>
<p>My main idea is to keep track of how many contiguous black nodes are above and to the left of each node, which is what the <code>top</code> and <code>left</code> matrices represent. Then, based on the <code>left</code> and <code>top</code> tracking, for any node I take the minimum of its top and left runs of contiguous black nodes, and use that minimum to check whether a square could be formed.</p>
<pre><code>A=[[1,1,0,0,0],
[1,1,1,1,1],
[0,0,1,0,1],
[0,0,1,1,1],
[0,0,0,0,0]]
top=[[0] * len(A[0]) for i in range(len(A))]
left=[[0] * len(A[0]) for i in range(len(A))]
result = 0
for i in range(len(A)):
    for j in range(len(A[0])):
        if A[i][j] == 1:
            top[i][j] = top[i-1][j] + 1 if i > 0 else 1
            left[i][j] = left[i][j-1] + 1 if j > 0 else 1

print top
print left

for i in range(len(A)):
    for j in range(len(A[0])):
        if A[i][j] == 1:
            top[i][j] = top[i-1][j] + 1 if i > 0 else 1
            left[i][j] = left[i][j-1] + 1 if j > 0 else 1
            x = min(top[i][j], left[i][j])
            if x > 1:
                y = min(left[i-x+1][j], top[i][j-x+1])
                result = max(result, y)

print result
</code></pre>
<p><strong>Edit 1</strong>, fix an issue, which is pointed by j_random_hacker.</p>
<pre><code>A = [[1, 1, 0, 0, 0],
[1, 1, 1, 1, 1],
[0, 0, 1, 0, 1],
[0, 0, 1, 1, 1],
[0, 0, 0, 0, 0]]
A = [[0,1,1],
[1,0,1],
[1,1,1]]
top = [[0] * len(A[0]) for i in range(len(A))]
left = [[0] * len(A[0]) for i in range(len(A))]
result = 0
for i in range(len(A)):
    for j in range(len(A[0])):
        if A[i][j] == 1:
            top[i][j] = top[i - 1][j] + 1 if i > 0 else 1
            left[i][j] = left[i][j - 1] + 1 if j > 0 else 1

print top
print left

for i in range(len(A)):
    for j in range(len(A[0])):
        if A[i][j] == 1:
            top[i][j] = top[i - 1][j] + 1 if i > 0 else 1
            left[i][j] = left[i][j - 1] + 1 if j > 0 else 1
            x = min(top[i][j], left[i][j])
            if x > 1:
                y = min(left[i - x + 1][j], top[i][j - x + 1])
                if x == y:
                    result = max(result, y)

print result
</code></pre>
<p><strong>Edit 2</strong>, address the issue by j_random_hacker.</p>
<pre><code>A = [[0,1,0,0],
[1,1,1,1],
[0,1,0,1],
[0,1,1,1]]
A = [[0,1,1],
[1,0,1],
[1,1,1]]
A = [[1, 1, 0, 0, 0],
[1, 1, 1, 1, 1],
[0, 0, 1, 0, 1],
[0, 0, 1, 1, 1],
[0, 0, 0, 0, 0]]
top = [[0] * len(A[0]) for i in range(len(A))]
left = [[0] * len(A[0]) for i in range(len(A))]
result = 0
for i in range(len(A)):
    for j in range(len(A[0])):
        if A[i][j] == 1:
            top[i][j] = top[i - 1][j] + 1 if i > 0 else 1
            left[i][j] = left[i][j - 1] + 1 if j > 0 else 1

print top
print left

for i in range(len(A)):
    for j in range(len(A[0])):
        if A[i][j] == 1:
            top[i][j] = top[i - 1][j] + 1 if i > 0 else 1
            left[i][j] = left[i][j - 1] + 1 if j > 0 else 1
            x = min(top[i][j], left[i][j])
            while x > 1:
                y = min(left[i - x + 1][j], top[i][j - x + 1])
                if x == y:
                    result = max(result, y)
                    break
                x -= 1

print result
</code></pre>
<p><strong>Edit 3</strong>, new fix</p>
<pre><code>A = [[0,1,0,0],
[1,1,1,1],
[0,1,0,1],
[0,1,1,1]]
A = [[1, 1, 0, 0, 0],
[1, 1, 1, 1, 1],
[0, 0, 1, 0, 1],
[0, 0, 1, 1, 1],
[0, 0, 0, 0, 0]]
A = [[0,1,1],
[1,0,1],
[1,1,1]]
top = [[0] * len(A[0]) for i in range(len(A))]
left = [[0] * len(A[0]) for i in range(len(A))]
result = 0
for i in range(len(A)):
    for j in range(len(A[0])):
        if A[i][j] == 1:
            top[i][j] = top[i - 1][j] + 1 if i > 0 else 1
            left[i][j] = left[i][j - 1] + 1 if j > 0 else 1

print top
print left

for i in range(len(A)):
    for j in range(len(A[0])):
        if A[i][j] == 1:
            top[i][j] = top[i - 1][j] + 1 if i > 0 else 1
            left[i][j] = left[i][j - 1] + 1 if j > 0 else 1
            x = min(top[i][j], left[i][j])
            while x > 1:
                y = min(left[i - x + 1][j], top[i][j - x + 1])
                if x <= y:
                    result = max(result, x)
                    break
                x -= 1

print result
</code></pre>
| 0 | 2016-08-29T07:46:41Z | 39,205,809 | <p>This isn't logically correct, as the following simple counterexample shows:</p>
<pre><code>011
101
111
</code></pre>
<p>This array contains no bordered square, but your code reports that it contains a square of edge length 3. You need either a third nested loop through <em>all</em> candidate square sizes down from <code>x</code> to 1 (making the solution O(n^3)-time) or, possibly, some more sophisticated data structure that enables an equivalent check to be performed faster.</p>
<p><strong>[EDIT to address updated algorithm]</strong></p>
<p>The new algorithm thinks there is no bordered square in either</p>
<pre><code>0100
1111 1111
0101 or 0101
0111 0111
</code></pre>
<p>despite there being a 3x3 one in each.</p>
<p>Stop guessing at fixes and think about how to break down the set of <em>all possible</em> locations of bordered squares into sets that you can efficiently (or even not-so-efficiently) check. As I tried to say at the top: For each possible bottom-right corner of a bordered square, your code is currently only checking <em>one</em> rectangle. But many more rectangles could have (i, j) as a bottom-right corner -- where are the tests for them?</p>
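<p>To make that concrete, here is a sketch of the O(n^3) approach described above: it keeps the OP's <code>top</code>/<code>left</code> run-length tables but tries <em>every</em> candidate size for each bottom-right corner (the function name is mine; sizes below 2 are not counted, matching the OP's convention):</p>

```python
def max_bordered_square(A):
    n, m = len(A), len(A[0])
    top = [[0] * m for _ in range(n)]    # run of 1s ending here, going up
    left = [[0] * m for _ in range(n)]   # run of 1s ending here, going left
    for i in range(n):
        for j in range(m):
            if A[i][j] == 1:
                top[i][j] = top[i - 1][j] + 1 if i > 0 else 1
                left[i][j] = left[i][j - 1] + 1 if j > 0 else 1

    best = 0
    for i in range(n):
        for j in range(m):
            # treat (i, j) as a bottom-right corner; try sizes from largest down,
            # but never below 2 (and never below the best found so far)
            for x in range(min(top[i][j], left[i][j]), max(best, 1), -1):
                # check the top edge and the left edge of the candidate square
                if left[i - x + 1][j] >= x and top[i][j - x + 1] >= x:
                    best = x
                    break
    return best

print(max_bordered_square([[0, 1, 1], [1, 0, 1], [1, 1, 1]]))                         # 0
print(max_bordered_square([[0, 1, 0, 0], [1, 1, 1, 1], [0, 1, 0, 1], [0, 1, 1, 1]]))  # 3
```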
| 1 | 2016-08-29T11:59:10Z | [
"python",
"algorithm",
"python-2.7"
] |
Pygame - Smoother Movement | 39,201,171 | <p>I have added an Object/Image on the <code>main_screen</code>, the object is called <code>cancer_cell</code>.
What I'm trying to do here is that I want the object to move smoothly, but I have to repeatedly press the arrow keys to keep it moving.
How do I make it keep moving <code>while</code> an arrow key is held down?</p>
<p>here's the code:</p>
<pre><code>exitgame = False
cellpos_x = 0
cellpos_y = cancer_cell.get_rect().height*2
while not exitgame:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            exitgame = True
            quitgame()
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT:
                cellpos_x -= 10
            if event.key == pygame.K_RIGHT:
                cellpos_x += 10

    gameplay_bg = pygame.image.load("/Users/wolf/Desktop/python/img/gameplay_bg.png").convert()
    main_screen.fill(white)
    main_screen.blit(gameplay_bg, [0,0])
    main_screen.blit(cancer_cell, [cellpos_x, cellpos_y])
    pygame.display.flip()
    clock.tick(20)
</code></pre>
<p>someone told me to try the solution at <a href="http://stackoverflow.com/questions/25494726/how-to-use-pygame-keydown">How to use pygame.KEYDOWN</a>:
but that didn't work either. Or maybe i did it wrong:</p>
<pre><code>if event.type == pygame.KEYDOWN:
    key_pressed = pygame.key.get_pressed()
    if key_pressed[pygame.K_LEFT]:
        cellpos_x -= 10
    if key_pressed[pygame.K_RIGHT]:
        cellpos_x += 10
</code></pre>
| 1 | 2016-08-29T07:55:10Z | 39,201,830 | <p><strong>PROBLEM SOLVED</strong></p>
<p>I have solved the problem by moving this part out of the <code>for</code> event loop (it now sits directly inside the main <code>while</code> loop):</p>
<pre><code>while not exitgame:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            exitgame = True
            quitgame()

    key_pressed = pygame.key.get_pressed()
    if key_pressed[pygame.K_LEFT]:
        cellpos_x -= 10
    if key_pressed[pygame.K_RIGHT]:
        cellpos_x += 10
| 0 | 2016-08-29T08:35:48Z | [
"python",
"pygame"
] |
Pygame - Smoother Movement | 39,201,171 | <p>I have added an Object/Image on the <code>main_screen</code>, the object is called <code>cancer_cell</code>.
What I'm trying to do here is that I want the object to move smoothly, but I have to repeatedly press the arrow keys to keep it moving.
How do I make it keep moving <code>while</code> an arrow key is held down?</p>
<p>here's the code:</p>
<pre><code>exitgame = False
cellpos_x = 0
cellpos_y = cancer_cell.get_rect().height*2
while not exitgame:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            exitgame = True
            quitgame()
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT:
                cellpos_x -= 10
            if event.key == pygame.K_RIGHT:
                cellpos_x += 10

    gameplay_bg = pygame.image.load("/Users/wolf/Desktop/python/img/gameplay_bg.png").convert()
    main_screen.fill(white)
    main_screen.blit(gameplay_bg, [0,0])
    main_screen.blit(cancer_cell, [cellpos_x, cellpos_y])
    pygame.display.flip()
    clock.tick(20)
</code></pre>
<p>someone told me to try the solution at <a href="http://stackoverflow.com/questions/25494726/how-to-use-pygame-keydown">How to use pygame.KEYDOWN</a>:
but that didn't work either. Or maybe i did it wrong:</p>
<pre><code>if event.type == pygame.KEYDOWN:
    key_pressed = pygame.key.get_pressed()
    if key_pressed[pygame.K_LEFT]:
        cellpos_x -= 10
    if key_pressed[pygame.K_RIGHT]:
        cellpos_x += 10
</code></pre>
| 1 | 2016-08-29T07:55:10Z | 39,202,542 | <p>I see you've solved the indentation issue, here's another version of your example:</p>
<pre><code>import pygame
class Player(object):
    def __init__(self, img_path):
        self.image = pygame.image.load(img_path)
        self.x = 0
        self.y = self.image.get_rect().height*2

    def handle_keys(self):
        key = pygame.key.get_pressed()
        dist = 1
        if key[pygame.K_RIGHT]:
            self.x += dist
        elif key[pygame.K_LEFT]:
            self.x -= dist

    def draw(self, surface):
        surface.blit(self.image, (self.x, self.y))

pygame.init()
clock = pygame.time.Clock()
size = width, height = 1024, 768
speed = [2, 2]
white = 1, 1, 1
main_screen = pygame.display.set_mode(size)
gameplay_bg = pygame.image.load("background.jpg")
cancer_cell = Player("player.jpg")

running = False
while not running:
    event = pygame.event.poll()
    if event.type == pygame.QUIT:
        running = True
    main_screen.fill(white)
    main_screen.blit(gameplay_bg, [0, 0])
    cancer_cell.handle_keys()
    cancer_cell.draw(main_screen)
    pygame.display.flip()
    clock.tick(50)
    pygame.display.set_caption("fps: " + str(clock.get_fps()))
</code></pre>
<p>You need to adjust the paths of the images ("background.jpg", "player.jpg"). With this version you're not loading your sprite over and over in the game loop, and the Player movement will be smooth.</p>
| 0 | 2016-08-29T09:15:56Z | [
"python",
"pygame"
] |
Access Jupyter Notebook over Internet | 39,201,437 | <p>I have a Windows 7 computer that I use to host a Jupyter noteboks for use across my local network. However, I would like to be able to access these notebooks remotely over the internet.</p>
<p>Is this possible?</p>
<p>I've tried researching ways but haven't managed to understand anything. The closest I've got was a guide on how to set it up using AWS (<a href="https://gist.github.com/iamatypeofwalrus/5183133" rel="nofollow">https://gist.github.com/iamatypeofwalrus/5183133</a>) but the key step in this is to set up a "public DNS". However, I haven't found how to set this up in Windows.</p>
<p>Thanks, </p>
| 0 | 2016-08-29T08:11:12Z | 39,201,523 | <p>You can use one of the free/paid VPN solutions to create a virtual network between you and the jupyter-notebook host, which works around the need for a public DNS. <a href="http://logmein.com" rel="nofollow">Hamachi</a>, for example. I'm talking about the case when you just want to access the jupyter notebook remotely, not when you want to share it with anyone.</p>
| 0 | 2016-08-29T08:16:31Z | [
"python",
"jupyter",
"jupyter-notebook"
] |
Python - Select "Save link as" and save a file using Selenium | 39,201,498 | <p>Newbie: There are different files on a webpage, which can be downloaded as follows:
1. Right click on a file link
2. Select "Save link as"
3. Click "Save" button on the new window.</p>
<p>I tried the following code(for first 2 steps), but it is not working:</p>
<pre><code> from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Chrome()
driver.maximize_window()
driver.get('www.example.com')
time.sleep(1)
driver.find_element_by_link_text("MarketFiles/").click()
actionChains = ActionChains(driver)
download_file = "Market_File1.csv"
link = driver.find_element_by_link_text(download_file)
actionChains.context_click(link).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform();
</code></pre>
<p>Kindly suggest how to download the file using these 3 steps. Thanks</p>
| 1 | 2016-08-29T08:14:51Z | 39,220,537 | <p>Save Link As will open the system dialog which can not be controlled through selenium directly.</p>
<p>Having said that, download preferences can be configured in the profile, which can be used while launching the browser and in that case, any click to download will save the file as per preferences in the chrome profile</p>
<pre><code>chromeOptions = webdriver.ChromeOptions()
prefs = {"download.default_directory" : "/some/path"}
chromeOptions.add_experimental_option("prefs",prefs)
chromedriver = "path/to/chromedriver.exe"
driver = webdriver.Chrome(executable_path=chromedriver, chrome_options=chromeOptions)
</code></pre>
<p>Reference : <a href="https://sites.google.com/a/chromium.org/chromedriver/capabilities" rel="nofollow">https://sites.google.com/a/chromium.org/chromedriver/capabilities</a></p>
| 0 | 2016-08-30T06:46:15Z | [
"python",
"selenium"
] |
Python - Select "Save link as" and save a file using Selenium | 39,201,498 | <p>Newbie: There are different files on a webpage, which can be downloaded as follows:
1. Right click on a file link
2. Select "Save link as"
3. Click "Save" button on the new window.</p>
<p>I tried the following code(for first 2 steps), but it is not working:</p>
<pre><code> from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Chrome()
driver.maximize_window()
driver.get('www.example.com')
time.sleep(1)
driver.find_element_by_link_text("MarketFiles/").click()
actionChains = ActionChains(driver)
download_file = "Market_File1.csv"
link = driver.find_element_by_link_text(download_file)
actionChains.context_click(link).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform();
</code></pre>
<p>Kindly suggest how to download the file using these 3 steps. Thanks</p>
| 1 | 2016-08-29T08:14:51Z | 39,221,139 | <p>In Java, you can use the following code to click the "Save link as" entry and then click Save in the new window:</p>
<pre><code>try {
    Robot bot = new Robot();
    // location of link on your page
    bot.mouseMove(5, 12);
    // instigate right click of mouse (BUTTON3 is the right button)
    bot.mousePress(InputEvent.BUTTON3_MASK);
    bot.mouseRelease(InputEvent.BUTTON3_MASK);
    Thread.sleep(1000);
    /* repeatedly press down arrow until you reach save as link in context menu */
    bot.keyPress(KeyEvent.VK_DOWN);
    Thread.sleep(1000);
    bot.keyPress(KeyEvent.VK_DOWN);
    Thread.sleep(1000);
    bot.keyPress(KeyEvent.VK_DOWN);
    Thread.sleep(1000);
    bot.keyPress(KeyEvent.VK_DOWN);
    Thread.sleep(1000);
    // press enter to select save as link
    bot.keyPress(KeyEvent.VK_ENTER);
    Thread.sleep(1000);
    // press enter to press save button on new window
    bot.keyPress(KeyEvent.VK_ENTER);
} catch (Exception e) {
    e.printStackTrace();
}
</code></pre>
<p>As per your question you need Python code, so you can convert this code to Python using this link <a href="https://pypi.python.org/pypi/py_robot/0.1" rel="nofollow">https://pypi.python.org/pypi/py_robot/0.1</a></p>
| 0 | 2016-08-30T07:16:55Z | [
"python",
"selenium"
] |
pandas: pandas.DataFrame.describe returns information on only one column | 39,201,526 | <p>For a certain Kaggle dataset (rules prohibit me from sharing the data here, but is readily accessible <a href="https://www.kaggle.com/c/predicting-red-hat-business-value/data" rel="nofollow">here</a>), </p>
<pre><code>import pandas
df_train = pandas.read_csv(
"01 - Data/act_train.csv.zip"
)
df_train.describe()
</code></pre>
<p>I get:</p>
<pre><code>>>> df_train.describe()
outcome
count 2.197291e+06
mean 4.439544e-01
std 4.968491e-01
min 0.000000e+00
25% 0.000000e+00
50% 0.000000e+00
75% 1.000000e+00
max 1.000000e+00
</code></pre>
<p>whereas for the same dataset <code>df_train.columns</code> gives me: </p>
<pre><code>>>> df_train.columns
Index(['people_id', 'activity_id', 'date', 'activity_category', 'char_1',
'char_2', 'char_3', 'char_4', 'char_5', 'char_6', 'char_7', 'char_8',
'char_9', 'char_10', 'outcome'],
dtype='object')
</code></pre>
<p>and <code>df_train.dtypes</code> gives me: </p>
<pre><code>>>> df_train.dtypes
people_id object
activity_id object
date object
activity_category object
char_1 object
char_2 object
char_3 object
char_4 object
char_5 object
char_6 object
char_7 object
char_8 object
char_9 object
char_10 object
outcome int64
dtype: object
</code></pre>
<p>Am I missing some reason why pandas only <code>describe</code>s one column in the dataset?</p>
| 1 | 2016-08-29T08:16:35Z | 39,201,640 | <p>By default, <code>describe</code> only works on numeric dtype columns. Add a keyword-argument <code>include='all'</code>. <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html" rel="nofollow">From the documentation</a>:</p>
<blockquote>
<p>If include is the string 'all', the output column-set will match the
input one.</p>
</blockquote>
<p>To clarify, the default arguments to <code>describe</code> are <code>include=None, exclude=None</code>. The behavior that results is:</p>
<blockquote>
<p>None to both (default). The result will include only numeric-typed
columns or, if none are, only categorical columns.</p>
</blockquote>
<p>Also, from the <strong>Notes</strong> section:</p>
<blockquote>
<p>The output DataFrame index depends on the requested dtypes:</p>
<p>For numeric dtypes, it will include: count, mean, std, min, max, and
lower, 50, and upper percentiles.</p>
<p>For object dtypes (e.g. timestamps or strings), the index will include
the count, unique, most common, and frequency of the most common.
Timestamps also include the first and last items.</p>
</blockquote>
| 2 | 2016-08-29T08:23:39Z | [
"python",
"python-3.x",
"pandas"
] |
Python way of doing difference of lists containing lists | 39,201,610 | <p>Consider two lists A and B. I know that <code>list(set(A) - set(B))</code> will give the difference between A and B. What about the situation whereby elements in both A and B are lists. i.e. A and B are list of list? For e.g.</p>
<pre><code>A = [[1,2], [3,4], [5,6]]
B = [[3,4], [7,8]]
</code></pre>
<p>I wish to return the difference <code>A - B</code> as a list of list i.e. <code>[[1,2],[5,6]]</code></p>
<pre><code>list(set(A) - set(B))
TypeError: unhashable type: 'list'
</code></pre>
| 1 | 2016-08-29T08:22:19Z | 39,201,658 | <p>Here's a one-liner you could use:</p>
<pre><code>diff = [x for x in A if x not in B]
</code></pre>
<p>Or if you want to use filter:</p>
<pre><code>diff = list(filter(lambda x: x not in B, A))
</code></pre>
| 1 | 2016-08-29T08:24:56Z | [
"python"
] |
Python way of doing difference of lists containing lists | 39,201,610 | <p>Consider two lists A and B. I know that <code>list(set(A) - set(B))</code> will give the difference between A and B. What about the situation whereby elements in both A and B are lists. i.e. A and B are list of list? For e.g.</p>
<pre><code>A = [[1,2], [3,4], [5,6]]
B = [[3,4], [7,8]]
</code></pre>
<p>I wish to return the difference <code>A - B</code> as a list of list i.e. <code>[[1,2],[5,6]]</code></p>
<pre><code>list(set(A) - set(B))
TypeError: unhashable type: 'list'
</code></pre>
| 1 | 2016-08-29T08:22:19Z | 39,201,671 | <pre><code>>>> [i for i in A if i not in B]
[[1, 2], [5, 6]]
</code></pre>
| 1 | 2016-08-29T08:25:23Z | [
"python"
] |
Python way of doing difference of lists containing lists | 39,201,610 | <p>Consider two lists A and B. I know that <code>list(set(A) - set(B))</code> will give the difference between A and B. What about the situation whereby elements in both A and B are lists. i.e. A and B are list of list? For e.g.</p>
<pre><code>A = [[1,2], [3,4], [5,6]]
B = [[3,4], [7,8]]
</code></pre>
<p>I wish to return the difference <code>A - B</code> as a list of list i.e. <code>[[1,2],[5,6]]</code></p>
<pre><code>list(set(A) - set(B))
TypeError: unhashable type: 'list'
</code></pre>
| 1 | 2016-08-29T08:22:19Z | 39,201,689 | <p>The idea is to convert the list of lists to lists of tuples,<br>
which are hashable and are, thus, candidates of making sets themselves:</p>
<pre><code>In [192]: C = set(map(tuple, A)) - set(map(tuple, B))
In [193]: C
Out[193]: {(1, 2), (5, 6)}
</code></pre>
<p>And one more touch:</p>
<pre><code>In [196]: [*map(list, C)]
Out[196]: [[1, 2], [5, 6]]
</code></pre>
<p><strong>ADDED</strong></p>
<p>In python 2.7 the final touch is simpler:</p>
<pre><code>map(list, C)
</code></pre>
| 1 | 2016-08-29T08:26:33Z | [
"python"
] |
Python way of doing difference of lists containing lists | 39,201,610 | <p>Consider two lists A and B. I know that <code>list(set(A) - set(B))</code> will give the difference between A and B. What about the situation whereby elements in both A and B are lists. i.e. A and B are list of list? For e.g.</p>
<pre><code>A = [[1,2], [3,4], [5,6]]
B = [[3,4], [7,8]]
</code></pre>
<p>I wish to return the difference <code>A - B</code> as a list of list i.e. <code>[[1,2],[5,6]]</code></p>
<pre><code>list(set(A) - set(B))
TypeError: unhashable type: 'list'
</code></pre>
| 1 | 2016-08-29T08:22:19Z | 39,202,133 | <pre><code>A = [[1, 2], [3, 4], [5, 6]]
B = [[3, 4], [7, 8]]
print [x for x in A if x not in B]
</code></pre>
| 1 | 2016-08-29T08:52:47Z | [
"python"
] |
select data based on a string criteria python | 39,201,631 | <p>I have a large database in which I want to select all the columns that meet a certain criteria:</p>
<p>My data looks like the following:</p>
<pre><code>Name a b c
target-01 5196 24 24
target-02 5950 150 150
target-03 5598 50 50
object-01 6558 44 -1
object-02 6190 60 60
</code></pre>
<p>I want to select all the data whose <code>Name</code> starts with <code>target</code>.</p>
<p>So the selected <code>df</code> would be:</p>
<pre><code>target-01 5196 24 24
target-02 5950 150 150
target-03 5598 50 50
</code></pre>
<p>I am reading the data using:</p>
<pre><code>data = pd.read_csv('catalog.txt', sep = '\s+', header = None, skiprows =1 )
</code></pre>
<p>How can I select the data I want?</p>
| 1 | 2016-08-29T08:23:13Z | 39,201,771 | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.startswith.html" rel="nofollow"><code>str.startswith</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>print (df[df.Name.str.startswith('target')])
Name a b c
0 target-01 5196 24 24
1 target-02 5950 150 150
2 target-03 5598 50 50
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow"><code>str.contains</code></a>:</p>
<pre><code>print (df[df.Name.str.contains(r'^target')])
Name a b c
0 target-01 5196 24 24
1 target-02 5950 150 150
2 target-03 5598 50 50
</code></pre>
<p>Last solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow"><code>filter</code></a>:</p>
<pre><code>df.set_index('Name', inplace=True)
print (df.filter(regex=r'^target', axis=0))
a b c
Name
target-01 5196 24 24
target-02 5950 150 150
target-03 5598 50 50
print (df.filter(regex=r'^target', axis=0).reset_index())
Name a b c
0 target-01 5196 24 24
1 target-02 5950 150 150
2 target-03 5598 50 50
</code></pre>
| 1 | 2016-08-29T08:32:03Z | [
"python",
"string",
"pandas",
"dataframe",
"startswith"
] |
Parsing of list of bits in python | 39,201,679 | <p>I have a list of 64 bits.</p>
<pre><code>['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
</code></pre>
<p>Here each and every bit represents a feature. If a bit is 1 then that particular feature is supported else not.</p>
<p>Is there any way where I can check the bit and just return its associated feature if it is 1?</p>
<p><code>Input : 1111111</code></p>
<p><code>Output: If all bits are one then I need to print all eight features masked in it.</code></p>
| 1 | 2016-08-29T08:25:52Z | 39,201,866 | <p>You may unpack the string into one variable per feature as:</p>
<pre><code>>>> my_bit = '11111110'
>>> feat_1, feat_2, feat_3, feat_4, feat_5, feat_6, feat_7, feat_8 = my_bit
>>> print feat_1, feat_2, feat_3, feat_4, feat_5, feat_6, feat_7, feat_8
1 1 1 1 1 1 1 0
</code></pre>
<p>Now, you may check for each feature as:</p>
<pre><code>if feat_1 == '1':   # each unpacked value is the string '1' or '0', so compare explicitly
    # Do something
else:
    # Do something else
</code></pre>
| 0 | 2016-08-29T08:37:47Z | [
"python",
"python-2.7"
] |
Parsing of list of bits in python | 39,201,679 | <p>I have a list of 64 bits.</p>
<pre><code>['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
</code></pre>
<p>Here each and every bit represents a feature. If a bit is 1 then that particular feature is supported else not.</p>
<p>Is there any way where I can check the bit and just return its associated feature if it is 1?</p>
<p><code>Input : 1111111</code></p>
<p><code>Output: If all bits are one then I need to print all eight features masked in it.</code></p>
| 1 | 2016-08-29T08:25:52Z | 39,201,876 | <p>I guess you could do something like:</p>
<pre><code>bits = ['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
# to produce features mapping dynamically:
# features = {i:"feature " + str(i) for i in range(0,64)}
features = {0: 'feature 0', 1: 'feature 1', 2: 'feature 2', 3: 'feature 3', 4: 'feature 4', 5: 'feature 5', 6: 'feature 6', 7: 'feature 7', 8: 'feature 8', 9: 'feature 9', 10: 'feature 10', 11: 'feature 11', 12: 'feature 12', 13: 'feature 13', 14: 'feature 14', 15: 'feature 15', 16: 'feature 16', 17: 'feature 17', 18: 'feature 18', 19: 'feature 19', 20: 'feature 20', 21: 'feature 21', 22: 'feature 22', 23: 'feature 23', 24: 'feature 24', 25: 'feature 25', 26: 'feature 26', 27: 'feature 27', 28: 'feature 28', 29: 'feature 29', 30: 'feature 30', 31: 'feature 31', 32: 'feature 32', 33: 'feature 33', 34: 'feature 34', 35: 'feature 35', 36: 'feature 36', 37: 'feature 37', 38: 'feature 38', 39: 'feature 39', 40: 'feature 40', 41: 'feature 41', 42: 'feature 42', 43: 'feature 43', 44: 'feature 44', 45: 'feature 45', 46: 'feature 46', 47: 'feature 47', 48: 'feature 48', 49: 'feature 49', 50: 'feature 50', 51: 'feature 51', 52: 'feature 52', 53: 'feature 53', 54: 'feature 54', 55: 'feature 55', 56: 'feature 56', 57: 'feature 57', 58: 'feature 58', 59: 'feature 59', 60: 'feature 60', 61: 'feature 61', 62: 'feature 62', 63: 'feature 63'}
for ind, bit in enumerate(reversed("".join(bits))):
if bit == '1':
        print features[ind] + " turned on"  # features[ind] already reads 'feature N'
</code></pre>
<p>Where using:</p>
<ul>
<li><code>reversed</code> - because bits are read from LSB to MSB so we need to iterate from right to left.</li>
<li><code>enumerate</code> - because we want to get the associated index to check in our mapping what is the feature name/attributes.</li>
<li><code>"".join(bits)</code> - because we want to be able to join all the items on the list to one iterable object we can go through.</li>
</ul>
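<p>A minimal, self-contained sketch of the same idea (my own toy example with a hypothetical two-byte mask, using Python 3 syntax):</p>

```python
bits = ['00000001', '10000000']  # hypothetical two-byte mask
joined = "".join(bits)

# Iterate LSB-first: after reversing, index 0 is the right-most bit
on_features = [ind for ind, bit in enumerate(reversed(joined)) if bit == '1']
print(on_features)  # [7, 8]
```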
| 0 | 2016-08-29T08:38:08Z | [
"python",
"python-2.7"
] |
Parsing of list of bits in python | 39,201,679 | <p>I have a list of 64 bits.</p>
<pre><code>['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
</code></pre>
<p>Here each and every bit represents a feature. If a bit is 1 then that particular feature is supported else not.</p>
<p>Is there any way where I can check the bit and just return its associated feature if it is 1?</p>
<p><code>Input : 1111111</code></p>
<p><code>Output: If all bits are one then I need to print all eight features masked in it.</code></p>
| 1 | 2016-08-29T08:25:52Z | 39,201,929 | <p>The solution depends on your way to enumerate the features. if you enumerate your features from 0 to 63 it would be something like this:</p>
<pre><code>import math
testdata = ['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
def test_feature(data, i):
    block = i // 8  # floor division keeps the index an int on both Python 2 and 3
pos = i % 8
if data[block][pos] == '1':
return True
return False
print(testdata)
print(test_feature(testdata, 0))
print(test_feature(testdata, 3))
print(test_feature(testdata, 8))
print(test_feature(testdata, 17))
print(test_feature(testdata, 33))
print(test_feature(testdata, 63))
</code></pre>
<p>However, you can easily adjust this method to match other enumerations.</p>
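<p>An alternative sketch (my own variant, not part of the answer above) converts the whole list to a single 64-bit integer and tests bits with masks, which avoids the block/position bookkeeping:</p>

```python
blocks = ['11111111', '11111111', '10001111', '11111110',
          '11011011', '11111111', '01011011', '10000111']

# Treat the concatenated string as one 64-bit number (left-most char = bit 63)
value = int("".join(blocks), 2)

def feature_is_set(value, i):
    """Test bit i, counting i=0 as the least significant (right-most) bit."""
    return bool(value & (1 << i))

print(feature_is_set(value, 0))   # True:  right-most bit of '10000111'
print(feature_is_set(value, 3))   # False: fourth bit from the right is '0'
print(feature_is_set(value, 63))  # True:  left-most bit of the first block
```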
| 0 | 2016-08-29T08:41:47Z | [
"python",
"python-2.7"
] |
Parsing of list of bits in python | 39,201,679 | <p>I have a list of 64 bits.</p>
<pre><code>['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
</code></pre>
<p>Here each and every bit represents a feature. If a bit is 1 then that particular feature is supported else not.</p>
<p>Is there any way where I can check the bit and just return its associated feature if it is 1?</p>
<p><code>Input : 1111111</code></p>
<p><code>Output: If all bits are one then I need to print all eight features masked in it.</code></p>
| 1 | 2016-08-29T08:25:52Z | 39,201,933 | <p>You can make a list of features first.</p>
<pre><code>feat_list = ['f1','f2','f3','f4','f5','f6','f7','f8']  # one feature name per bit
bit_list = ['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
</code></pre>
<p>Then make a function. </p>
<pre><code>def extract_feature(bit_string):
result = []
for ind, bit in enumerate(bit_string):
if bit == '1':
result.append(feat_list[ind])
return result
</code></pre>
<p>Then use a loop to get features for each bit string. </p>
<pre><code>for bit_string in bit_list:
print extract_feature(bit_string)
</code></pre>
<p>Note that this is not the most efficient approach, but it is written out step by step so that a newcomer can follow it.</p>
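<p>A slightly more compact equivalent (my own variant) pairs every bit with its feature name in one pass with <code>zip</code>:</p>

```python
feat_list = ['f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8']

def extract_feature(bit_string):
    # zip stops at the shorter of the two sequences, so the lengths must match
    return [feat for feat, bit in zip(feat_list, bit_string) if bit == '1']

print(extract_feature('10000111'))  # ['f1', 'f6', 'f7', 'f8']
```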
| 0 | 2016-08-29T08:42:11Z | [
"python",
"python-2.7"
] |
Parsing of list of bits in python | 39,201,679 | <p>I have a list of 64 bits.</p>
<pre><code>['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
</code></pre>
<p>Here each and every bit represents a feature. If a bit is 1 then that particular feature is supported else not.</p>
<p>Is there any way where I can check the bit and just return its associated feature if it is 1?</p>
<p><code>Input : 1111111</code></p>
<p><code>Output: If all bits are one then I need to print all eight features masked in it.</code></p>
| 1 | 2016-08-29T08:25:52Z | 39,201,935 | <p>I don't know whether I've understood exactly :)</p>
<pre><code>def extract_features(x, f):
    return [f[i] for i, elem in enumerate(x) if elem == '1']
features = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
x = '11111111'
y = '10101010'
print (extract_features(x, features))
print (extract_features(y, features))
"""
<Output>
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
['a', 'c', 'e', 'g']
"""
</code></pre>
| 0 | 2016-08-29T08:42:25Z | [
"python",
"python-2.7"
] |
Parsing of list of bits in python | 39,201,679 | <p>I have a list of 64 bits.</p>
<pre><code>['11111111', '11111111', '10001111', '11111110', '11011011', '11111111', '01011011', '10000111']
</code></pre>
<p>Here each and every bit represents a feature. If a bit is 1 then that particular feature is supported else not.</p>
<p>Is there any way where I can check the bit and just return its associated feature if it is 1?</p>
<p><code>Input : 1111111</code></p>
<p><code>Output: If all bits are one then I need to print all eight features masked in it.</code></p>
| 1 | 2016-08-29T08:25:52Z | 39,202,060 | <pre><code>def f0(): print "executing feature0"
def f1(): print "executing feature1"
def f2(): print "executing feature2"
def f3(): print "executing feature3"
def f4(): print "executing feature4"
def f5(): print "executing feature5"
def f6(): print "executing feature6"
def f7(): print "executing feature7"
def run_features(mask):
    for index, bit in enumerate(mask[::-1]):
if bit == '1':
features[index]()
bits = [
'11111111', '11111111', '10001111', '11111110',
'11011011', '11111111', '01011011', '10000111'
]
features = [
f0, f1, f2, f3, f4, f5, f6, f7
]
for mask in bits:
run_features(mask)
print '-' * 80
</code></pre>
| 0 | 2016-08-29T08:48:15Z | [
"python",
"python-2.7"
] |
How to use str as list index in Python? | 39,201,749 | <p>I read the name and the points of each person from a database and I need to end up with something like this:</p>
<pre><code>myarray['Alex'] = 18
</code></pre>
<p>I've tried this :</p>
<pre><code>myArray = []
cur.execute("SELECT name, point FROM mytable WHERE name <> '' ")
for row in cur.fetchall():
name = row[0]
myArray[name] = row[1]
</code></pre>
<p>but I got this error</p>
<pre><code>TypeError: list indices must be integers, not str
</code></pre>
| 0 | 2016-08-29T08:30:33Z | 39,201,787 | <p>You need to use a <a href="https://docs.python.org/2/tutorial/datastructures.html#dictionaries" rel="nofollow">dictionary</a>, not an array:</p>
<pre><code>myDict = {} # Here!
cur.execute("SELECT name, point FROM mytable WHERE name <> '' ")
for row in cur.fetchall():
name = row[0]
myDict[name] = row[1]
</code></pre>
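<p>Since the cursor already yields <code>(name, point)</code> pairs, the whole loop can collapse into a single <code>dict()</code> call. A sketch with a stand-in list in place of <code>cur.fetchall()</code>:</p>

```python
# Stand-in for cur.fetchall(): a list of (name, point) tuples
rows = [("Alex", 18), ("Maria", 25)]

# dict() accepts any iterable of key/value pairs directly
points = dict(rows)
print(points["Alex"])  # 18
```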
| 3 | 2016-08-29T08:33:04Z | [
"python"
] |
Speed up multiple matrix products with numpy | 39,201,783 | <p>In Python I have two three-dimensional arrays:</p>
<p><code>T</code> with size <code>(n,n,n)</code></p>
<p><code>U</code> with size <code>(k,n,n)</code></p>
<p><code>T</code> and <code>U</code> can be seen as many 2-D arrays one next to the other. I need to multiply all those matrices, ie I have to perform the following operation:</p>
<pre><code>for i in range(n):
H[:,:,i] = U[:,:,i].dot(T[:,:,i]).dot(U[:,:,i].T)
</code></pre>
<p>As <code>n</code> might be very big, I am wondering if this operation could be sped up in some way with numpy.</p>
| 2 | 2016-08-29T08:32:40Z | 39,202,073 | <p>Carefully looking into the iterators and how they are involved in those dot product reductions, we could translate all of those into one <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a> implementation like so -</p>
<pre><code>H = np.einsum('ijk,jlk,mlk->imk',U,T,U)
</code></pre>
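<p>A quick self-contained check (mine, with small random arrays) that the <code>einsum</code> call matches the loop from the question:</p>

```python
import numpy as np

k, n = 4, 5
rng = np.random.RandomState(0)
T = rng.rand(n, n, n)
U = rng.rand(k, n, n)

# Loop version from the question
H_loop = np.empty((k, k, n))
for i in range(n):
    H_loop[:, :, i] = U[:, :, i].dot(T[:, :, i]).dot(U[:, :, i].T)

# Single einsum call
H = np.einsum('ijk,jlk,mlk->imk', U, T, U)
print(np.allclose(H, H_loop))  # True
```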
| 3 | 2016-08-29T08:48:52Z | [
"python",
"numpy"
] |
sqlalchemy orm - change column in a table depending on another table | 39,201,828 | <p>I have 3 tables:</p>
<p>table 1</p>
<pre><code>| id | name |
|:---:|:----:|
| 1 | name |
</code></pre>
<p>table 2</p>
<pre><code>| id | name | status |
|:---:|:----:|:------:|
| 1 | name | True |
</code></pre>
<p>table 3</p>
<pre><code>| id_table1 | id_table2 | datetime | status_table2 |
|:----------:|----------:|:--------:|:-------------:|
| 1 | 1 |01/11/2011| True |
</code></pre>
<hr>
<p>How can I change a status in table 2 when I create a link in table 3, using the SQLAlchemy ORM in Python? The status must be changed when a link in table 3 is created, and changed back when the link is deleted. Does anyone have any clean and simple ideas?</p>
| 2 | 2016-08-29T08:35:41Z | 39,352,326 | <p>Solved the problem by using ORM events.</p>
| 0 | 2016-09-06T15:06:18Z | [
"python",
"postgresql",
"orm",
"sqlalchemy"
] |
AWS Python Lambda with Oracle - OID Generation Failed | 39,201,869 | <p>I'm trying to connect to an Oracle DB using AWS Lambda Python code.</p>
<p>My code is below:</p>
<pre><code>import sys, os
import cx_Oracle
import traceback
def main_handler(event, context):
# Enter your database connection details here
host = "server_ip_or_name"
port = 1521
sid = "server_sid"
username = "myusername"
password = "mypassword"
try:
dsn = cx_Oracle.makedsn(host, port, sid)
print dsn
connection = cx_Oracle.Connection("%s/%s@%s" % (username, password, dsn))
cursor = connection.cursor()
cursor.execute("select 1 / 0 from dual")
except cx_Oracle.DatabaseError, exc:
error, = exc.args
print >> sys.stderr, "Oracle-Error-Code:", error.code
print >> sys.stderr, "Oracle-Error-Message:", error.message
tb = traceback.format_exc()
else:
tb = "No error"
finally:
print tb
if __name__ == "__main__":
main_handler(sys.argv[0], None)
</code></pre>
<p>I have already added all dependencies in the "lib" folder, thanks to <a href="http://stackoverflow.com/questions/37734748/aws-python-lambda-with-oracle/37753311#37753311">AWS Python Lambda with Oracle</a></p>
<p>When running this code, I'm getting:
DatabaseError: ORA-21561: OID generation failed</p>
<p>I've tried to connect using both the IP of the Oracle server and its name: same error.</p>
<p>Here is the output of the error</p>
<pre><code>Oracle-Error-Code: 21561
Oracle-Error-Message: ORA-21561: OID generation failed
Traceback (most recent call last):
File "/var/task/main.py", line 20, in main_handler
connection = cx_Oracle.Connection("%s/%s@%s" % (username, password, dsn))
DatabaseError: ORA-21561: OID generation failed
</code></pre>
<p>For those who have successfully run cx_Oracle in AWS Lambda Python, can you please help?</p>
<p>Thanks</p>
| 1 | 2016-08-29T08:37:53Z | 39,207,364 | <p>See the following other question for the resolution to this problem.</p>
<p><a href="http://stackoverflow.com/questions/31338916/sqlplus-remote-connection-giving-ora-21561">sqlplus remote connection giving ORA-21561</a></p>
<p>Effectively the client requires a host name in order to generate a unique identifier which is used when connecting to the database.</p>
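<p>For AWS Lambda specifically, where <code>/etc/hosts</code> is not writable, a workaround I have seen suggested (an assumption on my part, adapted from answers to the linked question, not something this answer verifies) is to point glibc's <code>HOSTALIASES</code> at a file in <code>/tmp</code> so the container's hostname becomes resolvable before <code>cx_Oracle</code> connects:</p>

```python
import os
import subprocess

# Map the container's hostname to localhost via a HOSTALIASES file,
# since /etc/hosts cannot be edited inside a Lambda container.
hostname = subprocess.check_output(['uname', '-n']).decode().strip()
with open('/tmp/HOSTALIASES', 'w') as f:
    f.write('%s localhost\n' % hostname)
os.environ['HOSTALIASES'] = '/tmp/HOSTALIASES'
```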
| 0 | 2016-08-29T13:20:41Z | [
"python",
"amazon-web-services",
"aws-lambda",
"cx-oracle"
] |
Django display data in a table -- Sort of a table within a table (nested forloop?). Data needs to link as well | 39,201,892 | <p>I'm not sure the best way to explain it so I created an example picture and made up some data:</p>
<p><a href="http://i.stack.imgur.com/HclH8.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/HclH8.jpg" alt="Table"></a></p>
<p>I looked at this post and know I need to use some forloop template stuff: <a href="http://stackoverflow.com/questions/7287027/displaying-a-table-in-django-from-database">Displaying a Table in Django from Database</a>
<br></p>
<p>The parts that are throwing me off are how to have a table within a table. For example, the doctor 'Bob' has 3 different patients. Bob has one row but Bob's patient column has 3 rows within it.</p>
<pre><code> <table class="table table-striped table-bordered">
<thead>
<th>Name</th>
<th>Total Patient Meetings</th>
<th>Patient</th>
<th>Meetings for Patient</th>
</thead>
<tbody>
{% for doctor in query_results %}
<tr>
<td> {{ doctor.name }} </td>
<td> {{ doctor.total_meetings }}</td>
//*Would I have a nested forloop in here?*//
{% for patient in query_results2 %}
<td> {{patient.name }} </td>
<td> {{patient.meeting_number }}</td>
{% endfor %}
</tr>
{% endfor %}
</tbody>
</table>
</code></pre>
<p>I'm having trouble finding an example like this to work off of and am not sure how to approach it. </p>
<p>I also need those patients to link to the individual patient pages. So if I click on Bob's patient 'C755' I will be directed to that patient's page. I was thinking something like: </p>
<pre><code> `<p><a href="patient/{{patient.name}}">{{patient.name}}:</a></p>`
</code></pre>
<p>Thanks for any help. Just not sure the best way to approach this. </p>
| 0 | 2016-08-29T08:38:58Z | 39,202,241 | <blockquote>
<p>"A 'related manager' is a manager used in a one-to-many or many-to-many
related context."
<a href="https://docs.djangoproject.com/en/1.10/ref/models/relations/" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/models/relations/</a></p>
</blockquote>
<p>I can't see what the relation between <code>doctor</code> and <code>patient</code> is, but supposing it is something like this:</p>
<pre><code>class Patient(models.Model):
name = models.CharField(max_length=50)
class Doctor(models.Model):
    patients = models.ManyToManyField(Patient, verbose_name='patients')
</code></pre>
<p>You can easily iterate over <code>doctor.patients</code> using:</p>
<pre><code>{% for doctor in doctors %}
{% for patient in doctor.patients.all %}
{{ patient.name }}
{% endfor %}
{% endfor %}
</code></pre>
<p>Note that the attribute you iterate over is the field name itself (<code>verbose_name</code> is only a human-readable label). For the reverse direction of the relation (from <code>Patient</code> back to <code>Doctor</code>), Django's default accessor would be <code>patient.doctor_set.all</code>.</p>
<p>And to the patient's page, you can use a method inside your class <code>Patient</code> to easly do this:</p>
<pre><code><a href="{{ patient.get_absolute_url }}">{{ patient.name }}</a>
</code></pre>
<blockquote>
<p>"with get_absolute_url on your model it can make dealing with them on a per-object basis much simpler and cleaner across your entire site"</p>
</blockquote>
<p><a href="https://godjango.com/67-understanding-get_absolute_url/" rel="nofollow">https://godjango.com/67-understanding-get_absolute_url/</a></p>
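<p>A minimal sketch of such a method (mine, with a plain class standing in for the model; in a real project the commented-out <code>reverse()</code> call is the usual form):</p>

```python
class Patient(object):
    def __init__(self, pk, name):
        self.pk = pk
        self.name = name

    def get_absolute_url(self):
        # In a real Django model this would typically be:
        #   from django.urls import reverse
        #   return reverse('patient-detail', args=[self.pk])
        return '/patient/%d/' % self.pk

print(Patient(755, 'C755').get_absolute_url())  # /patient/755/
```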
| 0 | 2016-08-29T09:00:30Z | [
"python",
"django"
] |
Starting and stopping subprocess with Python | 39,202,206 | <p>I'm trying to start and stop a program on a device through <code>MQTT</code> commands, but it's not really working out the way I'm hoping.</p>
<p>To start the process, I'm using:</p>
<pre><code>p = subprocess.Popen(["sh", "process.sh"])
</code></pre>
<p>Which works out fine, it starts the program.</p>
<p>Further down the line in the code I'm trying to kill/terminate the program with either <code>p.kill</code> or <code>p.terminate</code>, but it raises the following error:</p>
<pre><code>p.terminate()
UnboundLocalError: local variable 'p' referenced before assignment
</code></pre>
<p>The code I'm working with is my own, and goes as follows:</p>
<pre><code>def on_message(client, userdata, msg):
if msg.payload == "start":
p = subprocess.Popen(["sh", "stream.sh"])
if msg.payload == "stop":
p.terminate()
</code></pre>
| 3 | 2016-08-29T08:57:33Z | 39,202,259 | <p>You have to define <code>p</code> as global:</p>
<pre><code>def on_message(...):
global p
</code></pre>
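<p>A fuller sketch of the same fix (my elaboration; <code>sleep 30</code> stands in for the real <code>sh stream.sh</code>), which also guards against stopping before anything was started:</p>

```python
import subprocess

p = None  # module-level handle shared by all callbacks

def on_message(client, userdata, msg):
    global p
    if msg.payload == "start" and p is None:
        p = subprocess.Popen(["sleep", "30"])
    elif msg.payload == "stop" and p is not None:
        p.terminate()
        p.wait()      # reap the child so it does not linger as a zombie
        p = None

class FakeMsg(object):
    def __init__(self, payload):
        self.payload = payload

on_message(None, None, FakeMsg("start"))
print(p is not None)   # True: process running
on_message(None, None, FakeMsg("stop"))
print(p)               # None: process terminated and cleared
```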
| 1 | 2016-08-29T09:01:20Z | [
"python"
] |
Starting and stopping subprocess with Python | 39,202,206 | <p>I'm trying to start and stop a program on a device through <code>MQTT</code> commands, but it's not really working out the way I'm hoping.</p>
<p>To start the process, I'm using:</p>
<pre><code>p = subprocess.Popen(["sh", "process.sh"])
</code></pre>
<p>Which works out fine, it starts the program.</p>
<p>Further down the line in the code I'm trying to kill/terminate the program with either <code>p.kill</code> or <code>p.terminate</code>, but it raises the following error:</p>
<pre><code>p.terminate()
UnboundLocalError: local variable 'p' referenced before assignment
</code></pre>
<p>The code I'm working with is my own, and goes as follows:</p>
<pre><code>def on_message(client, userdata, msg):
if msg.payload == "start":
p = subprocess.Popen(["sh", "stream.sh"])
if msg.payload == "stop":
p.terminate()
</code></pre>
| 3 | 2016-08-29T08:57:33Z | 39,202,385 | <pre><code>p = None
def on_message(client, userdata, msg):
global p
if msg.payload == "start":
p = subprocess.Popen(["sh", "stream.sh"])
if msg.payload == "stop" and p:
p.terminate()
</code></pre>
| 2 | 2016-08-29T09:07:53Z | [
"python"
] |
SwigPyIterator value is always base class | 39,202,265 | <p>I have classes A, B, C in C++. B and C are derived from A. I also have a C++ function that returns a vector of B and C objects, like <code>std::vector&lt;A*&gt; getAllObjects()</code>. I use SWIG to generate the Python wrapper. Then I call getAllObjects() in Python as below:</p>
<pre><code>objects = getAllObjects()
for obj in objects:
if isinstance(obj, B):
print("OK")
elif isinstance(obj, C):
print("OK")
</code></pre>
<p>The object I get from the iterator is an instance of A, but it should be B or C. How can I resolve this?</p>
| 0 | 2016-08-29T09:01:37Z | 39,236,767 | <p>You need <em>something</em> more to go on than just a type hierarchy. Typically in a Python/SWIG scenario one of the following is sufficient:</p>
<ol>
<li>a <code>virtual</code> function in the base class (i.e. RTTI)</li>
<li>some member variable or function that somehow identifies the most derived type of a given instance (e.g. a common pattern in C is to include the first field of a struct being some kind of type identifier).</li>
<li>some kind of hook at object creation time, for example if you know that all instances will be Python created/owned</li>
</ol>
<p>I'm working to the assumption that the first type is sufficient, but even for other cases it's not hard to adapt. </p>
<p>To illustrate this I wrote the following header file:</p>
<pre class="lang-cpp prettyprint-override"><code>class A {
public:
virtual ~A() {}
};
class B : public A {
};
class C: public A {
};
</code></pre>
<p>Given this header file, in pure C++ we can do the following, making use of RTTI:</p>
<pre class="lang-cpp prettyprint-override"><code>#include "test.hh"
#include <typeinfo>
#include <iostream>
int main() {
const auto& t1 = typeid(A);
const auto& t2 = typeid(B);
const auto& t3 = typeid(C);
A *a = new A;
A *b = new B;
A *c = new C;
const auto& at = typeid(*a);
const auto& bt = typeid(*b);
const auto& ct = typeid(*c);
std::cout << t1.name() << "\n";
std::cout << t2.name() << "\n";
std::cout << t3.name() << "\n";
std::cout << at.name() << "\n";
std::cout << bt.name() << "\n";
std::cout << ct.name() << "\n";
}
</code></pre>
<p>This illustrates that the problem we're trying to solve (what type is it really?) is in fact solvable using standard C++.</p>
<p>It's worth noting at this point that the problem is made slightly more complicated by the use of the <code>std::vector</code> iteration instead of just a function that returns a single <code>A*</code>. If we were just working with the return value of a function we'd write a <code>typemap(out)</code>. In the case of the <code>std::vector<A*></code> however it is possible to customise the behaviour of the iteration returning and insert extra code to make sure Python is aware of the derived type and not just the base. SWIG has a type traits mechanism that most of the standard containers use to help them with common uses (e.g. iteration) without excessive duplication. (For reference this is in std_common.i I think).</p>
<p>So the basic plan is to hook into the output of the iteration process (<code>SwigPyIterator</code>, implemented as <code>SwigPyIteratorClosed_T</code> in this case), using the traits types that SWIG introduces for customising this. Inside that hook, instead of blindly using the SWIG type info for <code>A*</code> we'll use <code>typeid</code> to lookup the type dynamically in a <code>std::map</code>. This map is maintained internally to the module. If we find anything in that map we'll use it to return a more derived Python object, as a Python programmer would expect. Finally we need to register the types in the map at initialisation time. </p>
<p>So my interface ended up looking like this:</p>
<pre class="lang-cpp prettyprint-override"><code>%module test
%{
#include "test.hh"
#include <vector>
#include <map>
#include <string>
#include <typeindex> // C++11! - see: http://stackoverflow.com/a/9859605/168175
%}
%include <std_vector.i>
%{
namespace {
// Internal only, store the type mappings
std::map<std::type_index, swig_type_info*> aheirarchy;
}
namespace swig {
// Forward declare traits, the fragments they're from won't be there yet
template <class Type> struct traits_from_ptr;
template <class Type>
inline swig_type_info *type_info();
template <> struct traits_from_ptr<A> {
static PyObject *from(A *val, int owner = 0) {
auto ty = aheirarchy[typeid(*val)];
if (!ty) {
// if there's nothing in the map, do what SWIG would have done
ty = type_info<A>();
}
return SWIG_NewPointerObj(val, ty, owner);
}
};
}
%}
%template(AList) std::vector<A*>;
%inline %{
const std::vector<A*>& getAllObjects() {
// Demo only
static auto ret = std::vector<A*>{new A, new B, new C, new C, new B};
return ret;
}
%}
%include "test.hh"
%init %{
// Register B and C here
aheirarchy[typeid(B)] = SWIG_TypeQuery("B*");
aheirarchy[typeid(C)] = SWIG_TypeQuery("C*");
%}
</code></pre>
<p>The <code>%inline</code> function I wrote just to illustrate things is enough to get started. It allowed me to run the following Python test to demonstrate my solution:</p>
<pre><code>from test import getAllObjects, A, B, C
objects = getAllObjects()
for obj in objects:
print obj
if isinstance(obj, B):
print("OK")
elif isinstance(obj, C):
print("OK")
</code></pre>
<pre class="lang-none prettyprint-override"><code>swig3.0 -c++ -python -Wall test.i
g++ -std=c++11 -Wall test_wrap.cxx -o _test.so -shared -I/usr/include/python2.7/ -fPIC
python run.py
<test.A; proxy of <Swig Object of type 'A *' at 0xf7442950> >
<test.B; proxy of <Swig Object of type 'B *' at 0xf7442980> >
OK
<test.C; proxy of <Swig Object of type 'C *' at 0xf7442fb0> >
OK
<test.C; proxy of <Swig Object of type 'C *' at 0xf7442fc8> >
OK
<test.B; proxy of <Swig Object of type 'B *' at 0xf7442f98> >
OK
</code></pre>
<p>Which you'll notice matched the types created in my dummy implementation of <code>getAllObjects</code>.</p>
<p>You could do a few things more neatly:</p>
<ol>
<li>Add a macro for registering the types. (Or do it automatically some other way)</li>
<li>Add typemaps for regular returning of objects if needed.</li>
</ol>
<p>And as I indicated earlier this isn't the only way to solve this problem, just the most generic.</p>
| 2 | 2016-08-30T20:43:29Z | [
"python",
"c++",
"swig"
] |
Django REST Framework how to add context to a ViewSet | 39,202,380 | <p>The ViewSets do everything that I want, but I am finding that if I want to pass extra context to a template (with TemplateHTMLRenderer) then I will have to get at the functions that give responses.. (like list(), create(), etc)</p>
<p>The only way I can see to get into these is to completely redefine them in my ViewSet, but it seems that there should be an easy way to add a bit of context to the Template without having to redefine a whole set of methods...</p>
<pre><code>class LanguageViewSet(viewsets.ModelViewSet):
"""Viewset for Language objects, use the proper HTTP methods to modify them"""
# TODO: add permissions for this view?
queryset = Language.objects.all()
serializer_class = LanguageSerializer
filter_backends = (filters.DjangoFilterBackend, )
filter_fields = ('name', 'active')
</code></pre>
<p>Right now my code is looking like this, but I will be wanting to add different context to the responses, and I am trying to avoid redefining an entire method for such a small change, like this:</p>
<pre><code>class LanguageViewSet(viewsets.ModelViewSet):
"""Viewset for Language objects, use the proper HTTP methods to modify them"""
# TODO: add permissions for this view?
queryset = Language.objects.all()
serializer_class = LanguageSerializer
filter_backends = (filters.DjangoFilterBackend, )
filter_fields = ('name', 'active')
def list(self, **kwargs):
"""Redefinition of list"""
..blah blah everything that list does
return Response({"foo": "bar"}, template_name="index.html")
</code></pre>
| 0 | 2016-08-29T09:07:34Z | 39,378,225 | <p>Although I disagree with 'pleasedontbelong' in principle, I agree with him on the fact that the extra contextual data ought to be emitted from the serializer. That seems to be the cleanest way since the serializer would be returning a native python data type which all renderers would know how to render.</p>
<p>Here's how it would look:</p>
<p>ViewSet:</p>
<pre><code>class LanguageViewSet(viewsets.ModelViewSet):
queryset = Language.objects.all()
serializer_class = LanguageSerializer
filter_backends = (filters.DjangoFilterBackend, )
filter_fields = ('name', 'active')
def get_serializer_context(self):
context = super().get_serializer_context()
context['foo'] = 'bar'
return context
</code></pre>
<p>Serializer:</p>
<pre><code>class YourSerializer(serializers.Serializer):
field = serializers.CharField()
def to_representation(self, instance):
ret = super().to_representation(instance)
# Access self.context here to add contextual data into ret
ret['foo'] = self.context['foo']
return ret
</code></pre>
<p>Now, foo should be available inside your template.</p>
<p>Another way to achieve this, in case you don't wish to mess with your serializers, would be to create a custom TemplateHTMLRenderer.</p>
<pre><code>class TemplateHTMLRendererWithContext(TemplateHTMLRenderer):
def render(self, data, accepted_media_type=None, renderer_context=None):
# We can't really call super in this case, since we need to modify the inner working a bit
renderer_context = renderer_context or {}
view = renderer_context.pop('view')
request = renderer_context.pop('request')
response = renderer_context.pop('response')
view_kwargs = renderer_context.pop('kwargs')
view_args = renderer_context.pop('args')
if response.exception:
template = self.get_exception_template(response)
else:
template_names = self.get_template_names(response, view)
template = self.resolve_template(template_names)
        context = self.resolve_context(data, request, response, renderer_context)
return template_render(template, context, request=request)
def resolve_context(self, data, request, response, render_context):
if response.exception:
data['status_code'] = response.status_code
data.update(render_context)
return data
</code></pre>
<p>To add data into the context, ViewSets provide a <code>get_renderer_context</code> method.</p>
<pre><code>class LanguageViewSet(viewsets.ModelViewSet):
queryset = Language.objects.all()
serializer_class = LanguageSerializer
filter_backends = (filters.DjangoFilterBackend, )
filter_fields = ('name', 'active')
def get_renderer_context(self):
context = super().get_renderer_context()
context['foo'] = 'bar'
return context
</code></pre>
<p><code>{'foo': 'bar'}</code> should now be available in your template.</p>
| 0 | 2016-09-07T20:09:51Z | [
"python",
"django",
"django-rest-framework"
] |
Indexing into the last axis of a 3D array with another 3D array | 39,202,509 | <p>If I have an array <strong>x</strong> with shape <strong>(2,3,4)</strong> and the following values:</p>
<pre><code>array([[[ 0.15845319, 0.57808432, 0.05638804, 0.56237656],
[ 0.73164208, 0.80562342, 0.64561066, 0.15397456],
[ 0.34734043, 0.88063258, 0.4863103 , 0.09881028]],
[[ 0.35823078, 0.71260357, 0.49410944, 0.94909165],
[ 0.02730397, 0.67890392, 0.74340148, 0.47434223],
[ 0.02494292, 0.59827256, 0.20550867, 0.30859339]]])
</code></pre>
<p>and an index array <strong>y</strong> with shape <strong>(2, 3, 3)</strong> and the following values:</p>
<pre><code>array([[[0, 2, 2],
[2, 0, 2],
[0, 0, 2]],
[[1, 2, 1],
[1, 1, 1],
[1, 2, 2]]])
</code></pre>
<p>Then I can use <strong>x[0,0,y[0][0]]</strong> to index the array <strong>x</strong>, which generates the following output:</p>
<pre><code>array([ 0.15845319, 0.05638804, 0.05638804])
</code></pre>
<p>My question is: is there any simple way to do this? I have already tried
<strong>x[y]</strong>, but it did not work.</p>
| 1 | 2016-08-29T09:14:26Z | 39,203,280 | <p>You could use <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#combining-advanced-and-basic-indexing" rel="nofollow"><code>fancy-indexing</code></a> -</p>
<pre><code>m,n = y.shape[:2]
out = x[np.arange(m)[:,None,None],np.arange(n)[:,None],y]
</code></pre>
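<p>For reference (my addition, not part of the original answer), newer NumPy (1.15+) ships a helper that does exactly this kind of last-axis lookup, which can be checked against the fancy-indexing version:</p>

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)
y = np.array([[[0, 2, 2], [2, 0, 2], [0, 0, 2]],
              [[1, 2, 1], [1, 1, 1], [1, 2, 2]]])

m, n = y.shape[:2]
out = x[np.arange(m)[:, None, None], np.arange(n)[:, None], y]

# Same result with the helper added in NumPy 1.15
out2 = np.take_along_axis(x, y, axis=-1)
print(np.array_equal(out, out2))  # True
```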
| 1 | 2016-08-29T09:50:31Z | [
"python",
"arrays",
"numpy"
] |
Gunicorn logstash log handler | 39,202,520 | <p>my log config file</p>
<pre><code>[loggers]
keys=root, logstash
[handlers]
keys=console , logstash
[formatters]
keys=generic, access
[logger_root]
level=INFO
handlers=console
[logger_logstash]
level=DEBUG
handlers=logstash
propagate=1
qualname=logstash
[handler_console]
class=StreamHandler
formatter=generic
args=(sys.stdout, )
[handler_logstash]
class=logstash.TCPLogstashHandler
formatter=generic
args=('localhost',5959)
[formatter_generic]
format=%(asctime)s [%(process)d] [%(levelname)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
class=logging.Formatter
[formatter_access]
format=%(message)s
class=logging.Formatter
</code></pre>
<p>my command to execute</p>
<pre><code>gunicorn --env DJANGO_SETTINGS_MODULE=myproject.settings myproject.wsgi --log-level debug --log-file=- --log-config gunicorn_log.conf
</code></pre>
<p>I am not getting any error, but logstash is not receiving access logs.
This handler worked with Django and Celery, but I am stuck with gunicorn.</p>
| 0 | 2016-08-29T09:15:00Z | 39,205,441 | <p>Based on <a href="https://github.com/vklochan/python-logstash#using-with-django" rel="nofollow">python-logstash documentation</a>:</p>
<pre><code>[handler_logstash]
class=logstash.TCPLogstashHandler
formatter=generic
host=localhost
port=5959
version=1
</code></pre>
| 0 | 2016-08-29T11:40:10Z | [
"python",
"django",
"logging",
"logstash",
"gunicorn"
] |
Gunicorn logstash log handler | 39,202,520 | <p>my log config file</p>
<pre><code>[loggers]
keys=root, logstash
[handlers]
keys=console , logstash
[formatters]
keys=generic, access
[logger_root]
level=INFO
handlers=console
[logger_logstash]
level=DEBUG
handlers=logstash
propagate=1
qualname=logstash
[handler_console]
class=StreamHandler
formatter=generic
args=(sys.stdout, )
[handler_logstash]
class=logstash.TCPLogstashHandler
formatter=generic
args=('localhost',5959)
[formatter_generic]
format=%(asctime)s [%(process)d] [%(levelname)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
class=logging.Formatter
[formatter_access]
format=%(message)s
class=logging.Formatter
</code></pre>
<p>my command to execute</p>
<pre><code>gunicorn --env DJANGO_SETTINGS_MODULE=myproject.settings myproject.wsgi --log-level debug --log-file=- --log-config gunicorn_log.conf
</code></pre>
<p>I am not getting any error, but logstash is not receiving access logs.
This handler worked with Django and Celery, but I am stuck with gunicorn.</p>
| 0 | 2016-08-29T09:15:00Z | 39,264,895 | <pre><code>[loggers]
keys=root, logstash.error, logstash.access
[handlers]
keys=console , logstash
[formatters]
keys=generic, access, json
[logger_root]
level=INFO
handlers=console
[logger_logstash.error]
level=DEBUG
handlers=logstash
propagate=1
qualname=gunicorn.error
[logger_logstash.access]
level=DEBUG
handlers=logstash
propagate=0
qualname=gunicorn.access
[handler_console]
class=StreamHandler
formatter=generic
args=(sys.stdout, )
[handler_logstash]
class=logstash.TCPLogstashHandler
formatter=json
args=('localhost',5959)
[formatter_generic]
format=%(asctime)s [%(process)d] [%(levelname)s] %(message)s
datefmt=%Y-%m-%d %H:%M:%S
class=logging.Formatter
[formatter_access]
format=%(message)s
class=logging.Formatter
[formatter_json]
class=jsonlogging.JSONFormatter
</code></pre>
<p>The above config file worked for me
to send logs to the logstash instance listening on localhost:5959.</p>
| 0 | 2016-09-01T07:10:32Z | [
"python",
"django",
"logging",
"logstash",
"gunicorn"
] |
Unique dictionary on basis of multiple keys | 39,202,572 | <p>I have a list of dicts with different "types" -> modified and deleted.
I want to deduplicate it.</p>
<pre><code>myDict =
[
{'type': 'deleted', 'target': {'id': u'1', 'foo': {'value': ''}}},
{'type': 'modified', 'target': {'id': u'1', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'1', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'2', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'2', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'2', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'3', 'foo': {'value': ''}}},
{'type': 'modified', 'target': {'id': u'3', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'3', 'foo': {'value': ''}}}
]
</code></pre>
<p>To achieve a unique list I do this:</p>
<pre><code>dict((v['target']['id'],v) for v in myDict).values()
[
{'type': 'deleted', 'target': {'foo': {'value': ''}, 'id': u'1'}},
{'type': 'deleted', 'target': {'foo': {'value': ''}, 'id': u'2'}},
{'type': 'deleted', 'target': {'foo': {'value': ''}, 'id': u'3'}}
]
</code></pre>
<p>How can I get a unique list on the basis of a "couple of keys"?</p>
<p>I need both 'types'. My expected result is:</p>
<pre><code>[
{'type': 'deleted', 'target': {'foo': {'value': ''}, 'id': u'1'}},
{'type': 'modified', 'target': {'foo': {'value': ''}, 'id': u'1'}},
{'type': 'deleted', 'target': {'foo': {'value': ''}, 'id': u'2'}},
{'type': 'modified', 'target': {'foo': {'value': ''}, 'id': u'2'}},
{'type': 'deleted', 'target': {'foo': {'value': ''}, 'id': u'3'}}
]
</code></pre>
| 0 | 2016-08-29T09:17:19Z | 39,202,755 | <p>Not sure I've understood your question, but is this what you want?</p>
<pre><code>from collections import defaultdict
import json
my_list = [
{'type': 'deleted', 'target': {'id': u'1', 'foo': {'value': ''}}},
{'type': 'modified', 'target': {'id': u'1', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'1', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'2', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'2', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'2', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'3', 'foo': {'value': ''}}},
{'type': 'modified', 'target': {'id': u'3', 'foo': {'value': ''}}},
{'type': 'deleted', 'target': {'id': u'3', 'foo': {'value': ''}}}
]
out = defaultdict(set)
for v in my_list:
out[v["type"]].add(json.dumps(v["target"], sort_keys=True))
result = []
for k, v in out.iteritems():
for vv in out[k]:
result.append({
"type": k,
"target": json.loads(vv)
})
print out
print len(out["deleted"])
print len(out["modified"])
</code></pre>
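<p>An alternative sketch (my variable names; assuming, as in the sample data, that the <code>target</code> dicts for a given id are interchangeable): key a plain dict on the <code>(type, id)</code> pair, so exactly one entry survives per combination:</p>

```python
my_list = [
    {'type': 'deleted', 'target': {'id': u'1', 'foo': {'value': ''}}},
    {'type': 'modified', 'target': {'id': u'1', 'foo': {'value': ''}}},
    {'type': 'deleted', 'target': {'id': u'1', 'foo': {'value': ''}}},
    {'type': 'deleted', 'target': {'id': u'2', 'foo': {'value': ''}}},
    {'type': 'deleted', 'target': {'id': u'3', 'foo': {'value': ''}}},
    {'type': 'modified', 'target': {'id': u'3', 'foo': {'value': ''}}},
]

# Keep the first entry seen for each (type, id) combination.
unique = {}
for entry in my_list:
    key = (entry['type'], entry['target']['id'])
    if key not in unique:
        unique[key] = entry

result = sorted(unique.values(),
                key=lambda e: (e['target']['id'], e['type']))
```

<p>This yields one row per <code>(type, id)</code> pair, matching the expected output in the question.</p>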
| 1 | 2016-08-29T09:26:38Z | [
"python",
"dictionary"
] |
how to auto check check box by entering value in other field? | 39,202,584 | <p>I have made a module for hotel room management and I want to check room availability in Python. I have taken one checkbox named <code>available = fields.Boolean()</code>. Now, I want it to be automatically checked when I enter check_in_time. So, how can I do this? I have taken
<code>check_in_time = fields.Datetime()</code>. I have already tried like this:</p>
<pre class="lang-py prettyprint-override"><code>class Hotel_Management(models.Model):
    check_in_time = fields.Datetime()
available = fields.Boolean()
@api.onchange('check_in_time')
def auto_check(self):
for a in self:
if a.check_in_time:
a.available=True
print a.available
else:
print a.available
</code></pre>
<p>But it changes the value of the checkbox only in the terminal, not in the database; it takes the "false" value by default every time. So, tell me how to change the value of the checkbox in the database?</p>
| 0 | 2016-08-29T09:17:58Z | 39,205,130 | <p>Your code is right. The value will be saved upon the "Save" action in the client. <code>onchange</code> only sets the value virtually on the client side; <code>depends</code> would instead write directly into the database. But <code>onchange</code> should be enough here.</p>
| 1 | 2016-08-29T11:25:44Z | [
"python",
"openerp"
] |
Numpy rolling window over 2D array, as a 1D array with nested array as data values | 39,202,636 | <p>When using <code>np.lib.stride_tricks.as_strided</code>, how can I manage a 2D array with the nested arrays as data values? Is there a preferable <strong>efficient</strong> approach?</p>
<p>Specifically, if I have a 2D <code>np.array</code> looking as follows, where each data item in a 1D array is an array of length 2:</p>
<pre><code>[[1., 2.],[3., 4.],[5.,6.],[7.,8.],[9.,10.]...]
</code></pre>
<p>I want to reshape for rolling over as follows:</p>
<pre><code>[[[1., 2.],[3., 4.],[5.,6.]],
[[3., 4.],[5.,6.],[7.,8.]],
[[5.,6.],[7.,8.],[9.,10.]],
...
]
</code></pre>
<p>I have had a look at similar answers (e.g. <a href="http://stackoverflow.com/a/6811241/6334309">this rolling window function</a>), however in use I cannot leave the inner array/tuples untouched.</p>
<p>For example with a window length of <code>3</code>: I have tried a <code>shape</code> of <code>(len(seq)+3-1, 3, 2)</code> and a <code>stride</code> of <code>(2 * 8, 2 * 8, 8)</code>, but no luck. Maybe I am missing something obvious?</p>
<p>Cheers.</p>
<hr>
<p><strong>EDIT:</strong> It is easy to produce a functionally identical solution using Python built-ins (which can be optimised using e.g. <code>np.arange</code> similar to Divakar's solution), however, what about using <code>as_strided</code>? From my understanding, this could be used for a highly efficient solution?</p>
| 0 | 2016-08-29T09:20:44Z | 39,203,586 | <p>IIUC you could do something like this -</p>
<pre><code>def rolling_window2D(a,n):
# a: 2D Input array
# n: Group/sliding window length
return a[np.arange(a.shape[0]-n+1)[:,None] + np.arange(n)]
</code></pre>
<p>Sample run -</p>
<pre><code>In [110]: a
Out[110]:
array([[ 1, 2],
[ 3, 4],
[ 5, 6],
[ 7, 8],
[ 9, 10]])
In [111]: rolling_window2D(a,3)
Out[111]:
array([[[ 1, 2],
[ 3, 4],
[ 5, 6]],
[[ 3, 4],
[ 5, 6],
[ 7, 8]],
[[ 5, 6],
[ 7, 8],
[ 9, 10]]])
</code></pre>
| 1 | 2016-08-29T10:05:19Z | [
"python",
"arrays",
"performance",
"numpy",
"sliding-window"
] |
Numpy rolling window over 2D array, as a 1D array with nested array as data values | 39,202,636 | <p>When using <code>np.lib.stride_tricks.as_strided</code>, how can I manage a 2D array with the nested arrays as data values? Is there a preferable <strong>efficient</strong> approach?</p>
<p>Specifically, if I have a 2D <code>np.array</code> looking as follows, where each data item in a 1D array is an array of length 2:</p>
<pre><code>[[1., 2.],[3., 4.],[5.,6.],[7.,8.],[9.,10.]...]
</code></pre>
<p>I want to reshape for rolling over as follows:</p>
<pre><code>[[[1., 2.],[3., 4.],[5.,6.]],
[[3., 4.],[5.,6.],[7.,8.]],
[[5.,6.],[7.,8.],[9.,10.]],
...
]
</code></pre>
<p>I have had a look at similar answers (e.g. <a href="http://stackoverflow.com/a/6811241/6334309">this rolling window function</a>), however in use I cannot leave the inner array/tuples untouched.</p>
<p>For example with a window length of <code>3</code>: I have tried a <code>shape</code> of <code>(len(seq)+3-1, 3, 2)</code> and a <code>stride</code> of <code>(2 * 8, 2 * 8, 8)</code>, but no luck. Maybe I am missing something obvious?</p>
<p>Cheers.</p>
<hr>
<p><strong>EDIT:</strong> It is easy to produce a functionally identical solution using Python built-ins (which can be optimised using e.g. <code>np.arange</code> similar to Divakar's solution), however, what about using <code>as_strided</code>? From my understanding, this could be used for a highly efficient solution?</p>
| 0 | 2016-08-29T09:20:44Z | 39,210,101 | <p>What was wrong with your <code>as_strided</code> trial? It works for me.</p>
<pre><code>In [28]: x=np.arange(1,11.).reshape(5,2)
In [29]: x.shape
Out[29]: (5, 2)
In [30]: x.strides
Out[30]: (16, 8)
In [31]: np.lib.stride_tricks.as_strided(x,shape=(3,3,2),strides=(16,16,8))
Out[31]:
array([[[ 1., 2.],
[ 3., 4.],
[ 5., 6.]],
[[ 3., 4.],
[ 5., 6.],
[ 7., 8.]],
[[ 5., 6.],
[ 7., 8.],
[ 9., 10.]]])
</code></pre>
<p>On my first edit I used an <code>int</code> array, so had to use <code>(8,8,4)</code> for the strides.</p>
<p>Your shape could be wrong. If too large it starts seeing values off the end of the data buffer.</p>
<pre><code> [[ 7.00000000e+000, 8.00000000e+000],
[ 9.00000000e+000, 1.00000000e+001],
[ 8.19968827e-257, 5.30498948e-313]]])
</code></pre>
<p>Here it just alters the display method, the <code>7, 8, 9, 10</code> are still there. Writing to those slots could be dangerous, messing up other parts of your code. <code>as_strided</code> is best used for read-only purposes. Writes/sets are trickier.</p>
| 1 | 2016-08-29T15:38:26Z | [
"python",
"arrays",
"performance",
"numpy",
"sliding-window"
] |
Can't run crontab without executing the program once | 39,202,652 | <p>I wrote a <code>script.py</code> which should open a <code>Tkinter</code> window on a Raspberry Pi:</p>
<pre><code>from Tkinter import *
import turtle
import math
import time
import sys
import os
root = Tk()
root.config(cursor="none")
ccanvas = Canvas(root, width = 800, height = 480)
root.overrideredirect(1)
turtle_screen = turtle.TurtleScreen(ccanvas)
ccanvas.pack()
turtle = turtle.RawTurtle(turtle_screen)
turtle.hideturtle()
mainloop()
</code></pre>
<p>I am able to run the script from the command line with:</p>
<pre><code>python /home/pi/script.py
</code></pre>
<p>When I tried to run it via <code>crontab</code> first the Display was not found. I fixed that with:</p>
<pre><code>DISPLAY=:0 python /home/pi/script.py
</code></pre>
<p>But now I get the following error: <code>_tkinter.TclError: couldn't connect to display ":0"</code> until I execute <code>script.py</code> once manually in the cmd. Then the <code>crontab</code> is able to execute <code>script.py</code> without that error. How can I fix that?</p>
| 1 | 2016-08-29T09:21:29Z | 39,221,366 | <p>Finally solved my problem. Everything was fine, but I was using the <code>root crontab</code>. The <code>root crontab</code> was not able to find the display until the display had been used by another command. I transferred my <code>cronjobs</code> to the "normal" <code>crontab</code> and everything works fine.
Another point: commands that need the display (for example Tkinter) do not work if you start them <code>@reboot</code>. You have to add some sleep time (~30 seconds) in your script, so the display has time to become available.</p>
<pre><code>import time
time.sleep(30)
...
</code></pre>
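<p>A more robust sketch than a fixed sleep (the function name and timeout are mine) is to poll until the X server answers <code>xset q</code>, and only then start the GUI code:</p>

```python
import os
import subprocess
import time

def wait_for_display(display=":0", timeout=60, interval=2):
    """Return True once `xset q` succeeds against the display, i.e. the
    X server accepts connections; give up after `timeout` seconds."""
    env = dict(os.environ, DISPLAY=display)
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if subprocess.call(["xset", "q"], env=env,
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.STDOUT) == 0:
                return True
        except OSError:  # xset itself is not installed
            return False
        time.sleep(interval)
    return False
```

<p>Calling <code>wait_for_display()</code> at the top of <code>script.py</code> replaces the blind <code>time.sleep(30)</code> with an explicit readiness check.</p>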
| 0 | 2016-08-30T07:29:39Z | [
"python",
"tkinter",
"crontab",
"raspberry-pi3"
] |
I got an error while using "heroku open" command | 39,202,673 | <p>I've created a Python app on Heroku.
After pushing, I ran the <code>heroku open</code> command and got this error in the browser:</p>
<pre><code>Application Error
An error occurred in the application and your page could not be served. Please try again in a few moments.
If you are the application owner, check your logs for details.
</code></pre>
<p>I checked my logs using the command <code>heroku logs</code>, and got:</p>
<pre><code>2016-08-29T09:53:21.255789+00:00 heroku[api]: Enable Logplex by aparnabalagopal328@gmail.com
2016-08-29T09:53:21.255849+00:00 heroku[api]: Release v2 created by aparnabalagopal328@gmail.com
2016-08-29T09:54:11.279935+00:00 heroku[api]: Attach DATABASE (@ref:postgresql-parallel-41683) by aparnabalagopal328@gmail.com
2016-08-29T09:54:11.280127+00:00 heroku[api]: Release v3 created by aparnabalagopal328@gmail.com
2016-08-29T09:54:11.802177+00:00 heroku[api]: Scale to web=1 by aparnabalagopal328@gmail.com
2016-08-29T09:54:11.802730+00:00 heroku[api]: Deploy 07b053c by aparnabalagopal328@gmail.com
2016-08-29T09:54:11.802825+00:00 heroku[api]: Release v4 created by aparnabalagopal328@gmail.com
2016-08-29T09:54:12.230977+00:00 heroku[slug-compiler]: Slug compilation started
2016-08-29T09:54:12.230983+00:00 heroku[slug-compiler]: Slug compilation finished
2016-08-29T09:54:14.053809+00:00 heroku[web.1]: Starting process with command `gunicorn demoproject.wsgi --log-file -`
2016-08-29T09:54:15.687774+00:00 heroku[web.1]: Process exited with status 127
2016-08-29T09:54:15.625418+00:00 app[web.1]: bash: gunicorn: command not found
2016-08-29T09:54:15.698702+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-29T09:54:15.699101+00:00 heroku[web.1]: State changed from crashed to starting
2016-08-29T09:54:18.068372+00:00 heroku[web.1]: Starting process with command `gunicorn demoproject.wsgi --log-file -`
2016-08-29T09:54:20.311898+00:00 app[web.1]: bash: gunicorn: command not found
2016-08-29T09:54:20.422709+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-29T09:54:20.399025+00:00 heroku[web.1]: Process exited with status 127
2016-08-29T09:54:29.340320+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=desolate-dawn-92822.herokuapp.com request_id=4f6e1b38-3ceb-4fc6-94ae-3f9d714fc0cc fwd="106.66.160.62" dyno= connect= service= status=503 bytes=
2016-08-29T09:54:31.384865+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=desolate-dawn-92822.herokuapp.com request_id=c8a6001a-02b6-489f-b705-84aceff70b54 fwd="106.66.160.62" dyno= connect= service= status=503 bytes=
2016-08-29T09:58:56.842452+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=desolate-dawn-92822.herokuapp.com request_id=af27a7c5-2821-4fe7-9477-2cdc532a3336 fwd="106.66.160.62" dyno= connect= service= status=503 bytes=
2016-08-29T10:08:42.553637+00:00 heroku[api]: Deploy 85408b3 by aparnabalagopal328@gmail.com
2016-08-29T10:08:42.553679+00:00 heroku[api]: Release v5 created by aparnabalagopal328@gmail.com
2016-08-29T10:08:42.740177+00:00 heroku[slug-compiler]: Slug compilation started
2016-08-29T10:08:42.740186+00:00 heroku[slug-compiler]: Slug compilation finished
2016-08-29T10:08:43.684450+00:00 heroku[web.1]: State changed from crashed to starting
2016-08-29T10:08:46.465307+00:00 heroku[web.1]: Starting process with command `gunicorn myapp.wsgi`
2016-08-29T10:08:48.576484+00:00 app[web.1]: bash: gunicorn: command not found
2016-08-29T10:08:48.707427+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-29T10:08:48.708462+00:00 heroku[web.1]: State changed from crashed to starting
2016-08-29T10:08:48.693847+00:00 heroku[web.1]: Process exited with status 127
2016-08-29T10:08:51.121516+00:00 heroku[web.1]: Starting process with command `gunicorn myapp.wsgi`
2016-08-29T10:08:53.420967+00:00 app[web.1]: bash: gunicorn: command not found
2016-08-29T10:08:53.485832+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-29T10:08:53.480263+00:00 heroku[web.1]: Process exited with status 127
2016-08-29T10:08:54.151339+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=desolate-dawn-92822.herokuapp.com request_id=64a8fc46-fe45-4cb9-8d4b-66ef1ce06613 fwd="106.66.160.62" dyno= connect= service= status=503 bytes=
2016-08-29T10:10:27.972626+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=desolate-dawn-92822.herokuapp.com request_id=ed27deb7-9638-4bd4-8dde-1a067e721bd8 fwd="106.66.160.62" dyno= connect= service= status=503 bytes=
2016-08-29T10:25:36.636851+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=desolate-dawn-92822.herokuapp.com request_id=294294a1-d3fb-415c-a241-a5dd5856ca1c fwd="106.66.160.62" dyno= connect= service= status=503 bytes=
2016-08-29T10:34:37.538080+00:00 heroku[web.1]: State changed from crashed to starting
2016-08-29T10:34:40.686186+00:00 heroku[web.1]: Starting process with command `gunicorn myapp.wsgi`
2016-08-29T10:34:42.873466+00:00 heroku[web.1]: Process exited with status 127
2016-08-29T10:34:42.915769+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-29T10:34:42.787502+00:00 app[web.1]: bash: gunicorn: command not found
2016-08-29T11:25:00.207326+00:00 heroku[web.1]: State changed from crashed to starting
2016-08-29T11:25:02.610880+00:00 heroku[web.1]: Starting process with command `gunicorn myapp.wsgi`
2016-08-29T11:25:04.177373+00:00 app[web.1]: bash: gunicorn: command not found
2016-08-29T11:25:04.259289+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-29T11:25:04.244018+00:00 heroku[web.1]: Process exited with status 127
2016-08-29T12:49:30.399226+00:00 heroku[web.1]: State changed from crashed to starting
2016-08-29T12:49:32.974142+00:00 heroku[web.1]: Starting process with command `gunicorn myapp.wsgi`
2016-08-29T12:49:34.619515+00:00 app[web.1]: bash: gunicorn: command not found
2016-08-29T12:49:34.664374+00:00 heroku[web.1]: Process exited with status 127
2016-08-29T12:49:34.654539+00:00 heroku[web.1]: State changed from starting to crashed
2016-08-29T14:27:57.820573+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=desolate-dawn-92822.herokuapp.com request_id=e19b82f3-ee8d-48ba-81d3-f8b9f931ec08 fwd="106.66.175.169" dyno= connect= service= status=503 bytes=
2016-08-29T14:27:58.187626+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=desolate-dawn-92822.herokuapp.com request_id=846c1921-f4e4-4fd6-bf4d-f8d3b3c09d84 fwd="106.66.175.169" dyno= connect= service= status=503 bytes=
2016-08-29T14:28:11.624391+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=desolate-dawn-92822.herokuapp.com request_id=1104d918-c899-4ba8-a49c-90941ece26cb fwd="106.66.175.169" dyno= connect= service= status=503 bytes=
</code></pre>
<p>The link address of my app is
<a href="https://desolate-dawn-92822.herokuapp.com/" rel="nofollow">https://desolate-dawn-92822.herokuapp.com/</a></p>
| 0 | 2016-08-29T09:22:50Z | 39,210,363 | <p>It seems that <code>gunicorn</code> is missing. You should add <code>gunicorn</code> to your <code>requirements.txt</code> and deploy your project again to Heroku. </p>
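<p>For reference, a minimal sketch of the two files Heroku needs. The module path in the <code>Procfile</code> must match your project (the logs above show both <code>demoproject.wsgi</code> and <code>myapp.wsgi</code> being tried), and <code>requirements.txt</code> must list every package the dyno has to install, including the server itself:</p>

```text
# requirements.txt
Django
gunicorn

# Procfile
web: gunicorn demoproject.wsgi --log-file -
```

<p>After committing both files and pushing again, the slug is rebuilt with gunicorn installed and the <code>bash: gunicorn: command not found</code> crash should disappear.</p>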
| 0 | 2016-08-29T15:53:20Z | [
"python",
"git",
"heroku"
] |
Permutations of a list split into two strings | 39,202,734 | <p>Using the following list as an example :</p>
<pre><code>a = ["cat", "dog", "mouse", "rat", "horse"]
</code></pre>
<p>I can get all permutations using <code>itertools.permutations</code></p>
<pre><code>print (list(itertools.permutations(a, 2)))
[('cat', 'dog'), ('cat', 'mouse'), ('cat', 'rat'), ('cat', 'horse'), ('dog', 'cat'), ('dog', 'mouse'), ('dog', 'rat'), ('dog', 'horse'), ('mouse', 'cat'), ('mouse', 'dog'), ('mouse', 'rat'), ('mouse', 'horse'), ('rat', 'cat'), ('rat', 'dog'), ('rat', 'mouse'), ('rat', 'horse'), ('horse', 'cat'), ('horse', 'dog'), ('horse', 'mouse'), ('horse', 'rat')]
</code></pre>
<p>But what if I need to get this output in the following format which is a list of lists containing all items split in two strings as follows:</p>
<pre><code>[["cat", "dog mouse rat horse"], ["cat dog", "mouse rat horse"], ["cat dog mouse", "rat horse"], ["cat dog mouse rat", "horse"]]
</code></pre>
| 0 | 2016-08-29T09:25:30Z | 39,202,863 | <p>Slicing the input list and joining the pieces can do the job for you.</p>
<pre><code>seq = ["cat", "dog", "mouse", "rat", "horse"]
seq2 = [[' '.join(seq[:i+1]), ' '.join(seq[i+1:])] for i in range(len(seq)-1)]
# [['cat', 'dog mouse rat horse'], ['cat dog', 'mouse rat horse'], ['cat dog mouse', 'rat horse'], ['cat dog mouse rat', 'horse']]
</code></pre>
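<p>The same idea wrapped in a small helper (the names here are mine), which makes the boundary cases easy to check:</p>

```python
def split_pairs(words):
    """All ways to split `words` into a non-empty left and right part,
    each part joined into one space-separated string."""
    return [[" ".join(words[:i]), " ".join(words[i:])]
            for i in range(1, len(words))]

pairs = split_pairs(["cat", "dog", "mouse", "rat", "horse"])
# pairs[0] == ['cat', 'dog mouse rat horse']; len(pairs) == 4
```

<p>A list of n words produces n-1 splits, since the cut point runs from after the first word to before the last.</p>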
| 1 | 2016-08-29T09:31:26Z | [
"python"
] |
Permutations of a list split into two strings | 39,202,734 | <p>Using the following list as an example :</p>
<pre><code>a = ["cat", "dog", "mouse", "rat", "horse"]
</code></pre>
<p>I can get all permutations using <code>itertools.permutations</code></p>
<pre><code>print (list(itertools.permutations(a, 2)))
[('cat', 'dog'), ('cat', 'mouse'), ('cat', 'rat'), ('cat', 'horse'), ('dog', 'cat'), ('dog', 'mouse'), ('dog', 'rat'), ('dog', 'horse'), ('mouse', 'cat'), ('mouse', 'dog'), ('mouse', 'rat'), ('mouse', 'horse'), ('rat', 'cat'), ('rat', 'dog'), ('rat', 'mouse'), ('rat', 'horse'), ('horse', 'cat'), ('horse', 'dog'), ('horse', 'mouse'), ('horse', 'rat')]
</code></pre>
<p>But what if I need to get this output in the following format which is a list of lists containing all items split in two strings as follows:</p>
<pre><code>[["cat", "dog mouse rat horse"], ["cat dog", "mouse rat horse"], ["cat dog mouse", "rat horse"], ["cat dog mouse rat", "horse"]]
</code></pre>
| 0 | 2016-08-29T09:25:30Z | 39,202,955 | <p>This would provide the desired output from the initial question:</p>
<pre><code>a = ["cat", "dog", "mouse", "rat", "horse"]
print[[" ".join(a[0:i]), " ".join(a[i:])] for i in range(1, len(a))]
</code></pre>
| 2 | 2016-08-29T09:35:21Z | [
"python"
] |
Permutations of a list split into two strings | 39,202,734 | <p>Using the following list as an example :</p>
<pre><code>a = ["cat", "dog", "mouse", "rat", "horse"]
</code></pre>
<p>I can get all permutations using <code>itertools.permutations</code></p>
<pre><code>print (list(itertools.permutations(a, 2)))
[('cat', 'dog'), ('cat', 'mouse'), ('cat', 'rat'), ('cat', 'horse'), ('dog', 'cat'), ('dog', 'mouse'), ('dog', 'rat'), ('dog', 'horse'), ('mouse', 'cat'), ('mouse', 'dog'), ('mouse', 'rat'), ('mouse', 'horse'), ('rat', 'cat'), ('rat', 'dog'), ('rat', 'mouse'), ('rat', 'horse'), ('horse', 'cat'), ('horse', 'dog'), ('horse', 'mouse'), ('horse', 'rat')]
</code></pre>
<p>But what if I need to get this output in the following format which is a list of lists containing all items split in two strings as follows:</p>
<pre><code>[["cat", "dog mouse rat horse"], ["cat dog", "mouse rat horse"], ["cat dog mouse", "rat horse"], ["cat dog mouse rat", "horse"]]
</code></pre>
| 0 | 2016-08-29T09:25:30Z | 39,202,959 | <p>You may use the below code to achieve this:</p>
<pre><code>>>> a = ["cat", "dog", "mouse", "rat", "horse"]
>>> my_perm_list = []
>>> for i in range(len(a)-1):
... my_perm_list.append((' '.join(a[:i+1]), ' '.join(a[i+1:])))
...
>>> my_perm_list
[('cat', 'dog mouse rat horse'), ('cat dog', 'mouse rat horse'), ('cat dog mouse', 'rat horse'), ('cat dog mouse rat', 'horse')]
</code></pre>
| 1 | 2016-08-29T09:35:27Z | [
"python"
] |
Python automate key press of a QtGUI | 39,202,870 | <p>I have a python script with the section below:</p>
<pre><code>for index in range(1,10):
os.system('./test')
os.system('xdotool key Return')
</code></pre>
<p>What I want to do is to run the executable ./test, which brings up a Qt GUI. In this GUI, a key press prompt button comes up. I want to automate this key press of the GUI so that the executable continues.</p>
<p>My Python script runs the executable and the GUI prompt comes up, but the key press is not sent until after the executable exits. Is there any way to fix this?</p>
| 0 | 2016-08-29T09:31:45Z | 39,204,653 | <p><code>os.system</code> doesn't return until the child process exits.
You need <code>subprocess.Popen</code>.
It's also a good idea to <code>sleep</code> a while before sending keystrokes
(it may take some time for the child process to get ready to accept user input):</p>
<pre><code>from subprocess import Popen
from time import sleep
for index in range(1,10):
# start the child
process = Popen('./test')
# sleep to allow the child to draw its buttons
sleep(.333)
# send the keystroke
os.system('xdotool key Return')
# wait till the child exits
process.wait()
</code></pre>
<p>I'm not sure that you need the last line. If all <strong>9</strong> child processes are supposed to stay alive -- remove it.</p>
| 0 | 2016-08-29T11:01:54Z | [
"python",
"automation",
"qtgui",
"xdotool"
] |
Cannot initialize python function | 39,202,874 | <p>I have (re)written a back-test function in Python using pandas:</p>
<pre><code>def backtest(positions,price,initial_capital=10000):
#creating protfolio
portfolio =positions*price['price']
pos_diff=positions.diff()
#creating holidings
portfolio['holidings']=(positions*price['price'].sum(axis=1)
portfolio['cash']=initial_capital-(pos_diff*price['price']).sum(axis=1).cumsum()
#full account equity
portfolio['total']=portfolio['cash']+ portfolio['holidings']
portfolio['return']=portfolio['total'].pct_change()
return portfolio
</code></pre>
<p>where <strong>positions</strong> and <strong>price</strong> are dataframes of 1 column and 5 columns respectively.</p>
<p>In order to check for errors I ran this function alone in Python, but it returns this error:</p>
<pre><code>File "", line 8
portfolio['cash']=initial_capital-(pos_diff*price['price']).sum(axis=1).cumsum()
SyntaxError: invalid syntax
</code></pre>
| 0 | 2016-08-29T09:31:55Z | 39,202,924 | <p>missing trailing parenthesis on line before:</p>
<pre><code>portfolio['holidings']=(positions*price['price'].sum(axis=1)
^ need ) here
</code></pre>
<p>should be:</p>
<pre><code>portfolio['holidings']=(positions*price['price']).sum(axis=1)
</code></pre>
<p>Whenever you get a syntax error, look at the line before it when the reported line itself looks fine and the error doesn't make sense.</p>
| 2 | 2016-08-29T09:34:19Z | [
"python",
"pandas"
] |
How to set result of future\add to queue from another thread in Cython? | 39,202,891 | <p>I have a C++ DLL which works with multiple threads.
So I wrapped this library with Cython and created a special receiver callback function, which must add some results to an asyncio.Queue.</p>
<pre><code>cdef void __cdecl NewMessage(char* message) nogil:
</code></pre>
<p>I marked it as nogil; this callback is called from another thread.
In this callback I just use:</p>
<pre><code>with gil:
    print("hello")  # instead of adding to the Queue; print("hello") is a simpler operation to demonstrate the problem
</code></pre>
<p>And I got a deadlock here.
How can I resolve it?</p>
<p>C++ callback declaration (header):</p>
<pre><code>typedef void (*receiver_t)(char*);
void SetReceiver(receiver_t new_rec);
</code></pre>
<p>cpp:</p>
<pre><code>static receiver_t receiver = nullptr;
void SetReceiver(receiver_t new_rec)
{
printf("setted%i\n", (int)new_rec);
a = 123;
if (new_rec != nullptr)
receiver = new_rec;
}
</code></pre>
<p>Cython code:</p>
<pre><code>cdef extern from "TeamSpeak3.h":
ctypedef void (*receiver_t) (char*) nogil
cdef void __cdecl SetReceiver(receiver_t new_rec) nogil
cdef void __cdecl NewMessage(char* message) nogil:
with gil:
print("hello")
SetReceiver(NewMessage)
</code></pre>
<p>Full code:
.h <a href="http://pastebin.com/ZTCjc6NA" rel="nofollow">http://pastebin.com/ZTCjc6NA</a></p>
<p>.cpp <a href="http://pastebin.com/MeygA8im" rel="nofollow">http://pastebin.com/MeygA8im</a></p>
<p>.pyx <a href="http://pastebin.com/k4X9c54P" rel="nofollow">http://pastebin.com/k4X9c54P</a></p>
<p>.py <a href="http://pastebin.com/1YV7tMiF" rel="nofollow">http://pastebin.com/1YV7tMiF</a></p>
| 1 | 2016-08-29T09:32:47Z | 39,204,446 | <p>This is a bit of a guess but you probably have a Cython/C/C++ loop running that's holding the GIL and never releasing it. The callback is then forced for wait forever for it.</p>
<p>In normal Python code the GIL is released every few instructions if another thread is waiting for it. In Cython that doesn't happen automatically. One way to ensure that it does happen every so often is to to your loop:</p>
<pre><code>while True:
# ... do stuff
with nogil:
pass
</code></pre>
<p>This ensures the GIL is released once per loop.</p>
<p>Unfortunately it's not obvious to me where you have your main loop. I wonder if it's inside <code>connect</code> in your <code>PyTeamSpeak3</code> class, and perhaps changing the definition of connect to:</p>
<pre><code>def connect(self):
with nogil:
self.thisptr.Connect()
</code></pre>
<p>might help?</p>
| 1 | 2016-08-29T10:51:01Z | [
"python",
"c++",
"multithreading",
"cython",
"gil"
] |