title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Tornado templates setting value field | 39,232,936 | <p>I have a python array:</p>
<pre><code>a=["a","b","c","d","e","f"]
</code></pre>
<p>I pass this as a value to tornado render:</p>
<pre><code> self.render("index.html", a=a)
</code></pre>
<p>In my html I have the following:</p>
<pre><code> <input type="hidden" id="xxx" value="
{% for c in a %}
{{c}}
{% end %}
"/>
</code></pre>
<p>Which gives me the following messy output:</p>
<pre><code><input type="hidden" id="xxx" value="
a
b
c
d
e
f
"/>
</code></pre>
<p>I tried to use <code>{{c.strip()}}</code> but that did the same thing. I also tried mixing in JavaScript, but the JavaScript didn't execute after rendering to populate the value field. I want the HTML to be normal, that is, <code>value="a b c d e f"</code>.</p>
<p>Does anyone know how I can achieve this?</p>
| 0 | 2016-08-30T16:43:28Z | 39,233,815 | <p>Try Tornado's <a href="http://www.tornadoweb.org/en/stable/template.html#tornado.template.filter_whitespace" rel="nofollow">filter_whitespace</a> feature, new in version 4.3.</p>
<pre><code> {% whitespace oneline %}
<input type="hidden" id="xxx" value="{% for c in a %}
{{c}}
{% end %}"/>
{% whitespace all %}
</code></pre>
<p>There will still be a few extra spaces but it's much closer to what you want.</p>
| 2 | 2016-08-30T17:38:33Z | [
"python",
"html",
"tornado"
] |
Tornado templates setting value field | 39,232,936 | <p>I have a python array:</p>
<pre><code>a=["a","b","c","d","e","f"]
</code></pre>
<p>I pass this as a value to tornado render:</p>
<pre><code> self.render("index.html", a=a)
</code></pre>
<p>In my html I have the following:</p>
<pre><code> <input type="hidden" id="xxx" value="
{% for c in a %}
{{c}}
{% end %}
"/>
</code></pre>
<p>Which gives me the following messy output:</p>
<pre><code><input type="hidden" id="xxx" value="
a
b
c
d
e
f
"/>
</code></pre>
<p>I tried to use <code>{{c.strip()}}</code> but that did the same thing. I also tried mixing in JavaScript, but the JavaScript didn't execute after rendering to populate the value field. I want the HTML to be normal, that is, <code>value="a b c d e f"</code>.</p>
<p>Does anyone know how I can achieve this?</p>
| 0 | 2016-08-30T16:43:28Z | 39,238,177 | <p>The same as @Avinash answered, only in-template:</p>
<pre><code><input type="hidden" id="xxx" value="{{ ' '.join(a) }}" />
</code></pre>
<p>Tornado templates, unlike some other templating engines such as Django's or Jinja, allow arbitrary Python expressions in the templates.</p>
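<p>The joining itself is plain Python and independent of Tornado; a quick sketch using the list from the question:</p>

```python
# build the attribute value in plain Python; ' '.join accepts only
# strings, so coerce items first in case the list holds non-strings
a = ["a", "b", "c", "d", "e", "f"]
value = " ".join(str(item) for item in a)
print(value)  # a b c d e f
```

<p>Tornado's <code>{{ }}</code> blocks HTML-escape their output by default, so quotes inside the items will not break the attribute.</p>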
| 0 | 2016-08-30T22:41:49Z | [
"python",
"html",
"tornado"
] |
How can I perform a "match or create" operation in py2neo v3? | 39,233,017 | <p>I want to create a Node/Relationship only if a Node/Relationship with the same attribute<strong>s</strong> doesn't already exist in the graph. If they do, I would like to fetch the relevant item.</p>
<p>Right now I am doing something that I suppose is both unidiomatic and inefficient. Assuming each <code>Person</code> node has a unique pair (<code>name</code>, <code>age</code>), I do something like this:</p>
<pre><code>try:
node = graph.data('MATCH (n:Person) WHERE n.name = {name} AND'
'n.age = {age} RETURN n LIMIT 1',
name=my_name, age=my_age)[0]['n']
except IndexError:
node = Node('Person', name=my_name, age=my_age)
</code></pre>
<p>To my understanding <a href="http://py2neo.org/v3/database.html#py2neo.database.Graph.find_one" rel="nofollow"><code>find_one()</code></a> works only if you have just one property to search for, and <a href="http://py2neo.org/v3/database.html#py2neo.database.Graph.match_one" rel="nofollow"><code>match_one()</code></a> allows no properties for relationships.</p>
| 0 | 2016-08-30T16:48:34Z | 39,235,540 | <p>You can use the Cypher <a href="http://neo4j.com/docs/developer-manual/current/cypher/#query-merge" rel="nofollow">MERGE</a> clause to perform a "match or create":</p>
<pre><code>node = graph.data('MERGE (n:Person {name: {name}, age: {age}}) '
                  'RETURN n',
                  name=my_name, age=my_age)[0]['n']
</code></pre>
<p>py2neo does have <a href="http://py2neo.org/2.0/essentials.html#py2neo.Graph.merge" rel="nofollow">merge</a> and <a href="http://py2neo.org/2.0/essentials.html#py2neo.Graph.merge_one" rel="nofollow">merge_one</a> functions, but they only take a single property, so using Cypher would be the more general approach.</p>
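<p>For intuition, the "match or create" semantics of <code>MERGE</code> can be sketched in plain Python with a dict keyed on the unique (<code>name</code>, <code>age</code>) pair. This is an in-memory stand-in for illustration only, not py2neo code:</p>

```python
# in-memory stand-in for MERGE: return the matching node, else create one
nodes = {}

def merge_person(name, age):
    key = (name, age)
    if key not in nodes:                       # no match: create the node
        nodes[key] = {"name": name, "age": age}
    return nodes[key]                          # matched or freshly created

first = merge_person("Alice", 30)
second = merge_person("Alice", 30)  # matches, does not create a duplicate
```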
| 1 | 2016-08-30T19:22:13Z | [
"python",
"database",
"neo4j",
"py2neo"
] |
Launch command through popen on OpenShift | 39,233,057 | <p>When I launch this command through <strong>popen</strong></p>
<pre><code>try:
os.popen("myCustomCmdHere")
except IOError:
logging.error('Error on myCustomCmdHere')
</code></pre>
<p><strong>I don't get an error, but the command does not launch.</strong></p>
<p>And when I launch myCustomCmdHere directly in the OpenShift console the command works fine:</p>
<pre><code>rhc ssh -a myApp
cd app-root/repo/myApp
myCustomCmdHere
</code></pre>
<p><strong>What do I need to add to my code to use os.popen on OpenShift?</strong></p>
<p>Thanks</p>
| 0 | 2016-08-30T16:51:12Z | 39,236,912 | <p>Try using the '&' to start a separate process and send it to the background. I use this for python 3.3 on OpenShift. Note that '&' is shell syntax, so <code>Popen</code> needs <code>shell=True</code> for it to take effect. </p>
<pre><code>import subprocess
subprocess.Popen('myCustomcmdHere &', shell=True)
</code></pre>
<p>Or this</p>
<pre><code>import os
os.system("myCustomcmdHere &")
</code></pre>
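<p>To see why the quoting matters, the sketch below uses <code>sys.executable</code> as a stand-in for the custom command. With the list form the arguments bypass the shell entirely, so an '&' would be passed through literally instead of backgrounding the process:</p>

```python
import subprocess
import sys

# list form: no shell involved, each element is passed verbatim
out = subprocess.check_output([sys.executable, "-c", "print('hello')"])

# shell form: the string is parsed by the shell, so '&' would background it
# subprocess.Popen("myCustomcmdHere &", shell=True)  # hypothetical command
```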
| 0 | 2016-08-30T20:53:06Z | [
"python",
"python-2.7",
"openshift",
"popen"
] |
How to import a function from a module in the same folder? | 39,233,077 | <p>I am trying to separate my script into several files with functions, so I moved some functions into separate files and want to import them into one main file. The structure is:</p>
<pre><code>core/
main.py
posts_run.py
</code></pre>
<p><code>posts_run.py</code> has two functions, <code>get_all_posts</code> and <code>retrieve_posts</code>, so I try to import <code>get_all_posts</code> with:</p>
<pre><code>from posts_run import get_all_posts
</code></pre>
<p>Python 3.5 gives the error:</p>
<pre><code>ImportError: cannot import name 'get_all_posts'
</code></pre>
<p>main.py contains the following lines of code:</p>
<pre><code>import vk
from configs import client_id, login, password
session = vk.AuthSession(scope='wall,friends,photos,status,groups,offline,messages', app_id=client_id, user_login=login,
user_password=password)
api = vk.API(session)
</code></pre>
<p>Then I need to import <code>api</code> into the functions, so that I can make API calls to vk.</p>
<p>Full stack trace</p>
<p><code>Traceback (most recent call last):
File "E:/gited/vkscrap/core/main.py", line 26, in <module>
from posts_run import get_all_posts
File "E:\gited\vkscrap\core\posts_run.py", line 7, in <module>
from main import api, absolute_url, fullname
File "E:\gited\vkscrap\core\main.py", line 26, in <module>
from posts_run import get_all_posts
ImportError: cannot import name 'get_all_posts'</code></p>
<p><code>api</code> is the <code>api = vk.API(session)</code> defined in main.py.
absolute_url and fullname are also stored in main.py.
I am using PyCharm 2016.1 on Windows 7, Python 3.5 x64 in virtualenv.
How can I import this function?</p>
| 1 | 2016-08-30T16:52:51Z | 39,233,152 | <p>You need to add <code>__init__.py</code> in your core folder. You are getting this error because Python does not recognise your folder as a <a href="https://docs.python.org/3/tutorial/modules.html#packages" rel="nofollow">Python package</a>.</p>
<p>After that do</p>
<pre><code>from .posts_run import get_all_posts
# ^ here do relative import
# or
from core.posts_run import get_all_posts
# because your package named 'core' and importing looks in root folder
</code></pre>
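<p>A self-contained demonstration of the package layout, built in a temporary directory so it can run anywhere (on Python 3.3+ namespace packages can make <code>__init__.py</code> optional, but adding it remains the explicit, conventional choice):</p>

```python
import os
import sys
import tempfile

# recreate core/ with an __init__.py and a module inside it
root = tempfile.mkdtemp()
pkg = os.path.join(root, "core")
os.mkdir(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()  # marks 'core' as a package
with open(os.path.join(pkg, "posts_run.py"), "w") as f:
    f.write("def get_all_posts():\n    return 'posts'\n")

sys.path.insert(0, root)  # the package's *parent* folder must be importable
from core.posts_run import get_all_posts
```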
| 1 | 2016-08-30T16:58:01Z | [
"python",
"python-3.x"
] |
How to import a function from a module in the same folder? | 39,233,077 | <p>I am trying to separate my script into several files with functions, so I moved some functions into separate files and want to import them into one main file. The structure is:</p>
<pre><code>core/
main.py
posts_run.py
</code></pre>
<p><code>posts_run.py</code> has two functions, <code>get_all_posts</code> and <code>retrieve_posts</code>, so I try to import <code>get_all_posts</code> with:</p>
<pre><code>from posts_run import get_all_posts
</code></pre>
<p>Python 3.5 gives the error:</p>
<pre><code>ImportError: cannot import name 'get_all_posts'
</code></pre>
<p>main.py contains the following lines of code:</p>
<pre><code>import vk
from configs import client_id, login, password
session = vk.AuthSession(scope='wall,friends,photos,status,groups,offline,messages', app_id=client_id, user_login=login,
user_password=password)
api = vk.API(session)
</code></pre>
<p>Then I need to import <code>api</code> into the functions, so that I can make API calls to vk.</p>
<p>Full stack trace</p>
<p><code>Traceback (most recent call last):
File "E:/gited/vkscrap/core/main.py", line 26, in <module>
from posts_run import get_all_posts
File "E:\gited\vkscrap\core\posts_run.py", line 7, in <module>
from main import api, absolute_url, fullname
File "E:\gited\vkscrap\core\main.py", line 26, in <module>
from posts_run import get_all_posts
ImportError: cannot import name 'get_all_posts'</code></p>
<p><code>api</code> is the <code>api = vk.API(session)</code> defined in main.py.
absolute_url and fullname are also stored in main.py.
I am using PyCharm 2016.1 on Windows 7, Python 3.5 x64 in virtualenv.
How can I import this function?</p>
| 1 | 2016-08-30T16:52:51Z | 39,233,202 | <p>MyFile.py:</p>
<pre><code>def myfunc():
return 12
</code></pre>
<p>start python interpreter:</p>
<pre><code>>>> from MyFile import myFunc
>>> myFunc()
12
</code></pre>
<p>Alternatively: </p>
<pre><code>>>> import MyFile
>>> MyFile.myfunc()
12
</code></pre>
<p>Does this not work on your machine?</p>
| 1 | 2016-08-30T17:00:38Z | [
"python",
"python-3.x"
] |
How to import a function from a module in the same folder? | 39,233,077 | <p>I am trying to separate my script into several files with functions, so I moved some functions into separate files and want to import them into one main file. The structure is:</p>
<pre><code>core/
main.py
posts_run.py
</code></pre>
<p><code>posts_run.py</code> has two functions, <code>get_all_posts</code> and <code>retrieve_posts</code>, so I try to import <code>get_all_posts</code> with:</p>
<pre><code>from posts_run import get_all_posts
</code></pre>
<p>Python 3.5 gives the error:</p>
<pre><code>ImportError: cannot import name 'get_all_posts'
</code></pre>
<p>main.py contains the following lines of code:</p>
<pre><code>import vk
from configs import client_id, login, password
session = vk.AuthSession(scope='wall,friends,photos,status,groups,offline,messages', app_id=client_id, user_login=login,
user_password=password)
api = vk.API(session)
</code></pre>
<p>Then I need to import <code>api</code> into the functions, so that I can make API calls to vk.</p>
<p>Full stack trace</p>
<p><code>Traceback (most recent call last):
File "E:/gited/vkscrap/core/main.py", line 26, in <module>
from posts_run import get_all_posts
File "E:\gited\vkscrap\core\posts_run.py", line 7, in <module>
from main import api, absolute_url, fullname
File "E:\gited\vkscrap\core\main.py", line 26, in <module>
from posts_run import get_all_posts
ImportError: cannot import name 'get_all_posts'</code></p>
<p><code>api</code> is the <code>api = vk.API(session)</code> defined in main.py.
absolute_url and fullname are also stored in main.py.
I am using PyCharm 2016.1 on Windows 7, Python 3.5 x64 in virtualenv.
How can I import this function?</p>
| 1 | 2016-08-30T16:52:51Z | 39,233,265 | <p>Python doesn't find the module to import because it is executed from another directory.</p>
<p>Open a terminal and cd into the script's folder, then execute python from there.</p>
<p>Run this code in your script to print from where python is being executed from:</p>
<pre><code>import os
print(os.getcwd())
</code></pre>
<p>EDIT:
This is a demonstration of what I mean</p>
<p>Put the code above in a <code>test.py</code> file located at <code>C:\folder\test.py</code></p>
<p>open a terminal and type</p>
<pre><code>python3 C:\folder\test.py
</code></pre>
<p>This will output whatever directory the terminal was in when you ran the command, not <code>C:\folder</code>, because that is the working directory of the python process.</p>
<p>now type</p>
<pre><code>cd C:\folder
python3 test.py
</code></pre>
<p>This will output <code>C:\folder\</code>. So if you have other modules in <code>folder</code> importing them should not be a problem</p>
<p>I usually write a bash/batch script to cd into the directory and start my programs. This keeps the impact on host machines to zero.</p>
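<p>The working-directory effect can be shown directly; <code>os.chdir</code> is the programmatic equivalent of the <code>cd</code> step (a sketch using a throwaway folder):</p>

```python
import os
import tempfile

start = os.getcwd()
target = tempfile.mkdtemp()

os.chdir(target)        # emulate cd-ing into the script's folder first
inside = os.getcwd()    # relative paths and imports now resolve from here
os.chdir(start)         # restore, so the rest of the program is unaffected
```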
| 0 | 2016-08-30T17:04:44Z | [
"python",
"python-3.x"
] |
How to import a function from a module in the same folder? | 39,233,077 | <p>I am trying to separate my script into several files with functions, so I moved some functions into separate files and want to import them into one main file. The structure is:</p>
<pre><code>core/
main.py
posts_run.py
</code></pre>
<p><code>posts_run.py</code> has two functions, <code>get_all_posts</code> and <code>retrieve_posts</code>, so I try to import <code>get_all_posts</code> with:</p>
<pre><code>from posts_run import get_all_posts
</code></pre>
<p>Python 3.5 gives the error:</p>
<pre><code>ImportError: cannot import name 'get_all_posts'
</code></pre>
<p>main.py contains the following lines of code:</p>
<pre><code>import vk
from configs import client_id, login, password
session = vk.AuthSession(scope='wall,friends,photos,status,groups,offline,messages', app_id=client_id, user_login=login,
user_password=password)
api = vk.API(session)
</code></pre>
<p>Then I need to import <code>api</code> into the functions, so that I can make API calls to vk.</p>
<p>Full stack trace</p>
<p><code>Traceback (most recent call last):
File "E:/gited/vkscrap/core/main.py", line 26, in <module>
from posts_run import get_all_posts
File "E:\gited\vkscrap\core\posts_run.py", line 7, in <module>
from main import api, absolute_url, fullname
File "E:\gited\vkscrap\core\main.py", line 26, in <module>
from posts_run import get_all_posts
ImportError: cannot import name 'get_all_posts'</code></p>
<p><code>api</code> is the <code>api = vk.API(session)</code> defined in main.py.
absolute_url and fullname are also stored in main.py.
I am using PyCharm 2016.1 on Windows 7, Python 3.5 x64 in virtualenv.
How can I import this function?</p>
| 1 | 2016-08-30T16:52:51Z | 39,233,501 | <p>A cheat solution can be found in this question (<a href="http://stackoverflow.com/questions/10095037/why-use-sys-path-appendpath-instead-of-sys-path-insert1-path">Why use sys.path.append(path) instead of sys.path.insert(1, path)?</a>). Essentially you do the following:</p>
<pre><code>import sys
sys.path.insert(1, directory_path_your_code_is_in)
import file_name_without_dot_py_at_end
</code></pre>
<p>This will get around the fact that, as you are running it in PyCharm 2016.1, it might be in a different current directory from what you are expecting...</p>
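<p>The approach can be exercised end to end by writing a throwaway module into a temporary folder (the file and function names are made up for the sketch):</p>

```python
import os
import sys
import tempfile

directory_path_your_code_is_in = tempfile.mkdtemp()
module_file = os.path.join(directory_path_your_code_is_in, "file_name.py")
with open(module_file, "w") as f:
    f.write("def helper():\n    return 42\n")

sys.path.insert(1, directory_path_your_code_is_in)  # position 1, as above
import file_name  # note: no .py at the end

result = file_name.helper()
```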
| 1 | 2016-08-30T17:19:19Z | [
"python",
"python-3.x"
] |
Google Reports API - reports.customerUsageReports.get issue | 39,233,118 | <p>I am having some issues with the Google Reports API. I have no issues running the sample code provided in the <a href="https://developers.google.com/admin-sdk/reports/v1/quickstart/python" rel="nofollow">documentation</a> to get the reports.activities.list data, but when I change to program to to try and pull full domain data (reports.customerUsageReports.get) I get an error stating that "'Resource' object has no attribute 'usage'". </p>
<p>I have no issue with auth and have changed the api scopes in the program to <a href="https://www.googleapis.com/auth/admin.reports.usage.readonly" rel="nofollow">https://www.googleapis.com/auth/admin.reports.usage.readonly</a> as required.</p>
<p>I am running the following snippet to try and access the data</p>
<pre><code>credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('admin', 'reports_v1', http=http)
print('Getting data')
results = service.usage().list(date='2016-01-01').execute()
usage = results.get('items', [])
if not usage:
print('No data found.')
else:
print('Data:')
for use in usageReports:
print(use)
</code></pre>
| 0 | 2016-08-30T16:56:07Z | 39,246,910 | <p>Make sure you follow these steps:</p>
<ol>
<li><p>According to the <a href="https://developers.google.com/admin-sdk/reports/v1/quickstart/python" rel="nofollow">Python quickstart</a>, if you modify the scopes, you need to delete the previously saved credentials at ~/.credentials/admin-reports_v1-python-quickstart.json</p></li>
<li><p>For the Scopes, in requesting access using OAuth 2.0, your application needs the scope information, as well as information that Google supplies when you register your application (such as the client ID and the client secret).</p></li>
</ol>
<p>For more information check this <a href="https://developers.google.com/admin-sdk/reports/v1/guides/authorizing" rel="nofollow">link</a> and the documentation about <a href="https://developers.google.com/admin-sdk/reports/v1/reference/customerUsageReports/get" rel="nofollow">CustomerUsageReports: get</a> to learn how to use these parameters correctly.</p>
| 0 | 2016-08-31T10:19:06Z | [
"python",
"google-api",
"google-apps",
"google-reporting-api"
] |
using struct pack and unpack on floating point number 0.01 output 0.009999999776482582. | 39,233,151 | <p>Using 'struct pack and unpack' on the floating point number 0.01 outputs 0.009999999776482582.
For my project I have to configure values which are floats,
and these need to be stored in a binary file that I will analyze later; I need the exact values that were configured. </p>
<p>Can someone tell me if there is any way to store the exact value 0.01 and retrieve the same.</p>
<p>Based on some of the posts I have already tried using double instead of float and it works, but the problem with using double is that my file size increases. I have a lot of parameters which are floats and can take values anywhere between 0 and 5.
Hence any other suggestion would be appreciated. </p>
<p>Here is the sample code</p>
<pre><code>import struct
a = 0.01
b = struct.pack('<f', a)
c = struct.unpack('<f', b)
print c
(0.009999999776482582,)
</code></pre>
| 1 | 2016-08-30T16:57:53Z | 39,233,244 | <p>There is no floating point number <code>0.01</code>. IEEE floating point numbers do not represent the entire real number line; they represent an approximation using a particular binary scheme. The closest approximation has the decimal representation 0.009999999776482582. </p>
<p>You could try serializing a textual representation (e.g. json) or use a <code>decimal.Decimal</code>, which can represent arbitrary precision.</p>
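<p>A short round trip makes the trade-off concrete: the 4-byte float cannot hold 0.01, while a textual or decimal representation preserves the digits as written (standard library only):</p>

```python
import json
import struct
from decimal import Decimal

# the 32-bit round trip lands on the nearest representable float, not 0.01
packed = struct.unpack('<f', struct.pack('<f', 0.01))[0]

# serializing the value as text keeps the decimal digits exactly
restored = Decimal(json.loads(json.dumps("0.01")))
```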
| 2 | 2016-08-30T17:03:26Z | [
"python"
] |
using struct pack and unpack on floating point number 0.01 output 0.009999999776482582. | 39,233,151 | <p>Using 'struct pack and unpack' on the floating point number 0.01 outputs 0.009999999776482582.
For my project I have to configure values which are floats,
and these need to be stored in a binary file that I will analyze later; I need the exact values that were configured. </p>
<p>Can someone tell me if there is any way to store the exact value 0.01 and retrieve the same.</p>
<p>Based on some of the posts I have already tried using double instead of float and it works, but the problem with using double is that my file size increases. I have a lot of parameters which are floats and can take values anywhere between 0 and 5.
Hence any other suggestion would be appreciated. </p>
<p>Here is the sample code</p>
<pre><code>import struct
a = 0.01
b = struct.pack('<f', a)
c = struct.unpack('<f', b)
print c
(0.009999999776482582,)
</code></pre>
| 1 | 2016-08-30T16:57:53Z | 39,233,341 | <p>You say that numbers go from 0 to 5. If you only need two decimal places (0.00, 0.01, ..., 5.00) then you can store the numbers as 16 bit integers, first by multiplying the number by 100, and then when reading dividing the number by 100:</p>
<pre><code>>>> import struct
>>> digits = 2
>>> a = 0.01
>>> b = struct.pack('<H', int(round(a * (10 ** digits))))
>>> c = struct.unpack('<H', b)
>>> print (c[0] / (10.0 ** digits))
0.01
</code></pre>
<p>Storing numbers in the H format you can actually save 50% of the space compared to floats. If you need more precision (more than two decimal places), you need to multiply and then divide by 1000, 10000, etc., changing the format from H to I.</p>
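<p>The same scaling idea extends to a whole list of parameters at once; packing with <code>H</code> takes half the bytes of <code>f</code> (a sketch assuming two decimal places and the stated 0 to 5 range):</p>

```python
import struct

values = [0.01, 3.14, 4.99]            # parameters in the 0-5 range
scale = 100                            # two decimal places
fmt = '<%dH' % len(values)

packed = struct.pack(fmt, *[int(round(v * scale)) for v in values])
restored = [x / float(scale) for x in struct.unpack(fmt, packed)]

# 2 bytes per value instead of 4
sizes = (len(packed), struct.calcsize('<%df' % len(values)))
```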
| 2 | 2016-08-30T17:09:13Z | [
"python"
] |
Python list only folders that contain specific subfolders | 39,233,156 | <p>I have a script that will list all folders and subfolders and create a JSON file. What I am trying to have is only folders that contain subfolders named "Maps" or "Reports" listed. If they contain those then only the parent folder will be listed, so "Maps", "Reports" would not be shown. Currently stuck on how to accomplish this, any help would be great.</p>
<pre><code>import os, json, sys
reload(sys)
sys.setdefaultencoding('utf-8')
path = "G:\Testfolders"
def get_list_of_dirs(path):
try:
output_dictonary = {}
list_of_dirs = [os.path.join(path, item) for item in os.listdir(path) if os.path.isdir(os.path.join(path, item )) and os.path.isdir("./Maps") and os.path.isdir("./Reports")]
output_dictonary["text"] = path.decode('latin1')
output_dictonary["type"] = "directory"
output_dictonary["children"] = []
for dir in list_of_dirs:
output_dictonary["children"].append(get_list_of_dirs(dir))
return output_dictonary
except WindowsError:
pass
print json.dumps(get_list_of_dirs(path))
with open(r'U:\JSONDatatest.json', 'w') as f:
json.dump(get_list_of_dirs(path), f)
</code></pre>
<p><strong>Edited:</strong></p>
<pre><code>import os, json, sys
def all_dirs_with_subdirs(path,subdirs):
# make sure no relative paths are returned, can be omitted
try:
output_dictonary = {}
path = os.path.abspath(path)
list_of_dirs = []
for root, dirs, files in os.walk(path):
if all(subdir in dirs for subdir in subdirs):
list_of_dirs.append(root)
return list_of_dirs
output_dictonary["text"] = path.decode('latin1')
output_dictonary["type"] = "directory"
output_dictonary["children"] = []
for dir in list_of_dirs:
output_dictonary["children"].append(all_dirs_with_subdirs(dir))
return output_dictonary
except WindowsError:
pass
with open(r'U:\jsontesting\JSONData.json', 'w') as f:
json.dump(all_dirs_with_subdirs("G:\TESTPATH", ('Maps', 'Temp')), f)
</code></pre>
| 0 | 2016-08-30T16:58:11Z | 39,233,490 | <p>I'm thinking you're looking for the <code>os.walk</code> command; I've implemented a function using it below (also made it slightly more generic).</p>
<pre><code>import json
import os
# path : string to relative or absolute path to be queried
# subdirs: tuple or list containing all names of subfolders that need to be
# present in the directory
def all_dirs_with_subdirs(path, subdirs):
# make sure no relative paths are returned, can be omitted
path = os.path.abspath(path)
result = []
for root, dirs, files in os.walk(path):
if all(subdir in dirs for subdir in subdirs):
result.append(root)
return result
def get_directory_listing(path):
output = {}
output["text"] = path.decode('latin1')
output["type"] = "directory"
output["children"] = all_dirs_with_subdirs(path, ('Maps', 'Reports'))
return output
with open('test.json', 'w+') as f:
listing = get_directory_listing(".")
json.dump(listing, f)
</code></pre>
<p><strong>UPDATE:</strong> also added the JSON serialization.</p>
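<p>The filtering behaviour can be sanity-checked against a throwaway tree (folder names follow the question; only the folder holding both subfolders should come back):</p>

```python
import os
import tempfile

root = tempfile.mkdtemp()
# project1 has both required subfolders, project2 only one of them
os.makedirs(os.path.join(root, "project1", "Maps"))
os.makedirs(os.path.join(root, "project1", "Reports"))
os.makedirs(os.path.join(root, "project2", "Maps"))

matches = []
for current, dirs, files in os.walk(root):
    if all(sub in dirs for sub in ("Maps", "Reports")):
        matches.append(current)
```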
| 0 | 2016-08-30T17:18:45Z | [
"python",
"python-2.7"
] |
Python list only folders that contain specific subfolders | 39,233,156 | <p>I have a script that will list all folders and subfolders and create a JSON file. What I am trying to have is only folders that contain subfolders named "Maps" or "Reports" listed. If they contain those then only the parent folder will be listed, so "Maps", "Reports" would not be shown. Currently stuck on how to accomplish this, any help would be great.</p>
<pre><code>import os, json, sys
reload(sys)
sys.setdefaultencoding('utf-8')
path = "G:\Testfolders"
def get_list_of_dirs(path):
try:
output_dictonary = {}
list_of_dirs = [os.path.join(path, item) for item in os.listdir(path) if os.path.isdir(os.path.join(path, item )) and os.path.isdir("./Maps") and os.path.isdir("./Reports")]
output_dictonary["text"] = path.decode('latin1')
output_dictonary["type"] = "directory"
output_dictonary["children"] = []
for dir in list_of_dirs:
output_dictonary["children"].append(get_list_of_dirs(dir))
return output_dictonary
except WindowsError:
pass
print json.dumps(get_list_of_dirs(path))
with open(r'U:\JSONDatatest.json', 'w') as f:
json.dump(get_list_of_dirs(path), f)
</code></pre>
<p><strong>Edited:</strong></p>
<pre><code>import os, json, sys
def all_dirs_with_subdirs(path,subdirs):
# make sure no relative paths are returned, can be omitted
try:
output_dictonary = {}
path = os.path.abspath(path)
list_of_dirs = []
for root, dirs, files in os.walk(path):
if all(subdir in dirs for subdir in subdirs):
list_of_dirs.append(root)
return list_of_dirs
output_dictonary["text"] = path.decode('latin1')
output_dictonary["type"] = "directory"
output_dictonary["children"] = []
for dir in list_of_dirs:
output_dictonary["children"].append(all_dirs_with_subdirs(dir))
return output_dictonary
except WindowsError:
pass
with open(r'U:\jsontesting\JSONData.json', 'w') as f:
json.dump(all_dirs_with_subdirs("G:\TESTPATH", ('Maps', 'Temp')), f)
</code></pre>
| 0 | 2016-08-30T16:58:11Z | 39,233,685 | <p>If you would like to use glob:</p>
<pre><code>def get_list_of_dirs():
    from glob import glob
    import os.path as path
    # note: '**' only matches across directory levels with recursive
    # globbing (glob(..., recursive=True)), available from Python 3.5;
    # on Python 2 this pattern matches a single level only
    # get the directories containing Maps
    maps = set([path.abspath(path.join(f, path.pardir))
                for f in glob('**/Maps/')])
    # get the directories containing Reports
    reports = set([path.abspath(path.join(f, path.pardir))
                   for f in glob('**/Reports/')])
    # return the common ones
    return maps.intersection(reports)
</code></pre>
<p>You can make it more generic by passing a list of glob expressions to the function and then returning the intersection of the sets or intersecting as you go.</p>
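<p>On Python 3.5+ the <code>**</code> pattern needs <code>recursive=True</code> to actually cross directory levels; a quick check against a temporary tree (this version caveat is mine, not part of the original answer):</p>

```python
import os
import tempfile
from glob import glob

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "Maps"))
os.makedirs(os.path.join(root, "a", "Reports"))
os.makedirs(os.path.join(root, "b", "Maps"))

def parents_of(subdir):
    # the trailing "" keeps a trailing separator so dirname() strips cleanly
    pattern = os.path.join(root, "**", subdir, "")
    return set(os.path.dirname(os.path.dirname(p))
               for p in glob(pattern, recursive=True))

both = parents_of("Maps") & parents_of("Reports")
```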
| 0 | 2016-08-30T17:30:43Z | [
"python",
"python-2.7"
] |
Python cannot seem to find libraries - conflict between user and system python versions | 39,233,198 | <p>So let's start from the very top. </p>
<p>This is the problem I am currently having:</p>
<pre><code>bob@me:~/cloud/simtk/opensim_core_install/lib/python2.7/site-packages$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import opensim
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "opensim/__init__.py", line 2, in <module>
from common import *
File "opensim/common.py", line 21, in <module>
_common = swig_import_helper()
File "opensim/common.py", line 20, in swig_import_helper
return importlib.import_module('_common')
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named _common
</code></pre>
<p>For those interested in the specifics, I am trying to use this <a href="https://github.com/opensim-org/opensim-core" rel="nofollow">package</a>, and specifically the bleeding edge version.</p>
<p>From what I can glean from the world wide web and other sources, I have two versions of Python installed on my machine:</p>
<pre><code>bob@me:/usr$ which python && python --version
/usr/bin/python
Python 2.7.6
</code></pre>
<p>and</p>
<pre><code>bob@me:/usr/local$ /usr/local/bin/python2.7 --version
Python 2.7.9
</code></pre>
<p>Now, when building the above package one has to make reference to the <code>libpython2.7.so</code> files, of which I have two (and they are both system-wide, which is to say that neither is located in <code>/usr/local/</code>):</p>
<pre><code>bob@me:/usr$ find . -name 'libpython2.7.so'
./lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so
./lib/x86_64-linux-gnu/libpython2.7.so
</code></pre>
<p>and I built the wrappings by running </p>
<pre><code>python setup.py install
</code></pre>
<p>inside </p>
<pre><code>~/cloud/simtk/opensim_core_install/lib/python2.7/site-packages
</code></pre>
<p>which is where it resides. Now when it runs it puts it all in the <code>/usr/local/</code> dir rather than the system wide one, and this is where I think problems appear.</p>
<pre><code>bob@me:~/cloud/simtk/opensim_core_install/lib/python2.7/site-packages$ sudo python setup.py install
[sudo] password for bob:
running install
running bdist_egg
running egg_info
writing opensim.egg-info/PKG-INFO
writing top-level names to opensim.egg-info/top_level.txt
writing dependency_links to opensim.egg-info/dependency_links.txt
reading manifest file 'opensim.egg-info/SOURCES.txt'
writing manifest file 'opensim.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/_analyses.so -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/tools.py -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/simbody.py -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/actuators.py -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/_simbody.so -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/__init__.py -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/analyses.py -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/_simulation.so -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/version.py -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/_tools.so -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/_actuators.so -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/common.py -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/simulation.py -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/_common.so -> build/bdist.linux-x86_64/egg/opensim
copying build/lib.linux-x86_64-2.7/opensim/__init__.pyc -> build/bdist.linux-x86_64/egg/opensim
byte-compiling build/bdist.linux-x86_64/egg/opensim/tools.py to tools.pyc
byte-compiling build/bdist.linux-x86_64/egg/opensim/simbody.py to simbody.pyc
byte-compiling build/bdist.linux-x86_64/egg/opensim/actuators.py to actuators.pyc
byte-compiling build/bdist.linux-x86_64/egg/opensim/analyses.py to analyses.pyc
byte-compiling build/bdist.linux-x86_64/egg/opensim/version.py to version.pyc
byte-compiling build/bdist.linux-x86_64/egg/opensim/common.py to common.pyc
byte-compiling build/bdist.linux-x86_64/egg/opensim/simulation.py to simulation.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying opensim.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying opensim.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying opensim.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying opensim.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
zip_safe flag not set; analyzing archive contents...
opensim.actuators: module references __file__
opensim.tools: module references __file__
opensim.analyses: module references __file__
opensim.simbody: module references __file__
opensim.common: module references __file__
opensim.simulation: module references __file__
creating 'dist/opensim-4.0-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing opensim-4.0-py2.7.egg
removing '/usr/local/lib/python2.7/dist-packages/opensim-4.0-py2.7.egg' (and everything under it)
creating /usr/local/lib/python2.7/dist-packages/opensim-4.0-py2.7.egg
Extracting opensim-4.0-py2.7.egg to /usr/local/lib/python2.7/dist-packages
opensim 4.0 is already the active version in easy-install.pth
Installed /usr/local/lib/python2.7/dist-packages/opensim-4.0-py2.7.egg
Processing dependencies for opensim==4.0
Finished processing dependencies for opensim==4.0
</code></pre>
<p>Now presumably because this is all created in <code>/usr/local/</code> that would be why the system version of python cannot find the associated libraries? Problem is that when I try to use <code>/usr/local/bin/python2.7</code> it still cannot find the libraries.</p>
<p>Also I should add:</p>
<pre><code>bob@me:~/cloud/simtk/opensim_core_install/lib/python2.7/site-packages$ ls *
setup.py
build:
bdist.linux-x86_64 lib.linux-x86_64-2.7
dist:
opensim-4.0-py2.7.egg
opensim:
actuators.py _analyses.so _common.so simbody.py simulation.py tools.py
_actuators.so common.py __init__.py simbody.pyc _simulation.so _tools.so
analyses.py common.pyc __init__.pyc _simbody.so tests version.py
opensim.egg-info:
dependency_links.txt PKG-INFO SOURCES.txt top_level.txt
</code></pre>
<p>and</p>
<pre><code>bob@me:~$ echo $LD_LIBRARY_PATH
:/local/bob/cloud/simtk/opensim_core_install/lib/python2.7/site-packages/opensim
</code></pre>
| 0 | 2016-08-30T17:00:23Z | 39,233,834 | <p>From what I gather from your question, you tried building by simply running <code>python setup.py install</code>.</p>
<p>You have to follow the instructions in the Readme and build with the method corresponding to your operating system instead.</p>
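<p>Before rebuilding, it may also help to confirm which interpreter is actually running and where it searches for modules — a stdlib-only diagnostic sketch (Python 3 syntax shown; nothing here is specific to OpenSim):</p>

```python
import sys
import importlib.util

def describe_module(name):
    """Return where `name` would be imported from, or None if it is not importable."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

def search_path_summary():
    """The interpreter in use plus the directories Python searches for imports."""
    return {"executable": sys.executable, "path": list(sys.path)}
```

<p>Comparing <code>sys.executable</code> against the interpreter the egg was installed for often reveals the mismatch; on Python 2.7, <code>imp.find_module</code> plays the same role as <code>find_spec</code>.</p>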
| 0 | 2016-08-30T17:39:19Z | [
"python"
] |
Split and merge pandas dataframe | 39,233,205 | <p>I am trying to split and merge a Pandas dataframe.</p>
<p>The columns of the original data frame are arranged like so:</p>
<pre><code>dataTime Record1Field1 ... Record1FieldN Record2Field1 ... Record1FieldN
time1 << record 1 data >> << record 2 data >>
</code></pre>
<p>I would like to split the <code>Record2</code> fields into a separate data frame <code>tempdf</code>, indexed by the dataTime. <code>tempdf</code> will therefore look something like this:</p>
<pre><code>dataTime Record2Field1 ... Record2FieldN
time1 << record 2 data >>
</code></pre>
<p>Once <code>tempdf</code> is populated, delete the Record2 columns from the original data frame. The first difficulty I'm having is in creating this <code>tempdf</code> which contains the record 2 data.</p>
<p>Then, I would like to rename the columns in <code>tempdf</code> so that they align with the <code>Record1</code> columns in the original data frame. (This portion I know how to do)</p>
<p>Finally I would like to merge <code>tempdf</code> back into the original data frame.</p>
<p>The end result should look something like this:</p>
<pre><code>dataTime Record1Field1 ... Record1FieldN
time1 <<record 1 data>>
time1 <<record 2 data>>
</code></pre>
<p>So far I haven't determined a good method of doing this. Any help is appreciated! Thanks.</p>
| 0 | 2016-08-30T17:00:45Z | 39,233,330 | <p>try using <code>concat</code> </p>
<p>So trying something like:</p>
<pre><code>Combined = [DataFrame1,DataFrame2]
Together = pandas.concat(Combined)
</code></pre>
<p>as one of the others commented - <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">merge</a> may be a good option as well.</p>
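<p>A runnable sketch of the full round trip the question asks for — split off the Record2 columns, rename them to match, and <code>concat</code> back — with invented field counts and values:</p>

```python
import pandas as pd

def split_and_stack(df, n_fields):
    """Move the Record2 columns underneath the Record1 columns via concat."""
    rec1_cols = ['Record1Field{}'.format(i) for i in range(1, n_fields + 1)]
    rec2_cols = ['Record2Field{}'.format(i) for i in range(1, n_fields + 1)]
    tempdf = df[['dataTime'] + rec2_cols].copy()          # split out Record2
    tempdf.columns = ['dataTime'] + rec1_cols             # align column names
    rec1 = df[['dataTime'] + rec1_cols]
    return pd.concat([rec1, tempdf], ignore_index=True)   # merge back, stacked

df = pd.DataFrame({'dataTime': ['t1'],
                   'Record1Field1': [1], 'Record1Field2': [2],
                   'Record2Field1': [3], 'Record2Field2': [4]})
stacked = split_and_stack(df, n_fields=2)
```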
| 0 | 2016-08-30T17:08:41Z | [
"python",
"pandas",
"dataframe"
] |
Split and merge pandas dataframe | 39,233,205 | <p>I am trying to split and merge a Pandas dataframe.</p>
<p>The columns of the original data frame are arranged like so:</p>
<pre><code>dataTime Record1Field1 ... Record1FieldN Record2Field1 ... Record1FieldN
time1 << record 1 data >> << record 2 data >>
</code></pre>
<p>I would like to split the <code>Record2</code> fields into a separate data frame <code>tempdf</code>, indexed by the dataTime. <code>tempdf</code> will therefore look something like this:</p>
<pre><code>dataTime Record2Field1 ... Record2FieldN
time1 << record 2 data >>
</code></pre>
<p>Once <code>tempdf</code> is populated, delete the Record2 columns from the original data frame. The first difficulty I'm having is in creating this <code>tempdf</code> which contains the record 2 data.</p>
<p>Then, I would like to rename the columns in <code>tempdf</code> so that they align with the <code>Record1</code> columns in the original data frame. (This portion I know how to do)</p>
<p>Finally I would like to merge <code>tempdf</code> back into the original data frame.</p>
<p>The end result should look something like this:</p>
<pre><code>dataTime Record1Field1 ... Record1FieldN
time1 <<record 1 data>>
time1 <<record 2 data>>
</code></pre>
<p>So far I haven't determined a good method of doing this. Any help is appreciated! Thanks.</p>
| 0 | 2016-08-30T17:00:45Z | 39,233,452 | <p>if you know the columns to be selected, then use </p>
<pre><code> tempdf = df[['a','b']]
</code></pre>
<p>else to select last 2 columns use </p>
<pre><code> tempdf = df[df.columns[-2:]]
</code></pre>
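<p>For the question's naming scheme, selecting by column-name prefix is a variant that does not depend on column order (the values below are invented):</p>

```python
import pandas as pd

df = pd.DataFrame({'dataTime': ['t1'],
                   'Record1Field1': [1], 'Record2Field1': [9]})

# select every column whose name starts with 'Record2'
rec2_cols = [c for c in df.columns if c.startswith('Record2')]
tempdf = df[['dataTime'] + rec2_cols]
```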
| 0 | 2016-08-30T17:16:15Z | [
"python",
"pandas",
"dataframe"
] |
Split and merge pandas dataframe | 39,233,205 | <p>I am trying to split and merge a Pandas dataframe.</p>
<p>The columns of the original data frame are arranged like so:</p>
<pre><code>dataTime Record1Field1 ... Record1FieldN Record2Field1 ... Record1FieldN
time1 << record 1 data >> << record 2 data >>
</code></pre>
<p>I would like to split the <code>Record2</code> fields into a separate data frame <code>tempdf</code>, indexed by the dataTime. <code>tempdf</code> will therefore look something like this:</p>
<pre><code>dataTime Record2Field1 ... Record2FieldN
time1 << record 2 data >>
</code></pre>
<p>Once <code>tempdf</code> is populated, delete the Record2 columns from the original data frame. The first difficulty I'm having is in creating this <code>tempdf</code> which contains the record 2 data.</p>
<p>Then, I would like to rename the columns in <code>tempdf</code> so that they align with the <code>Record1</code> columns in the original data frame. (This portion I know how to do)</p>
<p>Finally I would like to merge <code>tempdf</code> back into the original data frame.</p>
<p>The end result should look something like this:</p>
<pre><code>dataTime Record1Field1 ... Record1FieldN
time1 <<record 1 data>>
time1 <<record 2 data>>
</code></pre>
<p>So far I haven't determined a good method of doing this. Any help is appreciated! Thanks.</p>
| 0 | 2016-08-30T17:00:45Z | 39,233,618 | <p>To answer your immediate question, you could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow"><code>df.filter</code></a> with a regex pattern to select the columns of the form <code>Record2FieldN</code>:</p>
<pre><code>In [29]: tempdf = df.filter(regex=r'Record2.*'); tempdf
Out[29]:
Record2Field0 Record2Field1 Record2Field2
0 3 8 4
1 2 6 3
2 1 2 2
3 5 9 4
</code></pre>
<p>and you could rename the columns using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html" rel="nofollow"><code>tempdf.rename</code></a>:</p>
<pre><code>tempdf = tempdf.rename(columns={'Record2Field{}'.format(i):'Record1Field{}'.format(i) for i in range(3)})
</code></pre>
<p>and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow">drop</a> the <code>Record2</code> fields from <code>df</code> with:</p>
<pre><code>df = df.drop(['Record2Field{}'.format(i) for i in range(3)], axis=1)
</code></pre>
<hr>
<p>But there is a better approach to your overall problem: Replace the flat column names <code>RecordMFieldN</code> with a 2-level <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow"><code>MultiIndex</code></a> which splits the <code>Record</code> from the <code>Field</code>.
This will give you enough control to <em>stack</em> the data in the desired form:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(2016)
ncols, nrows = 3, 4
def make_dataframe(ncols, nrows):
columns = ['Record{}Field{}'.format(i, j) for i in range(1,3)
for j in range(ncols)]
df = pd.DataFrame(np.random.randint(10, size=(nrows, 2*ncols)), columns=columns)
df['dataTime'] = pd.date_range('2000-1-1', periods=nrows)
return df
df = make_dataframe(ncols, nrows)
# stash the `dataTime` in the row index so we can reassign
# the column index to `new_index`
result = df.set_index('dataTime')
new_index = pd.MultiIndex.from_product([[1,2], df.columns[:ncols]],
names=['record', 'field'])
result.columns = new_index
# Now the problem can be solved by stacking.
result = result.stack('record')
result.index = result.index.droplevel('record')
</code></pre>
<p>yields</p>
<pre><code>field Record1Field0 Record1Field1 Record1Field2
dataTime
2000-01-01 3 7 2
2000-01-01 3 8 4
2000-01-02 8 7 9
2000-01-02 2 6 3
2000-01-03 4 1 9
2000-01-03 1 2 2
2000-01-04 8 9 8
2000-01-04 5 9 4
</code></pre>
| 1 | 2016-08-30T17:27:11Z | [
"python",
"pandas",
"dataframe"
] |
Split and merge pandas dataframe | 39,233,205 | <p>I am trying to split and merge a Pandas dataframe.</p>
<p>The columns of the original data frame are arranged like so:</p>
<pre><code>dataTime Record1Field1 ... Record1FieldN Record2Field1 ... Record1FieldN
time1 << record 1 data >> << record 2 data >>
</code></pre>
<p>I would like to split the <code>Record2</code> fields into a separate data frame <code>tempdf</code>, indexed by the dataTime. <code>tempdf</code> will therefore look something like this:</p>
<pre><code>dataTime Record2Field1 ... Record2FieldN
time1 << record 2 data >>
</code></pre>
<p>Once <code>tempdf</code> is populated, delete the Record2 columns from the original data frame. The first difficulty I'm having is in creating this <code>tempdf</code> which contains the record 2 data.</p>
<p>Then, I would like to rename the columns in <code>tempdf</code> so that they align with the <code>Record1</code> columns in the original data frame. (This portion I know how to do)</p>
<p>Finally I would like to merge <code>tempdf</code> back into the original data frame.</p>
<p>The end result should look something like this:</p>
<pre><code>dataTime Record1Field1 ... Record1FieldN
time1 <<record 1 data>>
time1 <<record 2 data>>
</code></pre>
<p>So far I haven't determined a good method of doing this. Any help is appreciated! Thanks.</p>
| 0 | 2016-08-30T17:00:45Z | 39,233,913 | <p>You could get all your <code>Record2</code> values under the <code>Record1</code> columns as follows:</p>
<p><strong>Data Setup:</strong> </p>
<pre><code>data = StringIO(
'''
dataTime Record1Field1 Record1Field2 Record1Field3 Record2Field1 Record2Field2 Record2Field3
01-01-2015 1 2 3 4 5 6
''')
df = pd.read_csv(data, delim_whitespace=True, parse_dates=['dataTime'])
print (df)
dataTime Record1Field1 Record1Field2 Record1Field3 Record2Field1 \
0 2015-01-01 1 2 3 4
Record2Field2 Record2Field3
0 5 6
</code></pre>
<p><strong>Operations:</strong></p>
<pre><code>df.set_index('dataTime', inplace=True)
# Filter column names corresponding to Record2
tempdf = df[[col for col in list(df) if col.startswith('Record2')]]
# Drop those columns after assigning to tempdf
df.drop(tempdf.columns, inplace=True, axis=1)
# Rename the column names for appending
tempdf.columns = [col for col in list(df) if col.startswith('Record1')]
# Concatenate row-wise
print (df.append(tempdf))
Record1Field1 Record1Field2 Record1Field3
dataTime
2015-01-01 1 2 3
2015-01-01 4 5 6
</code></pre>
| 0 | 2016-08-30T17:44:00Z | [
"python",
"pandas",
"dataframe"
] |
cv.cvtColor(img, cv.COLOR_BGR2GRAY) doesn't work | 39,233,231 | <p>This is my first attempt to detect faces and eyes in OpenCV 3.1. Here is my code:</p>
<pre><code>import cv2 as cv
import numpy as np
face_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier('haarcascade_eye.xml')
cam = cv.VideoCapture(0)
while True:
tf, img = cam.read()
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
img = cv.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
roi_gray = gray[y:y + h, x:x + w]
roi_color = img[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex, ey, ew, eh) in eyes:
cv.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
print(tf)
cv.imshow('my cam', img)
key = cv.waitKey(1)
if key == 27:
break
cam.release()
cv.destroyAllWindows()
</code></pre>
<p>And I got this error:</p>
<pre><code>OpenCV Error: Assertion failed (!empty()) in cv::CascadeClassifier::detectMultiScale, file D:\Build\OpenCV\opencv-3.1.0\modules\objdetect\src\cascadedetect.cpp, line 1639
Traceback (most recent call last):
File "C:/Users/hedon/PycharmProjects/deneme_1.1/ilk.py", line 13, in <module>
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
cv2.error: D:\Build\OpenCV\opencv-3.1.0\modules\objdetect\src\cascadedetect.cpp:1639: error: (-215) !empty() in function cv::CascadeClassifier::detectMultiScale
</code></pre>
<p>Can anybody tell me where my mistake is? I've also tried:</p>
<p><code>gray = cv.cvtColor(img, cv.COLOR_BAYER_GR2GRAY)</code> as PyCharm suggested. Same error:</p>
<pre><code>OpenCV Error: Assertion failed (scn == 1 && dcn == 1) in cv::demosaicing, file D:\Build\OpenCV\opencv-3.1.0\modules\imgproc\src\demosaicing.cpp, line 1630
Traceback (most recent call last):
File "C:/Users/hedon/PycharmProjects/deneme_1.1/ilk.py", line 11, in <module>
gray = cv.cvtColor(img, cv.COLOR_BAYER_BG2GRAY)
cv2.error: D:\Build\OpenCV\opencv-3.1.0\modules\imgproc\src\demosaicing.cpp:1630: error: (-215) scn == 1 && dcn == 1 in function cv::demosaicing
</code></pre>
| 1 | 2016-08-30T17:02:09Z | 39,234,039 | <blockquote>
<p>OpenCV Error: Assertion failed (!empty()) in cv::CascadeClassifier::detectMultiScale</p>
</blockquote>
<p>is telling you that the classifier is <em>empty</em>, because you didn't load the xml files correctly. </p>
<p>Use the full path to the xml files to be sure to load them properly.</p>
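<p>A stdlib-only sketch of that fail-fast check — resolve the cascade file to an absolute path and raise immediately if it is missing, instead of letting <code>CascadeClassifier</code> silently come back empty (the <code>base_dir</code> value is whatever directory holds your OpenCV <code>data</code> XML files):</p>

```python
import os

def cascade_path(filename, base_dir):
    """Return an absolute path to a cascade XML file, raising if it does not exist."""
    path = os.path.abspath(os.path.join(base_dir, filename))
    if not os.path.isfile(path):
        raise FileNotFoundError('cascade file not found: {}'.format(path))
    return path
```

<p>After constructing the classifier you can also guard with <code>face_cascade.empty()</code>, which returns True when nothing was loaded.</p>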
| 0 | 2016-08-30T17:51:06Z | [
"python",
"opencv",
"numpy",
"python-3.5",
"opencv3.1"
] |
retrieve name, phone number, url and role with regex | 39,233,259 | <p>Is it possible to do complex regex operations to retrieve name, role, designation in python? I have attached the pic for my requirement.
<a href="http://i.stack.imgur.com/xqqMz.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/xqqMz.jpg" alt="enter image description here"></a></p>
| -1 | 2016-08-30T17:04:23Z | 39,234,489 | <p>No. You need actual Natural Language Processing for that.</p>
| 0 | 2016-08-30T18:17:37Z | [
"python",
"regex",
"nltk"
] |
retrieve name, phone number, url and role with regex | 39,233,259 | <p>Is it possible to do complex regex operations to retrieve name, role, designation in python? I have attached the pic for my requirement.
<a href="http://i.stack.imgur.com/xqqMz.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/xqqMz.jpg" alt="enter image description here"></a></p>
| -1 | 2016-08-30T17:04:23Z | 39,237,171 | <p>The answer is Yes and No. </p>
<p>Regex is pattern matching. Anything that follows a specific pattern, like a phone number or URL, can be extracted using regex with a great degree of accuracy.</p>
<p>Refer:</p>
<p><a href="http://stackoverflow.com/questions/3868753/find-phone-numbers-in-python-script">Regex for phone number</a></p>
<p><a href="https://mathiasbynens.be/demo/url-regex" rel="nofollow">Regex for url</a></p>
<p>For roles & designation, if there is a manageable list that can be used as anchors, it is still possible to use regex to retrieve this information.</p>
<p>Retrieving names can be tricky or simple depending on how uniformly you capture the data. I have shared a simple example that will look for 2 consecutive words with the first letter capitalized, separated by a space. However, it might have to be tweaked to include cases that do not follow this pattern.</p>
<pre><code>^([A-Z]\w+)\s([A-Z]\w+).*?
</code></pre>
<p>So in summary, I would say yes, you can use regex to some extent, but it may or may not be the best solution depending on what you are trying to achieve.</p>
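<p>For completeness, here is that name pattern wired into Python's <code>re</code> module (the sample strings are invented):</p>

```python
import re

NAME_RE = re.compile(r'^([A-Z]\w+)\s([A-Z]\w+)')

def extract_name(text):
    """Return (first, last) if the text starts with two capitalized words, else None."""
    m = NAME_RE.match(text)
    return m.groups() if m else None
```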
| 0 | 2016-08-30T21:12:39Z | [
"python",
"regex",
"nltk"
] |
pytest logbook logging to file and stdout | 39,233,277 | <p>I'm trying to setup logbook in a pytest test to output everything to both stderr and a file. The file should get every log level, but stderr should have a higher threshold (which pytest will manage with it's usual capture settings).</p>
<p>I've got the pytest-logbook plugin. That redirects stderr into pytest capture, but I'm not sure how to add the file output.</p>
<p>This is (hopefully) obvious to someone that knows logbook, but it's new to me.</p>
<p>One more thing, I want the file logging to be real time. My tests are generally long running and pytest's normal behavior of only showing output on failure isn't helping when I need to see if things are hung.</p>
<p>Here is code that I think should work, but doesn't. I get the log file, but nothing to stdout/stderr (even on fail):</p>
<p>conftest.py:</p>
<pre><code>import os
import pytest
import logbook
import sys
@pytest.fixture(scope='module')
def modlog(request):
"""Logger that also writes to a file."""
name = request.module.__name__
if name.startswith('test_'):
name = name[5:]
logname = 'TEST-'+name+'.log'
if os.path.exists(logname):
os.rename(logname, logname+"~")
logger = logbook.Logger(name)
logger.handlers.append(logbook.FileHandler(logname, level='DEBUG'))
logger.handlers.append(logbook.StreamHandler(sys.stdout, level='INFO'))
logger.warn("Start of logging")
return logger
</code></pre>
<p>test_loggy.py:</p>
<pre><code>import pytest
def test_foo(modlog):
modlog.info('hello')
modlog.info('world')
assert 0 # logs will only print on test fail
</code></pre>
| 0 | 2016-08-30T17:05:08Z | 39,256,372 | <p>Answering my own question (in case others run into the same thing).</p>
<p>The logging handlers form a stack and you have to allow messages to "bubble" through. This is done as an option when the handlers are created.</p>
<p>So the handler creation should be:</p>
<pre><code>logger.handlers.append(logbook.FileHandler(logname, level='DEBUG', bubble=True))
logger.handlers.append(logbook.StreamHandler(sys.stderr, level='INFO', bubble=True))
</code></pre>
<p>If you run <code>py.test -s</code>, then you will see the info level messages in real time. If the test fails, the logbook plugin will show all log messages for the failed test (including debug). You will also get a copy in the file (in real time).</p>
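<p>For comparison only — logbook's <code>bubble=True</code> has a rough counterpart in the stdlib <code>logging</code> module, where one logger fans records out to every handler and each handler filters by its own level (this is plain <code>logging</code>, not logbook; the logbook fix above is what the original code needs):</p>

```python
import io
import logging

def make_logger(name, file_stream, console_stream):
    """DEBUG and up to the 'file' stream, INFO and up to the 'console' stream."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)                     # let everything reach the handlers
    file_handler = logging.StreamHandler(file_stream)
    file_handler.setLevel(logging.DEBUG)
    console_handler = logging.StreamHandler(console_stream)
    console_handler.setLevel(logging.INFO)
    logger.addHandler(file_handler)
    logger.addHandler(console_handler)
    return logger

file_buf, console_buf = io.StringIO(), io.StringIO()
log = make_logger('demo_two_handlers', file_buf, console_buf)
log.debug('debug detail')
log.info('info message')
```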
| 0 | 2016-08-31T18:13:18Z | [
"python",
"py.test",
"logbook"
] |
Check django permission or operator? | 39,233,347 | <p>For my view, I am checking the permission through the @permission_required decorator, but I really wish to check for "either" permission A or permission B, so if the user has at least one of the two permissions, the view is executed...</p>
<p>Is there a way to do this?</p>
| 0 | 2016-08-30T17:09:23Z | 39,233,603 | <p>You could write your own decorator for this.
Or use <code>django.contrib.auth.decorators.user_passes_test(your_test_func)</code> to create a custom decorator.</p>
<p>In both cases, have a look at the source code of the <code>permission_required</code> decorator in the above module.</p>
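<p>The predicate you would hand to <code>user_passes_test</code> can be kept framework-free and tested on its own — a sketch (the stub user and permission strings are made up for illustration):</p>

```python
def has_any_perm(user, perms):
    """True if the user holds at least one of the given permissions."""
    return any(user.has_perm(p) for p in perms)

class StubUser:
    """Minimal stand-in for a Django user, for illustration only."""
    def __init__(self, perms):
        self._perms = set(perms)

    def has_perm(self, perm):
        return perm in self._perms
```

<p>With Django itself this would be wired up roughly as <code>@user_passes_test(lambda u: has_any_perm(u, ('app.perm_a', 'app.perm_b')))</code> on the view (the permission names here are invented).</p>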
| 0 | 2016-08-30T17:26:08Z | [
"python",
"django",
"model-view-controller",
"permissions"
] |
Regex to parse out a part of URL | 39,233,526 | <p>I am having the following data,</p>
<pre><code>data
http://hsotname.com/2016/08/a-b-n-r-y-u
https://www.hostname.com/best-food-for-humans
http://www.hostname.com/wp-content/uploads/2014/07/a-w-w-2.jpg
http://www.hostname.com/a/geniusbar/
http://www.hsotname.com/m/
http://www.hsotname.com/
</code></pre>
<p>I want to avoid the first http:// or https:// and check for the last '/' and parse out the remaining parts of the URL. But the challenge here is, we have '/' on the end of few URLs as well. The output which I want is,</p>
<pre><code>parsed
a-b-n-r-y-u
best-food-for-humans
a-w-w-2.jpg
NULL
NULL
NULL
</code></pre>
<p>Can anybody help me to find the last / and parse out the remaining part of the URL? I am new to regex and any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-30T17:21:00Z | 39,233,587 | <p>Another option is to simply split on "/" and take the last element:</p>
<pre><code>"http://hsotname.com/2016/08/a-b-n-r-y-u".split("/")[-1]
# 'a-b-n-r-y-u'
"http://www.hostname.com/a/geniusbar/".split("/")[-1]
# ''
</code></pre>
| 2 | 2016-08-30T17:25:06Z | [
"python",
"regex",
"regex-negation"
] |
Regex to parse out a part of URL | 39,233,526 | <p>I am having the following data,</p>
<pre><code>data
http://hsotname.com/2016/08/a-b-n-r-y-u
https://www.hostname.com/best-food-for-humans
http://www.hostname.com/wp-content/uploads/2014/07/a-w-w-2.jpg
http://www.hostname.com/a/geniusbar/
http://www.hsotname.com/m/
http://www.hsotname.com/
</code></pre>
<p>I want to avoid the first http:// or https:// and check for the last '/' and parse out the remaining parts of the URL. But the challenge here is, we have '/' on the end of few URLs as well. The output which I want is,</p>
<pre><code>parsed
a-b-n-r-y-u
best-food-for-humans
a-w-w-2.jpg
NULL
NULL
NULL
</code></pre>
<p>Can anybody help me to find the last / and parse out the remaining part of the URL? I am new to regex and any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-30T17:21:00Z | 39,233,592 | <p>I'd go with something like this:</p>
<pre><code>\/([^/]*)$
</code></pre>
<p>It'll match the last slash, then grab anything after it (if anything) that isn't a slash.</p>
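<p>Run with Python's <code>re</code> module it looks like this — group 1 holds the captured tail, and the backslash before the slash is unnecessary in Python, so it is dropped here:</p>

```python
import re

def last_segment(url):
    """Text after the final '/', or None when there is no '/' or the URL ends with one."""
    m = re.search(r'/([^/]*)$', url)
    if m is None:
        return None
    return m.group(1) or None   # empty capture (trailing slash) -> None
```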
| 0 | 2016-08-30T17:25:18Z | [
"python",
"regex",
"regex-negation"
] |
Regex to parse out a part of URL | 39,233,526 | <p>I am having the following data,</p>
<pre><code>data
http://hsotname.com/2016/08/a-b-n-r-y-u
https://www.hostname.com/best-food-for-humans
http://www.hostname.com/wp-content/uploads/2014/07/a-w-w-2.jpg
http://www.hostname.com/a/geniusbar/
http://www.hsotname.com/m/
http://www.hsotname.com/
</code></pre>
<p>I want to avoid the first http:// or https:// and check for the last '/' and parse out the remaining parts of the URL. But the challenge here is, we have '/' on the end of few URLs as well. The output which I want is,</p>
<pre><code>parsed
a-b-n-r-y-u
best-food-for-humans
a-w-w-2.jpg
NULL
NULL
NULL
</code></pre>
<p>Can anybody help me to find the last / and parse out the remaining part of the URL? I am new to regex and any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-30T17:21:00Z | 39,233,598 | <p>Regex isn't the best tool in this case. Just use str.rfind:</p>
<pre><code>[url[url.rfind('/') + 1:] for url in data]
</code></pre>
<p>Will give you what you're looking for</p>
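<p>One caveat: slicing from <code>url.rfind('/')</code> itself keeps the leading slash; starting one character later gives the bare tail. A quick check against the question's data (trailing-slash URLs come back as empty strings rather than NULL):</p>

```python
data = ['http://hsotname.com/2016/08/a-b-n-r-y-u',
        'http://www.hsotname.com/m/',
        'http://www.hsotname.com/']

# rfind('/') points AT the last slash, so step one past it
tails = [url[url.rfind('/') + 1:] for url in data]
```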
| 0 | 2016-08-30T17:25:48Z | [
"python",
"regex",
"regex-negation"
] |
Regex to parse out a part of URL | 39,233,526 | <p>I am having the following data,</p>
<pre><code>data
http://hsotname.com/2016/08/a-b-n-r-y-u
https://www.hostname.com/best-food-for-humans
http://www.hostname.com/wp-content/uploads/2014/07/a-w-w-2.jpg
http://www.hostname.com/a/geniusbar/
http://www.hsotname.com/m/
http://www.hsotname.com/
</code></pre>
<p>I want to avoid the first http:// or https:// and check for the last '/' and parse out the remaining parts of the URL. But the challenge here is, we have '/' on the end of few URLs as well. The output which I want is,</p>
<pre><code>parsed
a-b-n-r-y-u
best-food-for-humans
a-w-w-2.jpg
NULL
NULL
NULL
</code></pre>
<p>Can anybody help me to find the last / and parse out the remaining part of the URL? I am new to regex and any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-30T17:21:00Z | 39,233,712 | <p>Regexes are probably not the way you should do this - considering that you marked the question <code>python</code>, try (assuming the URL is in name <code>url</code>):</p>
<pre><code>last_part = url.split('/')[-1]
</code></pre>
<p>This splits the URL into a list of substrings between slashes, and stores the last one in <code>last_part</code>.</p>
<p>If you insist on using regexes, though, matching on the end of the string is helpful here. Try <code>/[^/]*$</code>, which matches a slash, followed by any number of non-slashes, followed by the end of the string.</p>
<p>If you were to want to match the last non-empty part following a slash (if you didn't want the last three examples to return <code>""</code>), you could do <code>/[^/]*/?$</code>, which allows but does not require a single slash at the very end.</p>
| 1 | 2016-08-30T17:32:23Z | [
"python",
"regex",
"regex-negation"
] |
Regex to parse out a part of URL | 39,233,526 | <p>I am having the following data,</p>
<pre><code>data
http://hsotname.com/2016/08/a-b-n-r-y-u
https://www.hostname.com/best-food-for-humans
http://www.hostname.com/wp-content/uploads/2014/07/a-w-w-2.jpg
http://www.hostname.com/a/geniusbar/
http://www.hsotname.com/m/
http://www.hsotname.com/
</code></pre>
<p>I want to avoid the first http:// or https:// and check for the last '/' and parse out the remaining parts of the URL. But the challenge here is, we have '/' on the end of few URLs as well. The output which I want is,</p>
<pre><code>parsed
a-b-n-r-y-u
best-food-for-humans
a-w-w-2.jpg
NULL
NULL
NULL
</code></pre>
<p>Can anybody help me to find the last / and parse out the remaining part of the URL? I am new to regex and any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-30T17:21:00Z | 39,233,790 | <p>Possibly over kill for the example, but if you need to deal with location fragments/just location names (ie, the last forward slash is part of the http etc... (splitting <code>http://hostname.com</code> and taking the last <code>/</code> will give you <code>hostname.com</code> - <code>urlsplit</code> will give a path of <code>''</code> instead) then'll you're probably safer off using:</p>
<pre><code>>>> from urllib.parse import urlsplit
>>> urls = ['http://hsotname.com/2016/08/a-b-n-r-y-u', 'https://www.hostname.com/best-food-for-humans', 'http://www.hostname.com/wp-content/uploads/2014/07/a-w-w-2.jpg', 'http://www.hostname.com/a/geniusbar/', 'http://www.hsotname.com/m/', 'http://www.hsotname.com/']
>>> [urlsplit(url).path.rpartition('/')[2] for url in urls]
['a-b-n-r-y-u', 'best-food-for-humans', 'a-w-w-2.jpg', '', '', '']
</code></pre>
| 0 | 2016-08-30T17:37:16Z | [
"python",
"regex",
"regex-negation"
] |
Regex to parse out a part of URL | 39,233,526 | <p>I am having the following data,</p>
<pre><code>data
http://hsotname.com/2016/08/a-b-n-r-y-u
https://www.hostname.com/best-food-for-humans
http://www.hostname.com/wp-content/uploads/2014/07/a-w-w-2.jpg
http://www.hostname.com/a/geniusbar/
http://www.hsotname.com/m/
http://www.hsotname.com/
</code></pre>
<p>I want to avoid the first http:// or https:// and check for the last '/' and parse out the remaining parts of the URL. But the challenge here is, we have '/' on the end of few URLs as well. The output which I want is,</p>
<pre><code>parsed
a-b-n-r-y-u
best-food-for-humans
a-w-w-2.jpg
NULL
NULL
NULL
</code></pre>
<p>Can anybody help me to find the last / and parse out the remaining part of the URL? I am new to regex and any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-30T17:21:00Z | 39,235,084 | <p>Check from the end of the URL, and match every thing but / </p>
<pre><code>[^/]+?$
</code></pre>
<p>or</p>
<pre><code>\b[^/]+?\b$
</code></pre>
| 0 | 2016-08-30T18:54:03Z | [
"python",
"regex",
"regex-negation"
] |
Subplots size/tick spacing pyplot | 39,233,569 | <p>I have searched for similar problems and not found any, so I apologize.</p>
<p>I have this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
yearlymean_gm = np.load('ts_globalmean_annualmean.npz')
ts = yearlymean_gm['ts_aqct']
time = np.arange(0., 45 , 1)
plt.figure( figsize=(12, 5), dpi=80, facecolor='w', edgecolor='k' )
ax = plt.subplot(3, 4, 1)
data = ts[0, :]
plt.plot(time, data)
plt.title('Annual Mean Global Mean Temperature', fontsize=14)
plt.xlabel('year', fontsize=12)
plt.ylabel(modnames[0], fontsize=12)
plt.xlim(0, 50), plt.ylim(275, 310)
ax.set_xticks(time)
ax.set_xticklabels(time, fontsize = 8)
ax= plt.subplot(3, 4, 2)
data = ts[1, :]
plt.plot(time, data)
plt.title('Annual Mean Global Mean Temperature', fontsize=14)
plt.xlabel('year', fontsize=12)
plt.ylabel(modnames[1], fontsize=12)
plt.xlim(0, 50), plt.ylim(275, 310)
ax.set_xticks(time)
ax = plt.subplot(3, 4, 3)
data = ts[2, :]
plt.plot(time, data)
plt.title('Annual Mean Global Mean Temperature', fontsize=14)
plt.xlabel('year', fontsize=12)
plt.ylabel(modnames[2], fontsize=12)
plt.xlim(0, 50), plt.ylim(275, 310)
ax.set_xticks(time)
plt.tight_layout()
plt.show()
plt.close()
</code></pre>
<p>There are currently 9 missing plots, as I'm sure you could guess, <a href="http://i.stack.imgur.com/Q07Fc.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/Q07Fc.jpg" alt="Here's what it's spitting out right now"></a></p>
<p>Here are the three issues, to be clear:
1) Each subplot is <em>tiny</em> compared to the size of the figure (and, you know... what's easily visible). Reducing the size of the figure doesn't make the subplots any more easily readable.</p>
<p>2) They are too close together. I have some ideas about how to solve this, but I feel like I need to resolve 1) first.</p>
<p>3) The axes are so small that the xticks appear all bunched up</p>
<p>I've searched and found no explanation as to how to do this that's written at a level I can understand. The pyplot documentation is essentially gibberish to me.</p>
<p>Thanks in advance for any assistance (and if anyone can offer more general advice about what I'm doing in addition to specific advice about solving this problem, I would appreciate the edification).</p>
| 0 | 2016-08-30T17:23:19Z | 39,237,503 | <p>Ok, so several things are happening here. Let's go through them one at a time. After you instantiate the plot, you call <code>ax = plt.subplot(3, 4, _)</code> 3 times. However, <code>.subplot(3,4,_)</code> breaks the plot into 3 rows and 4 columns, and the third argument selects which piece of this grid to use, starting with 1 (instead of 0). We can number them with the following code:</p>
<pre><code>plt.figure( figsize=(12, 5), dpi=80, facecolor='w', edgecolor='k' )
for N in range(1,13):
ax = plt.subplot(3, 4, N)
ax.annotate(s=str(N), xy=(.4,.4), fontsize=16)
</code></pre>
<p><a href="http://i.stack.imgur.com/5ixzD.png" rel="nofollow"><img src="http://i.stack.imgur.com/5ixzD.png" alt="enter image description here"></a></p>
<p>So by using <code>.subplot(3,4,1)</code>, <code>.subplot(3,4,2)</code>, and <code>.subplot(3,4,3)</code> you are only selecting the first 3 of the 12 sections. </p>
<p>When you add the data to the plot, <code>ax.set_xticks(time)</code> adds 45 ticks to the xaxis (which is quite a few), and <code>ax.set_xticklabels(time, fontsize = 8)</code> adds a label at each tick. This is why it looks so crowded. One options is to reduce the number of ticks, the other is to stretch the x-axis out. Since you have 3 subplots, I think you wanted to 3 rows stacked vertically.</p>
<p>You don't need <code>plt.xlim(0, 50)</code> or <code>plt.ylim(275, 310)</code>. The axis will resize the plot limits for you, unless you have a specific reason for overriding them. </p>
<p>My recommendation would be to use <code>plt.subplots(3,1)</code> (notice the extra "s") rather than repeated calls to <code>plt.subplot</code>. What the difference? <code>plt.subplots(3,1)</code> returns a tuple of a <code>figure</code> object and an array of <code>axis</code> objects. In this case it is a 1-D array because we only call for 1 column. (NOTE: I created fake data for illustrative purposes.) </p>
<pre><code>fig, axes = plt.subplots(3,1, figsize=(12,5), dpi=80, facecolor='w',
                         edgecolor='k', sharex=True)  # sharex shares the x-axis
for i, ax in enumerate(axes):  # need to enumerate to slice the data
    data = ts[i, :]
    ax.plot(time, data)
    ax.set_ylabel(modnames[i], fontsize=12)
    ax.set_xticks(time)
    ax.set_xticklabels(time, fontsize = 8)
# set xlabel outside of for loop so only the last axis gets an xlabel
ax.set_xlabel('year', fontsize=12)
fig.tight_layout()
# set the title, adjust the spacing
fig.suptitle('Annual Mean Global Mean Temperature', fontsize=14)
fig.subplots_adjust(top=0.90)
</code></pre>
<p><a href="http://i.stack.imgur.com/nczqK.png" rel="nofollow"><img src="http://i.stack.imgur.com/nczqK.png" alt="enter image description here"></a></p>
| 1 | 2016-08-30T21:39:21Z | [
"python",
"matplotlib",
"subplot"
] |
Why isn't this simple "rock paper scissors" program working in Python? | 39,233,591 | <p>I'm new to Python and I'm trying to write a program in which the user plays rock paper scissors against the computer. However, I don't know why it isn't working. Here's the code:</p>
<pre><code>import random
computerResult = random.randint(1,3)
print("Lets play Rock Paper Scissors")
playerResult = input("Type 1 for Rock, 2 for Paper or 3 for Scissors: ")
if computerResult == 1:
    print("Computer says Rock")
if computerResult == 2:
    print("Computer says Paper")
if computerResult == 3:
    print("Computer says Scissors")
if playerResult == computerResult:
    print("Its a draw")
elif playerResult == 1 and computerResult == 2:
    print("You lose")
elif playerResult == 1 and computerResult == 3:
    print("You win")
elif playerResult == 2 and computerResult == 1:
    print("You win")
elif playerResult == 2 and computerResult == 3:
    print("You lose")
elif playerResult == 3 and computerResult == 1:
    print("You lose")
elif playerResult == 3 and computerResult == 2:
    print("You win")
</code></pre>
<p>When I run the program and play the game this is what i get:</p>
<pre><code>Lets play Rock Paper Scissors
Type 1 for Rock, 2 for Paper or 3 for Scissors: 1
Computer says Scissors
</code></pre>
<p>It doesn't say whether I won or lost, and I can't figure out why.</p>
| 0 | 2016-08-30T17:25:14Z | 39,233,629 | <p>Because <code>playerResult</code> is a <code>str</code> and <code>computerResult</code> is an <code>int</code>, and comparing a string to an int always results in <code>False</code>.</p>
<p>Change </p>
<pre><code>playerResult = input("Type 1 for Rock, 2 for Paper or 3 for Scissors: ")
</code></pre>
<p>To</p>
<pre><code>playerResult = int(input("Type 1 for Rock, 2 for Paper or 3 for Scissors: "))
</code></pre>
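<p>For illustration, here is a minimal sketch of the underlying type mismatch (the values are made up); a string never compares equal to an int, which is why none of the win/lose branches ever fire:</p>

```python
# input() returns a str; random.randint() returns an int.
playerResult = "1"    # what input() hands back
computerResult = 1    # what random.randint(1, 3) might produce
print(playerResult == computerResult)        # False: str vs int
print(int(playerResult) == computerResult)   # True after converting
```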
| 6 | 2016-08-30T17:27:55Z | [
"python",
"python-3.x"
] |
Why isn't this simple "rock paper scissors" program working in Python? | 39,233,591 | <p>I'm new to Python and I'm trying to write a program in which the user plays rock paper scissors against the computer. However, I don't know why it isn't working. Here's the code:</p>
<pre><code>import random
computerResult = random.randint(1,3)
print("Lets play Rock Paper Scissors")
playerResult = input("Type 1 for Rock, 2 for Paper or 3 for Scissors: ")
if computerResult == 1:
    print("Computer says Rock")
if computerResult == 2:
    print("Computer says Paper")
if computerResult == 3:
    print("Computer says Scissors")
if playerResult == computerResult:
    print("Its a draw")
elif playerResult == 1 and computerResult == 2:
    print("You lose")
elif playerResult == 1 and computerResult == 3:
    print("You win")
elif playerResult == 2 and computerResult == 1:
    print("You win")
elif playerResult == 2 and computerResult == 3:
    print("You lose")
elif playerResult == 3 and computerResult == 1:
    print("You lose")
elif playerResult == 3 and computerResult == 2:
    print("You win")
</code></pre>
<p>When I run the program and play the game this is what i get:</p>
<pre><code>Lets play Rock Paper Scissors
Type 1 for Rock, 2 for Paper or 3 for Scissors: 1
Computer says Scissors
</code></pre>
<p>It doesn't say whether I won or lost, and I can't figure out why.</p>
| 0 | 2016-08-30T17:25:14Z | 39,233,656 | <p>This should fix it</p>
<pre><code>playerResult = input("Type 1 for Rock, 2 for Paper or 3 for Scissors: ")
playerResult = int(playerResult)
</code></pre>
| 1 | 2016-08-30T17:29:15Z | [
"python",
"python-3.x"
] |
Why isn't this simple "rock paper scissors" program working in Python? | 39,233,591 | <p>I'm new to Python and I'm trying to write a program in which the user plays rock paper scissors against the computer. However, I don't know why it isn't working. Here's the code:</p>
<pre><code>import random
computerResult = random.randint(1,3)
print("Lets play Rock Paper Scissors")
playerResult = input("Type 1 for Rock, 2 for Paper or 3 for Scissors: ")
if computerResult == 1:
    print("Computer says Rock")
if computerResult == 2:
    print("Computer says Paper")
if computerResult == 3:
    print("Computer says Scissors")
if playerResult == computerResult:
    print("Its a draw")
elif playerResult == 1 and computerResult == 2:
    print("You lose")
elif playerResult == 1 and computerResult == 3:
    print("You win")
elif playerResult == 2 and computerResult == 1:
    print("You win")
elif playerResult == 2 and computerResult == 3:
    print("You lose")
elif playerResult == 3 and computerResult == 1:
    print("You lose")
elif playerResult == 3 and computerResult == 2:
    print("You win")
</code></pre>
<p>When I run the program and play the game this is what i get:</p>
<pre><code>Lets play Rock Paper Scissors
Type 1 for Rock, 2 for Paper or 3 for Scissors: 1
Computer says Scissors
</code></pre>
<p>It doesn't say whether I won or lost, and I can't figure out why.</p>
| 0 | 2016-08-30T17:25:14Z | 39,233,678 | <p>The problem with your code is that you're assuming the <code>input()</code> function returns an integer. It does not; it returns a <strong>string</strong>. Change this line</p>
<p><code>playerResult = input("Type 1 for Rock, 2 for Paper or 3 for Scissors: ")</code></p>
<p>to this:</p>
<p><code>playerResult = int(input("Type 1 for Rock, 2 for Paper or 3 for Scissors: "))</code></p>
<p>With the <code>int()</code> function you are converting your input to an integer.</p>
| 0 | 2016-08-30T17:30:25Z | [
"python",
"python-3.x"
] |
Why isn't this simple "rock paper scissors" program working in Python? | 39,233,591 | <p>I'm new to Python and I'm trying to write a program in which the user plays rock paper scissors against the computer. However, I don't know why it isn't working. Here's the code:</p>
<pre><code>import random
computerResult = random.randint(1,3)
print("Lets play Rock Paper Scissors")
playerResult = input("Type 1 for Rock, 2 for Paper or 3 for Scissors: ")
if computerResult == 1:
    print("Computer says Rock")
if computerResult == 2:
    print("Computer says Paper")
if computerResult == 3:
    print("Computer says Scissors")
if playerResult == computerResult:
    print("Its a draw")
elif playerResult == 1 and computerResult == 2:
    print("You lose")
elif playerResult == 1 and computerResult == 3:
    print("You win")
elif playerResult == 2 and computerResult == 1:
    print("You win")
elif playerResult == 2 and computerResult == 3:
    print("You lose")
elif playerResult == 3 and computerResult == 1:
    print("You lose")
elif playerResult == 3 and computerResult == 2:
    print("You win")
</code></pre>
<p>When I run the program and play the game this is what i get:</p>
<pre><code>Lets play Rock Paper Scissors
Type 1 for Rock, 2 for Paper or 3 for Scissors: 1
Computer says Scissors
</code></pre>
<p>It doesn't say whether I won or lost, and I can't figure out why.</p>
| 0 | 2016-08-30T17:25:14Z | 39,233,772 | <p>The problem is fixed by casting the user input to an integer.</p>
<p>I'll also add a check to handle the bad-input case:</p>
<pre><code>goodResult = False
playerResult = 0
while not goodResult:
    inp = input("Type 1 for Rock, 2 for Paper or 3 for Scissors: ")
    try:
        playerResult = int(inp)
        goodResult = (playerResult > 0 and playerResult < 4)
    except:
        goodResult = False
</code></pre>
| 0 | 2016-08-30T17:35:54Z | [
"python",
"python-3.x"
] |
Sending in String vs Reading String | 39,233,641 | <p>I have a text file that contains something like:</p>
<pre><code>IP_ADD = "10.10.150.3"
BACKUP_IP = "10.10.150.4"
</code></pre>
<p>and the code to read it in:</p>
<pre><code>counter = 0
wordList = [None] * 100
with open("config.txt") as f:
    content = f.read().splitlines()
for line in content:
    line = line.split(' ',2)[-1]
    wordList[counter] = line
    counter = counter + 1
</code></pre>
<p>which returns just the IP address, quotes included, inside wordList, i.e.</p>
<pre><code>wordList[0] = "10.10.150.3"
</code></pre>
<p>I then try to send an SNMP command using the OID and that IP address. IE</p>
<pre><code>agent.set(MY_OID,wordList[0])
</code></pre>
<p>but this doesn't work. If I change it to the following:</p>
<pre><code>agent.set(MY_OID,"10.10.150.3")
</code></pre>
<p>it works. What am I missing here?</p>
| 0 | 2016-08-30T17:28:31Z | 39,233,707 | <p>From what you wrote it appears that your file has the IP address in quotes. Hence</p>
<pre><code> line = line.split(' ',2)[-1]
</code></pre>
<p>will return an IP address in quotes as a string, aka</p>
<pre><code> "\"10.0.0.1\""
</code></pre>
<p>This is what you are sending across the wire, which is probably not what you intend to do.</p>
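<p>To make this concrete, here is a small runnable sketch using a made-up config line in the same format as the question:</p>

```python
# A sample line in the same shape as the question's config file (hypothetical value).
line = 'IP_ADD = "10.10.150.3"'
value = line.split(' ', 2)[-1]
print(value)             # "10.10.150.3" -- the quotes are part of the string
print(value.strip('"'))  # 10.10.150.3   -- strip them before sending
```

<p>Stripping the quotes with <code>strip('"')</code> before the SNMP set should fix it.</p>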
| 3 | 2016-08-30T17:31:53Z | [
"python",
"string",
"file",
"io"
] |
Sending in String vs Reading String | 39,233,641 | <p>I have a text file that contains something like:</p>
<pre><code>IP_ADD = "10.10.150.3"
BACKUP_IP = "10.10.150.4"
</code></pre>
<p>and the code to read it in:</p>
<pre><code>counter = 0
wordList = [None] * 100
with open("config.txt") as f:
    content = f.read().splitlines()
for line in content:
    line = line.split(' ',2)[-1]
    wordList[counter] = line
    counter = counter + 1
</code></pre>
<p>which returns just the IP address, quotes included, inside wordList, i.e.</p>
<pre><code>wordList[0] = "10.10.150.3"
</code></pre>
<p>I then try to send an SNMP command using the OID and that IP address. IE</p>
<pre><code>agent.set(MY_OID,wordList[0])
</code></pre>
<p>but this doesn't work. If I change it to the following:</p>
<pre><code>agent.set(MY_OID,"10.10.150.3")
</code></pre>
<p>it works. What am I missing here?</p>
| 0 | 2016-08-30T17:28:31Z | 39,234,012 | <p>You almost got it, you just need to strip the double quotes around your IPs. Use <code>strip('"')</code> to do that.</p>
<pre><code>line.split(' ',2)[-1].strip('"')
</code></pre>
<hr>
<p>Your code seems too shabby, no offence. You are doing many things that are not necessary. You can do it much more simply this way:</p>
<pre><code>wordList = []
with open('config.txt') as file:
    for line in file:
        wordList.append(line.split()[-1].strip('"'))
print(wordList)
</code></pre>
<p>Output:</p>
<pre><code>['10.10.150.3', '10.10.150.4']
</code></pre>
| 0 | 2016-08-30T17:50:00Z | [
"python",
"string",
"file",
"io"
] |
Nonblocking fifo | 39,233,663 | <p>How can I make a fifo between two Python processes that allows dropping of lines if the reader is not able to handle the input?</p>
<ul>
<li>If the reader tries to <code>read</code> or <code>readline</code> faster than the writer writes, it should block.</li>
<li>If the reader cannot work as fast as the writer writes, the writer should not block. Lines should not be buffered (except one line at a time) and only the last line written should be received by the reader on its next <code>readline</code> attempt.</li>
</ul>
<p>Is this possible with a named fifo, or is there any other simple way of achieving this?</p>
| 3 | 2016-08-30T17:29:49Z | 39,233,976 | <p>Well, that's not actually a FIFO (queue) as far as I am aware - it's a single variable. I suppose it might be implementable if you set up a queue or pipe with a maximum size of 1, but it seems that it would work better to use a <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Lock" rel="nofollow"><code>Lock</code></a> on a single object in one of the processes, which the other process references via a <a href="https://docs.python.org/3/library/multiprocessing.html#proxy-objects" rel="nofollow">proxy object</a>. The reader would set it to <code>None</code> whenever it reads, and the writer would overwrite the contents every time it writes.</p>
<p>You can get those to the other processes by passing the proxy of the object, and a proxy of the lock, as an argument to all relevant processes. To get it slightly more conveniently, you can use a <a href="https://docs.python.org/3/library/multiprocessing.html#managers" rel="nofollow"><code>Manager</code></a>, which provides a single object with proxy that you can pass in, which contains and provides proxies for whatever other objects (including locks) you want to put in it. <a href="http://stackoverflow.com/a/9436866/6051861">This answer</a> provides a useful example of proper use of a Manager to pass objects into a new process.</p>
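<p>For illustration, here is a minimal single-process sketch of the "queue with a maximum size of 1" idea mentioned above. It uses <code>queue.Queue</code> so it is easy to run; <code>multiprocessing.Queue</code> exposes the same <code>put_nowait</code>/<code>get_nowait</code> calls, although with more than one writer you would still want a lock around the drop-and-replace step:</p>

```python
import queue

def write_latest(q, item):
    """Writer never blocks: if the reader is behind, drop the stale value."""
    while True:
        try:
            q.put_nowait(item)
            return
        except queue.Full:
            try:
                q.get_nowait()   # discard the unread, stale value
            except queue.Empty:
                pass             # reader grabbed it first; just retry the put

q = queue.Queue(maxsize=1)
write_latest(q, "line 1")
write_latest(q, "line 2")   # overwrites "line 1" without blocking
print(q.get())              # reader blocks until something is there -> line 2
```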
| 0 | 2016-08-30T17:48:04Z | [
"python",
"blocking",
"nonblocking",
"fifo"
] |
Nonblocking fifo | 39,233,663 | <p>How can I make a fifo between two Python processes that allows dropping of lines if the reader is not able to handle the input?</p>
<ul>
<li>If the reader tries to <code>read</code> or <code>readline</code> faster than the writer writes, it should block.</li>
<li>If the reader cannot work as fast as the writer writes, the writer should not block. Lines should not be buffered (except one line at a time) and only the last line written should be received by the reader on its next <code>readline</code> attempt.</li>
</ul>
<p>Is this possible with a named fifo, or is there any other simple way of achieving this?</p>
| 3 | 2016-08-30T17:29:49Z | 39,259,030 | <p>The following code uses a named FIFO to allow communication between two scripts.</p>
<ul>
<li>If the reader tries to <code>read</code> faster than the writer, it blocks.</li>
<li>If the reader cannot keep up with the writer, the writer does not block.</li>
<li>Operations are buffer oriented. Line oriented operations are not currently implemented.</li>
<li>This code should be considered a proof-of-concept. The delays and buffer sizes are arbitrary.</li>
</ul>
<p><strong>Code</strong></p>
<pre><code>import argparse
import errno
import os
from select import select
import time

class OneFifo(object):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        if os.path.exists(self.name):
            os.unlink(self.name)
        os.mkfifo(self.name)
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        if os.path.exists(self.name):
            os.unlink(self.name)

    def write(self, data):
        print "Waiting for client to open FIFO..."
        try:
            server_file = os.open(self.name, os.O_WRONLY | os.O_NONBLOCK)
        except OSError as exc:
            if exc.errno == errno.ENXIO:
                server_file = None
            else:
                raise
        if server_file is not None:
            print "Writing line to FIFO..."
            try:
                os.write(server_file, data)
                print "Done."
            except OSError as exc:
                if exc.errno == errno.EPIPE:
                    pass
                else:
                    raise
            os.close(server_file)

    def read_nonblocking(self):
        result = None
        try:
            client_file = os.open(self.name, os.O_RDONLY | os.O_NONBLOCK)
        except OSError as exc:
            if exc.errno == errno.ENOENT:
                client_file = None
            else:
                raise
        if client_file is not None:
            try:
                rlist = [client_file]
                wlist = []
                xlist = []
                rlist, wlist, xlist = select(rlist, wlist, xlist, 0.01)
                if client_file in rlist:
                    result = os.read(client_file, 1024)
            except OSError as exc:
                if exc.errno == errno.EAGAIN or exc.errno == errno.EWOULDBLOCK:
                    result = None
                else:
                    raise
            os.close(client_file)
        return result

    def read(self):
        try:
            with open(self.name, 'r') as client_file:
                result = client_file.read()
        except OSError as exc:
            if exc.errno == errno.ENOENT:
                result = None
            else:
                raise
        if not len(result):
            result = None
        return result

def parse_argument():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--client', action='store_true',
                        help='Set this flag for the client')
    parser.add_argument('-n', '--non-blocking', action='store_true',
                        help='Set this flag to read without blocking')
    result = parser.parse_args()
    return result

if __name__ == '__main__':
    args = parse_argument()
    if not args.client:
        with OneFifo('known_name') as one_fifo:
            while True:
                one_fifo.write('one line')
                time.sleep(0.1)
    else:
        one_fifo = OneFifo('known_name')
        while True:
            if args.non_blocking:
                result = one_fifo.read_nonblocking()
            else:
                result = one_fifo.read()
            if result is not None:
                print result
</code></pre>
<p>The <code>server</code> checks if the <code>client</code> has opened the FIFO. If the <code>client</code> has opened the FIFO, the <code>server</code> writes a line. Otherwise, the <code>server</code> continues running. I have implemented a non-blocking read because the blocking read causes a problem: If the <code>server</code> restarts, most of the time the <code>client</code> stays blocked and never recovers. With a non-blocking <code>client</code>, a <code>server</code> restart is more easily tolerated.</p>
<p><strong>Output</strong></p>
<pre><code>[user@machine:~] python onefifo.py
Waiting for client to open FIFO...
Waiting for client to open FIFO...
Writing line to FIFO...
Done.
Waiting for client to open FIFO...
Writing line to FIFO...
Done.
[user@machine:~] python onefifo.py -c
one line
one line
</code></pre>
<p><strong>Notes</strong></p>
<p>On startup, if the <code>server</code> detects that the FIFO already exists, it removes it. This is the easiest way to notify <code>clients</code> that the <code>server</code> has restarted. This notification is usually ignored by the blocking version of the <code>client</code>.</p>
| 1 | 2016-08-31T21:08:08Z | [
"python",
"blocking",
"nonblocking",
"fifo"
] |
Interactive HTML plots from Python's Bokeh to Latex | 39,233,673 | <p>I would like to embed an interactive HTML based plot (For example, <a href="http://bokeh.pydata.org/en/0.10.0/docs/gallery/burtin.html" rel="nofollow">http://bokeh.pydata.org/en/0.10.0/docs/gallery/burtin.html</a>)<br>
in a PDF document that I generated using LaTeX. I am able to embed matplotlib-based plots in my document using pythontex; however, I fail to embed HTML-based plots like the one above. </p>
<p>I would be extremely grateful for any insights. As long as they allow me to embed interactive plots, I am open to using platforms other than latex (even Microsoft Word) except for Python notebooks. I paste my code below. Thank you very much in advance for your time.</p>
<pre><code>\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{pythontex}
\usepackage{graphicx }
\begin{document}
\begin{pycode}
from collections import OrderedDict
from math import log, sqrt
import numpy as np
import pandas as pd
from six.moves import cStringIO as StringIO
from bokeh.plotting import figure, show, output_file
antibiotics = """
bacteria, penicillin, streptomycin, neomycin, gram
Mycobacterium tuberculosis, 800, 5, 2, negative
Salmonella schottmuelleri, 10, 0.8, 0.09, negative
Proteus vulgaris, 3, 0.1, 0.1, negative
Klebsiella pneumoniae, 850, 1.2, 1, negative
Brucella abortus, 1, 2, 0.02, negative
Pseudomonas aeruginosa, 850, 2, 0.4, negative
Escherichia coli, 100, 0.4, 0.1, negative
Salmonella (Eberthella) typhosa, 1, 0.4, 0.008, negative
Aerobacter aerogenes, 870, 1, 1.6, negative
Brucella antracis, 0.001, 0.01, 0.007, positive
Streptococcus fecalis, 1, 1, 0.1, positive
Staphylococcus aureus, 0.03, 0.03, 0.001, positive
Staphylococcus albus, 0.007, 0.1, 0.001, positive
Streptococcus hemolyticus, 0.001, 14, 10, positive
Streptococcus viridans, 0.005, 10, 40, positive
Diplococcus pneumoniae, 0.005, 11, 10, positive
"""
drug_color = OrderedDict([
("Penicillin", "#0d3362"),
("Streptomycin", "#c64737"),
("Neomycin", "black" ),
])
gram_color = {
"positive" : "#aeaeb8",
"negative" : "#e69584",
}
df = pd.read_csv(StringIO(antibiotics),
skiprows=1,
skipinitialspace=True,
engine='python')
width = 800
height = 800
inner_radius = 90
outer_radius = 300 - 10
minr = sqrt(log(.001 * 1E4))
maxr = sqrt(log(1000 * 1E4))
a = (outer_radius - inner_radius) / (minr - maxr)
b = inner_radius - a * maxr
def rad(mic):
    return a * np.sqrt(np.log(mic * 1E4)) + b
big_angle = 2.0 * np.pi / (len(df) + 1)
small_angle = big_angle / 7
x = np.zeros(len(df))
y = np.zeros(len(df))
output_file("burtin.html", title="burtin.py example")
p = figure(plot_width=width, plot_height=height, title="",
x_axis_type=None, y_axis_type=None,
x_range=[-420, 420], y_range=[-420, 420],
min_border=0, outline_line_color="black",
background_fill="#f0e1d2", border_fill="#f0e1d2")
p.line(x+1, y+1, alpha=0)
# annular wedges
angles = np.pi/2 - big_angle/2 - df.index.to_series()*big_angle
colors = [gram_color[gram] for gram in df.gram]
p.annular_wedge(
x, y, inner_radius, outer_radius, -big_angle+angles, angles, color=colors,
)
# small wedges
p.annular_wedge(x, y, inner_radius, rad(df.penicillin),
-big_angle+angles+5*small_angle, -big_angle+angles+6*small_angle,
color=drug_color['Penicillin'])
p.annular_wedge(x, y, inner_radius, rad(df.streptomycin),
-big_angle+angles+3*small_angle, -big_angle+angles+4*small_angle,
color=drug_color['Streptomycin'])
p.annular_wedge(x, y, inner_radius, rad(df.neomycin),
-big_angle+angles+1*small_angle, -big_angle+angles+2*small_angle,
color=drug_color['Neomycin'])
# circular axes and lables
labels = np.power(10.0, np.arange(-3, 4))
radii = a * np.sqrt(np.log(labels * 1E4)) + b
p.circle(x, y, radius=radii, fill_color=None, line_color="white")
p.text(x[:-1], radii[:-1], [str(r) for r in labels[:-1]],
text_font_size="8pt", text_align="center", text_baseline="middle")
# radial axes
p.annular_wedge(x, y, inner_radius-10, outer_radius+10,
-big_angle+angles, -big_angle+angles, color="black")
# bacteria labels
xr = radii[0]*np.cos(np.array(-big_angle/2 + angles))
yr = radii[0]*np.sin(np.array(-big_angle/2 + angles))
label_angle=np.array(-big_angle/2+angles)
label_angle[label_angle < -np.pi/2] += np.pi # easier to read labels on the left side
p.text(xr, yr, df.bacteria, angle=label_angle,
text_font_size="9pt", text_align="center", text_baseline="middle")
# OK, these hand drawn legends are pretty clunky, will be improved in future release
p.circle([-40, -40], [-370, -390], color=list(gram_color.values()), radius=5)
p.text([-30, -30], [-370, -390], text=["Gram-" + gr for gr in gram_color.keys()],
text_font_size="7pt", text_align="left", text_baseline="middle")
p.rect([-40, -40, -40], [18, 0, -18], width=30, height=13,
color=list(drug_color.values()))
p.text([-15, -15, -15], [18, 0, -18], text=list(drug_color.keys()),
text_font_size="9pt", text_align="left", text_baseline="middle")
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
show(p)
\end{pycode}
\begin{center}
\includegraphics[width=1\textwidth]{burtin.html}
\end{center}
\end{document}
</code></pre>
| 0 | 2016-08-30T17:30:14Z | 39,234,545 | <p>As of Bokeh <code>0.12.1</code> this is not possible. Although Bokeh is a "Python" library, it was designed and conceived for the purpose of making it simple to make interactive visualizations in the browser. Accordingly, it has a large JavaScript component (where all the work is done, actually), and requires a running JavaScript engine (i.e. a web browser) to function. </p>
<p>There is a "save tool" that will let you save PNGs from a web page by manually clicking a button, but that is it. There is currently no method to generate any kind of static image output programmatically or without involving a web browser. </p>
<p>There is an <a href="https://github.com/bokeh/bokeh/issues/538" rel="nofollow">open GitHub issue</a> you can follow for updates about this feature request. </p>
| 0 | 2016-08-30T18:21:30Z | [
"python",
"html",
"matplotlib",
"latex",
"bokeh"
] |
Insert into oracle table from CSV using python | 39,233,742 | <p>I am reading a CSV file with 5 columns and pushing it to an Oracle table.</p>
<p><a href="http://i.stack.imgur.com/kkoNu.jpg" rel="nofollow">CSV file Structure</a></p>
<p>I know there are lots of resources on this, but even then I am unable to find a solution to my problem.</p>
<p>Code to read the CSV to python : </p>
<pre><code>import csv
reader = csv.reader(open("sample.csv","r"))
lines=[]
for line in reader:
    lines.append(line)
print lines
</code></pre>
<p>Output :</p>
<blockquote>
<p>[['Firstname', 'LastName', 'email', 'Course_name', 'status'],
['Kristina', 'Bohn', 'abc@123.com', 'Guide to
Capnography in the Management of the Critically Ill Patient (CE)',
'Registered'], ['Peggy', 'Lutz', 'gef@123.com',
'Guide to Monitoring EtCO2 During Opioid Delivery (CE)', 'In
Progress']]</p>
</blockquote>
<p>Code to push the list to Oracle table :</p>
<pre><code>import cx_Oracle
con = cx_Oracle.connect('username/password@tamans*****vd/Servicename')
ver=con.version.split(".")
print(ver)
cur=con.cursor()
cur.execute("INSERT INTO TEST_CSODUPLOAD ('FIRSTNAME','LASTNAME','EMAIL','COURSE_NAME','STATUS') VALUES(:1,:2,:3,:4,:5)",lines)
con.commit ()
cur.close()
</code></pre>
<p>I am getting the error :</p>
<p><strong>DatabaseError: ORA-01484: arrays can only be bound to PL/SQL statements</strong></p>
<p>Please help me solve the issue Thanks in advance </p>
| 1 | 2016-08-30T17:33:50Z | 39,249,806 | <p>That error means what it said <a href="https://bitbucket.org/anthony_tuininga/cx_oracle/issues/36/ora-01484-arrays-can-only-be-bound-to-pl" rel="nofollow">here</a> </p>
<blockquote>
<p>This is not a bug. You are issuing a SQL query, not a PL/SQL statement and this sort of thing is not supported by Oracle. In this particular case you can simply pass the string. If you have multiple strings you will need to use a subquery instead. Look at the table() and cast() operators for inspiration!</p>
</blockquote>
<p>You can also dump the statements into a SQL file and execute it.</p>
| 0 | 2016-08-31T12:33:37Z | [
"python",
"oracle",
"csv",
"insert-into",
"cx-oracle"
] |
Insert into oracle table from CSV using python | 39,233,742 | <p>I am reading a CSV file with 5 columns and pushing it to an Oracle table.</p>
<p><a href="http://i.stack.imgur.com/kkoNu.jpg" rel="nofollow">CSV file Structure</a></p>
<p>I know there are lots of resources on this, but even then I am unable to find a solution to my problem.</p>
<p>Code to read the CSV to python : </p>
<pre><code>import csv
reader = csv.reader(open("sample.csv","r"))
lines=[]
for line in reader:
    lines.append(line)
print lines
</code></pre>
<p>Output :</p>
<blockquote>
<p>[['Firstname', 'LastName', 'email', 'Course_name', 'status'],
['Kristina', 'Bohn', 'abc@123.com', 'Guide to
Capnography in the Management of the Critically Ill Patient (CE)',
'Registered'], ['Peggy', 'Lutz', 'gef@123.com',
'Guide to Monitoring EtCO2 During Opioid Delivery (CE)', 'In
Progress']]</p>
</blockquote>
<p>Code to push the list to Oracle table :</p>
<pre><code>import cx_Oracle
con = cx_Oracle.connect('username/password@tamans*****vd/Servicename')
ver=con.version.split(".")
print(ver)
cur=con.cursor()
cur.execute("INSERT INTO TEST_CSODUPLOAD ('FIRSTNAME','LASTNAME','EMAIL','COURSE_NAME','STATUS') VALUES(:1,:2,:3,:4,:5)",lines)
con.commit ()
cur.close()
</code></pre>
<p>I am getting the error :</p>
<p><strong>DatabaseError: ORA-01484: arrays can only be bound to PL/SQL statements</strong></p>
<p>Please help me solve the issue Thanks in advance </p>
| 1 | 2016-08-30T17:33:50Z | 39,251,390 | <p>The problem is that you are trying to pass an array to a single insert statement. You have two options here:</p>
<p>1) Use a loop to insert each row separately:</p>
<pre><code>for line in lines:
    cursor.execute("insert into ...", line)
</code></pre>
<p>2) Use cursor.executemany() instead to do an array insert</p>
<pre><code>cursor.executemany("insert into ...", lines)
</code></pre>
<p>The second option is more efficient but you have to make sure that the type of data is consistent for each line. If you have a number in one row and a string in the next row an error will be raised.</p>
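<p>For illustration, here is the <code>executemany()</code> pattern as a runnable sketch. It uses <code>sqlite3</code> purely as a stand-in so it runs anywhere; the DB-API call is the same, but cx_Oracle uses <code>:1, :2, ...</code> placeholders instead of <code>?</code>, and the table and rows here are made up:</p>

```python
import sqlite3

# In-memory database standing in for the Oracle connection (illustrative only).
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE test_csodupload (firstname TEXT, lastname TEXT, email TEXT)")
lines = [("Kristina", "Bohn", "abc@123.com"),
         ("Peggy", "Lutz", "gef@123.com")]   # every row has the same shape and types
cur.executemany(
    "INSERT INTO test_csodupload (firstname, lastname, email) VALUES (?, ?, ?)",
    lines)
con.commit()
print(cur.execute("SELECT COUNT(*) FROM test_csodupload").fetchone()[0])  # 2
```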
| 1 | 2016-08-31T13:44:23Z | [
"python",
"oracle",
"csv",
"insert-into",
"cx-oracle"
] |
Insert into oracle table from CSV using python | 39,233,742 | <p>I am reading a CSV file with 5 columns and pushing it to an Oracle table.</p>
<p><a href="http://i.stack.imgur.com/kkoNu.jpg" rel="nofollow">CSV file Structure</a></p>
<p>I know there are lots of resources on this, but even then I am unable to find a solution to my problem.</p>
<p>Code to read the CSV to python : </p>
<pre><code>import csv
reader = csv.reader(open("sample.csv","r"))
lines=[]
for line in reader:
    lines.append(line)
print lines
</code></pre>
<p>Output :</p>
<blockquote>
<p>[['Firstname', 'LastName', 'email', 'Course_name', 'status'],
['Kristina', 'Bohn', 'abc@123.com', 'Guide to
Capnography in the Management of the Critically Ill Patient (CE)',
'Registered'], ['Peggy', 'Lutz', 'gef@123.com',
'Guide to Monitoring EtCO2 During Opioid Delivery (CE)', 'In
Progress']]</p>
</blockquote>
<p>Code to push the list to Oracle table :</p>
<pre><code>import cx_Oracle
con = cx_Oracle.connect('username/password@tamans*****vd/Servicename')
ver=con.version.split(".")
print(ver)
cur=con.cursor()
cur.execute("INSERT INTO TEST_CSODUPLOAD ('FIRSTNAME','LASTNAME','EMAIL','COURSE_NAME','STATUS') VALUES(:1,:2,:3,:4,:5)",lines)
con.commit ()
cur.close()
</code></pre>
<p>I am getting the error :</p>
<p><strong>DatabaseError: ORA-01484: arrays can only be bound to PL/SQL statements</strong></p>
<p>Please help me solve the issue Thanks in advance </p>
| 1 | 2016-08-30T17:33:50Z | 39,254,043 | <p>Thanks for your answers :) The issue got resolved once I changed from</p>
<pre><code>cur.execute("INSERT INTO TEST_CSODUPLOAD ('FIRSTNAME','LASTNAME','EMAIL','COURSE_NAME','STATUS') VALUES(:1,:2,:3,:4,:5)",lines)
</code></pre>
<p>to</p>
<pre><code>cur.executemany("insert into TEST_CSODUPLOAD(Firstname,LastName,email,Course_name,status) values (:1, :2, :3, :4,:5)", lines)
</code></pre>
| 1 | 2016-08-31T15:52:19Z | [
"python",
"oracle",
"csv",
"insert-into",
"cx-oracle"
] |
Error with calculator type program in python | 39,233,756 | <pre><code>numbers=[]
while True:
    print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
    number=input()
    numbers=numbers+[number]
    if number=='':
        print('What do you want to do?')
        answer=input()
        break
    if answer==mean:
        mean
def mean():
    end_mean=reduce(lambda x, y: x + y, numbers) / len(numbers)
    print(end_mean)
</code></pre>
<p>I am trying to make a calculator-type program in Python which allows you to enter a list of numbers and then select what to do with them. The script above is only the beginning, but when I enter the numbers and type in 'mean' when it asks me what to do, it ends the script and shows nothing. I am new to Python, so please be forgiving in the answers.</p>
<p>Edit 3 - </p>
<p>After using answers below i have fixed the script to end up with this:</p>
<pre><code>import functools
numbers=[]
def means():
    end_mean = functools.reduce(lambda x, y: x + y, numbers) / len(numbers)
    print(end_mean)
def sum():
    end_sum = functools.reduce(lambda x, y: x + y, numbers)
    print(end_sum)
def whatDo():
    print('Input Extra Numbers '+str(len(numbers)+1)+' (or nothing to close):')
    try:
        number= int(input())
        numbers.append(number)
    except:
        print('What do you want to do?')
        answer = input()
        if answer == "mean":
            means()
while True:
    print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
    try:
        number= int(input())
        numbers.append(number)
    except:
        print('What do you want to do?')
        answer = input()
        if answer == "mean":
            means()
        print('Do you want anything else?')
        reply=input()
        if reply=='no':
            break
        elif reply--'yes':
            whatDo()
        else:
            break
</code></pre>
<p>However I get this:</p>
<pre><code>Traceback (most recent call last):
File "E:/Python/calculator.py", line 26, in <module>
number= int(input())
ValueError: invalid literal for int() with base 10: ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/Python/calculator.py", line 37, in <module>
elif reply--'yes':
TypeError: bad operand type for unary -: 'str'
</code></pre>
<p>after 'Do you want anything else?' when I enter 'yes'.</p>
| -1 | 2016-08-30T17:35:04Z | 39,233,802 | <p>First, you <strong>break</strong> before you get to the check. Your check itself will also fail:</p>
<pre><code>if answer==mean:
    mean
</code></pre>
<p>You've compared the answer (a string) to mean (a function object). Try:</p>
<pre><code>if answer == "mean":
    mean()
</code></pre>
<p>Also, I expect that you want to convert the input numbers from <strong>string</strong> to <strong>int</strong>:</p>
<pre><code>if number=='':
....
else:
numbers=numbers+[int(number)]
</code></pre>
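<p>Putting those fixes together, a minimal runnable sketch (the hard-coded list of entries stands in for the interactive <code>input()</code> calls):</p>

```python
from functools import reduce  # in Python 3, reduce lives in functools

def mean(numbers):
    # add up the values and divide by how many there are
    return reduce(lambda x, y: x + y, numbers) / len(numbers)

entries = []
for raw in ["4", "8", "15", ""]:   # canned stand-ins for input(); "" ends entry
    if raw == "":
        break
    entries.append(int(raw))       # convert the text input to int

answer = "mean"                    # canned stand-in for the follow-up input()
if answer == "mean":               # compare against the string, then call the function
    print(mean(entries))           # 9.0
```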
| 1 | 2016-08-30T17:38:08Z | [
"python"
] |
Error with calculator type program in python | 39,233,756 | <pre><code>numbers=[]
while True:
print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
number=input()
numbers=numbers+[number]
if number=='':
print('What do you want to do?')
answer=input()
break
if answer==mean:
mean
def mean():
end_mean=reduce(lambda x, y: x + y, numbers) / len(numbers)
print(end_mean)
</code></pre>
<p>I am trying to make a calculator-type program in Python which allows you to enter a list of numbers and then select what to do with them. The script above is only the beginning, but when I enter the numbers and type in 'mean' when it asks me what to do, it ends the script and shows nothing. I am new to Python, so please be forgiving in the answers.</p>
<p>Edit 3 - </p>
<p>After using the answers below I have fixed the script to end up with this:</p>
<pre><code>import functools
numbers=[]
def means():
end_mean = functools.reduce(lambda x, y: x + y, numbers) / len(numbers)
print(end_mean)
def sum():
end_sum = functools.reduce(lambda x, y: x + y, numbers)
print(end_sum)
def whatDo():
print('Input Extra Numbers '+str(len(numbers)+1)+' (or nothing to close):')
try:
number= int(input())
numbers.append(number)
except:
print('What do you want to do?')
answer = input()
if answer == "mean":
means()
while True:
print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
try:
number= int(input())
numbers.append(number)
except:
print('What do you want to do?')
answer = input()
if answer == "mean":
means()
print('Do you want anything else?')
reply=input()
if reply=='no':
break
elif reply--'yes':
whatDo()
else:
break
</code></pre>
<p>However I get this:</p>
<pre><code>Traceback (most recent call last):
File "E:/Python/calculator.py", line 26, in <module>
number= int(input())
ValueError: invalid literal for int() with base 10: ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/Python/calculator.py", line 37, in <module>
elif reply--'yes':
TypeError: bad operand type for unary -: 'str'
</code></pre>
<p>after the 'Do you want anything else?' prompt, when I enter 'yes'.</p>
| -1 | 2016-08-30T17:35:04Z | 39,234,209 | <p>Maybe this will help you.</p>
<pre><code>from functools import reduce  # reduce must be imported in Python 3
numbers=[]
def means():
end_mean = reduce(lambda x, y: x + y, numbers) / len(numbers)
print(end_mean)
while True:
print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
try:
number= int(input())
numbers.append(number)
except:
print('What do you want to do?')
answer = input()
if answer == "mean":
means()
else:
break
</code></pre>
<p>What have I changed from your code? </p>
<ul>
<li>Added a <code>try-except</code> block so that it handles the case when you pass <code>''</code> as input.</li>
<li>Cast the input to <code>int</code> before assigning it to <code>number</code>, as you need <code>int</code> values to calculate the mean, not <code>string</code> values.</li>
<li>You were making a list from the input and then adding it to the previous list, which is unnecessary and inefficient. I replaced it with <code>numbers.append(number)</code>.</li>
<li>Removed the unnecessary <code>break</code> statement. <code>break</code> is used to get out of loops; because of it, the later statements were never executed.</li>
</ul>
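<p>The try/except flow and <code>append</code> pattern from the list above can be exercised with canned inputs instead of <code>input()</code>; a small sketch (note that catching <code>ValueError</code> specifically is safer than a bare <code>except:</code>):</p>

```python
numbers = []
for raw in ["3", "4", "5", "mean"]:              # canned stand-ins for input()
    try:
        numbers.append(int(raw))                 # append mutates the list in place
    except ValueError:                           # raised for non-numeric input like "mean"
        if raw == "mean":
            print(sum(numbers) / len(numbers))   # 4.0
```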
| 1 | 2016-08-30T18:01:45Z | [
"python"
] |
Python: How to search tweets and store in database? | 39,233,769 | <p>I've got a nice Python script that currently prints out the past 200 tweets from a given username. </p>
<p>However, I'd like to modify it so that instead it will collect the past 200 tweets that include a certain hashtag (from any username) and then I'd like to store those results in a database.</p>
<p>Can anyone provide a suggestion on how to modify the code below?</p>
<pre><code>import sys
import operator
import requests
import json
import twitter
twitter_consumer_key = 'XXXX'
twitter_consumer_secret = 'XXXX'
twitter_access_token = 'XXXX'
twitter_access_secret = 'XXXX'
twitter_api = twitter.Api(consumer_key=twitter_consumer_key, consumer_secret=twitter_consumer_secret, access_token_key=twitter_access_token, access_token_secret=twitter_access_secret)
statuses = twitter_api.GetUserTimeline(screen_name=handle, count=200, include_rts=False)
for status in statuses:
if (status.lang == 'en'):
print status
</code></pre>
| 0 | 2016-08-30T17:35:40Z | 39,234,181 | <p>I am not familiar with the twitter package, but this is a suggestion you can work from. Depending on how you want to save the tweet, you can replace <code>print status</code> with whatever storage code you need. <strong>However, this only allows you to filter the 200 tweets rather than get the 200 tweets that contain a certain hashtag.</strong></p>
<pre><code>import sys
import operator
import requests
import json
import twitter
twitter_consumer_key = 'XXXX'
twitter_consumer_secret = 'XXXX'
twitter_access_token = 'XXXX'
twitter_access_secret = 'XXXX'
twitter_api = twitter.Api(consumer_key=twitter_consumer_key, consumer_secret=twitter_consumer_secret, access_token_key=twitter_access_token, access_token_secret=twitter_access_secret)
statuses = twitter_api.GetUserTimeline(screen_name=handle, count=200, include_rts=False)
tag_list = ["Xmas", "Summer"]
for status in statuses:
if (status.lang == 'en'):
#assume there exists a hashtag in the tweet
for hashtag in status.entities.hashtags:
if hashtag.text in tag_list:
print status
</code></pre>
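<p>The hashtag membership test itself can be exercised without the API, using plain strings; a minimal sketch (the regex-based <code>hashtags</code> helper is an assumption for illustration, not part of the twitter package):</p>

```python
import re

tag_list = ["Xmas", "Summer"]

def hashtags(text):
    # pull out '#word' tokens, minus the leading '#'
    return re.findall(r"#(\w+)", text)

tweets = [
    "Snow again #Xmas #cold",
    "Beach day #Summer",
    "No tags here",
]
matching = [t for t in tweets if any(h in tag_list for h in hashtags(t))]
print(matching)   # ['Snow again #Xmas #cold', 'Beach day #Summer']
```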
| 0 | 2016-08-30T18:00:26Z | [
"python",
"twitter"
] |
Python: How to search tweets and store in database? | 39,233,769 | <p>I've got a nice Python script that currently prints out the past 200 tweets from a given username. </p>
<p>However, I'd like to modify it so that instead it will collect the past 200 tweets that include a certain hashtag (from any username) and then I'd like to store those results in a database.</p>
<p>Can anyone provide a suggestion on how to modify the code below?</p>
<pre><code>import sys
import operator
import requests
import json
import twitter
twitter_consumer_key = 'XXXX'
twitter_consumer_secret = 'XXXX'
twitter_access_token = 'XXXX'
twitter_access_secret = 'XXXX'
twitter_api = twitter.Api(consumer_key=twitter_consumer_key, consumer_secret=twitter_consumer_secret, access_token_key=twitter_access_token, access_token_secret=twitter_access_secret)
statuses = twitter_api.GetUserTimeline(screen_name=handle, count=200, include_rts=False)
for status in statuses:
if (status.lang == 'en'):
print status
</code></pre>
| 0 | 2016-08-30T17:35:40Z | 39,248,197 | <p>I am attaching Java code that will print out the past 100 tweets containing the '#engineeringproblems' hashtag (from any user). You need to add the 'twitter4J' Twitter API library to your project.</p>
<p>API download link- <a href="http://twitter4j.org/en/index.html#download" rel="nofollow">http://twitter4j.org/en/index.html#download</a></p>
<p>Java source code:</p>
<pre><code>public static void main(String[] args) {
ConfigurationBuilder cb = new ConfigurationBuilder();
cb.setDebugEnabled(true)
.setOAuthConsumerKey("xxxx")
.setOAuthConsumerSecret("xxxx")
.setOAuthAccessToken("xxxx")
.setOAuthAccessTokenSecret("xxxx");
Twitter twitter = new TwitterFactory(cb.build()).getInstance();
Query query = new Query("#engineeringproblems ");
int numberOfTweets = 100;
long lastID = Long.MAX_VALUE;
ArrayList<Status> tweets = new ArrayList<Status>();
while (tweets.size() < numberOfTweets) {
if (numberOfTweets - tweets.size() > 100) {
query.setCount(100);
} else {
query.setCount(numberOfTweets - tweets.size());
}
try {
QueryResult result = twitter.search(query);
tweets.addAll(result.getTweets());
System.out.println("Gathered " + tweets.size() + " tweets" + "\n");
for (Status t : tweets) {
if (t.getId() < lastID) {
lastID = t.getId();
}
}
} catch (TwitterException te) {
System.out.println("Couldn't connect: " + te);
};
query.setMaxId(lastID - 1);
}
for (int i = 0; i < tweets.size(); i++) {
Status t = (Status) tweets.get(i);
String user = t.getUser().getScreenName();
String msg = t.getText();
System.out.println(i + " USER: " + user + " wrote: " + msg + "\n");
}
}
</code></pre>
| 0 | 2016-08-31T11:18:26Z | [
"python",
"twitter"
] |
Python: How to search tweets and store in database? | 39,233,769 | <p>I've got a nice Python script that currently prints out the past 200 tweets from a given username. </p>
<p>However, I'd like to modify it so that instead it will collect the past 200 tweets that include a certain hashtag (from any username) and then I'd like to store those results in a database.</p>
<p>Can anyone provide a suggestion on how to modify the code below?</p>
<pre><code>import sys
import operator
import requests
import json
import twitter
twitter_consumer_key = 'XXXX'
twitter_consumer_secret = 'XXXX'
twitter_access_token = 'XXXX'
twitter_access_secret = 'XXXX'
twitter_api = twitter.Api(consumer_key=twitter_consumer_key, consumer_secret=twitter_consumer_secret, access_token_key=twitter_access_token, access_token_secret=twitter_access_secret)
statuses = twitter_api.GetUserTimeline(screen_name=handle, count=200, include_rts=False)
for status in statuses:
if (status.lang == 'en'):
print status
</code></pre>
| 0 | 2016-08-30T17:35:40Z | 39,261,908 | <p>Sorry, but I've really been looking for a Python solution and I believe I've finally found it and tested it successfully. Code is below. Still looking for a way to modify the script to enter each line into a SQL database, but hopefully I can find that elsewhere.</p>
<p>pip install TwitterSearch</p>
<pre><code>from TwitterSearch import *
try:
tso = TwitterSearchOrder() # create a TwitterSearchOrder object
tso.set_keywords(['Guttenberg', 'Doktorarbeit']) # let's define all words we would like to have a look for
tso.set_language('de') # we want to see German tweets only
tso.set_include_entities(False) # and don't give us all those entity information
# it's about time to create a TwitterSearch object with our secret tokens
ts = TwitterSearch(
consumer_key = 'aaabbb',
consumer_secret = 'cccddd',
access_token = '111222',
access_token_secret = '333444'
)
# this is where the fun actually starts :)
for tweet in ts.search_tweets_iterable(tso):
print( '@%s tweeted: %s' % ( tweet['user']['screen_name'], tweet['text'] ) )
except TwitterSearchException as e: # take care of all those ugly errors if there are some
print(e)
</code></pre>
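<p>For the database half of the question, one option is the <code>sqlite3</code> module from the standard library; a minimal sketch (the table and column names are assumptions, not part of TwitterSearch):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")    # use a file path for a persistent database
conn.execute("CREATE TABLE tweets (screen_name TEXT, text TEXT)")

rows = [("alice", "first tweet"), ("bob", "second tweet")]  # stand-ins for API results
conn.executemany("INSERT INTO tweets VALUES (?, ?)", rows)  # ? placeholders escape the data
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM tweets").fetchone()[0]
print(count)   # 2
```

In the real loop, each <code>tweet['user']['screen_name']</code> / <code>tweet['text']</code> pair would be inserted the same way.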
| 0 | 2016-09-01T02:43:24Z | [
"python",
"twitter"
] |
Django 1.10: django-storages on S3 error: Not naive datetime (tzinfo is already set) | 39,233,853 | <p>I've started getting an error since upgrading to Django 1.10. I'm running Django 1.10 on Python 3.5 with <code>django-storages==1.5.0</code> and <code>boto3==1.4.0</code>.</p>
<pre><code>You have requested to collect static files at the destination
location as specified in your settings.
This will overwrite existing files!
Are you sure you want to do this?
Type 'yes' to continue, or 'no' to cancel: yes
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/core/management/__init__.py", line 367, in execute_from_command_line
utility.execute()
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/core/management/__init__.py", line 359, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/core/management/base.py", line 305, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/core/management/base.py", line 356, in execute
output = self.handle(*args, **options)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 193, in handle
collected = self.collect()
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 124, in collect
handler(path, prefixed_path, storage)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 337, in copy_file
if not self.delete_file(path, prefixed_path, source_storage):
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 258, in delete_file
target_last_modified = self.storage.get_modified_time(prefixed_path)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/core/files/storage.py", line 231, in get_modified_time
return _possibly_make_aware(dt)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/core/files/storage.py", line 243, in _possibly_make_aware
return timezone.make_aware(dt, tz).astimezone(timezone.utc)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/django/utils/timezone.py", line 368, in make_aware
return timezone.localize(value, is_dst=is_dst)
File "/home/vagrant/.virtualenvs/wrds-classroom/lib/python3.5/site-packages/pytz/tzinfo.py", line 304, in localize
raise ValueError('Not naive datetime (tzinfo is already set)')
ValueError: Not naive datetime (tzinfo is already set)
</code></pre>
<p>Changing <code>USE_TZ</code> to <code>False</code> changes the error, but I'm still not sure what is causing this (this is new territory in the Django codebase for me):</p>
<pre><code>TypeError: can't compare offset-naive and offset-aware datetimes
</code></pre>
<p>Any idea what the cause here is?</p>
| 0 | 2016-08-30T17:40:44Z | 39,234,032 | <p>I've figured out a temporary fix which should lead to a permanent fix.</p>
<p>After some more sleuthing, a friend found this PR they thought might be related: <a href="https://github.com/jschneier/django-storages/pull/181" rel="nofollow">https://github.com/jschneier/django-storages/pull/181</a></p>
<p>I noticed the date of the pull request was two days after the latest django-storages release (1.5.0) which I was running. In my <code>requirements.txt</code> I simply did this, pointing at the hash of the commit:</p>
<pre><code>#django-storages==1.5.0
git+https://github.com/jschneier/django-storages.git#5f280571ee1ae93ee66ed805b53b08bfe5ab9f0c
boto3==1.4.0
</code></pre>
<p>Then, a <code>pip install --upgrade -r requirements.txt</code> and ran <code>collectstatic</code> again, and no more error! I'm guessing this fix will be included in a 1.5.1 release, at which point I'll simply change my <code>requirements.txt</code> again.</p>
| 0 | 2016-08-30T17:50:49Z | [
"python",
"django",
"boto3",
"django-storage"
] |
How to split a string to get the last item in braces? | 39,233,908 | <p>I'm trying to split a string to extract the last item in braces.</p>
<p>For example if I had the string</p>
<pre><code>'Stud Rd/(after) Ferntree Gully Rd (Scoresby)'
</code></pre>
<p>I would like to split it into the components</p>
<pre><code>('Stud Rd/(after) Ferntree Gully Rd', 'Scoresby')
</code></pre>
<p>So far I've written the following regex to do this</p>
<pre><code>re.search(r'^(.*) \((.*)\)$', string)
</code></pre>
<p>However this breaks in the case of an input like</p>
<pre><code>'Bell St/Oriel Rd (Bellfield (3081))'
</code></pre>
<p>Which I would like to be split into</p>
<pre><code>('Bell St/Oriel Rd', 'Bellfield (3081)')
</code></pre>
<p>Is there a better way than this to approach this problem?</p>
| 0 | 2016-08-30T17:43:46Z | 39,234,003 | <p>I would use this scheme, given <code>t</code> being your text:</p>
<pre><code>last = re.findall('\([^())]+\)', t)[-1]
</code></pre>
<p>The regex searches for an opening parenthesis, then takes everything that is neither an opening nor a closing parenthesis, and then matches the closing parenthesis. Since there could be more than one such group, I use <code>findall</code> and take the last one.</p>
| 0 | 2016-08-30T17:49:24Z | [
"python",
"regex"
] |
How to split a string to get the last item in braces? | 39,233,908 | <p>I'm trying to split a string to extract the last item in braces.</p>
<p>For example if I had the string</p>
<pre><code>'Stud Rd/(after) Ferntree Gully Rd (Scoresby)'
</code></pre>
<p>I would like to split it into the components</p>
<pre><code>('Stud Rd/(after) Ferntree Gully Rd', 'Scoresby')
</code></pre>
<p>So far I've written the following regex to do this</p>
<pre><code>re.search(r'^(.*) \((.*)\)$', string)
</code></pre>
<p>However this breaks in the case of an input like</p>
<pre><code>'Bell St/Oriel Rd (Bellfield (3081))'
</code></pre>
<p>Which I would like to be split into</p>
<pre><code>('Bell St/Oriel Rd', 'Bellfield (3081)')
</code></pre>
<p>Is there a better way than this to approach this problem?</p>
| 0 | 2016-08-30T17:43:46Z | 39,234,049 | <p>This works assuming you don't have any parentheses prior to the last chunk.</p>
<pre><code>var = 'Bell St/Oriel Rd (Bellfield)'.split('(')
var[-1] = var[-1][:-1]
</code></pre>
| 0 | 2016-08-30T17:51:58Z | [
"python",
"regex"
] |
How to split a string to get the last item in braces? | 39,233,908 | <p>I'm trying to split a string to extract the last item in braces.</p>
<p>For example if I had the string</p>
<pre><code>'Stud Rd/(after) Ferntree Gully Rd (Scoresby)'
</code></pre>
<p>I would like to split it into the components</p>
<pre><code>('Stud Rd/(after) Ferntree Gully Rd', 'Scoresby')
</code></pre>
<p>So far I've written the following regex to do this</p>
<pre><code>re.search(r'^(.*) \((.*)\)$', string)
</code></pre>
<p>However this breaks in the case of an input like</p>
<pre><code>'Bell St/Oriel Rd (Bellfield (3081))'
</code></pre>
<p>Which I would like to be split into</p>
<pre><code>('Bell St/Oriel Rd', 'Bellfield (3081)')
</code></pre>
<p>Is there a better way than this to approach this problem?</p>
| 0 | 2016-08-30T17:43:46Z | 39,234,078 | <p>You can use this lazy regex:</p>
<pre><code>(.*?) \((.*)\)[^()]*$
</code></pre>
<p><a href="https://regex101.com/r/yQ8xI8/1" rel="nofollow">RegEx Demo</a></p>
<p>Examples:</p>
<pre><code>>>> reg = r'(.*?) \((.*)\)[^()]*$'
>>> s = 'Bell St/Oriel Rd (Bellfield (3081))'
>>> re.findall(reg, s)
[('Bell St/Oriel Rd', 'Bellfield (3081)')]
>>> s = 'Stud Rd/(after) Ferntree Gully Rd (Scoresby)'
>>> re.findall(reg, s)
[('Stud Rd/(after) Ferntree Gully Rd', 'Scoresby')]
</code></pre>
| 4 | 2016-08-30T17:53:36Z | [
"python",
"regex"
] |
How to split a string to get the last item in braces? | 39,233,908 | <p>I'm trying to split a string to extract the last item in braces.</p>
<p>For example if I had the string</p>
<pre><code>'Stud Rd/(after) Ferntree Gully Rd (Scoresby)'
</code></pre>
<p>I would like to split it into the components</p>
<pre><code>('Stud Rd/(after) Ferntree Gully Rd', 'Scoresby')
</code></pre>
<p>So far I've written the following regex to do this</p>
<pre><code>re.search(r'^(.*) \((.*)\)$', string)
</code></pre>
<p>However this breaks in the case of an input like</p>
<pre><code>'Bell St/Oriel Rd (Bellfield (3081))'
</code></pre>
<p>Which I would like to be split into</p>
<pre><code>('Bell St/Oriel Rd', 'Bellfield (3081)')
</code></pre>
<p>Is there a better way than this to approach this problem?</p>
| 0 | 2016-08-30T17:43:46Z | 39,234,200 | <p>Change your regex pattern and work with the <strong>match</strong> object (returned by the <code>search</code> function) properly:</p>
<pre><code>import re
str = 'Bell St/Oriel Rd (Bellfield (3081))'
result = re.search(r'^(.*?) \((.*?)\)$', str)
print(result.group(1,2)) # ('Bell St/Oriel Rd', 'Bellfield (3081)')
</code></pre>
| 1 | 2016-08-30T18:01:14Z | [
"python",
"regex"
] |
Using a variable as field name in MySQLdb, Python 2.7 | 39,233,935 | <p>This works when I replace the column variable with an actual column name. I do however need a variable. When I use a variable I get a MySQL syntax error. Can a field be a variable? If so, where is the error?</p>
<pre><code>conn = self.create_connection()
cur = conn[0]
db = conn[1]
cur.execute('''
UPDATE coefficients
SET %s = %s
WHERE coef_id = %s
''' , (sql_col_name, fgi, ici))
db.commit()
</code></pre>
<p>Ok here's the traceback:</p>
<pre><code> raise errorclass, errorvalue
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''base_rpm' = 3500 WHERE coef_id = 460' at line 1")
</code></pre>
| 0 | 2016-08-30T17:45:30Z | 39,236,320 | <p>The issue there is that parameter substitution in the <code>execute</code> method is intended to be used for data only - as you found out.
That is not quite explicit in the <a href="https://www.python.org/dev/peps/pep-0249/" rel="nofollow">documentation</a>, but it is how most database drivers implement it. </p>
<p>It is important to note the intent of the parameter substitution in the <code>execute</code> and <code>executemany</code> methods is to format Python objects as strings to the database, and apply escaping and quotes, so that SQL injection becomes difficult (if not impossible) without one having to worry about several places where to put the escaping.</p>
<p>What is needed when you need to use variable column names (or vary other parts of the SQL statement) is to format the string using Python's string formatting methods - and leave the resulting string in a suitable way to be used as a suitable parameter for the data substitution, in a second substitution (so that type casting and quoting still is performed by the driver).</p>
<p>To avoid having to escape the <code>%s</code> parameter placeholders themselves, you could use a different string-interpolation method than the <code>%</code> operator - for example, the newer <code>format</code> string method:</p>
<pre><code> cur.execute('''
UPDATE coefficients
SET {} = %s
WHERE coef_id = %s
'''.format(sql_col_name) ,
(fgi, ici))
</code></pre>
<p>This example shows the simplest way to make your example work - just write the code so that it is easily readable and maintainable - for example, using an extra variable for the statement and calling <code>format</code> prior to the line calling <code>execute</code>. But that is just style. </p>
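<p>Since identifiers cannot go through the driver's parameter substitution, a common extra guard is to validate the column name against a whitelist before formatting it into the statement; a hypothetical sketch (the set of legal column names is an assumption):</p>

```python
# Hypothetical whitelist guard: only known column names are formatted in,
# and the %s placeholders are left for the driver's data substitution.
ALLOWED_COLUMNS = {"base_rpm", "max_rpm"}   # assumed legal column names

def build_update(column):
    if column not in ALLOWED_COLUMNS:
        raise ValueError("unexpected column name: %r" % column)
    return "UPDATE coefficients SET {} = %s WHERE coef_id = %s".format(column)

print(build_update("base_rpm"))
# UPDATE coefficients SET base_rpm = %s WHERE coef_id = %s
```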
| 2 | 2016-08-30T20:13:11Z | [
"python",
"python-2.7",
"mysql-python"
] |
Trying to use a string variable in serial .write in Python | 39,233,956 | <p>I am communicating with a power supply through rs232. I can communicate no problem when I send for example:</p>
<pre><code>port.write("\x31")
</code></pre>
<p>but if instead I have a string as a variable</p>
<pre><code>teststring='"\\x31"'
</code></pre>
<p>(which prints out as "\x31")</p>
<p>and I try:</p>
<pre><code>port.write(teststring)
</code></pre>
<p>it does not send the command to the supply. I have tried:</p>
<pre><code>port.write(bytes(teststring,'utf-8'))
</code></pre>
<p>and </p>
<pre><code>port.write(teststring.encode('utf-8'))
</code></pre>
<p>But it still is somehow not sending the same as just entering the text. I need to be able to change this variable, so I cannot just code the text in.
Any help is appreciated!</p>
<p>Using comments below, I am now using an integer</p>
<p>testint=31
and if I print </p>
<p>chr(testint) I get an odd box with 00 in the top row and 1F in the bottom. What I now need to be able to do is convert the 31 to 0x31, so I can use chr(0x31), which when printed produces 1. Hopefully the .write command will treat chr(0x31) the same as "\x31" ?</p>
| 0 | 2016-08-30T17:46:54Z | 39,234,044 | <p><code>teststring</code> in your example is escaping the backslash; you have <code>"\\x31"</code> instead of <code>"\x31"</code>. <code>"\x31"</code> is a length-1 string containing the byte 0x31; <code>"\\x31"</code> is a length-4 string containing the characters <code>\</code>, <code>x</code>, <code>3</code> and <code>1</code> (and your literal also wraps them in extra double-quote characters). Dropping the escaping backslash and the embedded quotes from <code>teststring</code> should behave exactly like using <code>port.write("\x31")</code>.</p>
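<p>To see the difference concretely, a quick sketch you can run (it also covers the question's edit - turning the plain integer 31 into the value 0x31 with <code>int(..., 16)</code>):</p>

```python
plain = "\x31"      # one character, byte 0x31, i.e. '1'
escaped = "\\x31"   # four characters: backslash, x, 3, 1

print(len(plain), len(escaped))   # 1 4
print(plain == chr(0x31))         # True
print(int("31", 16) == 0x31)      # True: parses the text "31" as hexadecimal
```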
| 0 | 2016-08-30T17:51:32Z | [
"python"
] |
how can i put a input to take the (( url )) for this function | 39,234,059 | <pre><code>import random
import urllib.request
def down_load_imag(url):
name = random.randrange(1, 1000)
full_name = str(name) + ".jpg"
urllib.request.urlretrieve(url, full_name)
down_load_imag ("http://3.bp.blogspot.com/-WjQkpjkw9uQ/Vij8lG0pCdI/AAAAAAAAAJ4/-CifLZ5KG-Y/s1600/fedora_infinity_140x140.png")
</code></pre>
| -4 | 2016-08-30T17:52:39Z | 39,235,020 | <p>In Python 3 you must use the <code>input()</code> function:</p>
<pre><code>import random
import urllib.request
def down_load_imag(url):
name = random.randrange(1, 1000)
full_name = str(name) + ".jpg"
urllib.request.urlretrieve(url, full_name)
url = input('Enter url: ')
# here you can evaluate url with regex and if passed:
down_load_imag (url)
</code></pre>
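<p>The "evaluate url with regex" step mentioned in the comment could look like this; a deliberately loose sketch (the pattern is an assumption - stricter validation could use <code>urllib.parse</code> instead):</p>

```python
import re

# just enough to reject obvious non-URLs before attempting a download
URL_RE = re.compile(r"^https?://\S+$")

def looks_like_url(text):
    return bool(URL_RE.match(text))

print(looks_like_url("http://example.com/image.png"))  # True
print(looks_like_url("not a url"))                     # False
```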
| 0 | 2016-08-30T18:50:26Z | [
"python",
"python-3.x"
] |
how can i put a input to take the (( url )) for this function | 39,234,059 | <pre><code>import random
import urllib.request
def down_load_imag(url):
name = random.randrange(1, 1000)
full_name = str(name) + ".jpg"
urllib.request.urlretrieve(url, full_name)
down_load_imag ("http://3.bp.blogspot.com/-WjQkpjkw9uQ/Vij8lG0pCdI/AAAAAAAAAJ4/-CifLZ5KG-Y/s1600/fedora_infinity_140x140.png")
</code></pre>
| -4 | 2016-08-30T17:52:39Z | 39,235,469 | <p>Python comes with the <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow" title="argparse">argparse</a> module. Suppose your code lives in <code>downloader.py</code>, then add this:</p>
<pre><code>import argparse
import sys
parser = argparse.ArgumentParser(description="Download image to random filename")
parser.add_argument("--url", help="image url to download")
args = parser.parse_args() # parses sys.argv
url = args.url
if url is None:
print("Give me an URL please!", file=sys.stderr)
sys.exit(-1)
down_load_imag(url)
</code></pre>
<p>Then use like this:</p>
<pre><code>python3 downloader.py --url http://what.ever.com/image.jpg
</code></pre>
<p>You can refine this to take the url using <code>input()</code> if <code>url</code> is None.</p>
| 0 | 2016-08-30T19:18:05Z | [
"python",
"python-3.x"
] |
list index out of range in heroku/phython - trying to follow guide to code | 39,234,065 | <p>I'm trying to follow this guide: <a href="https://medium.com/@sarahnadia/how-to-code-a-simple-twitter-bot-for-complete-beginners-36e37231e67d#.k2cljjf0m" rel="nofollow">https://medium.com/@sarahnadia/how-to-code-a-simple-twitter-bot-for-complete-beginners-36e37231e67d#.k2cljjf0m</a></p>
<p>This is the error message:</p>
<pre><code>MacBook-Pro-2:heroku_ebooks-master Rupert$ heroku run worker
Running worker on ⬢ desolate-brushlands-56729... up, run.6788
Traceback (most recent call last):
File "ebooks.py", line 79, in <module>
source_tweets_iter, max_id = grab_tweets(api,max_id)
File "ebooks.py", line 51, in grab_tweets
max_id = user_tweets[len(user_tweets)-1].id-1
IndexError: list index out of range
</code></pre>
<p>This is the code:</p>
<pre><code>def grab_tweets(api, max_id=None):
source_tweets=[]
user_tweets = api.GetUserTimeline(screen_name=user, count=200, max_id=max_id, include_rts=True, trim_user=True, exclude_replies=True)
max_id = user_tweets[len(user_tweets)-1].id-1
for tweet in user_tweets:
tweet.text = filter_tweet(tweet)
if len(tweet.text) != 0:
source_tweets.append(tweet.text)
return source_tweets, max_id
if __name__=="__main__":
order = ORDER
if DEBUG==False:
guess = random.choice(range(ODDS))
else:
guess = 0
if guess == 0:
if STATIC_TEST==True:
file = TEST_SOURCE
print ">>> Generating from {0}".format(file)
string_list = open(file).readlines()
for item in string_list:
source_tweets = item.split(",")
else:
source_tweets = []
for handle in SOURCE_ACCOUNTS:
user=handle
api=connect()
max_id=None
for x in range(17)[1:]:
source_tweets_iter, max_id = grab_tweets(api,max_id)
source_tweets += source_tweets_iter
</code></pre>
| 0 | 2016-08-30T17:52:57Z | 39,235,057 | <p>The traceback tells us the error is here:</p>
<pre><code>max_id = user_tweets[len(user_tweets)-1].id-1
</code></pre>
<p>which will be out of range if user_tweets is an empty list. You can check to make sure it isn't before trying to access its value.</p>
<pre><code>if len(user_tweets) > 0:
max_id = user_tweets[-1].id-1
</code></pre>
<p>Note the use of <code>user_tweets[-1]</code>. Using a negative index counts backward from the end, so the last element of a list is always at <code>[-1]</code>.</p>
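<p>A quick illustration of both the empty-list guard and negative indexing:</p>

```python
user_tweets = []              # what the API returned on the failing iteration

# guard before indexing: an empty list has no last element
if len(user_tweets) > 0:
    max_id = user_tweets[-1]
else:
    max_id = None
print(max_id)                 # None

print([10, 20, 30][-1])       # 30: negative indices count from the end
```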
| 0 | 2016-08-30T18:52:33Z | [
"python",
"heroku"
] |
Could not broadcast input array from shape (1285) into shape (1285, 5334) | 39,234,069 | <p>I'm trying to follow some example code provided in the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html" rel="nofollow">documentation</a> for <code>np.linalg.svd</code> in order to compare term and document similarities following an SVD on a TDM matrix. Here's what I've got:</p>
<pre><code>results_t = np.linalg.svd(tdm_t)
results_t[1].shape
</code></pre>
<p>yields</p>
<pre><code>(1285,)
</code></pre>
<p>Also </p>
<pre><code>results_t[2].shape
(5334, 5334)
</code></pre>
<p>So then trying to broadcast these results to create a real <code>S</code> matrix per the classic SVD projection approach, I've got:</p>
<pre><code>S = np.zeros((results_t[0].shape[0], results_t[2].shape[0]), dtype = float)
S[:results_t[2].shape[0], :results_t[2].shape[0]] = results_t[1]
</code></pre>
<p>This last line produces the error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-329-16e79bc97c4b> in <module>()
----> 1 S[:results_t[2].shape[0], :results_t[2].shape[0]] = results_t[1]
ValueError: could not broadcast input array from shape (1285) into shape (1285,5334)
</code></pre>
<p>What am I doing wrong here?</p>
| -2 | 2016-08-30T17:53:04Z | 39,234,275 | <p>(note, looking at Bi Rico's answer, it looks like perhaps the right interpretation was that you aren't actually trying to broadcast with that command? This answer shows how to actually do the broadcasting.)</p>
<pre><code>X = scipy.zeros((5,4))
X
> array([[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]])
Y = scipy.arange(5)
Y
> array([0, 1, 2, 3, 4])
X[:,:]=Y
> could not broadcast input array from shape (5) into shape (5,4)
</code></pre>
<p>So instead try</p>
<pre><code>X[:,:]=Y[:,None]
X
> array([[ 0., 0., 0., 0.],
[ 1., 1., 1., 1.],
[ 2., 2., 2., 2.],
[ 3., 3., 3., 3.],
[ 4., 4., 4., 4.]])
</code></pre>
<p>You can get a bit more understanding of the issue from the following</p>
<pre><code>Z = scipy.arange(4)
Z
> array([0, 1, 2, 3])
X[:,:]=Z
X
>array([[ 0., 1., 2., 3.],
[ 0., 1., 2., 3.],
[ 0., 1., 2., 3.],
[ 0., 1., 2., 3.],
[ 0., 1., 2., 3.]])
</code></pre>
<p>The issue here is that it's convinced you're treating <code>Y</code> (or <code>Z</code>) as a row of the array, while you're trying to fit it into a column. Doing <code>Y[:,None]</code> basically forces it to interpret Y as a column.</p>
| 0 | 2016-08-30T18:05:20Z | [
"python",
"numpy",
"svd"
] |
Could not broadcast input array from shape (1285) into shape (1285, 5334) | 39,234,069 | <p>I'm trying to follow some example code provided in the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html" rel="nofollow">documentation</a> for <code>np.linalg.svd</code> in order to compare term and document similarities following an SVD on a TDM matrix. Here's what I've got:</p>
<pre><code>results_t = np.linalg.svd(tdm_t)
results_t[1].shape
</code></pre>
<p>yields</p>
<pre><code>(1285,)
</code></pre>
<p>Also </p>
<pre><code>results_t[2].shape
(5334, 5334)
</code></pre>
<p>So then trying to broadcast these results to create a real <code>S</code> matrix per the classic SVD projection approach, I've got:</p>
<pre><code>S = np.zeros((results_t[0].shape[0], results_t[2].shape[0]), dtype = float)
S[:results_t[2].shape[0], :results_t[2].shape[0]] = results_t[1]
</code></pre>
<p>This last line produces the error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-329-16e79bc97c4b> in <module>()
----> 1 S[:results_t[2].shape[0], :results_t[2].shape[0]] = results_t[1]
ValueError: could not broadcast input array from shape (1285) into shape (1285,5334)
</code></pre>
<p>What am I doing wrong here?</p>
| -2 | 2016-08-30T17:53:04Z | 39,234,309 | <p>You're using slicing when you should be using indexing. Take for example:</p>
<pre><code>A = np.zeros((4, 5))
end0, end1 = A.shape
# These are both true
A == A[:, :]
A[:, :] == A[:end0, :end1]
# To get the diagonal of an array you want to use range or np.arange
A[range(end0), range(end0)] == A.diagonal()
</code></pre>
<p>Some other things you might find useful:</p>
<pre><code># If U.shape is (a, b) and S.shape is (b,)
U.dot(np.diag(S)) == (U * S)
# IF V.shape (b, a) and S.shape is (b,)
np.diag(S).dot(V) == (S.reshape(b, 1) * V)
</code></pre>
| 0 | 2016-08-30T18:06:59Z | [
"python",
"numpy",
"svd"
] |
Could not broadcast input array from shape (1285) into shape (1285, 5334) | 39,234,069 | <p>I'm trying to follow some example code provided in the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html" rel="nofollow">documentation</a> for <code>np.linalg.svd</code> in order to compare term and document similarities following an SVD on a TDM matrix. Here's what I've got:</p>
<pre><code>results_t = np.linalg.svd(tdm_t)
results_t[1].shape
</code></pre>
<p>yields</p>
<pre><code>(1285,)
</code></pre>
<p>Also </p>
<pre><code>results_t[2].shape
(5334, 5334)
</code></pre>
<p>So then trying to broadcast these results to create a real <code>S</code> matrix per the classic SVD projection approach, I've got:</p>
<pre><code>S = np.zeros((results_t[0].shape[0], results_t[2].shape[0]), dtype = float)
S[:results_t[2].shape[0], :results_t[2].shape[0]] = results_t[1]
</code></pre>
<p>This last line produces the error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-329-16e79bc97c4b> in <module>()
----> 1 S[:results_t[2].shape[0], :results_t[2].shape[0]] = results_t[1]
ValueError: could not broadcast input array from shape (1285) into shape (1285,5334)
</code></pre>
<p>What am I doing wrong here?</p>
| -2 | 2016-08-30T17:53:04Z | 39,238,203 | <p>So according to the error message, the target </p>
<pre><code>S[:results_t[2].shape[0], :results_t[2].shape[0]]
</code></pre>
<p>is <code>(1285,5334)</code>, while the source</p>
<pre><code>results_t[1]
</code></pre>
<p>is <code>(1285,)</code>.</p>
<p>So it has to broadcast the source to a shape that matches the target before it can do the assignment. The same would apply when trying to sum two arrays with these shapes, or multiply them, etc.</p>
<ul>
<li><p>The first broadcasting step is to make the number of dimensions match. The source is 1d, so it needs to become 2d. <code>numpy</code> will try <code>results_t[1][np.newaxis,:]</code>, producing <code>(1, 1285)</code>.</p></li>
<li><p>The second step is to expand all size-1 dimensions to match the other shape. But that can't happen here, hence the error: <code>(1285, 5334)</code> cannot be matched with <code>(1, 1285)</code>.</p></li>
</ul>
<p>======</p>
<p>If you want to assign to a block (or all of <code>S</code>) then use:</p>
<pre><code> S[:results_t[2].shape[0], :results_t[2].shape[0]] = results_t[1][:,np.newaxis]
</code></pre>
<p>To assign <code>r1</code> to a diagonal of <code>S</code>, use list indexing (instead of slices):</p>
<pre><code>S[range(results_t[1].shape[0]), range(results_t[1].shape[0])] = results_t[1]
</code></pre>
<p>or</p>
<pre><code>S[np.diag_indices(results_t[1].shape[0])] = results_t[1]
</code></pre>
<p>In this case those <code>ranges</code> have to match <code>results_t[1]</code> in length.</p>
| 1 | 2016-08-30T22:44:26Z | [
"python",
"numpy",
"svd"
] |
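The two-step rule described in the answer above can be sketched as a small shape-only function. This is an illustration of the rule, not numpy's actual implementation (numpy ≥ 1.20 exposes the real thing as `np.broadcast_shapes`):

```python
def broadcast_shapes(a, b):
    """Illustrate numpy's two broadcasting steps on a pair of shapes."""
    # Step 1: prepend size-1 dimensions so both shapes have the same rank.
    ndim = max(len(a), len(b))
    a = (1,) * (ndim - len(a)) + tuple(a)
    b = (1,) * (ndim - len(b)) + tuple(b)
    # Step 2: size-1 dimensions stretch to match; any other mismatch is an error.
    out = []
    for x, y in zip(a, b):
        if x == y or x == 1 or y == 1:
            out.append(max(x, y))
        else:
            raise ValueError("could not broadcast %s with %s" % (a, b))
    return tuple(out)

print(broadcast_shapes((1285, 5334), (1285, 1)))  # (1285, 5334) -- the [:,np.newaxis] fix
try:
    broadcast_shapes((1285, 5334), (1285,))       # (1285,) becomes (1, 1285) -> clash
except ValueError as e:
    print(e)
```

Running the failing case reproduces the question's situation: the 1d source is padded on the left, so its 1285 lines up against 5334 and the assignment fails.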
'NotImplementedType' object is not iterable django-easy-pdf | 39,234,070 | <p>I use the django-easy-pdf library for my project. When I try to set a custom Cyrillic font I get </p>
<blockquote>
<p>TypeError: 'NotImplementedType' object is not iterable</p>
</blockquote>
<p>I've seen where the error appears, and here is what I got:</p>
<p><a href="http://i.stack.imgur.com/04jnk.png" rel="nofollow"><img src="http://i.stack.imgur.com/04jnk.png" alt="Error debug"></a></p>
<p>It looks like I have a problem with the <strong>src url</strong>, but I have no idea how to solve this issue. I've tried changing <em>src: url()</em> in different ways (manual input, {% static 'path' %}, etc.), but it is not working. I'm really stuck.</p>
<p>Django = 1.10, Python = 3.4.3</p>
<p>Here's my template.</p>
<pre><code>{% extends "easy_pdf/base.html" %}
{% load staticfiles %}
{% block extra_style %}
<style type="text/css">
@font-face {
font-family: Palatino Linotype; src: url({% static 'automobiles/fonts/bold.ttf' %});
}
body{
font-family: "Palatino Linotype", Arial, sans-serif;
color: #333333;
}
</style>
{% endblock extra_style %}
{% block content %}
<div class="container">
<h1>PYTHON3 RULES</h1>
<p>Это должно работать</p>
<h3>Но библиотеке что-то не нравится</h3>
</div>
{% endblock content %}
</code></pre>
<p><em>Project structure:</em></p>
<p><a href="http://i.stack.imgur.com/tQ5zY.png" rel="nofollow"><img src="http://i.stack.imgur.com/tQ5zY.png" alt="Project structure"></a></p>
<p><em>settings.py</em></p>
<p><a href="http://i.stack.imgur.com/XmLGW.png" rel="nofollow"><img src="http://i.stack.imgur.com/XmLGW.png" alt="setting.py"></a></p>
| 0 | 2016-08-30T17:53:08Z | 39,235,314 | <p><code>STATIC_ROOT</code> should be <code>os.path.join(PROJECT_ROOT, 'static')</code>. I am assuming <code>automobiles</code> is the main project. </p>
| 0 | 2016-08-30T19:08:08Z | [
"python",
"django",
"python-3.x",
"pdf",
"django-staticfiles"
] |
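As a concrete reference for the `STATIC_ROOT` suggestion in the answer above, a minimal `settings.py` fragment might look like the following. This is a sketch only: the `PROJECT_ROOT` computation is an assumption, and the paths must be adjusted to the real project layout shown in the question's screenshots.

```python
# settings.py -- sketch of the static-files settings this answer suggests.
# PROJECT_ROOT here is an assumption; adjust it to the real project layout.
import os

PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static')  # where collectstatic gathers files
```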
Python class shared variables gives unexpected output | 39,234,297 | <p>Based on the explanation on <a href="http://stackoverflow.com/questions/1537202/variables-inside-and-outside-of-a-class-init-function">Shared Variables in Python Class</a> here , I expected the following code to give output as :</p>
<pre><code>123 123
200 200
300 300
</code></pre>
<p>But it is </p>
<pre><code>123 123
200 123
200 300
</code></pre>
<p>Code:</p>
<pre><code>class A:
abc = 123
def __init__(self, a,b,c):
self._a = a
self._b = b
self._c = c
if __name__ == '__main__':
a = A(2, 4, 6)
b = A(3, 9, 27)
print a.abc , b.abc
a.abc = 200
print a.abc , b.abc
A.abc = 300
print a.abc , b.abc
</code></pre>
<p>Can somebody please help me understand this? My impression is that shared variables are the same as static variables in C++ classes. Any insights to bust that myth, if it is so, would be helpful too. </p>
| 0 | 2016-08-30T18:06:25Z | 39,234,403 | <p>You are creating a new instance variable -> <code>a.abc</code> and setting it to <code>200</code>. Access the shared static variable instead.</p>
<pre><code>a = A(2, 4, 6)
b = A(3, 9, 27)
print a.abc , b.abc
A.abc = 200 # Set static class variable, instead of creating new instance member
print a.abc , b.abc
A.abc = 300
print a.abc , b.abc
</code></pre>
<p>I recommend reading the very informative Python official docs on <a href="https://docs.python.org/3/tutorial/classes.html#classes" rel="nofollow">[9] Classes</a>.</p>
| 5 | 2016-08-30T18:13:00Z | [
"python",
"class",
"oop"
] |
Python class shared variables gives unexpected output | 39,234,297 | <p>Based on the explanation on <a href="http://stackoverflow.com/questions/1537202/variables-inside-and-outside-of-a-class-init-function">Shared Variables in Python Class</a> here , I expected the following code to give output as :</p>
<pre><code>123 123
200 200
300 300
</code></pre>
<p>But it is </p>
<pre><code>123 123
200 123
200 300
</code></pre>
<p>Code:</p>
<pre><code>class A:
abc = 123
def __init__(self, a,b,c):
self._a = a
self._b = b
self._c = c
if __name__ == '__main__':
a = A(2, 4, 6)
b = A(3, 9, 27)
print a.abc , b.abc
a.abc = 200
print a.abc , b.abc
A.abc = 300
print a.abc , b.abc
</code></pre>
<p>Can somebody please help me understand this? My impression is that shared variables are the same as static variables in C++ classes. Any insights to bust that myth, if it is so, would be helpful too. </p>
| 0 | 2016-08-30T18:06:25Z | 39,234,412 | <p>Initially, the class A has an <code>abc</code> defined to be 123, which each of <code>a</code> and <code>b</code> use since neither has an <code>abc</code> of their own.</p>
<p>Then you execute <code>a.abc = 200</code>, which creates an <code>abc</code> for <code>a</code>; <code>b</code> still uses the one from <code>A</code>.</p>
<p>Then you execute <code>A.abc = 300</code>, which updates the <code>abc</code> for <code>A</code>, which <code>b</code> still looks to, so it sees the new value. But <code>a</code> has its own <code>abc</code>, and so doesn't care.</p>
| 3 | 2016-08-30T18:13:24Z | [
"python",
"class",
"oop"
] |
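The lookup order described in the answer above can be observed directly by inspecting the instance and class dictionaries (a minimal sketch, written for Python 3):

```python
class A(object):
    abc = 123

a, b = A(), A()
print(a.__dict__)    # {} -- neither instance owns an 'abc' yet
a.abc = 200          # creates a's own 'abc', shadowing the class attribute
print(a.__dict__)    # {'abc': 200}
print(b.__dict__)    # {} -- b still falls through to A.abc
A.abc = 300          # updates the class attribute that b sees
print(a.abc, b.abc)  # 200 300
```

Attribute reads consult the instance `__dict__` first and fall back to the class, which is exactly why the third line of the question's output is `200 300`.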
Python class shared variables gives unexpected output | 39,234,297 | <p>Based on the explanation on <a href="http://stackoverflow.com/questions/1537202/variables-inside-and-outside-of-a-class-init-function">Shared Variables in Python Class</a> here , I expected the following code to give output as :</p>
<pre><code>123 123
200 200
300 300
</code></pre>
<p>But it is </p>
<pre><code>123 123
200 123
200 300
</code></pre>
<p>Code:</p>
<pre><code>class A:
abc = 123
def __init__(self, a,b,c):
self._a = a
self._b = b
self._c = c
if __name__ == '__main__':
a = A(2, 4, 6)
b = A(3, 9, 27)
print a.abc , b.abc
a.abc = 200
print a.abc , b.abc
A.abc = 300
print a.abc , b.abc
</code></pre>
<p>Can somebody please help me understand this? My impression is that shared variables are the same as static variables in C++ classes. Any insights to bust that myth, if it is so, would be helpful too. </p>
| 0 | 2016-08-30T18:06:25Z | 39,234,501 | <p>Your expected output would mean that you couldn't change an attribute of one object without changing that attribute on every instance of the object's class. That would obviously break a core idea of object orientation. I think you can shadow "shared" variables because this simply gives you more possibilities.</p>
| -1 | 2016-08-30T18:18:29Z | [
"python",
"class",
"oop"
] |
Python class shared variables gives unexpected output | 39,234,297 | <p>Based on the explanation on <a href="http://stackoverflow.com/questions/1537202/variables-inside-and-outside-of-a-class-init-function">Shared Variables in Python Class</a> here , I expected the following code to give output as :</p>
<pre><code>123 123
200 200
300 300
</code></pre>
<p>But it is </p>
<pre><code>123 123
200 123
200 300
</code></pre>
<p>Code:</p>
<pre><code>class A:
abc = 123
def __init__(self, a,b,c):
self._a = a
self._b = b
self._c = c
if __name__ == '__main__':
a = A(2, 4, 6)
b = A(3, 9, 27)
print a.abc , b.abc
a.abc = 200
print a.abc , b.abc
A.abc = 300
print a.abc , b.abc
</code></pre>
<p>Can somebody please help me understand this? My impression is that shared variables are the same as static variables in C++ classes. Any insights to bust that myth, if it is so, would be helpful too. </p>
| 0 | 2016-08-30T18:06:25Z | 39,234,864 | <p><a href="http://stackoverflow.com/a/39234412/2373278">@Scott Hunter's answer</a> explains the behavior of the code in question best, but I would like to add a C++ perspective to it here. Unfortunately it couldn't be added as a comment on that answer, as it is too long. </p>
<p>Just as in C++ a static member's name needs to be qualified by the class name (e.g. <code>A::abc</code> or <code>A::func_static()</code>), in Python you also have to qualify a shared variable with the class name to access it from outside the class. </p>
<p>Within the class, C++ allows the static variable name to be used like any other member variable (i.e. without a qualifier), and the same is true in Python, where shared variables can be accessed through <code>self</code>, e.g. <code>self.abc</code> here.</p>
<p>The only difference is that Python allows you to create an instance variable with the same name as the shared variable, shadowing it for that instance only. </p>
| 0 | 2016-08-30T18:40:08Z | [
"python",
"class",
"oop"
] |
Python - Printing a ctype string buffer in my Read/WriteProcessMemory application | 39,234,298 | <p>Let me say that this was my first attempt at a proper memory reading and writing application. The writing function isn't even implemented yet but it will come in time. Just don't flame me for the code.</p>
<p>First, here is the complete program so everyone can see it (don't comment on my "bad coding choices"; the thing works and I get the process handle and ID just fine):</p>
<pre class="lang-css prettyprint-override"><code>import ctypes
from ctypes import wintypes
from time import *
import win32ui, win32process ,win32api
PROCESS_ALL_ACCESS = 0x1F0FFF
ReadProcessMemory = ctypes.WinDLL('kernel32',use_last_error=True).ReadProcessMemory
ReadProcessMemory.argtypes = [wintypes.HANDLE,wintypes.LPCVOID,wintypes.LPVOID,ctypes.c_size_t,ctypes.POINTER(ctypes.c_size_t)]
ReadProcessMemory.restype = wintypes.BOOL
WriteProcessMemory = ctypes.WinDLL('kernel32',use_last_error=True).WriteProcessMemory
WriteProcessMemory.argtypes = [wintypes.HANDLE,wintypes.LPVOID,wintypes.LPCVOID,ctypes.c_size_t,ctypes.POINTER(ctypes.c_size_t)]
WriteProcessMemory.restype = wintypes.BOOL
def main():
WindowName = input("Enter the window name: ")
if (len(WindowName) > 0):
Process, pID = attach(WindowName)
Choice = input("Read or Write?").lower()
if (Choice == "write"):
#DoWriteMethod
pass
if (Choice == "read"):
Read(Process)
else:
print("Invalid Window Name!\n\n")
sleep(2)
return 0
def attach(WindowName):
try:
hWnd = win32ui.FindWindow(None,WindowName).GetSafeHwnd()
except:
print("Window not found!\n\n")
sleep(2)
return 0,0
pID = win32process.GetWindowThreadProcessId(hWnd)[1]
Process = win32api.OpenProcess(PROCESS_ALL_ACCESS,0,pID).handle
print("\nProcess = ", Process)
print("ProcessID = ", pID)
return Process,pID
def Write():
return 0
def Read(Process):
Address = input("Enter the address in hexadecial (0x form) \n")
if (Address[0] == "0" and Address[1] == "x"):
BufferAddress = ctypes.create_string_buffer(64)
ptr = ctypes.pointer(BufferAddress)
ReadProcessMemory(Process,Address,BufferAddress,64,None)
print("Pointer contains: ", BufferAddress, "\n\n")
main()
else:
print("Invalid address!\n\n")
sleep(2)
Read(Process)
return 0
if (main() == 0):
main()
if (main() == 1):
exit(0)
</code></pre>
<p>On line 54 I create a buffer with <code>ctypes.create_string_buffer(64)</code>.</p>
<p>I then use this in ReadProcessMemory, where the value should be stored, but whenever I print it on line 57 it just shows the following: </p>
<pre><code>Enter the window name: New Tab - Google Chrome
Process = 608
ProcessID = 2844
Read or Write?Read
Enter the address in hexadecial (0x form)
0x1A6606BE380
Pointer contains: <ctypes.c_char_Array_64 object at 0x000001A0101D78C8>
</code></pre>
<p>I know this value should be 90 (found a random address in Cheat Engine), so the buffer must contain the value 90? How can I extract this value?</p>
<p>I am using Python 3.5.0 and the corresponding win32 modules package from</p>
<p>'pip install pypiwin32'</p>
| 1 | 2016-08-30T18:06:26Z | 39,382,072 | <p>You are printing the object. Print the contents:</p>
<pre><code>>>> import ctypes
>>> BufferAddress = ctypes.create_string_buffer(64)
>>> print(BufferAddress)
<ctypes.c_char_Array_64 object at 0x0000000003124248>
>>> dir(BufferAddress)
['__class__', '__ctypes_from_outparam__', '__delattr__', '__delitem__', '__dict__', '__dir__', '__doc__', '__eq__', '__f
ormat__', '__ge__', '__getattribute__', '__getitem__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__',
'__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__setstate_
_', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_b_base_', '_b_needsfree_', '_length_', '_objects', '_t
ype_', 'raw', 'value']
>>> BufferAddress.raw
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x
00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x
00\x00\x00\x00\x00'
>>> BufferAddress.raw[0]
0
</code></pre>
| 0 | 2016-09-08T03:19:35Z | [
"python",
"memory",
"buffer",
"ctypes",
"readprocessmemory"
] |
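To recover a number from the buffer rather than raw bytes, the bytes can be reinterpreted in place with ctypes. The following is a sketch: the buffer is filled manually here as a stand-in for `ReadProcessMemory`, and the value is assumed to be a 32-bit unsigned integer:

```python
import ctypes

buf = ctypes.create_string_buffer(64)
# Stand-in for ReadProcessMemory: copy a 32-bit value into the buffer.
ctypes.memmove(buf, ctypes.byref(ctypes.c_uint32(90)), 4)

print(buf.raw[:4])  # b'Z\x00\x00\x00'  (0x5A == 90, little-endian)
value = ctypes.cast(buf, ctypes.POINTER(ctypes.c_uint32)).contents.value
print(value)        # 90
```

The same `ctypes.cast(...)` line works on a buffer that was actually filled by `ReadProcessMemory`, as long as the target address really holds a 4-byte integer.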
.persist() line sometimes leads to Java Out of Heap Space error | 39,234,302 | <p>As far as I know, when you use <code>.persist()</code>, writing the line <code>persist</code> sets only the persistence level, and then the next <code>action</code> in the script will cause the actual persistence work to be invoked.</p>
<p>However, sometimes, seemingly depending on the dataframe, <code>persist()</code> will lead to a Java out of heap space error.</p>
<p>What is the intended behavior of persist, and why could this simple line actually lead to this memory error?</p>
| 1 | 2016-08-30T18:06:37Z | 39,237,323 | <p>The whole point of <a href="http://spark.apache.org/docs/latest/programming-guide.html#rdd-persistence" rel="nofollow">RDD Persistence</a> is to store intermediate results in memory, allowing faster access on subsequent use. There are several different levels of persistence, ranging from <code>MEMORY_ONLY</code> (the default) through <code>MEMORY_AND_DISK</code> up to <code>DISK_ONLY</code>. Persisting purely to memory means that there has to be enough heap space for the persist to work. If you run out of heap memory you can</p>
<ul>
<li>go for a lower persistence level, </li>
<li>reduce the size of your partitions,</li>
<li>reduce the total number of persisted RDD stages, e.g., by <code>unpersist</code>ing them. </li>
</ul>
<p>Finding the right balance is one of the key challenges in Spark to achieve a good tradeoff between memory and CPU usage.</p>
| 2 | 2016-08-30T21:25:24Z | [
"python",
"apache-spark",
"pyspark",
"elastic-map-reduce"
] |
Towards understanding One Line For loops | 39,234,453 | <p>After reading <a href="http://stackoverflow.com/questions/12920214/python-for-syntax-block-code-vs-single-line-generator-expressions">this</a> question, I have this PySpark code:</p>
<pre><code>model = KMeansModel(model.Cs[0])
first_split = split_vecs.map(lambda x: x[0])
model.computeCost(first_split)
model = KMeansModel(model.Cs[1])
second_split = split_vecs.map(lambda x: x[1])
model.computeCost(second_split)
</code></pre>
<p>Can I write this as a one-line for loop? Or are these loops restricted to having <em>only one</em> line in their body?</p>
<p>Note: I am not looking for code-only answers, I want to learn, so please, explain. :)</p>
<hr>
<p>Here is my tragic attempt:</p>
<pre><code>model.computeCost(split) for i in range(2): # <- the semicolon here doesn't feel right..Where to put the other lines?
</code></pre>
<hr>
<p>Edit:</p>
<p>Yes, I know that I can write a regular <a href="/questions/tagged/for-loop" class="post-tag" title="show questions tagged 'for-loop'" rel="tag">for-loop</a>, but I would like to understand one-liner for loops. This is for experimenting. You see, when reading other people's code, I see them often, and I am not comfortable with them... :/</p>
| 2 | 2016-08-30T18:15:28Z | 39,234,486 | <p>Try it without the semicolon. Though honestly, I wouldn't recommend using one-line <code>for</code> statements outside of generator expressions.</p>
| 0 | 2016-08-30T18:17:23Z | [
"python",
"loops",
"for-loop",
"apache-spark",
"pyspark"
] |
Towards understanding One Line For loops | 39,234,453 | <p>After reading <a href="http://stackoverflow.com/questions/12920214/python-for-syntax-block-code-vs-single-line-generator-expressions">this</a> question, I have this PySpark code:</p>
<pre><code>model = KMeansModel(model.Cs[0])
first_split = split_vecs.map(lambda x: x[0])
model.computeCost(first_split)
model = KMeansModel(model.Cs[1])
second_split = split_vecs.map(lambda x: x[1])
model.computeCost(second_split)
</code></pre>
<p>Can I write this as a one-line for loop? Or are these loops restricted to having <em>only one</em> line in their body?</p>
<p>Note: I am not looking for code-only answers, I want to learn, so please, explain. :)</p>
<hr>
<p>Here is my tragic attempt:</p>
<pre><code>model.computeCost(split) for i in range(2): # <- the semicolon here doesn't feel right..Where to put the other lines?
</code></pre>
<hr>
<p>Edit:</p>
<p>Yes, I know that I can write a regular <a href="/questions/tagged/for-loop" class="post-tag" title="show questions tagged 'for-loop'" rel="tag">for-loop</a>, but I would like to understand one-liner for loops. This is for experimenting. You see, when reading other people's code, I see them often, and I am not comfortable with them... :/</p>
| 2 | 2016-08-30T18:15:28Z | 39,234,855 | <p>This is untested, but it looks like it could work for you</p>
<pre><code>computedCost = [KMeansModel(model.Cs[i]).computeCost(split_vecs.map(lambda x: x[i])) for i in xrange(2)]
</code></pre>
<p>What it does is create a list of the results of calling <code>computeCost()</code> on each <code>KMeansModel()</code>. The <code>xrange</code> in the comprehension just supplies the values of <code>i</code>.</p>
| 1 | 2016-08-30T18:39:50Z | [
"python",
"loops",
"for-loop",
"apache-spark",
"pyspark"
] |
Towards understanding One Line For loops | 39,234,453 | <p>After reading <a href="http://stackoverflow.com/questions/12920214/python-for-syntax-block-code-vs-single-line-generator-expressions">this</a> question, I have this PySpark code:</p>
<pre><code>model = KMeansModel(model.Cs[0])
first_split = split_vecs.map(lambda x: x[0])
model.computeCost(first_split)
model = KMeansModel(model.Cs[1])
second_split = split_vecs.map(lambda x: x[1])
model.computeCost(second_split)
</code></pre>
<p>Can I write this as a one-line for loop? Or are these loops restricted to having <em>only one</em> line in their body?</p>
<p>Note: I am not looking for code-only answers, I want to learn, so please, explain. :)</p>
<hr>
<p>Here is my tragic attempt:</p>
<pre><code>model.computeCost(split) for i in range(2): # <- the semicolon here doesn't feel right..Where to put the other lines?
</code></pre>
<hr>
<p>Edit:</p>
<p>Yes, I know that I can write a regular <a href="/questions/tagged/for-loop" class="post-tag" title="show questions tagged 'for-loop'" rel="tag">for-loop</a>, but I would like to understand one-liner for loops. This is for experimenting. You see, when reading other people's code, I see them often, and I am not comfortable with them... :/</p>
| 2 | 2016-08-30T18:15:28Z | 39,234,861 | <p>A list comprehension version of what you did in that example would be:</p>
<pre><code>[KMeansModel(model.Cs[i]).computeCost(split_vecs.map(lambda x: x[i])) for i in range(2)]
</code></pre>
<p>This is no different than:</p>
<pre><code>results = []
for i in range(2):
results.append(KMeansModel(model.Cs[i]).computeCost(split_vecs.map(lambda x: x[i])))
</code></pre>
<p>So for each <code>i</code>, it appends the returning value of that chained expression to the list. For this example, it happened to work because your three lines could be chained together. You are calling <code>computeCost()</code> method on the object you created with <code>KMeansModel(model.Cs[0])</code> and the parameter for that is <code>split_vecs.map(lambda x: x[0])</code>.</p>
| 4 | 2016-08-30T18:40:04Z | [
"python",
"loops",
"for-loop",
"apache-spark",
"pyspark"
] |
Towards understanding One Line For loops | 39,234,453 | <p>After reading <a href="http://stackoverflow.com/questions/12920214/python-for-syntax-block-code-vs-single-line-generator-expressions">this</a> question, I have this PySpark code:</p>
<pre><code>model = KMeansModel(model.Cs[0])
first_split = split_vecs.map(lambda x: x[0])
model.computeCost(first_split)
model = KMeansModel(model.Cs[1])
second_split = split_vecs.map(lambda x: x[1])
model.computeCost(second_split)
</code></pre>
<p>Can I write this as a one-line for loop? Or are these loops restricted to having <em>only one</em> line in their body?</p>
<p>Note: I am not looking for code-only answers, I want to learn, so please, explain. :)</p>
<hr>
<p>Here is my tragic attempt:</p>
<pre><code>model.computeCost(split) for i in range(2): # <- the semicolon here doesn't feel right..Where to put the other lines?
</code></pre>
<hr>
<p>Edit:</p>
<p>Yes, I know that I can write a regular <a href="/questions/tagged/for-loop" class="post-tag" title="show questions tagged 'for-loop'" rel="tag">for-loop</a>, but I would like to understand one-liner for loops. This is for experimenting. You see, when reading other people's code, I see them often, and I am not comfortable with them... :/</p>
| 2 | 2016-08-30T18:15:28Z | 39,235,032 | <p>You could wrap the three distinct functions you have (KMeansModel,split_vec.map, and computeCost) in another function, like so:</p>
<pre><code>def master_fx(var):
return fx_C(fx_B(fx_A(var)))
</code></pre>
<p>Now that it looks nice, you can either use list comprehension:</p>
<pre><code>[master_fx(element) for element in range(2)]
</code></pre>
<p>Or a for loop (on one or more lines -- it generally makes no difference, except in terms of readability. I say generally because I do get an error when I try to put another control structure on the same line, as in:</p>
<pre><code>for i in range(2): if i%2==0: print(i)
</code></pre>
<p>However, for readability's sake, you probably wouldn't want something like the line above anyway)</p>
<p>Probably the most important difference is that a for loop is just a control structure, whereas a list comprehension is an expression that produces a list. That is why in an interactive interpreter such as IPython a comprehension's result is echoed automatically, while in a for loop you have to print the values (<code>element</code> and <code>i</code> in the examples above) yourself.</p>
| 1 | 2016-08-30T18:51:01Z | [
"python",
"loops",
"for-loop",
"apache-spark",
"pyspark"
] |
Towards understanding One Line For loops | 39,234,453 | <p>After reading <a href="http://stackoverflow.com/questions/12920214/python-for-syntax-block-code-vs-single-line-generator-expressions">this</a> question, I have this PySpark code:</p>
<pre><code>model = KMeansModel(model.Cs[0])
first_split = split_vecs.map(lambda x: x[0])
model.computeCost(first_split)
model = KMeansModel(model.Cs[1])
second_split = split_vecs.map(lambda x: x[1])
model.computeCost(second_split)
</code></pre>
<p>Can I write this as a one-line for loop? Or are these loops restricted to having <em>only one</em> line in their body?</p>
<p>Note: I am not looking for code-only answers, I want to learn, so please, explain. :)</p>
<hr>
<p>Here is my tragic attempt:</p>
<pre><code>model.computeCost(split) for i in range(2): # <- the semicolon here doesn't feel right..Where to put the other lines?
</code></pre>
<hr>
<p>Edit:</p>
<p>Yes, I know that I can write a regular <a href="/questions/tagged/for-loop" class="post-tag" title="show questions tagged 'for-loop'" rel="tag">for-loop</a>, but I would like to understand one-liner for loops. This is for experimenting. You see, when reading other people's code, I see them often, and I am not comfortable with them... :/</p>
| 2 | 2016-08-30T18:15:28Z | 39,235,078 | <p>What you're calling "one-liner for loops" are actually called"list comprehensions", "dictionary comprehensions", or "<a href="https://docs.python.org/3/tutorial/classes.html#generators" rel="nofollow">generator</a> expressions". They are more limited than for-loops, and work as follows:</p>
<pre><code># List comprehension
result = [expression for name in iterable]
# equivalent to:
result = []
for name in iterable:
result.append(expression)
# Dictionary comprehension
result = {key_expression: value_expression for name in iterable}
# equivalent to:
result = {}
for name in iterable:
result[key_expression] = value_expression
# Generator expression
result = (expression for name in iterable)
# equivalent to
def anonymous_generator():
for name in iterable:
yield expression
result = anonymous_generator()
</code></pre>
<p>You can nest them, they aren't actually required to be one line. For a (probably-not-useful) example, list comprehensions could be used to get a list of all possible pairs of elements from a list <code>listA</code> and elements from lists in a dict <code>dictB</code> keyed by things from <code>listA</code> (the line break is not required, but helps readability):</p>
<pre><code>pairs = [(a, b) for a in listA
for b in dictB[a]]
# equivalent to:
pairs = []
for a in listA:
for b in dictB[a]:
pairs.append((a, b))
</code></pre>
<p>However, the main limitation of them is that you can't call arbitrary functions - the only places you can put expressions are when saying what iterables you're using and what to output into the result. Side-effects of any functions you call in that will happen, though! For your specific case, you can't do it in any of those simply, because you keep re-assigning model and you can't do that in the middle of a comprehension. It's probably possible to twist things around enough to get the same effect (by writing other functions that do the assignment as a side-effect before returning the correct value), but in this case not really worth it.</p>
| 2 | 2016-08-30T18:53:49Z | [
"python",
"loops",
"for-loop",
"apache-spark",
"pyspark"
] |
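A concrete, runnable version of the three equivalences described in the answer above (plain Python, nothing Spark-specific):

```python
nums = [1, 2, 3, 4]

squares_list = [n * n for n in nums]     # list comprehension
squares_dict = {n: n * n for n in nums}  # dictionary comprehension
squares_gen = (n * n for n in nums)      # generator expression (lazy)

print(squares_list)       # [1, 4, 9, 16]
print(squares_dict[3])    # 9
print(next(squares_gen))  # 1 -- values are produced only on demand
```

The list and dict are built eagerly, while the generator yields one value per `next()` call, which is the key practical difference between the three forms.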
Matlab to Python numpy indexing and multiplication issue | 39,234,553 | <p>I have the following line of code in MATLAB which I am trying to convert to Python <code>numpy</code>:</p>
<pre><code>pred = traindata(:,2:257)*beta;
</code></pre>
<p>In Python, I have:</p>
<pre><code>pred = traindata[ : , 1:257]*beta
</code></pre>
<p><code>beta</code> is a 256 x 1 array. </p>
<p>In MATLAB,</p>
<pre><code>size(pred) = 1389 x 1
</code></pre>
<p>But in Python,</p>
<pre><code>pred.shape = (1389L, 256L)
</code></pre>
<p>So, I found out that multiplying by the <code>beta</code> array is producing the difference between the two arrays. </p>
<p>How do I write the original Python line, so that the size of <code>pred</code> is 1389 x 1, like it is in MATLAB when I multiply by my beta array?</p>
| 3 | 2016-08-30T18:22:03Z | 39,234,756 | <p>I suspect that <code>beta</code> is in fact a 1D <code>numpy</code> array. In <code>numpy</code>, 1D arrays are not row or column vectors where MATLAB clearly makes this distinction. These are simply 1D arrays agnostic of any shape. If you must, you need to manually introduce a new singleton dimension to the <code>beta</code> vector to facilitate the multiplication. On top of this, the <code>*</code> operator actually performs <strong>element-wise</strong> multiplication. To perform matrix-vector or matrix-matrix multiplication, you must use <code>numpy</code>'s <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html"><code>dot</code></a> function to do so.</p>
<p>Therefore, you must do something like this:</p>
<pre><code>import numpy as np # Just in case
pred = np.dot(traindata[:, 1:257], beta[:,None])
</code></pre>
<p><code>beta[:,None]</code> will create a 2D <code>numpy</code> array where the elements from the 1D array are populated along the rows, effectively making a column vector (i.e. 256 x 1). However, if you have already done this on <code>beta</code>, then you don't need to introduce the new singleton dimension. Just use <code>dot</code> normally:</p>
<pre><code>pred = np.dot(traindata[:, 1:257], beta)
</code></pre>
| 8 | 2016-08-30T18:34:13Z | [
"python",
"matlab",
"numpy"
] |
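The shape difference between `*` and `dot` can be illustrated with a small pure-Python sketch of the two operations (numpy does the same thing, just in C and on arrays):

```python
# '*' multiplies elementwise and broadcasts the vector across every row,
# so a (2, 3) array times a (3,) vector stays (2, 3).
# dot/@ contracts the shared dimension, so (2, 3) dot (3,) gives (2,).
A = [[1, 2, 3],
     [4, 5, 6]]
v = [10, 20, 30]

elementwise = [[a * b for a, b in zip(row, v)] for row in A]
matvec = [sum(a * b for a, b in zip(row, v)) for row in A]

print(elementwise)  # [[10, 40, 90], [40, 100, 180]]
print(matvec)       # [140, 320]
```

The MATLAB line keeps one number per row because `*` there is a matrix product; `np.dot` is the equivalent, while numpy's `*` corresponds to MATLAB's `.*`.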
How do I create dynamically, and manage, new instances of common data types in Python? | 39,234,678 | <p>I want to make a program which will ask the user for a number (let's say 3) and create three 3x3 lists, or 3 sets of 3 members or some other complex data type (times 3) Python already knows of. </p>
<p>Many programs create new objects without strict programming declaration of their instances. For example in Cinema4D (3D graphics software) i can push a button and create as many cubes I want. But I don't know the programming mechanics of this automatic instance creation without a written code declaration like: </p>
<pre><code>cubeobj cube_1
cube_1.name("Cube.1") ...
</code></pre>
<p>In C++ something like that would require the operator <code>new</code> and the function <code>malloc()</code>. Are there any equivalents for them in Python?</p>
<p>I've searched among many Python books and didn't find anything, what kind of Python topic would discuss something like that?</p>
| 0 | 2016-08-30T18:29:52Z | 39,234,830 | <p>try this: </p>
<pre><code>num = input()
lst = [[0 for __ in range(num)] for _ in range(num)]
</code></pre>
<p>for specific type you should use numpy arrays or <code>array</code> module</p>
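<p>For example, a small sketch of both options (the names here are illustrative):</p>

```python
import array
import numpy as np

num = 3
typed_row = array.array('i', [0] * num)   # C-typed array of signed ints
grid = np.zeros((num, num), dtype=int)    # typed num x num array

print(typed_row.tolist())    # [0, 0, 0]
print(grid.shape)            # (3, 3)
```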
| 0 | 2016-08-30T18:38:30Z | [
"python",
"class",
"object",
"types"
] |
How do I create dynamically, and manage, new instances of common data types in Python? | 39,234,678 | <p>I want to make a program which will ask the user for a number (let's say 3) and create three 3x3 lists, or 3 sets of 3 members or some other complex data type (times 3) Python already knows of. </p>
<p>Many programs create new objects without strict programming declaration of their instances. For example, in Cinema4D (3D graphics software) I can push a button and create as many cubes as I want. But I don't know the programming mechanics of this automatic instance creation without a written code declaration like: </p>
<pre><code>cubeobj cube_1
cube_1.name("Cube.1") ...
</code></pre>
<p>In C++ something like that would require the operator <code>new</code> and the function <code>malloc()</code>. Are there any equivalents for them in Python?</p>
<p>I've searched among many Python books and didn't find anything, what kind of Python topic would discuss something like that?</p>
| 0 | 2016-08-30T18:29:52Z | 39,281,692 | <p>If you want a triple nested list, you can create it like this:</p>
<pre><code>In [1]: num = 3
In [2]: [[[[0 for __ in range(num)] for _ in range(num)]] for ___ in range(num)]
Out[2]:
[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]]],
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]]],
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]]]]
</code></pre>
<p>Managing multidimensional arrays may be much easier with <code>numpy</code> though:</p>
<pre><code>In [3]: import numpy as np
In [4]: np.zeros((num, num, num))
Out[4]:
array([[[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]],
[[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]],
[[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]]])
</code></pre>
<p>But the answer to your question in general is just that - you can create as many objects as you like by calling the corresponding constructor and keeping the returned instance accessible through a list, a dictionary or something else.</p>
<p>Using a hypothetical cube example, you can create <code>num</code> <code>Cube</code> objects and put them all in a list:</p>
<pre><code>num = int(input())
cubes = [Cube(name="Cube " + str(i)) for i in range(num)]
</code></pre>
| 0 | 2016-09-01T22:16:17Z | [
"python",
"class",
"object",
"types"
] |
How do I create dynamically, and manage, new instances of common data types in Python? | 39,234,678 | <p>I want to make a program which will ask the user for a number (let's say 3) and create three 3x3 lists, or 3 sets of 3 members or some other complex data type (times 3) Python already knows of. </p>
<p>Many programs create new objects without strict programming declaration of their instances. For example, in Cinema4D (3D graphics software) I can push a button and create as many cubes as I want. But I don't know the programming mechanics of this automatic instance creation without a written code declaration like: </p>
<pre><code>cubeobj cube_1
cube_1.name("Cube.1") ...
</code></pre>
<p>In C++ something like that would require the operator <code>new</code> and the function <code>malloc()</code>. Are there any equivalents for them in Python?</p>
<p>I've searched among many Python books and didn't find anything, what kind of Python topic would discuss something like that?</p>
| 0 | 2016-08-30T18:29:52Z | 39,281,724 | <p>OK, I think I've come up with a solution.
Instead of trying to find a way to create new lists, I will just append as many new indexes as I need to a predefined empty list and treat that linear list as many unrelated lists, using <code>for</code> loops and modulos related to my input number.</p>
<p>I still don't know a generic programming method for automatic class instancing though ...</p>
| 0 | 2016-09-01T22:19:40Z | [
"python",
"class",
"object",
"types"
] |
Adding Probability to the predicted value | 39,234,697 | <p>I have a testDF like this and try to make a binary classification [0;1]:</p>
<p><a href="http://i.stack.imgur.com/wTSmJ.png" rel="nofollow"><img src="http://i.stack.imgur.com/wTSmJ.png" alt="enter image description here"></a></p>
<p>Also I have a trainDF with the same structure and with filled bad values in it for training purposes.</p>
<p>I make a target and train sets from trainDF:</p>
<pre><code>target = trainDF.bad.values
train = trainDF.drop('bad', axis=1).values
</code></pre>
<p>Then I append the logistic regression model and do the cross validation:</p>
<pre><code>model=[]
model.append (linear_model.LogisticRegression(C=1e5))
TRNtrain, TRNtest, TARtrain, TARtest = train_test_split(train, target,test_size=0.3, random_state=0)
</code></pre>
<p>Then fit on validated and do the preds:</p>
<pre><code>model.fit(TRNtrain, TARtrain)
pred_scr = model.predict_proba(TRNtest)[:, 1]
</code></pre>
<p>Then fit on the whole set and predict bad value:</p>
<pre><code>model.fit(train, target)
test = testDF.drop('bad', axis=1).values
testDF.bad=model.predict(test)
</code></pre>
<p>I receive a df with filled values:<a href="http://i.stack.imgur.com/G4nvN.png" rel="nofollow"><img src="http://i.stack.imgur.com/G4nvN.png" alt="enter image description here"></a></p>
<p>My question: How can I add the probability from logistic regression of bad value=1 in additional column? What steps should I take for that?</p>
<p>Any help would be greatly appreciated!</p>
| 0 | 2016-08-30T18:30:51Z | 39,236,521 | <p>The <code>.predict</code> method selects the most probable assignment for your input. If you want to access the probabilities you can use:</p>
<pre><code>log_prob = model.predict_log_proba(test) # Log of probability estimates.
prob = model.predict_proba(test) # Probability estimates.
</code></pre>
<p>You can add either of these directly to the data frame via columnar assignment.</p>
<pre><code>testDF['bad_prob'] = model.predict_proba(test)
</code></pre>
| 1 | 2016-08-30T20:26:22Z | [
"python",
"pandas",
"scikit-learn",
"logistic-regression",
"prediction"
] |
Python Regular Expression Finding a Specific Text Between Headers | 39,234,709 | <p>I'm just starting to learn about regular expressions in Python, and I've made a bit of progress on what I want to get done.</p>
<pre><code>import urllib.request
import urllib.parse
import re
x = urllib.request.urlopen("http://www.SOMEWEBSITE.com")
contents = x.read()
paragraphs = re.findall(r'<p>(.*?)</p>', str(contents))
</code></pre>
<p>So with that regular expression I'm able to find everything between the paragraph headers, but what if I want to find paragraphs with specific words in them? For example, parse all paragraphs that have the word "cat" in them. I know that (.*?) finds everything, but I'm just a bit lost on the intuition of finding a paragraph with a specific keyword. </p>
<p>Anyway, thanks.</p>
| 1 | 2016-08-30T18:31:30Z | 39,234,970 | <p>It's better to use BeautifulSoup. Example:</p>
<pre><code>import urllib2
html = urllib2.urlopen("http://www.SOMEWEBSITE.com").read()
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html.parser")
# now you can search the soup
</code></pre>
<p><strong>Documentation:</strong></p>
<p><a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">BeautifulSoup Doc</a></p>
<p>But... if regex has to be used:</p>
<pre><code>>>> str = "<p>This is some cat in a paragraph.</p>"
>>> re.findall(r'<p>(.*cat.*)</p>', str)
['This is some cat in a paragraph.']
</code></pre>
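<p>One caveat (an assumption about the intended behaviour): with several paragraphs in one string, a greedy <code>.*</code> can run across <code>&lt;/p&gt;&lt;p&gt;</code> boundaries, and a plain <code>cat</code> also matches inside words like "category". A tempered, non-greedy pattern with a word boundary avoids both:</p>

```python
import re

html = "<p>no match here</p><p>a cat sat</p><p>category talk</p>"

# (?:(?!</p>).)*? keeps the match inside a single paragraph,
# \bcat\b requires "cat" as a whole word
pattern = r'<p>((?:(?!</p>).)*?\bcat\b.*?)</p>'
print(re.findall(pattern, html))   # ['a cat sat']
```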
| 4 | 2016-08-30T18:47:45Z | [
"python",
"regex"
] |
Python script to move files to either a train dir or test dir | 39,234,734 | <p>I am at the moment making a python script capable of dividing my data into either a train dir or test dir. I provide the script with a ratio, which says what the ratio between train/test should be, and according to that files should randomly be moved either to train or test. </p>
<p>ex. if the ratio = 0.5 then would half of my dataset be in train and the other half in test. </p>
<p>other ex. if the ratio = 0.25 then would 75% dataset be in train and the rest in test. </p>
<p>But the division seems to be wrong every time.. I am trying to separate 84 files/dirs and can't seem to hit the golden 42/42 separation.. Any suggestions on what I could do differently?</p>
<p>Here is the code: </p>
<pre><code>import sys
import os
import shutil
import numpy
import random
src = sys.argv[1]
destination_data = sys.argv[2]
src_abs = os.path.abspath(src)
destination_data_abs = os.path.abspath(destination_data)
src_files = os.listdir(src_abs)
def copytree(src, dst, symlinks=False, ignore=None, split=0.5):
for item in os.listdir(src):
s = os.path.join(src, item)
d = os.path.join(dst, item)
d_test = os.path.join(dst, 'test', item)
d_train = os.path.join(dst, 'train', item)
print d_test
print d_train
minmax=0.0, 1.0
rand = random.uniform(*minmax)
print rand
if rand > split:
# Inserted into train
if os.path.isdir(s):
shutil.copytree(s, d_train, symlinks, ignore)
print "Copytree used! - TRAIN"
else:
shutil.copy2(s, d_train)
print "Copy 2 used! - TRAIN"
else:
# Inserted into test
if os.path.isdir(s):
shutil.copytree(s, d_test, symlinks, ignore)
print "Copytree used! - TEST"
else:
shutil.copy2(s, d_test)
print "Copy 2 used! - TEST"
copytree(src_abs,destination_data_abs,True)
</code></pre>
<p>the code is being executed on a unix machine ... if that matters?</p>
| 2 | 2016-08-30T18:32:37Z | 39,234,944 | <p>You can take the list of files, <strong>shuffle</strong> it, then <strong>split</strong> it with respect to the split ratio.</p>
<pre><code>import os
import numpy
src_files = os.listdir(".")
n_files = len(src_files)
split_ratio = 0.5
split_index = int(n_files * split_ratio)
numpy.random.shuffle(src_files)
print src_files[0:split_index]
print src_files[split_index:]
</code></pre>
<p><a href="http://www.wolframalpha.com/input/?i=flip%2084%20coins" rel="nofollow">Flipping a coin 84 times will result in a "perfect" 42 heads / 42 tails with a probability of 0.0868.</a></p>
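<p>To turn the two halves into actual copies, one possible sketch (assuming <code>dst/test</code> and <code>dst/train</code> already exist, as in the question's script):</p>

```python
import os
import random
import shutil

def split_names(names, ratio):
    # shuffle a copy, then cut at the ratio point for an exact split
    names = list(names)
    random.shuffle(names)
    cut = int(len(names) * ratio)
    return names[:cut], names[cut:]

def split_copy(src, dst, ratio=0.5):
    test_names, train_names = split_names(os.listdir(src), ratio)
    for sub, chunk in (('test', test_names), ('train', train_names)):
        for name in chunk:
            s = os.path.join(src, name)
            d = os.path.join(dst, sub, name)
            if os.path.isdir(s):
                shutil.copytree(s, d)
            else:
                shutil.copy2(s, d)
```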
| 3 | 2016-08-30T18:45:54Z | [
"python",
"unix",
"filehandle"
] |
Plot high-low with Seaborn, Pandas | 39,234,843 | <p>I have a pandas dataframe that has by category three data points: mean, max, min.</p>
<p>I'd like to plot these such that the mean is a dot and the max/min are a line. Similar to a high/low/close graph in stocks, or even just error bars.</p>
<p>For the sake of conversation, assume my code looks like</p>
<pre><code>df = pd.DataFrame({'day': ['M', 'T', 'W', 'F'],
'foo' : [1,2,3,4],
'foo_max' : [5,5,6,7],
'foo_min' : [0,1,1,1]})
sns.stripplot(df.day, df.foo, color='black')
plt.show()
</code></pre>
| 0 | 2016-08-30T18:39:13Z | 39,245,766 | <p>You could do it this way:</p>
<pre><code>df.set_index('day', inplace=True)
# tsplot with error bars
ax = sns.tsplot([df['foo_max'], df['foo_min']], err_style="ci_bars",
interpolate=False, color='g')
ax.set_xticks(np.arange(0, df.shape[0]))
ax.set_xticklabels(df.index)
ax.set_ylim(0, df.values.max()+1)
sns.plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/MVU7E.png" rel="nofollow"><img src="http://i.stack.imgur.com/MVU7E.png" alt="enter image description here"></a></p>
| 0 | 2016-08-31T09:28:58Z | [
"python",
"pandas",
"data-visualization",
"seaborn"
] |
how can I print key and value from list of dictionaries in python | 39,234,906 | <p>how can I print key and id from the following in python</p>
<pre><code>[<JIRA Issue: key=u'OPS-22158', id=u'566935'>,
<JIRA Issue: key=u'OPS-22135', id=u'566480'>,
<JIRA Issue: key=u'OPS-22131', id=u'566361'>,
<JIRA Issue: key=u'OPS-21850', id=u'561948'>,
<JIRA Issue: key=u'OPS-20967', id=u'533908'>,
]
</code></pre>
<p>More information about the project:
I am trying to use JIRA API calls and, as an example, get a list of issues created by a certain user:</p>
<pre><code> from jira import JIRA
from getpass import getpass
from pprint import pprint
import csv
def main():
options = {
'server': 'https://staging-jira.engsrv.mobileiron.com/',
'verify': False
}
password = getpass()
jira = JIRA(options, basic_auth=('hhaddadian', password))
# Get the mutable application properties for this server (requires
# jira-system-administrators permission)
# props = jira.application_properties()
# Find all issues reported by the admin
issues = jira.search_issues('assignee=hhaddadian')
pprint (issues)
for items in issues:
print items
if __name__ == "__main__":
main()
</code></pre>
<p>and my result looks like this:</p>
<pre><code>[root@localhost ~]# python test.py
Password:
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html
InsecureRequestWarning)
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html
InsecureRequestWarning)
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html
InsecureRequestWarning)
[<JIRA Issue: key=u'OPS-22158', id=u'566935'>,
<JIRA Issue: key=u'OPS-22135', id=u'566480'>,
<JIRA Issue: key=u'OPS-22131', id=u'566361'>,
<JIRA Issue: key=u'OPS-21850', id=u'561948'>,
<JIRA Issue: key=u'OPS-20967', id=u'533908'>,
]
OPS-22158
OPS-22135
OPS-22131
OPS-21850
OPS-20967
</code></pre>
<p>I was wondering what kind of data I am getting in return, and how I can print key and id, and maybe convert the result to a csv file.</p>
| 1 | 2016-08-30T18:43:31Z | 39,235,398 | <p>If you know key,values this is an easy way: </p>
<pre><code>In [2]: dict_list = [{'key':'iman','value':21} , {'key': 'hooman', 'value' : 22}] #list of dictionaries
In [3]: for dict in dict_list: #dict = a dictionary of list
...: print dict['key'], dict['value'] #key,values
...:
iman 21
hooman 22
</code></pre>
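<p>If the keys are not known in advance, <code>items()</code> iterates every key/value pair in each dictionary (a small sketch, in Python 3 syntax):</p>

```python
dict_list = [{'key': 'iman', 'value': 21}, {'key': 'hooman', 'value': 22}]

for d in dict_list:
    for k, v in sorted(d.items()):   # sorted only to make the output order stable
        print(k, v)
```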
| 1 | 2016-08-30T19:13:09Z | [
"python",
"python-2.7",
"dictionary"
] |
how can I print key and value from list of dictionaries in python | 39,234,906 | <p>how can I print key and id from the following in python</p>
<pre><code>[<JIRA Issue: key=u'OPS-22158', id=u'566935'>,
<JIRA Issue: key=u'OPS-22135', id=u'566480'>,
<JIRA Issue: key=u'OPS-22131', id=u'566361'>,
<JIRA Issue: key=u'OPS-21850', id=u'561948'>,
<JIRA Issue: key=u'OPS-20967', id=u'533908'>,
]
</code></pre>
<p>More information about the project:
I am trying to use JIRA API calls and, as an example, get a list of issues created by a certain user:</p>
<pre><code> from jira import JIRA
from getpass import getpass
from pprint import pprint
import csv
def main():
options = {
'server': 'https://staging-jira.engsrv.mobileiron.com/',
'verify': False
}
password = getpass()
jira = JIRA(options, basic_auth=('hhaddadian', password))
# Get the mutable application properties for this server (requires
# jira-system-administrators permission)
# props = jira.application_properties()
# Find all issues reported by the admin
issues = jira.search_issues('assignee=hhaddadian')
pprint (issues)
for items in issues:
print items
if __name__ == "__main__":
main()
</code></pre>
<p>and my result looks like this:</p>
<pre><code>[root@localhost ~]# python test.py
Password:
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html
InsecureRequestWarning)
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html
InsecureRequestWarning)
/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html
InsecureRequestWarning)
[<JIRA Issue: key=u'OPS-22158', id=u'566935'>,
<JIRA Issue: key=u'OPS-22135', id=u'566480'>,
<JIRA Issue: key=u'OPS-22131', id=u'566361'>,
<JIRA Issue: key=u'OPS-21850', id=u'561948'>,
<JIRA Issue: key=u'OPS-20967', id=u'533908'>,
]
OPS-22158
OPS-22135
OPS-22131
OPS-21850
OPS-20967
</code></pre>
<p>I was wondering what kind of data I am getting in return, and how I can print key and id, and maybe convert the result to a csv file.</p>
| 1 | 2016-08-30T18:43:31Z | 39,235,413 | <pre><code>import jira
# stuff
for issue in jira.search_issues('assignee=hhaddadian'):
    print(issue.key, issue.id)
</code></pre>
<p>The result of the jira.search_issues function is a list of Jira objects. These objects are defined here: <a href="https://jira.readthedocs.io/en/latest/" rel="nofollow">https://jira.readthedocs.io/en/latest/</a></p>
<p>If you want the whole object (every field) in JSON format:</p>
<pre><code>print(issue.raw)
</code></pre>
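<p>For the CSV part of the question, one possible sketch using the standard <code>csv</code> module (the key/id pairs here are hypothetical stand-ins for each issue's <code>issue.key</code> / <code>issue.id</code>):</p>

```python
import csv
import io

rows = [('OPS-22158', '566935'), ('OPS-22135', '566480')]  # hypothetical issue data

buf = io.StringIO()   # use open('issues.csv', 'w', newline='') for a real file
writer = csv.writer(buf)
writer.writerow(['key', 'id'])
writer.writerows(rows)
print(buf.getvalue())
```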
| 2 | 2016-08-30T19:13:55Z | [
"python",
"python-2.7",
"dictionary"
] |
python beautifulsoup search across multiple lines | 39,235,110 | <p>I am trying to search for the P/E ratio of a finance page, from the input code shown below. So, essentially I am trying to extract '48.98' from the source.
As the structure is the same for Market cap, book value, etc., I am unable to frame the correct code for a soup.find. </p>
<p>Would be very grateful for the correct structure of the soup.find code.
I am a newbie, so sorry if I am asking something very basic.
Thanks in advance! </p>
<pre><code><div class="FL" style="width:210px; padding-right:10px">
<div class="PA7 brdb">
<div class="FL gL_10 UC">MARKET CAP (Rs Cr)</div>
<div class="FR gD_12">41,364.28</div>
<div class="CL"></div>
</div>
<div class="PA7 brdb">
<div class="FL gL_10 UC">P/E</div>
<div class="FR gD_12">**48.98**</div>
<div class="CL"></div>
</div>
<div class="PA7 brdb">
<div class="FL gL_10 UC">BOOK VALUE (Rs)</div>
<div class="FR gD_12">147.24</div>
<div class="CL"></div>
</div>
<div class="PA7 brdb">
<div class="FL gL_10 UC">DIV (%)</div>
<div class="FR gD_12">1000.00%</div>
<div class="CL"></div>
</div>
<div class="PA7 brdb">
<div class="FL gL_10 UC">Market Lot</div>
<div class="FR gD_12">1</div>
<div class="CL"></div>
</div>
<div class="PA7 brdb">
<div class="FL gL_10 UC">INDUSTRY P/E</div>
<div class="FR gD_12">60.95</div>
<div class="CL"></div>
</div>
</div>
</code></pre>
| 1 | 2016-08-30T18:55:56Z | 39,235,295 | <p>Use the text to find the div with <em>"P/E"</em> and get the next div:</p>
<pre><code>price = soup.find("div", class_="FL gL_10 UC", text="P/E").find_next("div").text
</code></pre>
<p>If it is always the second div with the css class <em>FR gD_12</em>, you could also just get the first two and extract the second</p>
<pre><code>price = soup.select("div.FR.gD_12", limit=2)[1].text
</code></pre>
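<p>Putting it together on the snippet from the question (a sketch; note bs4's newer import path and <code>string=</code> argument):</p>

```python
from bs4 import BeautifulSoup

html = """<div class="PA7 brdb">
  <div class="FL gL_10 UC">P/E</div>
  <div class="FR gD_12">48.98</div>
</div>"""

soup = BeautifulSoup(html, "html.parser")
# match the label div by its exact class string and its text, then step to the value div
label = soup.find("div", class_="FL gL_10 UC", string="P/E")
print(label.find_next("div").text)   # 48.98
```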
| 4 | 2016-08-30T19:07:08Z | [
"python",
"beautifulsoup"
] |
Python Parse XML file for certain lines and output the line to Text widget | 39,235,119 | <p>I need to search a windows msinfo file (.nfo) for certain lines and print them to a Text widget. I can <code>print(line)</code> every line in the file and I can output every line to the Text widget but as soon as I try to specify lines to output it stops working. I assume this is because the file is an XML but the XML parsing tools I see for python seem to look for lines like data=blah. The entries I'm looking for look like this when I open them in a txt editor:</p>
<pre><code> <Category name="Disks">
<Data>
<Item><![CDATA[Description]]></Item>
<Value><![CDATA[Disk drive]]></Value>
</Data>
<Data>
<Item><![CDATA[Manufacturer]]></Item>
<Value><![CDATA[(Standard disk drives)]]></Value>
</Data>
<Data>
<Item><![CDATA[Model]]></Item>
<Value><![CDATA[TOSHIB MK1652GSX SCSI Disk Device]]></Value>
</Data>
<Data>
<Item><![CDATA[Bytes/Sector]]></Item>
<Value><![CDATA[512]]></Value>
</Data>
<Data>
<Item><![CDATA[Media Loaded]]></Item>
<Value><![CDATA[Yes]]></Value>
</Data>
<Data>
<Item><![CDATA[Media Type]]></Item>
<Value><![CDATA[Fixed hard disk]]></Value>
</Data>
<Data>
<Item><![CDATA[Partitions]]></Item>
<Value><![CDATA[2]]></Value>
</Data>
<Data>
<Item><![CDATA[SCSI Bus]]></Item>
<Value><![CDATA[1]]></Value>
</Data>
<Data>
<Item><![CDATA[SCSI Logical Unit]]></Item>
<Value><![CDATA[0]]></Value>
</Data>
<Data>
<Item><![CDATA[SCSI Port]]></Item>
<Value><![CDATA[0]]></Value>
</Data>
<Data>
<Item><![CDATA[SCSI Target ID]]></Item>
<Value><![CDATA[0]]></Value>
</Data>
<Data>
<Item><![CDATA[Sectors/Track]]></Item>
<Value><![CDATA[63]]></Value>
</Data>
<Data>
<Item><![CDATA[Size]]></Item>
<Value><![CDATA[149.05 GB (160,039,272,960 bytes)]]></Value>
</Data>
<Data>
<Item><![CDATA[Total Cylinders]]></Item>
<Value><![CDATA[19,457]]></Value>
</Data>
<Data>
<Item><![CDATA[Total Sectors]]></Item>
<Value><![CDATA[312,576,705]]></Value>
</Data>
<Data>
<Item><![CDATA[Total Tracks]]></Item>
<Value><![CDATA[4,961,535]]></Value>
</Data>
<Data>
<Item><![CDATA[Tracks/Cylinder]]></Item>
<Value><![CDATA[255]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition]]></Item>
<Value><![CDATA[Disk #1, Partition #0]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition Size]]></Item>
<Value><![CDATA[117.19 GB (125,830,301,184 bytes)]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition Starting Offset]]></Item>
<Value><![CDATA[32,256 bytes]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition]]></Item>
<Value><![CDATA[Disk #1, Partition #1]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition Size]]></Item>
<Value><![CDATA[31.85 GB (34,200,714,240 bytes)]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition Starting Offset]]></Item>
<Value><![CDATA[125,830,333,440 bytes]]></Value>
</Data>
<Data>
</code></pre>
<p>I found a <a href="http://stackoverflow.com/questions/34070729/parse-a-nfo-file-with-python/34071854">post</a> asking for what I want but the solution doesn't work. The ET.parse is not found:</p>
<pre><code>import xml.etree as ET
file = 'D:\\MsInfo\\msinfo.nfo'
tree = ET.parse(file)
root = tree.getroot()
for element in root.findall('Category'):
value = element.find('Data')
for child in value:
print(child.tag ,":",child.text)
</code></pre>
<p>When using the above I get this:</p>
<blockquote>
<p>"C:\Program Files (x86)\Python35-32\python.exe" "D:/MY
STUFF/Programming/Python/testing.py" Traceback (most recent call
last): File "D:/MY STUFF/Programming/Python/testing.py", line 3, in
tree = ET.parse(file) AttributeError: module 'xml.etree' has no attribute 'parse'</p>
<p>Process finished with exit code 1</p>
</blockquote>
<p>This is a snippet from my code:</p>
<pre><code>try:
u = find("msinfo.nfo", s)
for i in u:
cpfotxt.insert('end', i + "\n")
cpfotxt.yview(END)
cpfotxt.insert('end', "================================= \n")
with open(i, "r") as f:
r = f.readlines()
for line in r:
if "Model" in line:
cpfotxt.insert('end', line + "\n")
</code></pre>
<p>If I remove the <code>if "Model" in line:</code> then it will dump everything into the Text widget fine.</p>
<p>This is how they look when opened normally on Windows:</p>
<p><a href="http://i.stack.imgur.com/cx4UV.png" rel="nofollow"><img src="http://i.stack.imgur.com/cx4UV.png" alt="enter image description here"></a></p>
<p>Any advice on how to pull lines I need from an nfo/XML file? </p>
<p>Also, when printing lines from an xml the font is bigger and double spaced. How can I make the line print the same way it would from a normal txt file? </p>
| 1 | 2016-08-30T18:56:16Z | 39,235,631 | <p>So you need to understand the structure of the XML and then use the actual tags you're looking for instead of 'Data'</p>
<pre><code> item = element.find('Item')
print(item.tag ,":",item.text)
value = element.find('Value')
print(value.tag ,":",value.text)
</code></pre>
<p>Your actual problem is that you need to change the import you use.</p>
<pre><code>import xml.etree.ElementTree as ET
</code></pre>
<p><a href="https://docs.python.org/2/library/xml.etree.elementtree.html" rel="nofollow">https://docs.python.org/2/library/xml.etree.elementtree.html</a></p>
<p><strong>Edit:</strong> with the way that's structured, you can get a list of Data elements by saying</p>
<pre><code>for data in root.findall('Data'):
item = data.find('Item')
print(item.tag ,":",item.text)
value = data.find('Value')
print(value.tag ,":",value.text)
</code></pre>
<p>Now, understand that if that "Data" tag is not at the root level, then you need to root.find() until you can get to it. In other words, if those "Data" tags are enclosed in some parent tags, you need to root.find("Parent Tag"), hope you get the gist of it</p>
<p><strong>Edit2:</strong> Looked at my own msinfo.nfo file and this worked:</p>
<pre><code>disks = root.find(".//Category[@name='Disks']")
for disk in disks:
item = disk.find('Item')
print(item.tag ,":",item.text)
value = disk.find('Value')
print(value.tag ,":",value.text)
</code></pre>
<p>Note: This uses XPath syntax to find the element, which is only available in ElementTree1.3 (Python 2.7 and higher). You can also brute force it by following the structure of the XML and traversing through the tree until you get to Disks. The path was System Summary->Components->Storage->Disks and under Disks were those Data elements with Item and Value as children.</p>
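<p>A self-contained sketch of the same lookup on a trimmed-down stand-in for the msinfo structure (ElementTree returns CDATA content as plain text):</p>

```python
import xml.etree.ElementTree as ET

nfo = """<MsInfo><Category name="Disks">
<Data><Item><![CDATA[Model]]></Item>
<Value><![CDATA[TOSHIB MK1652GSX SCSI Disk Device]]></Value></Data>
</Category></MsInfo>"""

root = ET.fromstring(nfo)
disks = root.find(".//Category[@name='Disks']")   # XPath lookup by attribute
for data in disks.findall('Data'):
    print(data.find('Item').text, ":", data.find('Value').text)
```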
| 1 | 2016-08-30T19:28:35Z | [
"python",
"xml",
"python-3.x",
"tkinter"
] |
Python Parse XML file for certain lines and output the line to Text widget | 39,235,119 | <p>I need to search a windows msinfo file (.nfo) for certain lines and print them to a Text widget. I can <code>print(line)</code> every line in the file and I can output every line to the Text widget but as soon as I try to specify lines to output it stops working. I assume this is because the file is an XML but the XML parsing tools I see for python seem to look for lines like data=blah. The entries I'm looking for look like this when I open them in a txt editor:</p>
<pre><code> <Category name="Disks">
<Data>
<Item><![CDATA[Description]]></Item>
<Value><![CDATA[Disk drive]]></Value>
</Data>
<Data>
<Item><![CDATA[Manufacturer]]></Item>
<Value><![CDATA[(Standard disk drives)]]></Value>
</Data>
<Data>
<Item><![CDATA[Model]]></Item>
<Value><![CDATA[TOSHIB MK1652GSX SCSI Disk Device]]></Value>
</Data>
<Data>
<Item><![CDATA[Bytes/Sector]]></Item>
<Value><![CDATA[512]]></Value>
</Data>
<Data>
<Item><![CDATA[Media Loaded]]></Item>
<Value><![CDATA[Yes]]></Value>
</Data>
<Data>
<Item><![CDATA[Media Type]]></Item>
<Value><![CDATA[Fixed hard disk]]></Value>
</Data>
<Data>
<Item><![CDATA[Partitions]]></Item>
<Value><![CDATA[2]]></Value>
</Data>
<Data>
<Item><![CDATA[SCSI Bus]]></Item>
<Value><![CDATA[1]]></Value>
</Data>
<Data>
<Item><![CDATA[SCSI Logical Unit]]></Item>
<Value><![CDATA[0]]></Value>
</Data>
<Data>
<Item><![CDATA[SCSI Port]]></Item>
<Value><![CDATA[0]]></Value>
</Data>
<Data>
<Item><![CDATA[SCSI Target ID]]></Item>
<Value><![CDATA[0]]></Value>
</Data>
<Data>
<Item><![CDATA[Sectors/Track]]></Item>
<Value><![CDATA[63]]></Value>
</Data>
<Data>
<Item><![CDATA[Size]]></Item>
<Value><![CDATA[149.05 GB (160,039,272,960 bytes)]]></Value>
</Data>
<Data>
<Item><![CDATA[Total Cylinders]]></Item>
<Value><![CDATA[19,457]]></Value>
</Data>
<Data>
<Item><![CDATA[Total Sectors]]></Item>
<Value><![CDATA[312,576,705]]></Value>
</Data>
<Data>
<Item><![CDATA[Total Tracks]]></Item>
<Value><![CDATA[4,961,535]]></Value>
</Data>
<Data>
<Item><![CDATA[Tracks/Cylinder]]></Item>
<Value><![CDATA[255]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition]]></Item>
<Value><![CDATA[Disk #1, Partition #0]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition Size]]></Item>
<Value><![CDATA[117.19 GB (125,830,301,184 bytes)]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition Starting Offset]]></Item>
<Value><![CDATA[32,256 bytes]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition]]></Item>
<Value><![CDATA[Disk #1, Partition #1]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition Size]]></Item>
<Value><![CDATA[31.85 GB (34,200,714,240 bytes)]]></Value>
</Data>
<Data>
<Item><![CDATA[Partition Starting Offset]]></Item>
<Value><![CDATA[125,830,333,440 bytes]]></Value>
</Data>
<Data>
</code></pre>
<p>I found a <a href="http://stackoverflow.com/questions/34070729/parse-a-nfo-file-with-python/34071854">post</a> asking for what I want but the solution doesn't work. The ET.parse is not found:</p>
<pre><code>import xml.etree as ET
file = 'D:\\MsInfo\\msinfo.nfo'
tree = ET.parse(file)
root = tree.getroot()
for element in root.findall('Category'):
value = element.find('Data')
for child in value:
print(child.tag ,":",child.text)
</code></pre>
<p>When using the above I get this:</p>
<blockquote>
<p>"C:\Program Files (x86)\Python35-32\python.exe" "D:/MY
STUFF/Programming/Python/testing.py" Traceback (most recent call
last): File "D:/MY STUFF/Programming/Python/testing.py", line 3, in
tree = ET.parse(file) AttributeError: module 'xml.etree' has no attribute 'parse'</p>
<p>Process finished with exit code 1</p>
</blockquote>
<p>This is a snippet from my code:</p>
<pre><code>try:
u = find("msinfo.nfo", s)
for i in u:
cpfotxt.insert('end', i + "\n")
cpfotxt.yview(END)
cpfotxt.insert('end', "================================= \n")
with open(i, "r") as f:
r = f.readlines()
for line in r:
if "Model" in line:
cpfotxt.insert('end', line + "\n")
</code></pre>
<p>If I remove the <code>if "Model" in line:</code> then it will dump everything into the Text widget fine.</p>
<p>This is how they look when opened normally on Windows:</p>
<p><a href="http://i.stack.imgur.com/cx4UV.png" rel="nofollow"><img src="http://i.stack.imgur.com/cx4UV.png" alt="enter image description here"></a></p>
<p>Any advice on how to pull lines I need from an nfo/XML file? </p>
<p>Also, when printing lines from an xml the font is bigger and double spaced. How can I make the line print the same way it would from a normal txt file? </p>
| 1 | 2016-08-30T18:56:16Z | 39,235,757 | <p>Here is my code with your sample data, I know it could be written better but I think this solves your problem :)<br>
You have to find the root (xml) and then iterate its text nodes! You can also use other methods like <code>iterfind</code> for better solutions.</p>
<pre><code>xml_file = "<xml><Item><![CDATA[Model]]></Item><Value><![CDATA[TOSHIB MK1652GSX SCSI Disk Device]]></Value></xml>"
from xml.etree import ElementTree
root = ElementTree.fromstring(xml_file)
for text in root.itertext():
    print text
</code></pre>
<p>Here is the output: </p>
<pre><code>>>>Model
>>>TOSHIB MK1652GSX SCSI Disk Device
</code></pre>
| 0 | 2016-08-30T19:35:35Z | [
"python",
"xml",
"python-3.x",
"tkinter"
] |
ImportError: No module named datadog | 39,235,163 | <p>I'm new to datadog. I followed this <a href="http://docs.datadoghq.com/api/" rel="nofollow">post</a>, and filled in my app/api keys.</p>
<p>I have : <code>nginx_dd.py</code></p>
<pre><code># Make sure you replace the API and/or APP key below
# with the ones for your account
from datadog import initialize, api
import time
options = {
'api_key': '***',
'app_key': '***'
}
initialize(**options)
now = int(time.time())
query = 'system.cpu.idle{*}by{host}'
print api.Metric.query(start=now - 3600, end=now, query=query)
</code></pre>
<hr>
<p>When I run it with <code>python nginx_dd.py</code>, I keep getting </p>
<blockquote>
<p>ImportError: No module named datadog</p>
</blockquote>
<hr>
<p>Any hints or suggestions on this would be a huge help!</p>
| 0 | 2016-08-30T19:00:17Z | 39,235,805 | <p>Verify if the datadog package is installed in your environment.</p>
<p>You can do this with this command: </p>
<pre><code>$ pip freeze | grep datadog
</code></pre>
<p>If it's not installed, you can install it with this command:</p>
<pre><code>$ pip install datadog
</code></pre>
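<p>If you prefer checking from inside Python rather than shelling out to pip, a small sketch (Python 3.4+, pure standard library) does the same verification:</p>

```python
import importlib.util

# Returns True when the package can be imported from the current
# environment -- the same environment that will run nginx_dd.py.
def is_installed(name):
    return importlib.util.find_spec(name) is not None

print(is_installed("datadog"))
```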
| 3 | 2016-08-30T19:38:33Z | [
"python",
"datadog"
] |
What does logging.basicConfig do? | 39,235,166 | <p>I was reading about loggers in Python and got confused by the logging.basicConfig() method. The Python docs mention that it sets many configs for the root logger; does that mean it sets the configs only for the root logger, or does it also apply to user-created loggers?</p>
<p>Another doubt: whenever we create a user-defined logger, does it become a child logger of the root logger?</p>
| 0 | 2016-08-30T19:00:29Z | 39,236,919 | <p>To answer your second part:</p>
<pre><code> # Pass no arguments to get the root logger
root_logger = logging.getLogger()
# This logger is a child of the root logger
logger_a = logging.getLogger('foo')
# Configure logger_a here e.g. change threshold level to logging.DEBUG
# This is a child of logger_a
logger_b = logging.getLogger('foo.bar')
</code></pre>
<p>Both <code>logger_a</code> and <code>logger_b</code> are user-defined, but only <code>logger_a</code> will inherit the default configuration of <code>root_logger</code>. If, between lines 4 & 6 of the above, you were to configure <code>logger_a</code> so that it had different settings to <code>root_logger</code>, <code>logger_b</code> would default to <code>logger_a</code>'s settings rather than <code>root_logger</code>'s, since it is a direct descendant of <code>logger_a</code> and not <code>root_logger</code>.</p>
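<p>A short runnable sketch of this inheritance, using <code>getEffectiveLevel()</code> to show where each logger's threshold actually comes from:</p>

```python
import logging

logging.basicConfig(level=logging.WARNING)  # configures the root logger

logger_a = logging.getLogger('foo')
logger_b = logging.getLogger('foo.bar')

# Before any explicit configuration, both inherit the root's WARNING.
print(logger_b.getEffectiveLevel() == logging.WARNING)

logger_a.setLevel(logging.DEBUG)
# 'foo.bar' now inherits from 'foo' rather than from the root logger.
print(logger_b.getEffectiveLevel() == logging.DEBUG)
```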
| 1 | 2016-08-30T20:53:22Z | [
"python",
"logging"
] |
oauth2 POST - twitter | 39,235,198 | <p>I created a script that gets the user's friend list (GET request) and I was successful. Now I am attempting to make a script that will follow a particular user (POST request) and I've been unsuccessful.</p>
<p>here is my oauth function (where the problem lies):</p>
<pre><code>def augment_POST(url,**kwargs) :
secrets = hidden.oauth()
consumer = oauth2.Consumer(secrets['consumer_key'], secrets['consumer_secret'])
token = oauth2.Token(secrets['token_key'],secrets['token_secret'])
oauth_request = oauth2.Request.from_consumer_and_token(consumer, token= token, http_method='POST', http_url=url, parameters=kwargs)
oauth_request.to_postdata() # this returns post data, where should i put it?
oauth_request.sign_request(oauth2.SignatureMethod_HMAC_SHA1(), consumer, token)
return oauth_request.to_url()
</code></pre>
<p>My augment_GET function is the exact same thing except <code>http_method='GET'</code>.</p>
<p>for clarity:</p>
<pre><code>def follow_user(id):
seedurl="https://api.twitter.com/1.1/friendships/create.json"
print 'Attempting to follow: %d' % (id,)
url = augment_POST(seedurl,user_id=id)
connection = urllib.urlopen(url)
data = connection.read()
headers = connection.info().dict
</code></pre>
<p>any help will be greatly appreciated.</p>
| 1 | 2016-08-30T19:02:13Z | 39,260,091 | <p>First it seems you need to <code>import urllib2</code> to make a POST request.<br>
You have to send the POST data that you get from the <code>to_postdata</code> method
using the <code>data</code> argument of <code>urlopen</code>:</p>
<pre><code>def augment_POST(url, **kwargs) :
secrets = hidden.oauth()
consumer = oauth2.Consumer(secrets['consumer_key'],
secrets['consumer_secret'])
token = oauth2.Token(secrets['token_key'],
secrets['token_secret'])
oauth_request = oauth2.Request.from_consumer_and_token(
consumer,
token= token,
http_method='POST',
http_url=url,
parameters=kwargs
)
oauth_request.sign_request(oauth2.SignatureMethod_HMAC_SHA1(),
consumer, token)
# this is the data that must be sent with you POST request
return oauth_request.to_postdata()
def follow_user(id):
url = "https://api.twitter.com/1.1/friendships/create.json"
print 'Attempting to follow: %d' % id
    postdata = augment_POST(url, user_id=id)
# Send the POST request with the data argument
# The url is the same as the data is sent in the body of the request
connection = urllib2.urlopen(url, data=postdata)
data = connection.read()
headers = connection.info().dict
</code></pre>
<hr>
<p>I would recommend to use the <code>requests_oauthlib</code> module which makes all this really easy:</p>
<pre><code>from requests_oauthlib import OAuth1Session
tokens = hidden.oauth()
client = OAuth1Session(tokens['consumer_key'],
tokens['consumer_secret'],
tokens['token_key'],
tokens['token_secret'])
def follow_user(id):
url = "https://api.twitter.com/1.1/friendships/create.json"
print 'Attempting to follow: %d' % id
# for GET requests use client.get and the `params` argument
# instead of the `data` argument
response = client.post(url, data={'user_id': id})
data = response.text
# or even `data = response.json()` to decode the data
headers = response.headers
</code></pre>
| 1 | 2016-08-31T22:39:27Z | [
"python",
"twitter",
"oauth-2.0",
"twitter-oauth"
] |
In python apache beam, is it possible to write elements in a specific order? | 39,235,274 | <p>I'm using beam to process time series data over overlapping windows. At the end of my pipeline I am writing each element to a file. Each element represents a csv row and one of the fields is a timestamp of the associated window. I would like to write the elements in order of that timestamp. Is there a way to do this using the python beam library? </p>
| 2 | 2016-08-30T19:05:57Z | 39,255,667 | <p>While this isn't part of the base distribution, this is something you could implement by processing these elements and sorting them as part of a global window before writing out to a file, with the following caveats:</p>
<ul>
<li>The entire contents of the window would need to fit in memory, or you would need to chunk up the file into smaller global windows.</li>
<li>If you are doing the second option, you'd need to have a strategy for writing the smaller windows in order to the file. </li>
</ul>
| 1 | 2016-08-31T17:30:23Z | [
"python",
"google-cloud-dataflow",
"apache-beam"
] |
PyCharm and reStructuredText (Sphinx) documentation popups | 39,235,275 | <p>Let's imagine, I want to see a docstring popup for one simple method in <strong>PyCharm</strong> 4.5 Community Edition (tried also in 5.0).</p>
<p>I wrote down these docstrings in both <em>epytext</em> syntax (Epydoc generator is unsupported since 2008 and works only for Python2) and <em>reStructuredText</em> syntax (which is used by Sphinx - actively supported generator, used for offical python docs)</p>
<p>The epytext one works in PyCharm documentation popups perfectly</p>
<p><a href="http://i.stack.imgur.com/xTJbX.png" rel="nofollow">PyCharm works with epytext Screenshot</a></p>
<p>But the reStructuredText one doesn't show any parameters at all!</p>
<p><a href="http://i.stack.imgur.com/8lD8l.png" rel="nofollow">PyCharm fails with reStructuredText Screenshot</a></p>
<p>Trying to handle this with PyCharm settings, reading the PyCharm help, searching through the PyCharm bug tracker and using Google couldn't help me find the reason why docstring popups in PyCharm don't work correctly with the community-recommended docstring markup language.</p>
<p>Is this because of low demand for the feature? Perhaps there are some useful alternatives for viewing modern documentation markup inside PyCharm, or even another IDE? I also need to be able to generate pretty formatted HTML doc pages.</p>
<p>I've found <a href="http://stackoverflow.com/questions/31796481/documenting-python-parameters-in-docstring-using-pycharm">another topic</a> here, related to the same issue, but it has been unanswered since last year. So I'm wondering what's wrong with my desire to view modern documentation inside a modern IDE.</p>
<p>Here are my code examples</p>
<pre><code>def find_links(self, issue, link_type):
"""
Find all issues linked with C{issue} with C{link_type}.
@param issue: Issue key
@type issue: str
@param link_type: Accepts either Type Name (like 'Child') or Link Description (like 'child of')
@type link_type: str
@return: Keys of found issues
@rtype: list
"""
result_keys = []
link_list = self.get_link_list(issue)
for link in link_list:
... # omitted
return result_keys
def test_sphinx_docs_method(self, issue, link_type):
"""
Find all issues linked with *issue* with *link_type*.
:param issue: Issue key
:type issue: str
:param link_type: Accepts either Type Name (like 'Child') or Link Description (like 'child of')
:type link_type: str
:return: Keys of found issues
:rtype: list
"""
result_keys = []
link_list = self.get_link_list(issue)
for link in link_list:
... # omitted
return result_keys
</code></pre>
| 1 | 2016-08-30T19:06:02Z | 39,284,912 | <p>I do not know if this feature is present only in recent PyCharm versions, so which version do you have? In my PyCharm CE 2016.2.2 it looks like the screenshot below.</p>
<p><a href="http://i.stack.imgur.com/15b93.png" rel="nofollow"><img src="http://i.stack.imgur.com/15b93.png" alt="enter image description here"></a></p>
<p>Check Preferences > Editor > General > Code Completion to be sure the "Autopopup documentation" option is enabled.</p>
<p>Good luck!</p>
| 0 | 2016-09-02T05:31:27Z | [
"python",
"pycharm",
"python-sphinx",
"epydoc"
] |
How to build the following tables? | 39,235,365 | <p>Hello, I am processing an Excel file and taking some parameters from it to create many tables. The structure of the tables is the following:</p>
<pre><code>"AWK|USL|R|SVKDIKG_tVstiKg|S|[PARAMETER1]~BURAGO~[PARAMETER2]~WVDG~333" "AFUSLR~USLSSHS~Farm~~%ERD_ARGV=MR4567.%VRSD%.%23WF%.333.%RVB%.tRt"
"AWK|USL|R|Bimbo|S|[PARAMETER3]~K~999" "USLo99941VRR.VxV"
"AWK|USL|R|Bimbo|S|[PARAMETER3]~Q~999" "USLo99941VRR.VxV"
"AWK|USL|R|Ford|S|[PARAMETER3]~K~999" "[PARAMETER3]~K"
"AWK|USL|R|Ford|S|[PARAMETER3]~Q~999" "[PARAMETER3]~K"
</code></pre>
<p>The parameters that I need to use to create the tables are contained in an Excel file and look as follows:</p>
<pre><code>123123,RIBICOM,FACTIBLE
050944,TELCOM,423423
.
.
.
42342,CORPS,233243
</code></pre>
<p>The idea is to take the "," as a column separator, where the first column would be "PARAMETER1", the second column "PARAMETER2", and finally the third column "PARAMETER3", row by row. For every row of this file I need to produce one table, filling the placeholders of my template as follows:</p>
<pre><code>"AWK|USL|R|SVKDIKG_tVstiKg|S|123123~BURAGO~RIBICOM~WVDG~333" "AFUSLR~USLSSHS~Farm~~%ERD_ARGV=MR4567.%VRSD%.%23WF%.333.%RVB%.tRt"
"AWK|USL|R|Bimbo|S|FACTIBLE~K~999" "USLo99941VRR.VxV"
"AWK|USL|R|Bimbo|S|FACTIBLE~Q~999" "USLo99941VRR.VxV"
"AWK|USL|R|Ford|S|FACTIBLE~K~999" "FACTIBLE~K"
"AWK|USL|R|Ford|S|FACTIBLE~Q~999" "FACTIBLE~K"
</code></pre>
<p>To be clearer, the placeholders of the template are the following:</p>
<pre><code>[PARAMETER1]
[PARAMETER2]
[PARAMETER3]
</code></pre>
<p>Those are the things that I need to fill.</p>
<p>The example above would be the desired output for the first row. I need to produce a txt file with all the tables concatenated; in order to achieve this I tried:</p>
<pre><code>import pandas as pd
# -*- coding: utf-8 -*-
xl = pd.ExcelFile("Book1.xlsx")
#to clean from duplicates
df = xl.parse("Sheet1")
df=df.drop_duplicates()
#these are the values that I am concatenating below
Parameter1=df[u'Header1 ']
Parameter2=df[u'Header2 ']
Parameter3=df[u'Header3 ']
#This is the dataframe with the corresponding columns
important_Parameters=df[u'Header1 '].astype(str)+","+df[u'Header2 '].astype(str)+","+df[u'Header3 '].astype(str)
#to write my dataframe on disk.
important_Parameters.to_csv("important33.txt", index=False)
</code></pre>
<p>I am not sure what the best approach would be, since I used to do that kind of thing in bash using "sed" and "awk", but this time I would like to try using pandas and Python. I really appreciate any suggestion on how to proceed with this specific task.</p>
| 0 | 2016-08-30T19:11:19Z | 39,236,481 | <p>you try this </p>
<pre><code>import pandas as pd
# -*- coding: utf-8 -*-
df = pd.read_csv("param.csv")
print df
df=df.drop_duplicates()
filename='sample.txt'
print "\n\nReplace with new values"
for index, row in df.iterrows():
print "New Values \n\n"
print row
    f = open(filename)
    filedata = f.read()
    f.close()  # close the template file before the next iteration
filedata=filedata.replace("[PARAMETER1]",row[0])
filedata=filedata.replace('[PARAMETER2]',row[1])
filedata=filedata.replace('[PARAMETER3]',row[2])
print filedata
</code></pre>
<p>output </p>
<pre><code> Parameter1 Parameter2 Parameter3
0 123123A RIBICOM FACTIBLE
1 050944BS TELCOM 423423
Replace with new values
New Values
Parameter1 123123A
Parameter2 RIBICOM
Parameter3 FACTIBLE
Name: 0, dtype: object
AWK|USL|R|SVKDIKG_tVstiKg|S|123123A~BURAGO~RIBICOM~WVDG~333 AFUSLR~USLSSHS~Farm~
~%ERD_ARGV=MR4567.%VRSD%.%23WF%.333.%RVB%.tRt
AWK|USL|R|Bimbo|S|FACTIBLE~K~999 USLo99941VRR.VxV
AWK|USL|R|Bimbo|S|FACTIBLE~Q~999 USLo99941VRR.VxV
AWK|USL|R|Ford|S|FACTIBLE~K~999 FACTIBLE~K
AWK|USL|R|Ford|S|FACTIBLE~Q~999 FACTIBLE~K
New Values
Parameter1 050944BS
Parameter2 TELCOM
Parameter3 423423
Name: 1, dtype: object
AWK|USL|R|SVKDIKG_tVstiKg|S|050944BS~BURAGO~TELCOM~WVDG~333 AFUSLR~USLSSHS~Farm~
~%ERD_ARGV=MR4567.%VRSD%.%23WF%.333.%RVB%.tRt
AWK|USL|R|Bimbo|S|423423~K~999 USLo99941VRR.VxV
AWK|USL|R|Bimbo|S|423423~Q~999 USLo99941VRR.VxV
AWK|USL|R|Ford|S|423423~K~999 423423~K
AWK|USL|R|Ford|S|423423~Q~999 423423~K
</code></pre>
<p>Sample.txt</p>
<pre><code>"AWK|USL|R|SVKDIKG_tVstiKg|S|[PARAMETER1]~BURAGO~[PARAMETER2]~WVDG~333" "AFUSLR~USLSSHS~Farm~~%ERD_ARGV=MR4567.%VRSD%.%23WF%.333.%RVB%.tRt"
"AWK|USL|R|Bimbo|S|[PARAMETER3]~K~999" "USLo99941VRR.VxV"
"AWK|USL|R|Bimbo|S|[PARAMETER3]~Q~999" "USLo99941VRR.VxV"
"AWK|USL|R|Ford|S|[PARAMETER3]~K~999" "[PARAMETER3]~K"
"AWK|USL|R|Ford|S|[PARAMETER3]~Q~999" "[PARAMETER3]~K"
</code></pre>
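<p>If you want the filled tables written out as one concatenated txt file rather than printed, the same replace logic can accumulate the blocks first (a sketch using a shortened template and hard-coded rows):</p>

```python
# One line of the template; the real script reads the full template
# from sample.txt instead.
template = '"AWK|USL|R|Bimbo|S|[PARAMETER3]~K~999" "USLo99941VRR.VxV"'
rows = [("123123", "RIBICOM", "FACTIBLE"),
        ("050944", "TELCOM", "423423")]

blocks = []
for p1, p2, p3 in rows:
    # Fill all three placeholders for this row of parameters.
    filled = (template.replace("[PARAMETER1]", p1)
                      .replace("[PARAMETER2]", p2)
                      .replace("[PARAMETER3]", p3))
    blocks.append(filled)

output = "\n".join(blocks)
print(output)
# e.g. write it with: open("important33.txt", "w").write(output)
```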
| 2 | 2016-08-30T20:23:46Z | [
"python",
"pandas"
] |
read_fwf in pandas in Python does not use comment character if colspecs argument does not include first column | 39,235,377 | <p>When reading fixed-width files using the <code>read_fwf</code> function in pandas (0.18.1) with Python (3.4.3), it is possible to specify a comment character using the <code>comment</code> argument. I expected that all lines beginning with the comment character would be ignored. However, if you do not specify the first column in the file in any column in <code>colspecs</code>, the comment character does not appear to be used.</p>
<pre><code>import io, sys
import pandas as pd
sys.version
# '3.4.3 (v3.4.3:9b73f1c3e601, Feb 24 2015, 22:43:06) [MSC v.1600 32 bit (Intel)]'
pd.__version__
# '0.18.1'
# Two input files, first line is comment, second line is data.
# Second file has a column (with the letter A)
# that I don't want at start of data.
string = "#\n1K\n"
off_string = "#\nA1K\n"
# When using skiprows to skip commented row, both work.
pd.read_fwf(io.StringIO(string), colspecs = [(0,1), (1,2)], skiprows = 1, header = None)
# 0 1
# 0 1 K
pd.read_fwf(io.StringIO(off_string), colspecs = [(1,2), (2,3)], skiprows = 1, header = None)
# 0 1
# 0 1 K
# If a comment character is specified, it only works when the colspecs
# includes the column with the comment character.
pd.read_fwf(io.StringIO(string), colspecs = [(0,1), (1,2)], comment = '#', header = None)
# 0 1
# 0 1 K
pd.read_fwf(io.StringIO(off_string), colspecs = [(1,2), (2,3)], comment = '#', header = None)
# 0 1
# 0 NaN NaN
# 1 1.0 K
</code></pre>
<p>Is there any documentation specifically referring to this? The simple workaround is to include the first column and then remove it after, but I wanted to verify if this was a bug or my misunderstanding the expected behaviour.</p>
| 5 | 2016-08-30T19:12:09Z | 39,282,739 | <p>I think this is a bug, the spec in the documentation says "if the line starts with a comment then the entire line is skipped". The problem is that columns are subsetted by <code>FixedWidthReader.__next__</code> before they are checked for comments (in <code>PythonParser</code> or <code>CParserWrapper</code>). The relevant code is in <code>io/parsers.py</code>. </p>
| 5 | 2016-09-02T00:37:32Z | [
"python",
"python-3.x",
"pandas"
] |
Python - Auto Detect Email Content Encoding | 39,235,436 | <p>I am writing a script to process emails, and I have access to the raw string content of the emails. </p>
<p>I am currently looking for the string "Content-Transfer-Encoding:" and scanning the characters that follow immediately after, to determine the encoding. Example encodings: base64 or 7bit or quoted-printable ..</p>
<p>Is there a better way to automatically determine the email encoding(at least a more pythonic way)?</p>
<p>Thank you.</p>
| 1 | 2016-08-30T19:15:43Z | 39,235,771 | <p><a href="https://stackoverflow.com/questions/436220/python-is-there-a-way-to-determine-the-encoding-of-text-file">Python: Is there a way to determine the encoding of text file?</a> has some good answers. Basically there's no way to do it perfectly reliably, and the initial approach you're using is the best (and should be checked first), but if it isn't there then there are a few options that can work sometimes.</p>
| 0 | 2016-08-30T19:36:13Z | [
"python",
"email",
"encoding",
"html-email"
] |
Python - Auto Detect Email Content Encoding | 39,235,436 | <p>I am writing a script to process emails, and I have access to the raw string content of the emails. </p>
<p>I am currently looking for the string "Content-Transfer-Encoding:" and scanning the characters that follow immediately after, to determine the encoding. Example encodings: base64 or 7bit or quoted-printable ..</p>
<p>Is there a better way to automatically determine the email encoding (at least a more Pythonic way)?</p>
<p>Thank you.</p>
| 1 | 2016-08-30T19:15:43Z | 39,236,065 | <p>You may use this standard Python package: <a href="https://docs.python.org/2/library/email.html" rel="nofollow">email</a>.</p>
<p>For example:</p>
<pre><code>import email
raw = """From: John Doe <example@example.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: quoted-printable
Hi there!
"""
my_email = email.message_from_string(raw)
print my_email["Content-Transfer-Encoding"]
</code></pre>
<p>See other examples <a href="https://docs.python.org/3/library/email-examples.html" rel="nofollow">here</a>.</p>
| 1 | 2016-08-30T19:56:22Z | [
"python",
"email",
"encoding",
"html-email"
] |
Error with calculator type program in python ValueError | 39,235,451 | <p>This is my script. I am new to Python but have come this far; please be forgiving in the answers and keep in mind that I am new to this.</p>
<pre><code>import functools
numbers=[]
def means():
end_mean = functools.reduce(lambda x, y: x + y, numbers) / len(numbers)
print(end_mean)
def sum():
end_sum = functools.reduce(lambda x, y: x + y, numbers)
print(end_sum)
def whatDo():
print('Input Extra Numbers '+str(len(numbers)+1)+' (or nothing to close):')
try:
number= int(input())
numbers.append(number)
except:
print('What do you want to do?')
answer = input()
if answer == "mean":
means()
while True:
print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
try:
number= int(input())
numbers.append(number)
except:
print('What do you want to do?')
answer = input()
if answer == "mean":
means()
print('Do you want anything else?')
reply=input()
if reply=='no':
break
elif reply--'yes':
whatDo()
else:
break
</code></pre>
<p>However i get this:</p>
<pre><code>Traceback (most recent call last):
File "E:/Python/calculator.py", line 26, in <module>
number= int(input())
ValueError: invalid literal for int() with base 10: ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/Python/calculator.py", line 37, in <module>
elif reply--'yes':
TypeError: bad operand type for unary -: 'str'
</code></pre>
<p>after the 'Do you want anything else' and i enter 'yes'.</p>
| 0 | 2016-08-30T19:16:49Z | 39,235,507 | <p><code>elif reply--'yes':</code> should be <code>elif reply == 'yes':</code></p>
| 3 | 2016-08-30T19:20:55Z | [
"python"
] |
Error with calculator type program in python ValueError | 39,235,451 | <p>This is my script. I am new to Python but have come this far; please be forgiving in the answers and keep in mind that I am new to this.</p>
<pre><code>import functools
numbers=[]
def means():
end_mean = functools.reduce(lambda x, y: x + y, numbers) / len(numbers)
print(end_mean)
def sum():
end_sum = functools.reduce(lambda x, y: x + y, numbers)
print(end_sum)
def whatDo():
print('Input Extra Numbers '+str(len(numbers)+1)+' (or nothing to close):')
try:
number= int(input())
numbers.append(number)
except:
print('What do you want to do?')
answer = input()
if answer == "mean":
means()
while True:
print('Input Number '+str(len(numbers)+1)+' (or nothing to close):')
try:
number= int(input())
numbers.append(number)
except:
print('What do you want to do?')
answer = input()
if answer == "mean":
means()
print('Do you want anything else?')
reply=input()
if reply=='no':
break
elif reply--'yes':
whatDo()
else:
break
</code></pre>
<p>However i get this:</p>
<pre><code>Traceback (most recent call last):
File "E:/Python/calculator.py", line 26, in <module>
number= int(input())
ValueError: invalid literal for int() with base 10: ''
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/Python/calculator.py", line 37, in <module>
elif reply--'yes':
TypeError: bad operand type for unary -: 'str'
</code></pre>
<p>after the 'Do you want anything else' and i enter 'yes'.</p>
| 0 | 2016-08-30T19:16:49Z | 39,235,528 | <p>You had a typo</p>
<pre><code>elif reply--'yes'
</code></pre>
<p>it should be, of course</p>
<pre><code>elif reply=='yes'
</code></pre>
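<p>For context, the traceback arises because <code>reply--'yes'</code> parses as <code>reply - (-'yes')</code>, and the unary minus on the string is evaluated first:</p>

```python
# Reproducing the error from the question: unary minus is not defined
# for strings, which is exactly what raises the TypeError.
try:
    -'yes'
except TypeError as e:
    message = str(e)
print(message)
```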
| 1 | 2016-08-30T19:21:44Z | [
"python"
] |
PyAutoGUI: How to know if the left mouse click is pressed | 39,235,454 | <p>I am using the PyAutoGUI library; how can I know if the left mouse button is pressed?</p>
<p>This is what I want to do : </p>
<pre><code>if(leftmousebuttonpressed):
print("left")
else:
print("nothing")
</code></pre>
<p>Help appreciated.</p>
| 0 | 2016-08-30T19:17:11Z | 39,235,660 | <p>I don't think you can use PyAutoGUI to listen for mouse clicks.</p>
<p>Instead try Pyhook (from their source page):</p>
<pre><code>import pythoncom, pyHook
def OnMouseEvent(event):
# called when mouse events are received
print 'MessageName:',event.MessageName
print 'Message:',event.Message
print 'Time:',event.Time
print 'Window:',event.Window
print 'WindowName:',event.WindowName
print 'Position:',event.Position
print 'Wheel:',event.Wheel
print 'Injected:',event.Injected
print '---'
# return True to pass the event to other handlers
return True
# create a hook manager
hm = pyHook.HookManager()
# watch for all mouse events
hm.MouseAll = OnMouseEvent
# set the hook
hm.HookMouse()
# wait forever
pythoncom.PumpMessages()
</code></pre>
<p>I believe you can do this:</p>
<pre><code>import pyHook, pythoncom
def left_down():
print("left down")
def right_down():
print("right down")
hm = pyHook.HookManager()
hm.SubscribeMouseLeftDown(left_down)
hm.SubscribeMouseRightDown(right_down)
hm.HookMouse()
pythoncom.PumpMessages()
hm.UnhookMouse()
</code></pre>
<p>They also handle keyboard events; just look up their API.</p>
<p>Edit:
Here's their mini tutorial: <a href="https://sourceforge.net/p/pyhook/wiki/PyHook_Tutorial/" rel="nofollow">https://sourceforge.net/p/pyhook/wiki/PyHook_Tutorial/</a></p>
<p>Also, pyHook is only for Windows (thanks to John Doe for pointing it out).</p>
| 0 | 2016-08-30T19:29:58Z | [
"python",
"pyautogui"
] |
Python library difference? | 39,235,568 | <p>If I run </p>
<pre><code>bob@me:/usr$ find . -name 'libpython2.7.so'
</code></pre>
<p>I get two hits:</p>
<pre><code>./lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so
./lib/x86_64-linux-gnu/libpython2.7.so
</code></pre>
<p>What's the difference?</p>
| 3 | 2016-08-30T19:24:02Z | 39,235,593 | <p>The copy under the <code>config-x86_64-linux-gnu</code> directory belongs to Python's build configuration (the files used when building or linking against Python), while the one under <code>lib/x86_64-linux-gnu</code> is the actual shared library loaded at run time; on Debian-based systems the former is typically a symlink to the latter.</p>
| 3 | 2016-08-30T19:26:19Z | [
"python",
"python-2.7",
"shared-libraries"
] |
When QComboBox is set editable | 39,235,687 | <p>The code below creates QComboBox and QPushButton both assigned to the same layout. Combobox is set to be editable so the user is able to type a new combobox item's value.
If the user hits the <strong>Tab</strong> key (instead of Enter), the new value will not be added to the ComboBox.
Question: how can I make sure the ComboBox's items are updated with the new value even if the user leaves the ComboBox with the <strong>Tab</strong> key?</p>
<p><a href="http://i.stack.imgur.com/wPZPN.png" rel="nofollow"><img src="http://i.stack.imgur.com/wPZPN.png" alt="enter image description here"></a></p>
<pre><code>from PyQt4 import QtGui
def comboActivated(arg=None):
print '\n ...comboActivated: %s'%arg
widget = QtGui.QWidget()
layout = QtGui.QVBoxLayout()
widget.setLayout(layout)
combo = QtGui.QComboBox()
combo.setEditable(True)
combo.addItems(['One','Two','Three'])
combo.activated.connect(comboActivated)
layout.addWidget(combo)
layout.addWidget(QtGui.QPushButton('Push'))
widget.show()
</code></pre>
| 1 | 2016-08-30T19:31:49Z | 39,236,399 | <p>When a user edits the text in the box, the <code>editTextChanged()</code> signal is emitted with the edited text as its argument. In addition, when the widget itself loses focus, as when the user types <code>Tab</code> to move to the button, the widget's <code>focusOutEvent()</code> handler is called. Its argument is a <code>QFocusEvent</code>, which you can query for the reason focus was lost: the event's <code>reason()</code> method would return <code>Qt.TabFocusReason</code>, for example, if the user hit <code>Tab</code> to leave the widget.</p>
<p>You can connect a slot to the <code>editTextChanged()</code> signal, or reimplement <code>focusOutEvent()</code> (or both), so that when the user leaves the widget after editing text, you process the text and add it to the box's list of values.</p>
<p>You may also want to look into the <code>QValidator</code> class and its subclasses, which you attach to widgets with editable text, and define the types of valid input for the widget (e.g., integers, text, etc.). This is the best and easiest way to verify a user's input for editable widgets.</p>
| 1 | 2016-08-30T20:18:40Z | [
"python",
"pyqt",
"pyside",
"qcombobox"
] |
Calculate local time derivative of Series | 39,235,712 | <p>I have data that I'm importing from an hdf5 file. So, it comes in looking like this:</p>
<pre><code>import pandas as pd
tmp=pd.Series([1.,3.,4.,3.,5.],['2016-06-27 23:52:00','2016-06-27 23:53:00','2016-06-27 23:54:00','2016-06-27 23:55:00','2016-06-27 23:59:00'])
tmp.index=pd.to_datetime(tmp.index)
>>>tmp
2016-06-27 23:52:00 1.0
2016-06-27 23:53:00 3.0
2016-06-27 23:54:00 4.0
2016-06-27 23:55:00 3.0
2016-06-27 23:59:00 5.0
dtype: float64
</code></pre>
<p>I would like to find the local slope of the data. If I just do tmp.diff() I do get the local change in value. But, I want to get the change in value per second (time derivative)
I would like to do something like this, but this is the wrong way to do it and gives an error:</p>
<pre><code>tmp.diff()/tmp.index.diff()
</code></pre>
<p>I have figured out that I can do it by converting all the data to a DataFrame, but that seems inefficient. Especially, since I'm going to have to work with a large, on disk file in chunks.
Is there a better way to do it other than this:</p>
<pre><code>df=pd.DataFrame(tmp)
df['secvalue']=df.index.astype(np.int64)/1e+9
df['slope']=df['Value'].diff()/df['secvalue'].diff()
</code></pre>
| 4 | 2016-08-30T19:33:18Z | 39,235,861 | <p>Use <code>numpy.gradient</code></p>
<pre><code>import numpy as np
import pandas as pd
slope = pd.Series(np.gradient(tmp.values), tmp.index, name='slope')
</code></pre>
<p>To address the unequal temporal index, I'd resample over minutes and interpolate. Then my gradients would be over equal intervals.</p>
<pre><code>tmp_ = tmp.resample('T').interpolate()
slope = pd.Series(np.gradient(tmp_.values), tmp_.index, name='slope')
df = pd.concat([tmp_.rename('values'), slope], axis=1)
df
</code></pre>
<p><a href="http://i.stack.imgur.com/SCX6d.png" rel="nofollow"><img src="http://i.stack.imgur.com/SCX6d.png" alt="enter image description here"></a></p>
<pre><code>df.plot()
</code></pre>
<p><a href="http://i.stack.imgur.com/HQRC0.png" rel="nofollow"><img src="http://i.stack.imgur.com/HQRC0.png" alt="enter image description here"></a></p>
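<p>If you specifically want the change per second over the original, unevenly spaced samples (and without building a DataFrame first), the elapsed seconds can be taken straight from the index; a sketch:</p>

```python
import pandas as pd

tmp = pd.Series(
    [1.0, 3.0, 4.0, 3.0, 5.0],
    pd.to_datetime(['2016-06-27 23:52:00', '2016-06-27 23:53:00',
                    '2016-06-27 23:54:00', '2016-06-27 23:55:00',
                    '2016-06-27 23:59:00']))

# Seconds between consecutive samples, computed from the index itself.
dt_seconds = tmp.index.to_series().diff().dt.total_seconds()

# Value change per second; the first entry is NaN, as with diff().
slope = tmp.diff() / dt_seconds
print(slope)
```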
| 4 | 2016-08-30T19:41:47Z | [
"python",
"datetime",
"pandas"
] |