Dataset columns:
title: string (10 to 172 chars)
question_id: int64 (469 to 40.1M)
question_body: string (22 to 48.2k chars)
question_score: int64 (-44 to 5.52k)
question_date: string (20 chars)
answer_id: int64 (497 to 40.1M)
answer_body: string (18 to 33.9k chars)
answer_score: int64 (-38 to 8.38k)
answer_date: string (20 chars)
tags: list
Timed method in Python
39,272,925
<p>How do I have part of a Python script (only one method; the whole script runs 24/7) run every day at set times, exactly on every 20th minute, like 12:20, 12:40, 13:00 of every hour?</p> <p>I cannot use cron. I tried periodic execution, but that is not as accurate as I would like: it depends on the script's start time.</p>
2
2016-09-01T13:30:24Z
39,273,687
<p>Module <a href="https://github.com/dbader/schedule" rel="nofollow">schedule</a> may be useful for this. See answer to <a href="http://stackoverflow.com/questions/373335/how-do-i-get-a-cron-like-scheduler-in-python">How do I get a Cron like scheduler in Python?</a> for details.</p>
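If a third-party dependency is unwanted, the same effect is achievable with the standard library alone; the following is a sketch (not from the `schedule` module) that re-aligns to the wall clock on every iteration, so it does not drift with the script's start time:

```python
import datetime
import time

def seconds_until_next_boundary(now, interval_minutes=20):
    """Seconds from `now` until the next hh:00/hh:20/hh:40 boundary."""
    interval = interval_minutes * 60
    # Seconds elapsed since the top of the current hour.
    elapsed = now.minute * 60 + now.second + now.microsecond / 1e6
    return interval - (elapsed % interval)

def run_forever(job, interval_minutes=20):
    """Sleep until each boundary, then run `job`. Drift-free because the
    wait is recomputed from the clock on every iteration."""
    while True:
        time.sleep(seconds_until_next_boundary(datetime.datetime.now(),
                                               interval_minutes))
        job()
```

Note that exactly on a boundary the helper returns a full interval, so a job that finishes quickly will not fire twice in the same minute.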
0
2016-09-01T14:04:35Z
[ "python" ]
Python regex similar expressions
39,272,951
<p>I have a file with two different types of data I'd like to parse with a regex; however, the data is similar enough that I can't find the correct way to distinguish it.</p> <p>Some lines in my file are of form:</p> <pre><code>AED=FRI AFN=FRI:SAT AMD=SUN:SAT </code></pre> <p>Other lines are of form</p> <pre><code>AED=20180823 AMD=20150914 AMD=20150921 </code></pre> <p>The remaining lines are headers and I'd like to discard them. For example</p> <pre><code>[HEADER: BUSINESS DATE=20160831] </code></pre> <p>My solution attempt so far is to match first three capital letters and an equal sign, </p> <pre><code>r'\b[A-Z]{3}=\b' </code></pre> <p>but after that I'm not sure how to distinguish between dates (eg 20180823) and days (eg FRI:SAT:SUN).</p> <p>The results I'd expect from these parsing functions:</p> <pre><code>Regex weekday_rx = new Regex(&lt;EXPRESSION FOR TYPES LIKE AED=FRI&gt;); Regex date_rx = new Regex(&lt;EXPRESSION FOR TYPES LIKE AED=20160816&gt;); weekdays = [weekday_rx.Match(line) for line in infile.read()] dates = [date_rx.Match(line) for line in infile.read()] </code></pre>
1
2016-09-01T13:31:31Z
39,273,094
<pre><code>r'\S*\d$' </code></pre> <p>matches any run of non-whitespace characters that ends in a digit, so it will match <code>AED=20180823</code>.</p> <pre><code>r'\S*[a-zA-Z]$' </code></pre> <p>matches any run of non-whitespace characters that ends in a letter, so it will match <code>AED=FRI</code>, <code>AFN=FRI:SAT</code> and <code>AMD=SUN:SAT</code>.</p> <p>Neither will match</p> <p><code>[HEADER: BUSINESS DATE=20160831]</code></p> <p>This will match both:</p> <pre><code>r'(\S*[a-zA-Z]$|\S*\d$)' </code></pre> <p>Replacing the <code>*</code> with the number of occurrences you expect is safer; the <code>(a|b)</code> form means match <code>a</code> or match <code>b</code>.</p>
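A quick sanity check of those patterns in Python, using the sample lines from the question:

```python
import re

lines = [
    "AED=FRI",
    "AFN=FRI:SAT",
    "AED=20180823",
    "[HEADER: BUSINESS DATE=20160831]",
]

# Lines ending in a digit are dates; lines ending in a letter are weekdays.
# The header line ends in ']' and so matches neither pattern.
dates = [l for l in lines if re.search(r'\S*\d$', l)]
weekdays = [l for l in lines if re.search(r'\S*[a-zA-Z]$', l)]
```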
2
2016-09-01T13:38:11Z
[ "python", "regex" ]
Python regex similar expressions
39,272,951
<p>I have a file with two different types of data I'd like to parse with a regex; however, the data is similar enough that I can't find the correct way to distinguish it.</p> <p>Some lines in my file are of form:</p> <pre><code>AED=FRI AFN=FRI:SAT AMD=SUN:SAT </code></pre> <p>Other lines are of form</p> <pre><code>AED=20180823 AMD=20150914 AMD=20150921 </code></pre> <p>The remaining lines are headers and I'd like to discard them. For example</p> <pre><code>[HEADER: BUSINESS DATE=20160831] </code></pre> <p>My solution attempt so far is to match first three capital letters and an equal sign, </p> <pre><code>r'\b[A-Z]{3}=\b' </code></pre> <p>but after that I'm not sure how to distinguish between dates (eg 20180823) and days (eg FRI:SAT:SUN).</p> <p>The results I'd expect from these parsing functions:</p> <pre><code>Regex weekday_rx = new Regex(&lt;EXPRESSION FOR TYPES LIKE AED=FRI&gt;); Regex date_rx = new Regex(&lt;EXPRESSION FOR TYPES LIKE AED=20160816&gt;); weekdays = [weekday_rx.Match(line) for line in infile.read()] dates = [date_rx.Match(line) for line in infile.read()] </code></pre>
1
2016-09-01T13:31:31Z
39,273,201
<p>The following is a solution in Python :)</p> <pre><code>import re p = re.compile(r'\b([A-Z]{3})=((\d)+|([A-Z])+)') str_test_01 = "AMD=SUN:SAT" m = p.search(str_test_01) print (m.group(1)) print (m.group(2)) str_test_02 = "AMD=20150921" m = p.search(str_test_02) print (m.group(1)) print (m.group(2)) """ &lt;Output&gt; AMD SUN AMD 20150921 """ </code></pre>
2
2016-09-01T13:42:55Z
[ "python", "regex" ]
Python regex similar expressions
39,272,951
<p>I have a file with two different types of data I'd like to parse with a regex; however, the data is similar enough that I can't find the correct way to distinguish it.</p> <p>Some lines in my file are of form:</p> <pre><code>AED=FRI AFN=FRI:SAT AMD=SUN:SAT </code></pre> <p>Other lines are of form</p> <pre><code>AED=20180823 AMD=20150914 AMD=20150921 </code></pre> <p>The remaining lines are headers and I'd like to discard them. For example</p> <pre><code>[HEADER: BUSINESS DATE=20160831] </code></pre> <p>My solution attempt so far is to match first three capital letters and an equal sign, </p> <pre><code>r'\b[A-Z]{3}=\b' </code></pre> <p>but after that I'm not sure how to distinguish between dates (eg 20180823) and days (eg FRI:SAT:SUN).</p> <p>The results I'd expect from these parsing functions:</p> <pre><code>Regex weekday_rx = new Regex(&lt;EXPRESSION FOR TYPES LIKE AED=FRI&gt;); Regex date_rx = new Regex(&lt;EXPRESSION FOR TYPES LIKE AED=20160816&gt;); weekdays = [weekday_rx.Match(line) for line in infile.read()] dates = [date_rx.Match(line) for line in infile.read()] </code></pre>
1
2016-09-01T13:31:31Z
39,273,302
<p>Use pipes to express alternatives in regex. The pattern '[A-Z]{3}:[A-Z]{3}|[A-Z]{3}' will match both ABC and ABC:ABC. Then use parentheses to group the results:</p> <pre><code>import re match = re.match(r'([A-Z]{3}:[A-Z]{3})|([A-Z]{3})', 'ABC:ABC') assert match.groups() == ('ABC:ABC', None) match = re.match(r'([A-Z]{3}:[A-Z]{3})|([A-Z]{3})', 'ABC') assert match.groups() == (None, 'ABC') </code></pre> <p>You can research the concept of named groups to make this even more readable. Also, take a look at the docs for the match object for useful info and methods.</p>
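As the last paragraph suggests, named groups make the two alternatives self-documenting; a minimal sketch:

```python
import re

# Named groups label each alternative, so callers can ask for
# m.group('pair') instead of remembering positional group numbers.
pattern = re.compile(r'(?P<pair>[A-Z]{3}:[A-Z]{3})|(?P<single>[A-Z]{3})')

# Alternatives are tried left to right, so the longer form wins here.
m1 = pattern.match('ABC:ABC')
m2 = pattern.match('ABC')
```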
2
2016-09-01T13:47:55Z
[ "python", "regex" ]
Suggestions to handle multiple python pandas scripts
39,273,012
<p>I currently have several Python pandas scripts that I keep separate, for two reasons: 1) readability, and 2) sometimes I am interested in the output of these individual partial scripts.</p> <p>However, generally, the CSV file output of one of these scripts is the CSV input of the next, and in each I have to re-parse the datetimes, which is inconvenient.</p> <p>What best practices do you suggest for this task? Is it better to just combine all the scripts into one for when I'm interested in running the whole program, or is there a more Python/pandas way to deal with this?</p> <p>Thank you; I appreciate all your comments.</p>
0
2016-09-01T13:34:03Z
39,273,086
<p>Instead of writing a CSV output which you have to re-parse, you can write and read the <code>pandas.DataFrame</code> in efficient binary format with the methods <code>pandas.DataFrame.to_pickle()</code> and <code>pandas.read_pickle()</code>, respectively.</p>
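A round-trip sketch of that suggestion (the staging file name is made up for the example), showing that an already-parsed datetime column survives pickling, so no re-parsing is needed downstream:

```python
import os
import tempfile

import pandas as pd

# A frame with an already-parsed datetime column, as in the pipeline above.
df = pd.DataFrame({"when": pd.to_datetime(["2016-09-01", "2016-09-02"]),
                   "value": [1, 2]})

# Stage it between scripts in binary form instead of CSV.
path = os.path.join(tempfile.mkdtemp(), "stage1.pkl")
df.to_pickle(path)
df2 = pd.read_pickle(path)  # dtypes come back intact; no parse_dates step
```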
1
2016-09-01T13:37:56Z
[ "python", "pandas" ]
Suggestions to handle multiple python pandas scripts
39,273,012
<p>I currently have several Python pandas scripts that I keep separate, for two reasons: 1) readability, and 2) sometimes I am interested in the output of these individual partial scripts.</p> <p>However, generally, the CSV file output of one of these scripts is the CSV input of the next, and in each I have to re-parse the datetimes, which is inconvenient.</p> <p>What best practices do you suggest for this task? Is it better to just combine all the scripts into one for when I'm interested in running the whole program, or is there a more Python/pandas way to deal with this?</p> <p>Thank you; I appreciate all your comments.</p>
0
2016-09-01T13:34:03Z
39,273,803
<p>If I understand your question correctly, using modules would be my suggested approach.</p> <p>You can keep your scripts separate and import them as modules when needed from a dependent script. For example:</p> <p>Script 1:</p> <pre><code>import pandas def create_pandas_dataframe(): # Creating a dataframe ... df = pandas.DataFrame() return df def run(): # Run the script 1 df = create_pandas_dataframe() # Here, call other functions specific to this script if __name__ == '__main__': # Run the script run() </code></pre> <p>Script 2:</p> <pre><code>from script_1 import create_pandas_dataframe def use_pandas_dataframe(a_df): print(a_df) if __name__ == '__main__': df = create_pandas_dataframe() use_pandas_dataframe(df) </code></pre> <p>This way, you can directly use the output of an existing function as input for another one without them being in the same script.</p>
1
2016-09-01T14:09:17Z
[ "python", "pandas" ]
reduceByKey in spark for adding tuples
39,273,023
<p>Consider an Rdd with below dataset where 10000241 is the key and remaining are values</p> <pre><code> ('10000241',([0,0,1],[None,None,'RX'])) ('10000241',([0,2,0],[None,'RX','RX'])) ('10000241',([3,0,0],['RX',None,None])) pv1 = rdd.reduceBykey(lambda x,y :( addtup(x[0],y[0]), addtup(x[1],y[1]), )) def addtup(t1,t2): j =() for k,v in enumerate(t1): j = j + (t1[k] + t2[k],) return j </code></pre> <p>The final output i want is (10000241,(3,2,1)('RX','RX','RX)) but i get the error of cant add none type to none type or nonetype to Str .how can i overcome this issue?</p>
0
2016-09-01T13:34:38Z
39,273,351
<p>If I understood you correctly, you want to sum the numbers in the first list and apply a logical <code>or</code> across the second?</p> <p>I think you should rewrite your function as follows:</p> <pre><code>def addtup(t1,t2): left = list(map(lambda x: sum(x), zip(t1[0], t2[0]))) right = list(map(lambda x: x[0] or x[1], zip(t1[1], t2[1]))) return (left, right) </code></pre> <p>Then you can use it like this (note the capital K in <code>reduceByKey</code>):</p> <pre><code>rdd.reduceByKey(addtup) </code></pre> <p>Here is a demonstration:</p> <pre><code>import functools data = (([0,0,1],[None,None,'RX']), ([0,2,0],[None,'RX','RX']), ([3,0,0],['RX',None,None])) functools.reduce(addtup, data) #=&gt; ([3, 2, 1], ['RX', 'RX', 'RX']) </code></pre>
0
2016-09-01T13:50:18Z
[ "python", "apache-spark", "pyspark", "apache-spark-sql" ]
Python - Scrapy data lists
39,273,143
<p>I have the following piece of code in my scraper:</p> <pre><code>import scrapy import os import re from scrapy.spiders import CrawlSpider, Rule from scrapy.linkextractors import LinkExtractor class MySpider(CrawlSpider): handle_httpstatus_list = [301,302,404,200,500] name = 'rust' allowed_domains = ['example.com'] start_urls = ['http://example.com'] rules = ( # Extract links matching 'category.php' (but not matching 'subsection.php') # and follow links from them (since no callback means follow=True by default). # Extract links matching 'item.php' and parse them with the spider's method parse_item Rule(LinkExtractor(), callback='parse_item', follow=True), ) def parse_item(self, response): a = [] if response.url == "http://example.com/": rlink = response.xpath('//a/@href').extract() litarget = response.xpath('//a/@target').extract() lirel = response.xpath('//a/@rel').extract() litext = response.xpath('//a/text()').extract() #print rlink for i, z in zip(rlink, litarget): #print i if i == "/some/link/": print z </code></pre> <p>This doesn't work for me :( </p> <p>I want to achieve the following thing: Since the extracted data is in lists: <code>rlink</code>, <code>litarget</code>, <code>lirel</code> and <code>litext</code>. I want to be able to extract corresponding information based on another one from the lists: </p> <pre><code>if link in response.xpath('//a/@href').extract() == "/some/link" </code></pre> <blockquote> <p>print its target, rel and text attribute.</p> </blockquote> <p>Can you help me solve that.</p> <p>Thanks!</p>
0
2016-09-01T13:40:09Z
39,274,047
<p>You probably run your scrapy spider from command line.</p> <p>In that case I would suggest you to debug your spider using pycharm ide.</p> <p>Just add this code inside <code>yourproject</code> directory and name it something like <code>main.py</code></p> <pre><code># -*- coding: utf-8 -*- import logging from scrapy.crawler import CrawlerRunner from scrapy.utils.log import configure_logging from scrapy.utils.project import get_project_settings from twisted.internet import reactor, defer from yourproject.spiders.my_spider import MySpider configure_logging(install_root_handler=False) logging.basicConfig( filename='log.txt', filemode='w', format='%(asctime)s: %(levelname)s: %(message)s', datefmt='%Y-%m-%d %H:%M:%S', level=logging.DEBUG ) console = logging.StreamHandler() console.setLevel(logging.DEBUG) # uncomment this line to print logs in console #logging.getLogger('').addHandler(console) logger = logging.getLogger(__name__) settings=get_project_settings() runner = CrawlerRunner(settings=settings) @defer.inlineCallbacks def crawl(): yield runner.crawl(MySpider) reactor.stop() crawl() reactor.run() # the script will block here until the last crawl call is finished </code></pre> <p>Afterwards use pycharm as follows</p> <ul> <li>connect your scrapy python interpreter to your pycharm project</li> <li>set <code>main.py</code> as the start script</li> <li>add a breakpoint in <code>parse_item</code></li> <li>press/run debug </li> </ul> <p>Hopefully this will fix your problem.</p>
0
2016-09-01T14:19:12Z
[ "python", "scrapy" ]
Python - Scrapy data lists
39,273,143
<p>I have the following piece of code in my scraper:</p> <pre><code>import scrapy import os import re from scrapy.spiders import CrawlSpider, Rule from scrapy.linkextractors import LinkExtractor class MySpider(CrawlSpider): handle_httpstatus_list = [301,302,404,200,500] name = 'rust' allowed_domains = ['example.com'] start_urls = ['http://example.com'] rules = ( # Extract links matching 'category.php' (but not matching 'subsection.php') # and follow links from them (since no callback means follow=True by default). # Extract links matching 'item.php' and parse them with the spider's method parse_item Rule(LinkExtractor(), callback='parse_item', follow=True), ) def parse_item(self, response): a = [] if response.url == "http://example.com/": rlink = response.xpath('//a/@href').extract() litarget = response.xpath('//a/@target').extract() lirel = response.xpath('//a/@rel').extract() litext = response.xpath('//a/text()').extract() #print rlink for i, z in zip(rlink, litarget): #print i if i == "/some/link/": print z </code></pre> <p>This doesn't work for me :( </p> <p>I want to achieve the following thing: Since the extracted data is in lists: <code>rlink</code>, <code>litarget</code>, <code>lirel</code> and <code>litext</code>. I want to be able to extract corresponding information based on another one from the lists: </p> <pre><code>if link in response.xpath('//a/@href').extract() == "/some/link" </code></pre> <blockquote> <p>print its target, rel and text attribute.</p> </blockquote> <p>Can you help me solve that.</p> <p>Thanks!</p>
0
2016-09-01T13:40:09Z
39,300,555
<p>Maybe it would be easier to grab all the <code>a</code> elements without trying to match their attributes, something like:</p> <pre><code>for a in response.css('a'): if a.xpath('@href').extract_first() == 'http://some/link/': target = a.xpath('@target').extract_first() rel = a.xpath('@rel').extract_first() text = a.xpath('text()').extract_first() print(target, rel, text) </code></pre>
1
2016-09-02T20:52:42Z
[ "python", "scrapy" ]
Flatten pandas pivot table
39,273,441
<p>This is a follow up of my <a href="http://stackoverflow.com/questions/39229005/pivot-table-no-numeric-types-to-aggregate/39229396#39229396">question</a>. Rather than a pivot table, is it possible to flatten table to look like the following:</p> <pre><code>data = {'year': ['2016', '2016', '2015', '2014', '2013'], 'country':['uk', 'usa', 'fr','fr','uk'], 'sales': [10, 21, 20, 10,12], 'rep': ['john', 'john', 'claire', 'kyle','kyle'] } pd.DataFrame(data).pivot_table(index='country', columns='year', values=['rep','sales']) rep sales year 2013 2014 2015 2016 2013 2014 2015 2016 country fr None kyle claire None None 10 20 None uk kyle None None john 12 None None 10 usa None None None john None None None 21 </code></pre> <p>Flattened table:</p> <pre><code> rep_2013 rep_2014 rep_2015 rep_2016 sales_2013 sales_2014 sales_2015 sales_2016 country fr None kyle claire None None 10 20 None uk kyle None None john 12 None None 10 usa None None None john None None None 21 </code></pre>
2
2016-09-01T13:54:05Z
39,273,531
<p>Try this to keep only the first level of the column index:</p> <pre><code>df.columns = df.columns.get_level_values(0) </code></pre> <p>or, to combine both levels into names like <code>rep_2013</code>:</p> <pre><code>df.columns = ['_'.join(col).strip() for col in df.columns.values] </code></pre> <p>Either one flattens your MultiIndex. Note the two snippets are alternatives, not sequential steps; running the join after <code>get_level_values(0)</code> would iterate over the characters of the already-flat names.</p>
2
2016-09-01T13:58:30Z
[ "python", "pandas" ]
Flatten pandas pivot table
39,273,441
<p>This is a follow up of my <a href="http://stackoverflow.com/questions/39229005/pivot-table-no-numeric-types-to-aggregate/39229396#39229396">question</a>. Rather than a pivot table, is it possible to flatten table to look like the following:</p> <pre><code>data = {'year': ['2016', '2016', '2015', '2014', '2013'], 'country':['uk', 'usa', 'fr','fr','uk'], 'sales': [10, 21, 20, 10,12], 'rep': ['john', 'john', 'claire', 'kyle','kyle'] } pd.DataFrame(data).pivot_table(index='country', columns='year', values=['rep','sales']) rep sales year 2013 2014 2015 2016 2013 2014 2015 2016 country fr None kyle claire None None 10 20 None uk kyle None None john 12 None None 10 usa None None None john None None None 21 </code></pre> <p>Flattened table:</p> <pre><code> rep_2013 rep_2014 rep_2015 rep_2016 sales_2013 sales_2014 sales_2015 sales_2016 country fr None kyle claire None None 10 20 None uk kyle None None john 12 None None 10 usa None None None john None None None 21 </code></pre>
2
2016-09-01T13:54:05Z
39,273,677
<p>see <a href="http://stackoverflow.com/q/37087020/2336654">collapse a pandas MultiIndex</a></p> <h3>Solution</h3> <pre><code>df.columns = df.columns.to_series().str.join('_') </code></pre>
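A runnable version of that one-liner using the question's own data. Note that <code>aggfunc='first'</code> is an assumption added here, since the non-numeric `rep` column needs an explicit aggregator in current pandas:

```python
import pandas as pd

data = {'year': ['2016', '2016', '2015', '2014', '2013'],
        'country': ['uk', 'usa', 'fr', 'fr', 'uk'],
        'sales': [10, 21, 20, 10, 12],
        'rep': ['john', 'john', 'claire', 'kyle', 'kyle']}

df = pd.DataFrame(data).pivot_table(index='country', columns='year',
                                    values=['rep', 'sales'], aggfunc='first')

# Collapse the (value, year) MultiIndex into flat 'value_year' names.
df.columns = df.columns.to_series().str.join('_')
```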
4
2016-09-01T14:04:05Z
[ "python", "pandas" ]
How to size my imshow?
39,274,002
<p>I generated a 2d intensity matrix with the following code:</p> <pre><code>H, x_e, y_e = np.histogram2d(test_y, test_x, bins=(y_e, x_e)) </code></pre> <p>The values of x_e and y_e are:</p> <pre><code>x_e array([ 0.05 , 0.0530303 , 0.05606061, 0.05909091, 0.06212121, 0.06515152, 0.06818182, 0.07121212, 0.07424242, 0.07727273, 0.08030303, 0.08333333, 0.08636364, 0.08939394, 0.09242424, 0.09545455, 0.09848485, 0.10151515, 0.10454545, 0.10757576, 0.11060606, 0.11363636, 0.11666667, 0.11969697, 0.12272727, 0.12575758, 0.12878788, 0.13181818, 0.13484848, 0.13787879, 0.14090909, 0.14393939, 0.1469697 , 0.15 , 0.1530303 , 0.15606061, 0.15909091, 0.16212121, 0.16515152, 0.16818182, 0.17121212, 0.17424242, 0.17727273, 0.18030303, 0.18333333, 0.18636364, 0.18939394, 0.19242424, 0.19545455, 0.19848485, 0.20151515, 0.20454545, 0.20757576, 0.21060606, 0.21363636, 0.21666667, 0.21969697, 0.22272727, 0.22575758, 0.22878788, 0.23181818, 0.23484848, 0.23787879, 0.24090909, 0.24393939, 0.2469697 , 0.25 , 0.2530303 , 0.25606061, 0.25909091, 0.26212121, 0.26515152, 0.26818182, 0.27121212, 0.27424242, 0.27727273, 0.28030303, 0.28333333, 0.28636364, 0.28939394, 0.29242424, 0.29545455, 0.29848485, 0.30151515, 0.30454545, 0.30757576, 0.31060606, 0.31363636, 0.31666667, 0.31969697, 0.32272727, 0.32575758, 0.32878788, 0.33181818, 0.33484848, 0.33787879, 0.34090909, 0.34393939, 0.3469697 , 0.35 ]) y_e array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) </code></pre> <p>I cannot seem to achieve control over the shape of my plotted output with this code:</p> <pre><code>fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(111) ax.set_title(feature_of_interest) im = plt.imshow(H, interpolation='nearest', origin='low', extent=[y_e[0], y_e[-1], x_e[0], x_e[-1]]) </code></pre> <p>This gives me a very squashed output where I can't see anything:</p> <p><a href="http://i.stack.imgur.com/zDB4O.png" rel="nofollow"><img src="http://i.stack.imgur.com/zDB4O.png" alt="enter image description here"></a></p> 
<p>How can I adjust the parameters to get a better aspect ratio?</p> <p>Here's what I've tried so far:</p> <ul> <li>playing around with the <code>extent</code> parameter. This changes the shape, but not in a predictable way. Also that makes the axis labels incorrect. </li> <li>changing the <code>figsize</code> parameter. This doesn't seem to have any effect. </li> </ul> <p>Thanks for any suggestions. </p>
3
2016-09-01T14:17:00Z
39,275,939
<p>You can set the aspect ratio of the <a href="http://matplotlib.org/api/axes_api.html" rel="nofollow">axes</a> <a href="http://matplotlib.org/examples/pylab_examples/equal_aspect_ratio.html" rel="nofollow">directly</a>. This is independent of the figure size. Here's an example:</p> <pre><code>import numpy as np from matplotlib import pyplot as plt data = np.random.rand(5, 100) fig = plt.figure() ax = fig.add_subplot(111) ax.imshow(data, interpolation='nearest') ax.set_aspect(5) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/zqiES.png" rel="nofollow"><img src="http://i.stack.imgur.com/zqiES.png" alt="enter image description here"></a></p>
3
2016-09-01T15:48:45Z
[ "python", "matplotlib" ]
Need explanation on recursive and generator example
39,274,019
<p>I have this recursive function which generates all possible upper- and lower-case variants of any string value you pass to it.</p> <p>Here is the code sample and the output:</p> <pre><code>def test (name): if not name: yield "" else: first=name[:1] for sub in test(name[1:]): yield first.lower()+sub yield first.upper()+sub #print (first) for x in test("abc"): print (x) </code></pre> <p>The output will be like this: </p> <pre><code>abc Abc aBc ABc abC AbC aBC ABC </code></pre> <p>and if I add a print call under the two yield statements:</p> <pre><code>print(first) </code></pre> <p>the output will be like this: </p> <pre><code>abc Abc a aBc ABc a b abC AbC a aBC ABC a b c </code></pre> <p>I just want a clear explanation of each step: what is happening, why I get these values each time, and how it all works, because recursion and generators together got me lost here.</p>
-1
2016-09-01T14:17:43Z
39,278,343
<p>Where are you stuck in producing your own trace? You've shown that you know how to use <strong>print</strong> statements. I assume that you also know how to search other examples of recursion traces; StackOverflow has many.</p> <p>To get you started, here's your code with a couple more <strong>print</strong> statements.</p> <pre><code>def test (name): print ("ENTER test, name=", name) if not name: yield "" else: first=name[:1] for sub in test(name[1:]): print (" LOOP\tname", name, "\tfirst", first, "\tsub", sub) yield first.lower()+sub yield first.upper()+sub #print (first) for x in test("abc"): print ("YIELDED:", x) </code></pre> <p>Output:</p> <pre><code>ENTER test, name= abc ENTER test, name= bc ENTER test, name= c ENTER test, name= LOOP name c first c sub LOOP name bc first b sub c LOOP name abc first a sub bc YIELDED: abc YIELDED: Abc LOOP name abc first a sub Bc YIELDED: aBc YIELDED: ABc LOOP name bc first b sub C LOOP name abc first a sub bC YIELDED: abC YIELDED: AbC LOOP name abc first a sub BC YIELDED: aBC YIELDED: ABC </code></pre> <p>Does this get you moving in a useful direction?</p>
0
2016-09-01T18:14:24Z
[ "python", "python-3.x", "recursion", "generator", "yield" ]
Need explanation on recursive and generator example
39,274,019
<p>I have this recursive function which generates all possible upper- and lower-case variants of any string value you pass to it.</p> <p>Here is the code sample and the output:</p> <pre><code>def test (name): if not name: yield "" else: first=name[:1] for sub in test(name[1:]): yield first.lower()+sub yield first.upper()+sub #print (first) for x in test("abc"): print (x) </code></pre> <p>The output will be like this: </p> <pre><code>abc Abc aBc ABc abC AbC aBC ABC </code></pre> <p>and if I add a print call under the two yield statements:</p> <pre><code>print(first) </code></pre> <p>the output will be like this: </p> <pre><code>abc Abc a aBc ABc a b abC AbC a aBC ABC a b c </code></pre> <p>I just want a clear explanation of each step: what is happening, why I get these values each time, and how it all works, because recursion and generators together got me lost here.</p>
-1
2016-09-01T14:17:43Z
39,295,524
<p>Thanks! Yesterday I just took a pen and paper and traced this function from the stack's point of view, and I understand it now.</p> <p>It's recursive, so the function keeps calling itself on everything from the second letter to the end of the string, until <code>test</code> is called with the empty string <code>""</code>.</p> <p>Then the for loops unwind from the last call back to the first. The innermost non-empty call has <code>first = "c"</code> and <code>sub = ""</code>:</p> <pre><code>for each sub : first.lower+sub &gt;&gt; c first.upper+sub &gt;&gt; C </code></pre> <p>Back in the previous call, for each of "c" and "C" it yields:</p> <pre><code>first.lower()+sub &gt; bc bC first.upper()+sub &gt; Bc BC </code></pre> <p>Then back in the first call with those results:<br> now sub takes the values bc, Bc, bC, BC and first = 'a':</p> <pre><code> for each sub : first.lower+sub first.upper+sub abc Abc aBc ABc abC AbC aBC ABC </code></pre>
0
2016-09-02T15:03:39Z
[ "python", "python-3.x", "recursion", "generator", "yield" ]
python code for string re-arrangement
39,274,110
<p>Can anyone help me with python code which transforms the word/string as follows Move all consonants before the vowels - The consonants and vowels should be in the reverse order of the original. - If two equal letters come next to each other in the result (case insensitive duplicates), drop the second letter in the source.</p> <p>I tried this:</p> <pre><code>def vowels(x): vowel = ["a", "e", "i", "o", "u", "A", "E", "I", "O", "U"] if x in vowel: return True else: return False def transform_word(word): result = "" if word is not None: x = len(word) - 1 v = "" c = "" while x is not -1: if (vowels(word[x])): v += word[x] x -= 1 else: c += word[x] x-=1 result = c + v result = "".join(OrderedDict.fromkeys(result)) return result </code></pre>
-3
2016-09-01T14:21:40Z
39,274,252
<p>Should work; it's more or less what you had concept-wise, with the string concatenation replaced by <code>join</code>. Note that <code>word</code> must be defined before the loop runs:</p> <pre><code>def isvowel(ch): return ch in "AEIOUaeiou" word = "example" # the input string vowels = [] consonants = [] for letter in word: if isvowel(letter): vowels.append(letter) else: consonants.append(letter) result = ''.join(consonants) + ''.join(vowels) print(result) </code></pre>
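The snippet above only separates consonants from vowels. The question also asks for reverse order and for dropping case-insensitive adjacent duplicates; since that spec is ambiguous, the following is one possible interpretation, not a definitive implementation:

```python
def isvowel(ch):
    return ch.lower() in "aeiou"

def rearrange(word):
    # Walk the word in reverse so both groups come out reversed.
    consonants = [ch for ch in reversed(word) if not isvowel(ch)]
    vowels = [ch for ch in reversed(word) if isvowel(ch)]
    result = consonants + vowels
    # Drop the second of any case-insensitive adjacent pair.
    deduped = []
    for ch in result:
        if not deduped or deduped[-1].lower() != ch.lower():
            deduped.append(ch)
    return ''.join(deduped)
```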
0
2016-09-01T14:26:59Z
[ "python" ]
Why does round raise on ndigits=None for integers but not for floats?
39,274,173
<p>Why does <code>round()</code> behave different for int and float when <code>ndigits</code> is explicitly set to <code>None</code>?</p> <p>Console test in Python 3.5.1:</p> <pre><code>&gt;&gt;&gt; round(1, None) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: 'NoneType' object cannot be interpreted as an integer &gt;&gt;&gt; round(1.0, None) 1 </code></pre> <p><strong>Update:</strong></p> <p>I reported this as a bug (<a href="https://bugs.python.org/issue27936" rel="nofollow">Issue #27936</a>) which has been fixed and closed. I'll report here when the fix is released.</p> <p>@PadraicCunningham and @CraigBurgler were correct. </p>
4
2016-09-01T14:23:53Z
39,275,546
<p>From the source code for <code>float_round</code> in <code>floatobjects.c</code> in <code>3.5</code>:</p> <pre><code>float_round(PyObject *v, PyObject *args) ... if (!PyArg_ParseTuple(args, "|O", &amp;o_ndigits)) return NULL; ... if (o_ndigits == NULL || o_ndigits == Py_None) { /* single-argument round or with None ndigits: * round to nearest integer */ ... </code></pre> <p>The <code>|| o_ndigits == Py_None</code> bit catches an <code>ndigits=None</code> argument explicitly and discards it, treating the call to <code>round</code> as a single-argument call.</p> <p>In <code>3.4</code>, this code looks like:</p> <pre><code>float_round(PyObject *v, PyObject *args) ... if (!PyArg_ParseTuple(args, "|O", &amp;o_ndigits)) return NULL; ... if (o_ndigits == NULL) { /* single-argument round: round to nearest integer */ ... </code></pre> <p>there is no <code>|| o_ndigits == Py_None</code> test and hence an <code>ndgits=None</code> argument falls through and is treated like an <code>int</code>, thus causing a <code>TypeError</code> for <code>round(1.0, None)</code> in <code>3.4</code>.</p> <p>There is no check for <code>o_ndigits == Py_None</code> in <code>long_round</code> in <code>longobject.c</code> in both <code>3.4</code> and <code>3.5</code>, thus raising a <code>TypeError</code> for <code>round(1, None)</code> in both <code>3.4</code> and <code>3.5</code> </p> <pre><code> treat ndigits=None as resolve Version/Type single-argument call round(n, None) ---------- --------------- ----------- 3.4/float No TypeError 3.4/long No TypeError 3.5/float Yes round(n) 3.5/long No TypeError </code></pre>
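The table above is easy to confirm interactively; a small demo (the int branch is probed rather than asserted, since the linked bug report was later fixed and its behavior depends on the interpreter version):

```python
# Floats: an explicit ndigits=None has behaved like the one-argument
# form since 3.5, so the result is an int.
r = round(1.0, None)

# Ints: this raised TypeError when the question was asked; after the
# fix for the reported issue it succeeds as well, so just probe it.
try:
    round(1, None)
    int_accepts_none = True
except TypeError:
    int_accepts_none = False
```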
2
2016-09-01T15:27:09Z
[ "python", "rounding", "python-3.5" ]
python: how to return 2 columns/index in a list
39,274,228
<p>I have a list that I generated using a for loop. it returns:</p> <pre><code>home1-0002_UUID 3457077784 2132011944 1307504896 62% home1-0003_UUID 3457077784 2088064860 1351451980 61% home1-0001_UUID 3457077784 2092270236 1347246604 61% </code></pre> <p>How can I return only the third and fifth columns?</p> <p>EDIT when I get an error it says 'Nonetype' object is not iterable</p> <pre><code> for index, elem in enumerate(my_list): print (index,elem) </code></pre> <p>I also tried to get the index by using list(enumerate(my_list)) but it doesn't work I get TypeError: 'NoneType' object is not iterable</p> <p>this is how I populate the list:</p> <pre><code>def h1ost(): p1 = subprocess.Popen("/opt/lustre-gem_s/default/bin/lfs df /lustre/home1 | sort -r -nk5",stdout=subprocess.PIPE, shell=True) use = p1.communicate()[0] for o in use.split('\n')[:6]: if "Available" not in o and "summary" not in o: print (o) </code></pre>
-1
2016-09-01T14:25:49Z
39,275,408
<p>Since I cannot post a comment, I will do my best to give you a solution as an answer.</p> <pre><code>import os import subprocess def empty_string_filter(value): return value != '' def h1ost(): p1 = subprocess.Popen("/opt/lustre-gem_s/default/bin/lfs df /lustre/home1 | sort -r -nk5", stdout=subprocess.PIPE, shell=True) use = p1.communicate()[0] new_file_content_list = [] # Separate by line break and filter out any empty strings filtered_use_list = list(filter(empty_string_filter, use.split(os.linesep)[:6])) for line in filtered_use_list: # Split the line and filter out the empty strings in order to keep only # columns with information split_line = list(filter(empty_string_filter, line.split(' '))) # Beware! This will only work if each line has 5 or more data columns. # The safer option is to check that it has at least 5 columns and, # if it hasn't, skip the line or raise an exception. It's up to you. new_file_content_list.append('{0} {1}'.format(split_line[2], split_line[4])) return os.linesep.join(new_file_content_list) </code></pre> <p>So the idea is to split every single line on whitespace and filter out any empty strings left, in order to get the 3rd and 5th columns (indexes 2 and 4 respectively). The <code>list(...)</code> wrappers matter on Python 3, where <code>filter</code> returns an iterator that cannot be indexed.</p>
1
2016-09-01T15:20:11Z
[ "python" ]
tkinter canvas scrolling but scrollbar not adjusting to show canvas size
39,274,249
<p>i am trying to place a frame within a canvas with a scroll bar, the canvas scrolls but the scrollbar does not adjust to show the position </p> <pre><code>from tkinter import * from tkinter import ttk parent=Tk() studentFrame=ttk.Frame(parent) studentFrame.pack() #settup the canvas canvas=Canvas(studentFrame,width=700,height=300) scroller=ttk.Scrollbar(studentFrame, orient=VERTICAL,command=canvas.yview) canvas.grid(row=0,column=0,stick="nsew") scroller.grid(row=0,column=1,stick="ns") canvas.config(yscrollcommand=scroller.set) list=ttk.Frame(canvas,width=700) ttk.Label(list,text="S/N",width=10,relief=SUNKEN).grid(row=0,column=0,ipadx=3,ipady=3) ttk.Label(list,text="Name",width=55,relief=SUNKEN).grid(row=0,column=1,ipadx=3,ipady=3) ttk.Label(list,text="",width=15,relief=SUNKEN).grid(row=0,column=2,ipadx=3,ipady=3) ttk.Label(list,text="",width=15,relief=SUNKEN).grid(row=0,column=3,ipadx=3,ipady=3) num=0 r=0 while(r&lt;50): num=num+1 ttk.Label(list,text=num,width=10,relief=SUNKEN).grid(row=r,column=0,ipadx=3,ipady=3) ttk.Label(list,text="NAME",width=55,relief=SUNKEN).grid(row=r,column=1,ipadx=3,ipady=3) ttk.Label(list,text="EDIT",width=15,relief=SUNKEN).grid(row=r,column=2,ipadx=3,ipady=3) ttk.Label(list,text="DELETE",width=15,relief=SUNKEN).grid(row=r,column=3,ipadx=3,ipady=3) r=r+1 canvas.create_window((0,0),window=list,anchor=W) canvas.config(scrollregion=canvas.bbox(ALL)) parent.mainloop() </code></pre>
0
2016-09-01T14:26:55Z
39,276,831
<p>You are setting the scroll region to be -15 pixels tall (you are setting the bottom-right of the scrollable area to be above the top of the scrollable area) </p>
0
2016-09-01T16:43:16Z
[ "python", "tkinter" ]
tkinter canvas scrolling but scrollbar not adjusting to show canvas size
39,274,249
<p>i am trying to place a frame within a canvas with a scroll bar, the canvas scrolls but the scrollbar does not adjust to show the position </p> <pre><code>from tkinter import * from tkinter import ttk parent=Tk() studentFrame=ttk.Frame(parent) studentFrame.pack() #settup the canvas canvas=Canvas(studentFrame,width=700,height=300) scroller=ttk.Scrollbar(studentFrame, orient=VERTICAL,command=canvas.yview) canvas.grid(row=0,column=0,stick="nsew") scroller.grid(row=0,column=1,stick="ns") canvas.config(yscrollcommand=scroller.set) list=ttk.Frame(canvas,width=700) ttk.Label(list,text="S/N",width=10,relief=SUNKEN).grid(row=0,column=0,ipadx=3,ipady=3) ttk.Label(list,text="Name",width=55,relief=SUNKEN).grid(row=0,column=1,ipadx=3,ipady=3) ttk.Label(list,text="",width=15,relief=SUNKEN).grid(row=0,column=2,ipadx=3,ipady=3) ttk.Label(list,text="",width=15,relief=SUNKEN).grid(row=0,column=3,ipadx=3,ipady=3) num=0 r=0 while(r&lt;50): num=num+1 ttk.Label(list,text=num,width=10,relief=SUNKEN).grid(row=r,column=0,ipadx=3,ipady=3) ttk.Label(list,text="NAME",width=55,relief=SUNKEN).grid(row=r,column=1,ipadx=3,ipady=3) ttk.Label(list,text="EDIT",width=15,relief=SUNKEN).grid(row=r,column=2,ipadx=3,ipady=3) ttk.Label(list,text="DELETE",width=15,relief=SUNKEN).grid(row=r,column=3,ipadx=3,ipady=3) r=r+1 canvas.create_window((0,0),window=list,anchor=W) canvas.config(scrollregion=canvas.bbox(ALL)) parent.mainloop() </code></pre>
0
2016-09-01T14:26:55Z
39,297,569
<p>Finally got it to work: I called <code>list.update_idletasks()</code> before setting the scroll region.</p>
0
2016-09-02T17:06:59Z
[ "python", "tkinter" ]
python3 get parameter from command terminal and print input out
39,274,302
<pre><code>lines = [] for line in fileinput.input(): lines.append(line) print(line, end='') </code></pre> <p><a href="http://i.stack.imgur.com/3FyUr.png" rel="nofollow"><img src="http://i.stack.imgur.com/3FyUr.png" alt="enter image description here"></a></p> <p>My question: how can I get 2 into the program and print out the content of input.txt at the same time? So far, it gets 2 as a file name and tries to open it as a file, which is not what I expect, as you can see in the picture.</p>
1
2016-09-01T14:29:11Z
39,274,678
<p>The content of the <code>input.txt</code> file can be accessed using the <code>sys.stdin</code> file handle from inside Python. The arguments must be accessed using the <code>sys.argv</code> array.</p> <p>Here is a sample script.</p> <pre><code>#!/usr/bin/python import sys arg=sys.argv[1] #take first argument lines=[] for line in sys.stdin: lines.append(line) # save line from stdin to the array. Watch out for memory usage. print arg print lines </code></pre>
0
2016-09-01T14:47:00Z
[ "python", "linux", "unix", "input" ]
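The answer above is Python 2; a Python 3 sketch of the same idea follows, with the argument and stdin handling factored into a function so it can be exercised without a terminal. The function and variable names are illustrative, not from the original script:

```python
import sys

def read_argument_and_stdin(argv, stream):
    """Return (first command-line argument, list of lines read from the stream)."""
    arg = argv[1]  # e.g. "2" for an invocation like:  ./whale.py '2' < input.txt
    lines = [line.rstrip("\n") for line in stream]
    return arg, lines

if __name__ == "__main__" and len(sys.argv) > 1:
    arg, lines = read_argument_and_stdin(sys.argv, sys.stdin)
    print(arg)
    for line in lines:
        print(line)
```

Because input.txt arrives on standard input rather than in `sys.argv`, the script never tries to open the numeric argument as a file.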
python3 get parameter from command terminal and print input out
39,274,302
<pre><code>lines = [] for line in fileinput.input(): lines.append(line) print(line, end='') </code></pre> <p><a href="http://i.stack.imgur.com/3FyUr.png" rel="nofollow"><img src="http://i.stack.imgur.com/3FyUr.png" alt="enter image description here"></a></p> <p>My question: how can I get 2 into the program and print out the content of input.txt at the same time? So far, it gets 2 as a file name and tries to open it as a file, which is not what I expect, as you can see in the picture.</p>
1
2016-09-01T14:29:11Z
39,274,906
<p>As @Eugene already pointed out, in your command:</p> <pre><code>./whale.py '2' &lt; input.txt </code></pre> <blockquote> <p><code>input.txt</code> is redirected to the standard input stream. <code>2</code> is going into argv.</p> </blockquote> <p>The error you got was probably because your script somehow thought <code>argv[1]</code> was the file to open.</p> <p>Try this:</p> <pre><code>import sys print "this is sys.argv: {}".format(sys.argv[1]) print "below is STDIN:" for line in sys.stdin: print line </code></pre>
0
2016-09-01T14:57:14Z
[ "python", "linux", "unix", "input" ]
Queue- Multi Threading Python
39,274,368
<p>I have a bunch of code and I am attaching a piece here. Basically I have a thread - which is targeted on a function - that has a while loop as follows:</p> <pre><code>while not stop_event.wait(1): # Continuous Reading Function #print "hello2" #print ("working on %s" % arg) data1 = Read(soa1, bytes1, ser) data2 = Read(soa2, bytes2, ser) data3 = Read(soa3, bytes3, ser) data = np.concatenate([data1, data2, data3]) print data.size queue_read.put(data) time.sleep(1) </code></pre> <p>I am reading the data from an MCU, through serial communication in this thread and sending it back to my Main program - using queue_read.put(data). This while loop runs every 1 second.</p> <p>In my main program I am reading the output from this thread as ---</p> <pre><code>self.data = queue_read.get() </code></pre> <p>After five minutes the values in the MCU are intentionally changed. During the change this thread is still running as it is in the while loop.</p> <p>So after five minutes, the variable "data" should have the updated new values from the MCU. But to my surprise it does not. It still has the initial values. Is there anything I am missing here? Is using a queue the right way to get data?</p>
0
2016-09-01T14:32:10Z
39,420,966
<p>The following approach reads and processes data from the queue continuously, which allows you to see and respond to data changes as they happen:</p> <pre><code>while True: self.data = queue_read.get() self.update_GUI() </code></pre> <p><code>queue_read.get()</code> blocks if <code>queue_read</code> is empty. You will need to run this code in its own thread if the main thread does anything other than update the GUI.</p>
0
2016-09-09T23:23:14Z
[ "python", "multithreading", "variables", "queue", "pyserial" ]
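The continuous get-loop in the answer can be sketched with a toy producer thread standing in for the serial-reading thread. All names here are illustrative, not from the original program:

```python
import queue
import threading

def producer(q, values):
    # Stand-in for the serial-reading thread: push each reading onto the queue.
    for v in values:
        q.put(v)

def consume_all(q, count, timeout=5):
    # Stand-in for the main loop: keep calling get() so later readings
    # (e.g. the values changed after five minutes) are actually seen.
    results = []
    for _ in range(count):
        results.append(q.get(timeout=timeout))  # blocks until data arrives
    return results

if __name__ == "__main__":
    q = queue.Queue()
    t = threading.Thread(target=producer, args=(q, [1, 2, 3]))
    t.start()
    print(consume_all(q, 3))
    t.join()
```

The key point is that `get()` must be called repeatedly; a single call only ever returns the oldest item in the queue.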
Python / Pygame FULLSCREEN Tag Creates A Game Screen That Is To Large For The Screen
39,274,460
<p><strong>UPDATED ISSUE</strong></p> <p>I have discovered the issue appears to be with the fact that I am using the FULLSCREEN tag to create the window. I added a rectangle to be drawn in the top left of the screen (0, 0), but when I run the program, it is mostly off the screen. Then, when I Alt-Tab away and back, the rectangle is appropriately placed at 0,0 and the turret is off center.</p> <p>So basically, when the program starts, the game screen is larger than my actual screen, but centered. Then after Alt-Tab, the game screen is lined up with 0,0 but since the game screen is larger than my screen, the turret looks off center, but is actually centered relative to the game.</p> <p>So the real question is why does using the FULLSCREEN tag make a screen larger than my computer screen?</p> <p><strong>ORIGINAL ISSUE</strong></p> <p>I am building a simple demonstration of a turret in the center of the screen which follows the location of the cursor as if to fire where it is. Everything works perfectly until I Alt-Tab away from the screen, and then Alt-Tab back. At this point the turret is now off center (down and to the right).</p> <pre><code>import pygame, math pygame.init() image_library = {} screen_dimen = pygame.display.Info() print("Screen Dimensions ", screen_dimen) def get_image(name): if name not in image_library: image = pygame.image.load(name) image_library[name] = image else: image = image_library[name] return image robot_turret_image = get_image('robot_turret.png') screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN) done = False clock = pygame.time.Clock() while not done: for event in pygame.event.get(): if event.type == pygame.QUIT: done = True if event.type == pygame.MOUSEMOTION: print(event.pos) if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE: done = True screen.fill((0, 0, 0)) pos = pygame.mouse.get_pos() angle = 360 - math.atan2(pos[1] - (screen_dimen.current_h / 2), pos[0] - (screen_dimen.current_w / 2)) * 180 / math.pi rot_image = pygame.transform.rotate(robot_turret_image, angle) rect = rot_image.get_rect(center=(screen_dimen.current_w / 2, screen_dimen.current_h / 2)) screen.blit(rot_image, rect) color = (0, 128, 255) pygame.draw.rect(screen, color, pygame.Rect(0, 0, 200, 200)) pygame.display.update() clock.tick(60) </code></pre> <p>It seems that the center is now off. I have printed out the screen dimensions before and after the Alt-Tab and they are the same, so I can't figure out why the image moves. I believe I am missing something regarding state changes with Pygame, but can't figure out what. If it is relevant, I am on Windows 10.</p>
1
2016-09-01T14:37:00Z
39,298,107
<p>Alright, I discovered a solution from <a href="http://gamedev.stackexchange.com/questions/105750/pygame-fullsreen-display-issue">gamedev.stackexchange</a></p> <p>And I will re-hash it here. The issue was that Using the fullscreen tag was making a screen larger than my computer screen. The following code solves this</p> <pre><code>import ctypes ctypes.windll.user32.SetProcessDPIAware() true_res = (ctypes.windll.user32.GetSystemMetrics(0), ctypes.windll.user32.GetSystemMetrics(1)) pygame.display.set_mode(true_res,pygame.FULLSCREEN) </code></pre> <p>It is important to note that this is potentially just a windows fix, but I do not have another system with which to test it on. But It works on Windows 10 with python 3.5.1 and pygame 1.9.2a0</p>
0
2016-09-02T17:45:01Z
[ "python", "pygame" ]
How to insert IDs into Nodes when converting .csv to XML?
39,274,698
<p>Hello I'm new to Python,</p> <p>and I would like to convert a <code>.csv</code> file to <code>XML</code>. The desired output should look like the following, where I would like to have each individual ID within a Node: <code>&lt;employee id="5"&gt;</code> and the variables corresponding to each individual beneath each other rather than on the same line:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;Document&gt; &lt;employee id="1"&gt; &lt;Name&gt;Steve&lt;/Name&gt; &lt;City&gt;Boston&lt;/City&gt; &lt;Age&gt;33&lt;/Age&gt; &lt;/employee&gt; &lt;employee id="2"&gt; &lt;Name&gt;Michael&lt;/Name&gt; &lt;City&gt;Dallas&lt;/City&gt; &lt;Age&gt;45&lt;/Age&gt; &lt;/employee&gt; &lt;employee id="3"&gt; &lt;Name&gt;John&lt;/Name&gt; &lt;City&gt;New York&lt;/City&gt; &lt;Age&gt;89&lt;/Age&gt; &lt;/employee&gt; &lt;employee id="4"&gt; &lt;Name&gt;Thomas&lt;/Name&gt; &lt;City&gt;LA&lt;/City&gt; &lt;Age&gt;62&lt;/Age&gt; &lt;/employee&gt; &lt;employee id="5"&gt; &lt;Name&gt;Clint&lt;/Name&gt; &lt;City&gt;Paris&lt;/City&gt; &lt;Age&gt;30&lt;/Age&gt; &lt;/employee&gt; &lt;/Document&gt; </code></pre> <p>Given some data:</p> <pre><code>import pandas ID = pandas.DataFrame([1,2,3,4,5]) name = pandas.DataFrame(["Steve","Michael","John","Thomas","Clint"]) city = pandas.DataFrame(["Boston","Dallas","New York","LA","Paris"]) Age = pandas.DataFrame([45,33,33,20,50]) df = pandas.concat([ID, name,city,Age], axis=1) df.columns = ['ID','name','city','Age'] df ID name city Age 0 1 Steve Boston 45 1 2 Michael Dallas 33 2 3 John New York 33 3 4 Thomas LA 20 4 5 Clint Paris 50 </code></pre> <p>And the conversion from <code>.csv</code> to <code>XML</code>:</p> <pre><code>import csv csvFile = 'df.csv' xmlFile = 'myData.xml' csvData = csv.reader(open(csvFile)) xmlData = open(xmlFile, 'w') xmlData.write('&lt;?xml version="1.0"?&gt;' + "\n") # there must be only one top-level tag xmlData.write('&lt;Document&gt;' + "\n") rowNum = 0 for employee in csvData: if rowNum == 0: tags = employee # replace spaces w/ underscores in tag names for i in range(len(tags)): tags[i] = tags[i].replace(' ', '_') else: xmlData.write('&lt;employee &gt;' + "\n") for i in range(len(tags)): xmlData.write(' ' + '&lt;' + tags[i] + '&gt;' \ + employee [i] + '&lt;/' + tags[i] + '&gt;' + "\n") xmlData.write('&lt;/employee &gt;' + "\n") rowNum +=1 xmlData.write('&lt;/Document&gt;' + "\n") xmlData.close() </code></pre> <p>Output <code>XML</code>, which looks a bit off from what is desired:</p> <pre><code>&lt;?xml version="1.0"?&gt; &lt;Document&gt; &lt;employee&gt; &lt;X&gt;1&lt;/X&gt; &lt;ID&gt;1&lt;/ID&gt; &lt;Name&gt;Steve&lt;/Name&gt; &lt;City&gt;Boston&lt;/City&gt; &lt;Age&gt;33&lt;/Age&gt; &lt;/employee&gt; &lt;employee&gt; &lt;X&gt;2&lt;/X&gt; &lt;ID&gt;2&lt;/ID&gt; &lt;Name&gt;Michael&lt;/Name&gt; &lt;City&gt;Dallas&lt;/City&gt; &lt;Age&gt;45&lt;/Age&gt; &lt;/employee&gt; &lt;employee&gt; &lt;X&gt;3&lt;/X&gt; &lt;ID&gt;3&lt;/ID&gt; &lt;Name&gt;John&lt;/Name&gt; &lt;City&gt;New York&lt;/City&gt; &lt;Age&gt;89&lt;/Age&gt; &lt;/employee&gt; &lt;employee&gt; &lt;X&gt;4&lt;/X&gt; &lt;ID&gt;4&lt;/ID&gt; &lt;Name&gt;Thomas&lt;/Name&gt; &lt;City&gt;LA&lt;/City&gt; &lt;Age&gt;62&lt;/Age&gt; &lt;/employee&gt; &lt;employee&gt; &lt;X&gt;5&lt;/X&gt; &lt;ID&gt;5&lt;/ID&gt; &lt;Name&gt;Clint&lt;/Name&gt; &lt;City&gt;Paris&lt;/City&gt; &lt;Age&gt;30&lt;/Age&gt; &lt;/employee&gt; &lt;/Document&gt; </code></pre>
2
2016-09-01T14:48:04Z
39,275,116
<p>You need to specify the delimiter of the csv file when creating the csv reader object (default is ',').</p> <pre><code>csvData = csv.reader(open(csvFile), delimiter=' ') </code></pre> <p>If this is not given, then the entries of tags are not in the format you want.</p> <hr> <p>The else section in your for loop is not correct. This should be the solution:</p> <pre><code>import csv csvFile = 'df.csv' xmlFile = 'myData.xml' csvData = csv.reader(open(csvFile), delimiter=';') xmlData = open(xmlFile, 'w') xmlData.write('&lt;?xml version="1.0"?&gt;' + "\n") # there must be only one top-level tag xmlData.write('&lt;Document&gt;' + "\n") rowNum = 0 for employee in csvData: if rowNum == 0: tags = employee # replace spaces w/ underscores in tag names for i in range(len(tags)): tags[i] = tags[i].replace(' ', '_') else: xmlData.write('&lt;employee ' + tags[0] + '="' + employee[0] + '" &gt;' + "\n") for i in range(1,len(tags)): xmlData.write(' ' + '&lt;' + tags[i] + '&gt;' \ + employee [i] + '&lt;/' + tags[i] + '&gt;' + "\n") xmlData.write('&lt;/employee &gt;' + "\n") rowNum +=1 xmlData.write('&lt;/Document&gt;' + "\n") xmlData.close() </code></pre>
3
2016-09-01T15:06:48Z
[ "python" ]
How to insert IDs into Nodes when converting .csv to XML?
39,274,698
<p>Hello I'm new to Python,</p> <p>and I would like to convert a <code>.csv</code> file to <code>XML</code>. The desired output should look like the following, where I would like to have each individual ID within a Node: <code>&lt;employee id="5"&gt;</code> and the variables corresponding to each individual beneath each other rather than on the same line:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;Document&gt; &lt;employee id="1"&gt; &lt;Name&gt;Steve&lt;/Name&gt; &lt;City&gt;Boston&lt;/City&gt; &lt;Age&gt;33&lt;/Age&gt; &lt;/employee&gt; &lt;employee id="2"&gt; &lt;Name&gt;Michael&lt;/Name&gt; &lt;City&gt;Dallas&lt;/City&gt; &lt;Age&gt;45&lt;/Age&gt; &lt;/employee&gt; &lt;employee id="3"&gt; &lt;Name&gt;John&lt;/Name&gt; &lt;City&gt;New York&lt;/City&gt; &lt;Age&gt;89&lt;/Age&gt; &lt;/employee&gt; &lt;employee id="4"&gt; &lt;Name&gt;Thomas&lt;/Name&gt; &lt;City&gt;LA&lt;/City&gt; &lt;Age&gt;62&lt;/Age&gt; &lt;/employee&gt; &lt;employee id="5"&gt; &lt;Name&gt;Clint&lt;/Name&gt; &lt;City&gt;Paris&lt;/City&gt; &lt;Age&gt;30&lt;/Age&gt; &lt;/employee&gt; &lt;/Document&gt; </code></pre> <p>Given some data:</p> <pre><code>import pandas ID = pandas.DataFrame([1,2,3,4,5]) name = pandas.DataFrame(["Steve","Michael","John","Thomas","Clint"]) city = pandas.DataFrame(["Boston","Dallas","New York","LA","Paris"]) Age = pandas.DataFrame([45,33,33,20,50]) df = pandas.concat([ID, name,city,Age], axis=1) df.columns = ['ID','name','city','Age'] df ID name city Age 0 1 Steve Boston 45 1 2 Michael Dallas 33 2 3 John New York 33 3 4 Thomas LA 20 4 5 Clint Paris 50 </code></pre> <p>And the conversion from <code>.csv</code> to <code>XML</code>:</p> <pre><code>import csv csvFile = 'df.csv' xmlFile = 'myData.xml' csvData = csv.reader(open(csvFile)) xmlData = open(xmlFile, 'w') xmlData.write('&lt;?xml version="1.0"?&gt;' + "\n") # there must be only one top-level tag xmlData.write('&lt;Document&gt;' + "\n") rowNum = 0 for employee in csvData: if rowNum == 0: tags = employee # replace spaces w/ underscores in tag names for i in range(len(tags)): tags[i] = tags[i].replace(' ', '_') else: xmlData.write('&lt;employee &gt;' + "\n") for i in range(len(tags)): xmlData.write(' ' + '&lt;' + tags[i] + '&gt;' \ + employee [i] + '&lt;/' + tags[i] + '&gt;' + "\n") xmlData.write('&lt;/employee &gt;' + "\n") rowNum +=1 xmlData.write('&lt;/Document&gt;' + "\n") xmlData.close() </code></pre> <p>Output <code>XML</code>, which looks a bit off from what is desired:</p> <pre><code>&lt;?xml version="1.0"?&gt; &lt;Document&gt; &lt;employee&gt; &lt;X&gt;1&lt;/X&gt; &lt;ID&gt;1&lt;/ID&gt; &lt;Name&gt;Steve&lt;/Name&gt; &lt;City&gt;Boston&lt;/City&gt; &lt;Age&gt;33&lt;/Age&gt; &lt;/employee&gt; &lt;employee&gt; &lt;X&gt;2&lt;/X&gt; &lt;ID&gt;2&lt;/ID&gt; &lt;Name&gt;Michael&lt;/Name&gt; &lt;City&gt;Dallas&lt;/City&gt; &lt;Age&gt;45&lt;/Age&gt; &lt;/employee&gt; &lt;employee&gt; &lt;X&gt;3&lt;/X&gt; &lt;ID&gt;3&lt;/ID&gt; &lt;Name&gt;John&lt;/Name&gt; &lt;City&gt;New York&lt;/City&gt; &lt;Age&gt;89&lt;/Age&gt; &lt;/employee&gt; &lt;employee&gt; &lt;X&gt;4&lt;/X&gt; &lt;ID&gt;4&lt;/ID&gt; &lt;Name&gt;Thomas&lt;/Name&gt; &lt;City&gt;LA&lt;/City&gt; &lt;Age&gt;62&lt;/Age&gt; &lt;/employee&gt; &lt;employee&gt; &lt;X&gt;5&lt;/X&gt; &lt;ID&gt;5&lt;/ID&gt; &lt;Name&gt;Clint&lt;/Name&gt; &lt;City&gt;Paris&lt;/City&gt; &lt;Age&gt;30&lt;/Age&gt; &lt;/employee&gt; &lt;/Document&gt; </code></pre>
2
2016-09-01T14:48:04Z
39,275,406
<p>Using an XML parser will be far easier. Here is your example using the <a href="https://docs.python.org/3.5/library/xml.etree.elementtree.html#building-xml-documents" rel="nofollow">xml.etree.ElementTree</a> module. I assumed that you converted the dataframe to csv with <code>df.to_csv('df.csv')</code></p> <pre><code>import csv import xml.etree.ElementTree as ET csvFile = 'df.csv' csvData = csv.reader(open(csvFile)) root = ET.Element('Document') next(csvData) # skip header for _, employee_id, name, city, age in csvData: employee_elem = ET.SubElement(root, "Employee") employee_elem.set('id', employee_id) # set attribute # Child elements name_elem = ET.SubElement(employee_elem, "Name") name_elem.text = name city_elem = ET.SubElement(employee_elem, "City") city_elem.text = city age_elem = ET.SubElement(employee_elem, "Name") age_elem.text = age ET.ElementTree(root).write('df.xml', encoding='utf-8', xml_declaration=True) </code></pre>
2
2016-09-01T15:20:02Z
[ "python" ]
Using path extension \\?\ for windows 7 with python script
39,274,722
<p>I'm using the tool <a href="https://github.com/NavicoOS/ac2git" rel="nofollow">ac2git</a> to convert my Accurev depot to a git repository. I'm facing a problem when the os.walk() function in the python file runs. Since my project has a pretty complicated build path I have nested files that have path length exceeding the 260 limitation on Windows 7. I tried using the work-arounds provided by <a href="https://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#maxpath" rel="nofollow">microsoft support</a> but it is not resolving the error. I still get the error [WinError 3]: File not found, when in fact it is present but cannot be accessed due to the length limitation.</p> <p>This is a part of the code in the ac2git.py script:</p> <pre><code>def PreserveEmptyDirs(self): preservedDirs = [] for root, dirs, files in os.walk(self.gitRepo.path, topdown=True): for name in dirs: path ="\\\\?\\"+ToUnixPath(os.path.join(root, name)) # Preserve empty directories that are not under the .git/ directory. if git.GetGitDirPrefix(path) is None and len(os.listdir(path)) == 0: filename = os.path.join(path, '.gitignore') with codecs.open(filename, 'w', 'utf-8') as file: #file.write('# accurev2git.py preserve empty dirs\n') preservedDirs.append(filename) if not os.path.exists(filename): logger.error("Failed to preserve directory. Couldn't create '{0}'.".format(filename)) return preservedDirs def ToUnixPath(path): rv = SplitPath(path) if rv is not None: if rv[0] == '/': rv = '/' + '/'.join(rv[1:]) else: rv = '/'.join(rv) return rv def SplitPath(path): rv = None if path is not None: path = str(path) rv = [] drive, path = os.path.splitdrive(path) head, tail = os.path.split(path) while len(head) &gt; 0 and head != '/' and head != '\\': # For an absolute path the starting slash isn't removed from head. rv.append(tail) head, tail = os.path.split(head) if len(tail) &gt; 0: rv.append(tail) if len(head) &gt; 0: # For absolute paths. rv.append(head) if len(drive) &gt; 0: rv.append(drive) rv.reverse() return rv </code></pre> <p>I have appended the "\\?\" in order to allow for longer path lengths but now I get this error:</p> <pre><code>FileNotFoundError: [WinError 3] The system cannot find the path specified: '\\\\?\\C:///s/cms' </code></pre> <p>I'm new to Python and I'm not very sure what is the right approach to tackle it. I have to continue using Windows 7 only. Any suggestions if this problem can be fixed another way?</p>
1
2016-09-01T14:49:12Z
39,294,824
<p>So after much ado, I made changes in the python code, </p> <p>Apparently this information is very important " <em>File I/O functions in the Windows API convert "/" to "\" as part of converting the name to an NT-style name, except when using the "\?\" prefix as detailed in the following sections.</em>"</p> <p>So I just added this code to the function:</p> <pre><code>def ToUnixPath(path): rv = SplitPath(path) rv[:] = [item for item in rv if item != '/'] rv = '\\'.join(rv) return r"\\?"+"\\"+rv </code></pre> <p>And it worked!</p>
2
2016-09-02T14:28:32Z
[ "python", "git", "windows-7-x64", "accurev" ]
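The fix in this answer can be sketched platform-independently with `ntpath` (the Windows flavour of `os.path`), so the backslash handling behaves the same on any OS; the sample paths below are made up for illustration:

```python
import ntpath

def to_extended_length_path(path):
    """Prefix a Windows path with \\?\ and normalise its separators.

    The \\?\ prefix disables the 260-character MAX_PATH limit, but it also
    disables the automatic '/' -> '\\' conversion, which is exactly why the
    '\\\\?\\C:///s/cms' path in the question failed: every separator must
    already be a backslash before the prefix is added.
    """
    normalised = ntpath.normpath(path)  # collapses '/' into '\\'
    if normalised.startswith("\\\\?\\"):
        return normalised  # already extended-length
    return "\\\\?\\" + normalised

if __name__ == "__main__":
    print(to_extended_length_path("C:/s/cms"))
```

Normalising before prefixing is the same idea as the `'\\'.join(...)` rewrite in the answer, just expressed with the standard library.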
Add function name to decorator output
39,274,743
<p>I have a Python code snippet that allows me to time a function as a decorator. I would like to add the function name to the output, and report the time in milliseconds.</p> <pre><code>def func_timer(func): def f(*args, **kwargs): start = time.time() results = func(*args, **kwargs) print "Elapsed: %.6fs" % (time.time() - start) return results return f </code></pre> <p>Usage is: </p> <pre><code>@func_timer def foo(): pass </code></pre> <p>Current output is:</p> <pre><code>Elapsed: 0.005168s </code></pre> <p>Output desired:</p> <pre><code>foo Elapsed: 5.168ms </code></pre>
0
2016-09-01T14:50:14Z
39,274,780
<p>Function objects have a <code>__name__</code> attribute, you can use that. Simply multiply the time by 1000 if you want milliseconds:</p> <pre><code>print "%s Elapsed: %.6fms" % (func.__name__, (time.time() - start) * 1000) </code></pre>
4
2016-09-01T14:52:04Z
[ "python", "function", "decorator", "python-decorators" ]
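Putting the answer together, a complete Python 3 version of the timer (function name plus milliseconds) might look like this; `functools.wraps` is an addition beyond the original snippet, included so the wrapped function keeps its own `__name__`:

```python
import functools
import time

def func_timer(func):
    @functools.wraps(func)  # preserves func.__name__ on the wrapper
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_ms = (time.time() - start) * 1000
        print("%s Elapsed: %.3fms" % (func.__name__, elapsed_ms))
        return result
    return wrapper

@func_timer
def foo():
    pass

if __name__ == "__main__":
    foo()  # prints something like: foo Elapsed: 0.002ms
```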
How to make python config file, in which relative paths are defined, but when scripts in other directories import config, paths are correct?
39,274,748
<p>I have the following directory structure for a program I'm writing in python: </p> <pre><code>\code\ main.py config.py \module_folder1\ script1.1.py \data\ data_file1 data_file2 </code></pre> <p>My <code>config.py</code> is a set of global variables that are set by the user, or generally fixed all the time. In particular <code>config.py</code> defines path variables to the 2 data files, something like <code>path1 = os.path.abspath("../data/data_file1")</code>. The primary use is to run <code>main.py</code> which imports <code>config</code> (and the other modules I wrote) and all is good. </p> <p>But sometimes I need to run <code>script1.1.py</code> by itself. Ok, no problem. I can add to <code>script1.1</code> the usual <code>if __name__ == '__main__':</code> and I can import <code>config</code>. But then I get <code>path1 = "../code/data/data_file1"</code> which doesn't exist. I thought that since the path is created in <code>config.py</code> the path would be relative to where <code>config.py</code> lives, but it's not. </p> <p>So the question is, how can I have a central config file which defines relative paths, so I can import the config file to scripts in different directories and have the paths still be correct? </p> <p>I should mention that the code repo will be shared among multiple machines, so hardcoding an absolute path is not an option.</p>
0
2016-09-01T14:50:37Z
39,274,866
<ol> <li>You know the correct relative path to the file from the directory where <code>config.py</code> is located</li> <li>You know the correct relative path to the directory where <code>config.py</code> is located (in your case, <code>..</code>)</li> </ol> <p>Both of these things are system-independent and do not change unless you change the structure of your project. Just add them together using <code>os.path.join('..', config.path_relative_to_config)</code></p>
1
2016-09-01T14:55:36Z
[ "python", "module", "relative-path" ]
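The answer above joins `..` with the config-relative path; anchoring that join at the config module's own directory (via `__file__`) also makes the result independent of the caller's working directory. A sketch with a made-up layout:

```python
import os

def data_path(config_file, *relative_parts):
    """Build an absolute path to a data file, anchored at the directory
    containing the config module (config_file is usually __file__)."""
    config_dir = os.path.dirname(os.path.abspath(config_file))
    return os.path.normpath(os.path.join(config_dir, *relative_parts))

if __name__ == "__main__":
    # Inside config.py this would be:
    #   path1 = data_path(__file__, "..", "data", "data_file1")
    print(data_path("/repo/code/config.py", "..", "data", "data_file1"))
```

Because the anchor is the config file's location rather than the current working directory, running `main.py` from `/code/` or `script1.1.py` from `/code/module_folder1/` yields the same resolved path.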
How to make python config file, in which relative paths are defined, but when scripts in other directories import config, paths are correct?
39,274,748
<p>I have the following directory structure for a program I'm writing in python: </p> <pre><code>\code\ main.py config.py \module_folder1\ script1.1.py \data\ data_file1 data_file2 </code></pre> <p>My <code>config.py</code> is a set of global variables that are set by the user, or generally fixed all the time. In particular <code>config.py</code> defines path variables to the 2 data files, something like <code>path1 = os.path.abspath("../data/data_file1")</code>. The primary use is to run <code>main.py</code> which imports <code>config</code> (and the other modules I wrote) and all is good. </p> <p>But sometimes I need to run <code>script1.1.py</code> by itself. Ok, no problem. I can add to <code>script1.1</code> the usual <code>if __name__ == '__main__':</code> and I can import <code>config</code>. But then I get <code>path1 = "../code/data/data_file1"</code> which doesn't exist. I thought that since the path is created in <code>config.py</code> the path would be relative to where <code>config.py</code> lives, but it's not. </p> <p>So the question is, how can I have a central config file which defines relative paths, so I can import the config file to scripts in different directories and have the paths still be correct? </p> <p>I should mention that the code repo will be shared among multiple machines, so hardcoding an absolute path is not an option.</p>
0
2016-09-01T14:50:37Z
39,276,433
<p>(Not sure who posted this as a comment, then deleted it, but it seems to work so I'm posting as an answer.) The trick is to use <code>os.path.dirname(__file__)</code> in the config file, which gives the directory of the config file (<code>/code/</code>) regardless of where the script that imports config is. </p> <p>Specifically to answer the question, in the config file define </p> <pre><code>path1 = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'data', 'data_file1')) </code></pre>
0
2016-09-01T16:17:54Z
[ "python", "module", "relative-path" ]
Talking SMBus between RaspberryPi and ATMEGA 324PA - AVR not clock stretching
39,274,784
<p>I'm trying to get an ATMEGA 324PA to run as an SMBus slave.</p> <p>I'm using the following code on the Pi:</p> <pre><code>import smbus as smbus i2c = smbus.SMBus(1) i2c_addr = 0x30 result = i2c.read_block_data( i2c_addr, reg ) </code></pre> <p>On the AVR, I'm using:</p> <pre><code>#include &lt;avr/io.h&gt; #include &lt;avr/interrupt.h&gt; #include &lt;avr/sleep.h&gt; #include "smb_slave.h" #define SMB_COMMAND_RETURN_VENDOR_STRING 0x10 int main(void) { // Set data direction of PORTB as output and turn off LEDs. DDRA = 0xff; PORTA = 0xff; // Initialize SMBus SMBusInit(); SMBEnable(); // Enable interrupts globally sei(); for (;;) { } return 0; } void ProcessReceiveByte(SMBData *smb) { smb-&gt;txBuffer[0] = ~PIND; smb-&gt;txLength = 1; } static void ReturnVendorString(SMBData *smb) { unsigned char *vendor = (unsigned char*) "Vendor\0"; unsigned char i; unsigned char temp; i = 0; // Copy vendor ID string from EEPROM. while ((temp = vendor[i]) != '\0') { i++; smb-&gt;txBuffer[i] = temp; } smb-&gt;txBuffer[0] = i; // Byte count. smb-&gt;txLength = i + 1; // Number of bytes to be transmitted including byte count. smb-&gt;state = SMB_STATE_WRITE_READ_REQUESTED; PORTA ^= 0x40; // debug } static void UndefinedCommand(SMBData *smb) { // Handle undefined requests here. smb-&gt;error = TRUE; smb-&gt;state = SMB_STATE_IDLE; } void ProcessMessage(SMBData *smb) { if (smb-&gt;state == SMB_STATE_WRITE_REQUESTED) { switch (smb-&gt;rxBuffer[0]) // Command code. { case SMB_COMMAND_RETURN_VENDOR_STRING: // Block read, vendor ID. ReturnVendorString(smb); break; default: UndefinedCommand(smb); break; } } else { smb-&gt;state = SMB_STATE_IDLE; } } </code></pre> <p>With a (gcc-adapted) version of: <a href="http://www.atmel.com/images/AVR316.zip" rel="nofollow">http://www.atmel.com/images/AVR316.zip</a> from <a href="http://www.atmel.com/devices/ATMEGA324A.aspx?tab=documents" rel="nofollow">http://www.atmel.com/devices/ATMEGA324A.aspx?tab=documents</a></p> <p>Something's partially working, as my logic analyser shows:</p> <p><a href="http://i.stack.imgur.com/MbQO7.png" rel="nofollow"><img src="http://i.stack.imgur.com/MbQO7.png" alt="enter image description here"></a></p> <p>But I assume I'm doing something wrong as the AVR is not ACK'ing the READ, nor clock-stretching nor sending a response.</p> <p>Where should I look next?</p> <p>Can I have confidence in the Python smbus module on the RasPi?</p> <hr> <p>Could what I'm seeing be related to <a href="https://github.com/raspberrypi/linux/issues/254" rel="nofollow">https://github.com/raspberrypi/linux/issues/254</a> ?</p>
0
2016-09-01T14:52:12Z
39,289,339
<p>I tried using SMBus from a Beaglebone instead (replacing the Raspberry Pi).</p> <p>This worked perfectly, after I added some 10K pull-up resistors to the i2c bus. (The Raspberry Pi has internal pull-ups on the i2c pins.)</p>
0
2016-09-02T09:44:40Z
[ "python", "raspberry-pi", "avr", "i2c", "smbus" ]
Talking SMBus between RaspberryPi and ATMEGA 324PA - AVR not clock stretching
39,274,784
<p>I'm trying to get an ATMEGA 324PA to run as an SMBus slave.</p> <p>I'm using the following code on the Pi:</p> <pre><code>import smbus as smbus i2c = smbus.SMBus(1) i2c_addr = 0x30 result = i2c.read_block_data( i2c_addr, reg ) </code></pre> <p>On the AVR, I'm using:</p> <pre><code>#include &lt;avr/io.h&gt; #include &lt;avr/interrupt.h&gt; #include &lt;avr/sleep.h&gt; #include "smb_slave.h" #define SMB_COMMAND_RETURN_VENDOR_STRING 0x10 int main(void) { // Set data direction of PORTB as output and turn off LEDs. DDRA = 0xff; PORTA = 0xff; // Initialize SMBus SMBusInit(); SMBEnable(); // Enable interrupts globally sei(); for (;;) { } return 0; } void ProcessReceiveByte(SMBData *smb) { smb-&gt;txBuffer[0] = ~PIND; smb-&gt;txLength = 1; } static void ReturnVendorString(SMBData *smb) { unsigned char *vendor = (unsigned char*) "Vendor\0"; unsigned char i; unsigned char temp; i = 0; // Copy vendor ID string from EEPROM. while ((temp = vendor[i]) != '\0') { i++; smb-&gt;txBuffer[i] = temp; } smb-&gt;txBuffer[0] = i; // Byte count. smb-&gt;txLength = i + 1; // Number of bytes to be transmitted including byte count. smb-&gt;state = SMB_STATE_WRITE_READ_REQUESTED; PORTA ^= 0x40; // debug } static void UndefinedCommand(SMBData *smb) { // Handle undefined requests here. smb-&gt;error = TRUE; smb-&gt;state = SMB_STATE_IDLE; } void ProcessMessage(SMBData *smb) { if (smb-&gt;state == SMB_STATE_WRITE_REQUESTED) { switch (smb-&gt;rxBuffer[0]) // Command code. { case SMB_COMMAND_RETURN_VENDOR_STRING: // Block read, vendor ID. ReturnVendorString(smb); break; default: UndefinedCommand(smb); break; } } else { smb-&gt;state = SMB_STATE_IDLE; } } </code></pre> <p>With a (gcc-adapted) version of: <a href="http://www.atmel.com/images/AVR316.zip" rel="nofollow">http://www.atmel.com/images/AVR316.zip</a> from <a href="http://www.atmel.com/devices/ATMEGA324A.aspx?tab=documents" rel="nofollow">http://www.atmel.com/devices/ATMEGA324A.aspx?tab=documents</a></p> <p>Something's partially working, as my logic analyser shows:</p> <p><a href="http://i.stack.imgur.com/MbQO7.png" rel="nofollow"><img src="http://i.stack.imgur.com/MbQO7.png" alt="enter image description here"></a></p> <p>But I assume I'm doing something wrong as the AVR is not ACK'ing the READ, nor clock-stretching nor sending a response.</p> <p>Where should I look next?</p> <p>Can I have confidence in the Python smbus module on the RasPi?</p> <hr> <p>Could what I'm seeing be related to <a href="https://github.com/raspberrypi/linux/issues/254" rel="nofollow">https://github.com/raspberrypi/linux/issues/254</a> ?</p>
0
2016-09-01T14:52:12Z
39,292,289
<p>The issue you linked is the problem -- i2c clock stretching is simply broken on the Raspberry Pi. More info: <a href="http://www.advamation.com/knowhow/raspberrypi/rpi-i2c-bug.html" rel="nofollow">http://www.advamation.com/knowhow/raspberrypi/rpi-i2c-bug.html</a></p> <p>If the device offers an alternative interface such as UART, that is sometimes an option; for some projects I've had to fall back to a separate microcontroller or a BeagleBone instead.</p>
1
2016-09-02T12:19:40Z
[ "python", "raspberry-pi", "avr", "i2c", "smbus" ]
Print in single line in python using map() function
39,274,814
<p>I want to read an integer, and without using any string methods, I want to print something like this using the <code>map()</code> function: </p> <pre><code>123..N </code></pre> <p>For example: </p> <pre><code>N:5 output:12345 </code></pre> <p>And not:</p> <pre><code>1 2 3 4 5 </code></pre> <p>I have already read the following answer, which is not what I want: I want to use the <code>map()</code> function, which is not used in the answer given below.</p> <p><a href="http://stackoverflow.com/questions/3249524/print-in-one-line-dynamically">Print in one line dynamically</a></p>
-1
2016-09-01T14:53:20Z
39,355,297
<p>You can try this on Python 2 (note the double underscores around <code>future</code>, which the original formatting swallowed):</p> <pre><code>from __future__ import print_function </code></pre> <pre><code>map(lambda y: print(y, end=""), [x for x in range(1, int(input()) + 1)]) </code></pre> <p>Note that this relies on Python 2's eager <code>map</code>; on Python 3 the lazy map object would never call the lambda.</p>
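On Python 3, where `map` is lazy and would not run the lambda, a join-based variant is a possible alternative; this sketch (and the helper name `run_of_numbers`) is mine, not from the answer above:

```python
# Build the digits 1..N as strings and join them with no separator.
# str.join forces evaluation of the lazy map object on Python 3.
def run_of_numbers(n):
    return "".join(map(str, range(1, n + 1)))

print(run_of_numbers(5))  # prints 12345
```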
-1
2016-09-06T18:10:12Z
[ "python", "python-2.7", "python-3.x" ]
Beautiful Soup Conditional Query
39,274,823
<p>I am new to Beautiful Soup. I need to get data from an HTML file.</p> <pre><code>&lt;div class="ques_ans_block"&gt; &lt;div class="question"&gt; &lt;p&gt;is this correct ?&lt;/p&gt; &lt;div&gt; &lt;p class="answer"&gt;&lt;/p&gt; &lt;div class="moreinfo" style="display: block;"&gt; &lt;p class="answer"&gt; &lt;p&gt; &lt;p class="answer"&gt;&lt;/p&gt; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>The condition is that the "moreinfo" div can be present or absent.</p> <p>So I need to find the question and answer inner text (including the answer from "moreinfo" if present) for each ques_ans_block. </p>
-1
2016-09-01T14:53:39Z
39,285,630
<p>This will give output as json containing Question, answer and FaqID.</p> <pre><code>import bs4 import json import codecs arrayList = [] bsp = bs4.BeautifulSoup(open('input.html')) ques_ans_block = bsp.find_all("div", {"class": "ques_ans_block"}) s = "" count = 1 for i in ques_ans_block: data = {} q = i.select('.question') for a in q: s+=a.text+"\n" for a in q: a.extract() data["Question"] = s del i['.question'] v = "" a = i.select('p') for a in a: v+=a.text+"\n" a = i.select('li') for a in a: v+=a.text+"\n" data["Answer"] = v data["FaqId"] = count print "\n" arrayList.append(data) count = count + 1 s = "" #print arrayList with codecs.open('output.json','wt','utf-8') as outfile: json.dump(arrayList, outfile, indent=4) </code></pre>
0
2016-09-02T06:27:53Z
[ "python", "python-2.7", "beautifulsoup" ]
find and replace using multiple criteria pandas python
39,274,824
<p>I have the following dataframe (df):</p> <pre><code>loc pop_1 source_1 pop_2 source_2 a 99 group_a 77 group_b b 93 group_a 90 group_b c 58 group_a 59 group_b d 47 group_a 62 group_b </code></pre> <p>I create an additional column 'upper_limit':</p> <pre><code>df['upper_limit'] = df[['pop_1','pop_2']].max(axis=1) </code></pre> <p>I now want to add another column that looks at the values in 'upper_limit', compares them to pop_1 and pop_2 and then selects the text from source_1 or source_2 when they match. I.e:</p> <pre><code>loc pop_1 source_1 pop_2 source_2 upper_limit source a 99 group_a 77 group_b 99 group_a b 93 group_a 90 group_b 93 group_a c 58 group_a 59 group_b 59 group_b d 47 group_a 62 group_b 62 group_b </code></pre> <p>I have tried to create a dict from pop_1 and source_1 through:</p> <pre><code>table_dict = df[['pop_1','source_1']] z = table_dict.to_dict </code></pre> <p>And then map this using:</p> <pre><code>df['source'] = 'n/a' df['source'].replace(z,inplace=True) </code></pre> <p>This returns the dataframe but with the column 'source' only showing n/a results. </p>
2
2016-09-01T14:53:41Z
39,275,044
<blockquote> <p>I now want to add another column that looks at the values in 'upper_limit', compares them to pop_1 and pop_2 and then selects the text from source_1 or source_2 when they match.</p> </blockquote> <p>You can do it much more simply using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow"><code>np.where</code></a>:</p> <pre><code>In [19]: import numpy as np In [20]: df['upper_limit source'] = np.where(df.upper_limit == df.pop_1, df.source_1, df.source_2) In [20]: df Out[20]: loc pop_1 pop_2 source_1 source_2 upper_limit upper_limit source 0 a 99 77 group_a group_b 99 group_a 1 b 93 90 group_a group_b 93 group_a 2 c 58 59 group_a group_b 59 group_b 3 d 47 62 group_a group_b 62 group_b </code></pre>
1
2016-09-01T15:03:44Z
[ "python", "pandas" ]
PuLP: Using only one item per group
39,274,840
<p>I have a Pandas dataframe that has following values:</p> <pre><code> Name Age City Points 1 John 24 CHI 35 2 Mary 18 NY 25 . . 80 Steve 30 CHI 32 </code></pre> <p>I'm trying to form a 5 person group that maximizes the sum of points. I'd like to have two constraints: age and city. Maximum age must be under 110 years and there can't be two persons from the same city.</p> <p>At the moment I have a script that maximizes the points and takes the age constraint into account:</p> <pre><code>x = pulp.LpVariable.dicts("x", df.index, cat='Integer', lowBound=0) mod = pulp.LpProblem("prog", pulp.LpMaximize) objvals_p = {idx: (df['Points'][idx]) for idx in df.index} mod += sum([x[idx]*objvals_p[idx] for idx in df.index]) objvals_a = {idx: (df['Age'][idx]) for idx in df.index} mod += pulp.lpSum([x[idx]*objvals_a[idx] for idx in df.index]) &lt; 110 </code></pre> <p>However I can't figure out how to add city constraint into my script.</p> <p>Any advices for me?</p> <p>Thanks!</p>
1
2016-09-01T14:54:13Z
39,275,189
<p>You can do something like this:</p> <pre><code>for city in df['City'].unique(): sub_idx = df[df['City']==city].index mod += pulp.lpSum([x[idx] for idx in sub_idx]) &lt;= 1 </code></pre> <p>For each city in the DataFrame, this sum is over a subset of DataFrame (indexed by sub_idx) and this sum should be smaller than or equal to 1 because 2 people from the same city cannot be in the team.</p> <p>For this (and your other constraint) to work, you need to change the definition of your decision variable. It should be binary; integrality is not enough.</p> <pre><code>x = pulp.LpVariable.dicts("x", df.index, 0, 1, pulp.LpInteger) </code></pre>
1
2016-09-01T15:09:46Z
[ "python", "pandas", "constraints", "solver", "pulp" ]
How to test RPC of SOAP web services?
39,274,850
<p>I am currently learning to build SOAP web services with django and spyne. I have successfully tested my model using unit tests. However, when I tried to test all those @rpc functions, I have had no luck at all.</p> <p>What I have tried in testing those @rpc functions:</p> <ol> <li>Get dummy data into the model database</li> <li>Start a server at localhost:8000</li> <li>Create a suds.Client object that can communicate with localhost:8000</li> <li>Try to invoke @rpc functions from the suds.Client object, and test if the output matches what I expected</li> </ol> <p>However, when I run the test, I believe the test gets blocked by the running server at localhost:8000, so no test code can run while the server is running.</p> <p>I tried to make the server run on a different thread, but that messed up my test even more.</p> <p>I have searched as much as I could online and found no materials that can answer this question.</p> <p>TL;DR: how do you test @rpc functions using unit tests?</p>
0
2016-09-01T14:54:31Z
39,275,854
<p>I believe if you are using a service inside a test, that test should not be a <strong>unit</strong> test.</p> <p>you might want to consider use <strong>factory_boy</strong> or <strong>mock</strong>, both of them are python modules to mock or fake a object, for instance, to fake a object to give a response to your rpc call.</p>
0
2016-09-01T15:43:38Z
[ "python", "django", "testing", "rpc", "spyne" ]
How to test RPC of SOAP web services?
39,274,850
<p>I am currently learning to build SOAP web services with django and spyne. I have successfully tested my model using unit tests. However, when I tried to test all those @rpc functions, I have had no luck at all.</p> <p>What I have tried in testing those @rpc functions:</p> <ol> <li>Get dummy data into the model database</li> <li>Start a server at localhost:8000</li> <li>Create a suds.Client object that can communicate with localhost:8000</li> <li>Try to invoke @rpc functions from the suds.Client object, and test if the output matches what I expected</li> </ol> <p>However, when I run the test, I believe the test gets blocked by the running server at localhost:8000, so no test code can run while the server is running.</p> <p>I tried to make the server run on a different thread, but that messed up my test even more.</p> <p>I have searched as much as I could online and found no materials that can answer this question.</p> <p>TL;DR: how do you test @rpc functions using unit tests?</p>
0
2016-09-01T14:54:31Z
39,287,322
<p>See <a href="http://stackoverflow.com/questions/19383937/testing-spyne-application">Testing Spyne application</a></p>
0
2016-09-02T08:05:15Z
[ "python", "django", "testing", "rpc", "spyne" ]
How to find consecutive/successive objects in a list of objects
39,274,953
<p>I have a list of objects that have the attributes serial number (SN) and datetime, along with others. From how the list is generated, the objects should be in chronological order. Chronologically, objects can have the following SN:</p> <p>1,1,1,1,1,2,2,2,3,3,3,3,3,3,2,2,1,2,2,2,2,3,3,1,1,1,3,...</p> <p>How can I retrieve the first and last timestamps of all consecutive sequences of SN? For example, for SN=1 that would be the first to fifth timestamps as well as the 17th and the 24th to 26th. Same for all SNs that appear in the list of objects. What I am going for is a Gantt-like diagram showing at which times these SNs were present.</p>
-1
2016-09-01T14:59:37Z
39,275,667
<p>If I understand your question correctly, maybe you are looking for something like this?</p> <pre><code>def consecutive(nums, sn): count = {nums[0]: [[0]]} for idx, num in enumerate(nums[1:]): if num == nums[idx]: try: count[num][-1][1] = idx + 1 except IndexError: count[num][-1].append(idx + 1) else: try: count[nums[idx]][-1][1] = idx + 1 except IndexError: count[nums[idx]][-1].append(idx + 1) try: count[num].append([idx + 1]) except KeyError: count[num] = [[idx + 1]] return count[sn] </code></pre> <p>test:</p> <pre><code>test = [1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 2, 2, 1, 2, 2, 2, 2, 3, 3, 1, 1, 1, 3] print consecutive(test, 3) # will return [[8, 14], [21, 23], [26]], which are the index ranges for SN = 3 </code></pre>
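An alternative sketch using `itertools.groupby` from the standard library, which collapses consecutive runs directly. The helper name `runs` and the closed `[first, last]` index pairs are my choices (the answer above uses a slightly different output format); mapping those indices back to the objects' timestamps is then straightforward.

```python
from itertools import groupby

def runs(nums, sn):
    """Return [first, last] index pairs for each consecutive run of sn."""
    result = []
    pos = 0
    for value, group in groupby(nums):
        length = len(list(group))  # consume the group to count the run
        if value == sn:
            result.append([pos, pos + length - 1])
        pos += length
    return result

test = [1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 2, 2, 1,
        2, 2, 2, 2, 3, 3, 1, 1, 1, 3]
print(runs(test, 3))  # [[8, 13], [21, 22], [26, 26]]
```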
0
2016-09-01T15:33:41Z
[ "python" ]
Get output of pv using python
39,274,976
<p>Is there a way to use the program pv from within python so as to get the progress of an operation?</p> <p>So far I have the following:</p> <pre><code> p0 = sp.Popen(["pv", "-f", args.filepath], bufsize=0, stdout=sp.PIPE, stderr=sp.PIPE) p1 = sp.Popen(["awk", "{print $1, $2, $1, $3, $4 }", "{}".format(args.filepath)], stdout=sp.PIPE, stdin=p0.stdout) </code></pre> <p>But I'm having trouble getting continuous output from <code>p0</code>. I tried:</p> <pre><code> for line in p0.stderr: print("line:", line) </code></pre> <p>But this waits for the process to finish and then only prints the last progress report from <code>pv</code>. Does anybody know how I can get it to print the continuously updating status?</p>
2
2016-09-01T15:00:23Z
39,275,519
<p>Try something like this.</p> <pre><code>p = subprocess.Popen(command, stdout=subprocess.PIPE) for line in iter(p.stdout.readline, ''): print("line", line) </code></pre>
0
2016-09-01T15:25:42Z
[ "python", "subprocess" ]
Get output of pv using python
39,274,976
<p>Is there a way to use the program pv from within python so as to get the progress of an operation?</p> <p>So far I have the following:</p> <pre><code> p0 = sp.Popen(["pv", "-f", args.filepath], bufsize=0, stdout=sp.PIPE, stderr=sp.PIPE) p1 = sp.Popen(["awk", "{print $1, $2, $1, $3, $4 }", "{}".format(args.filepath)], stdout=sp.PIPE, stdin=p0.stdout) </code></pre> <p>But I'm having trouble getting continuous output from <code>p0</code>. I tried:</p> <pre><code> for line in p0.stderr: print("line:", line) </code></pre> <p>But this waits for the process to finish and then only prints the last progress report from <code>pv</code>. Does anybody know how I can get it to print the continuously updating status?</p>
2
2016-09-01T15:00:23Z
39,287,568
<p>Try reading from <code>p0.stderr</code>: pv leaves stdout untouched and writes its progress output only to stderr.</p>
0
2016-09-02T08:18:45Z
[ "python", "subprocess" ]
Get output of pv using python
39,274,976
<p>Is there a way to use the program pv from within python so as to get the progress of an operation?</p> <p>So far I have the following:</p> <pre><code> p0 = sp.Popen(["pv", "-f", args.filepath], bufsize=0, stdout=sp.PIPE, stderr=sp.PIPE) p1 = sp.Popen(["awk", "{print $1, $2, $1, $3, $4 }", "{}".format(args.filepath)], stdout=sp.PIPE, stdin=p0.stdout) </code></pre> <p>But I'm having trouble getting continuous output from <code>p0</code>. I tried:</p> <pre><code> for line in p0.stderr: print("line:", line) </code></pre> <p>But this waits for the process to finish and then only prints the last progress report from <code>pv</code>. Does anybody know how I can get it to print the continuously updating status?</p>
2
2016-09-01T15:00:23Z
39,292,586
<p>It turns out <code>pv</code> outputs each line with a carriage return at end (<code>\r</code>). To be able to continuously read from the output, <code>Popen</code> needs to be initialized with <code>universal_lines=True</code>, like this:</p> <pre><code> p0 = sp.Popen(['pv', '-f', args.filepath], stdout=sp.PIPE, stderr=sp.PIPE, universal_newlines=True) </code></pre> <p>This leads to a continuous output of progress reports:</p> <pre><code>line: 7.12MB 0:00:01 [ 7.1MB/s] [=&gt; ] 8% ETA 0:00:11 line: 14.6MB 0:00:02 [7.42MB/s] [====&gt; ] 16% ETA 0:00:10 line: 22.1MB 0:00:03 [7.55MB/s] [=======&gt; ] 24% ETA 0:00:09 line: 29.5MB 0:00:04 [7.36MB/s] [==========&gt; ] 33% ETA 0:00:08 </code></pre> <p>Here's a reference to a similar question:</p> <p><a href="http://stackoverflow.com/questions/4620547/real-time-output-of-subprocess-popen-and-not-line-by-line">Real time output of subprocess.popen() and not line by line</a></p>
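The carriage-return behaviour can be reproduced without pv by spawning a small Python child that writes \r-terminated progress reports; this is my own sketch to illustrate the effect of <code>universal_newlines=True</code>, not code taken from pv:

```python
import subprocess
import sys

# Child process emits three progress reports terminated by \r, like pv does.
child_code = r"""
import sys
for pct in (10, 50, 100):
    sys.stderr.write("progress %d%%\r" % pct)
sys.stderr.flush()
"""

p = subprocess.Popen([sys.executable, "-c", child_code],
                     stderr=subprocess.PIPE, universal_newlines=True)
# With universal_newlines=True, lone \r is translated to \n, so iterating
# the pipe yields one report per line instead of one undelimited blob.
lines = [line.strip() for line in p.stderr]
p.wait()
print(lines)  # ['progress 10%', 'progress 50%', 'progress 100%']
```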
0
2016-09-02T12:34:38Z
[ "python", "subprocess" ]
Why, using pyHook, the timestamp (event.Time) of the events is wrong?
39,274,982
<p>This is the code that I used to debug the events:</p> <pre><code>print("Real timestamp:", int(time())) print("Event Timestamp:", event.Time) print("Event Time:", strftime("%H:%M:%S %z", localtime(event.Time))) </code></pre> <p>And this is the output I got:</p> <pre><code>Real timestamp: 1472741855 Event Timestamp: 50129625 Event Time: 06:53:45 W. Europe Daylight Time </code></pre> <p>Does someone know why this happens or which time is returning? Thanks.</p>
0
2016-09-01T15:00:37Z
40,049,296
<p>The time returned is not an actual Unix timestamp.</p> <p>The time returned comes straight from the "time" member of the Win32 EVENTMSG struct, which is in units of "milliseconds since last boot". </p> <p>Shameless plug from <a href="https://mail.python.org/pipermail/python-win32/2007-September/006309.html" rel="nofollow">here</a></p>
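Given that event.Time is milliseconds since boot, you can recover an absolute timestamp if you know (or estimate) the boot time. This arithmetic sketch is mine; the function name and the sample numbers (chosen to match the question's output) are assumptions:

```python
def event_time_to_unix(event_ms, boot_unix):
    """Convert a 'milliseconds since boot' value to a Unix timestamp.

    boot_unix is the Unix time at which the machine booted; on Windows it
    could be estimated once as time.time() - ms_since_boot / 1000.0 using
    any event whose wall-clock time you know.
    """
    return boot_unix + event_ms / 1000.0

# Made-up boot time chosen so the question's event lands on its timestamp.
print(event_time_to_unix(50129625, 1472691725.375))  # 1472741855.0
```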
1
2016-10-14T17:55:03Z
[ "python", "timestamp", "pyhook" ]
Reference part of list item in loop
39,274,988
<p>I am trying to turn a txt file into a new txt file with specific formatting so i can put it straight onto a website.</p> <p>I have created a list called stats of the items to go under the Statistics heading but am now trying to write the loop to tell it to format the text in a specific way and I'm getting invalid syntax errors.</p> <p>Here is what I have so far..</p> <pre><code>txt = open(html_file, 'a') txt.write('&lt;h5&gt;&lt;a href="http://www.learningplusuk.org/data/education-reform" target="_blank"&gt;&lt;strong&gt;Education Reform&lt;/strong&gt;&lt;/a&gt;&lt;/h5&gt;' '\n' '&lt;p&gt;All the latest information on qualification reform can be found via our website, in the ‘Education Reform’ section.&lt;/p&gt;' '\n' '&lt;hr&gt;' '\n' '&lt;p&gt;Statistics and Data&lt;/p&gt;' '\n' '&lt;hr&gt;' '\n') for i in stats: j,k = enumerate(stats.split("\t")) txt.write('&lt;h5&gt;&lt;a href=/"'i.split("\t")[4]'" target="_blank"&gt;&lt;strong&gt;'i.split("\t")[1]'&lt;/strong&gt;&lt;/a&gt;&lt;/h5&gt;' '\n' '&lt;h5&gt;&lt;strong&gt;'i.split("\t")[2]'&lt;/strong&gt;&lt;/h5&gt;' '&lt;p&gt;'i.split("\t")[3]'&lt;br&gt;&lt;/p&gt;' '&lt;hr&gt;' '\n') txt.close() </code></pre> <p>but it says the i.split is invalid syntax. Any suggestions?</p> <p>Thanks!</p>
1
2016-09-01T15:00:54Z
39,275,068
<p>Try to use the + operator when concatenating strings: <code>'&lt;h5&gt;&lt;a href=/"' + i.split("\t")[4] + '" target=...'</code></p>
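Applied to the write call from the question, the fix looks roughly like this; the `parts` list here is a made-up stand-in for one `i.split("\t")` result, since the question's data file isn't shown:

```python
# Hypothetical stand-in for one tab-split line from the question's file.
parts = ['0', 'Title text', 'Subtitle', 'Body text', 'page.html']

# Strings must be joined with +; juxtaposing a literal and an expression
# (as in the question) is a syntax error.
html = ('<h5><a href="/' + parts[4] + '" target="_blank"><strong>'
        + parts[1] + '</strong></a></h5>\n')
print(html)
```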
0
2016-09-01T15:04:35Z
[ "python" ]
Parse files in AWS S3 with boto3
39,275,043
<p>I am attempting to read files from my S3 bucket, and parse them with a regex pattern. However, I have not been able to figure out to read the files line by line. Is there a way to do this or a different way I need to be approaching this for parsing?</p> <pre><code>pattern = '^(19|20)\d\d[-.](0[1-9]|1[012])[-.](0[1-9]|[12][0-9]|3[01])[ \t]+([0-9]|0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9][ \t]+(?:[0-9]{1,3}\.){3}[0-9]{1,3}[ \t]+(?:GET|POST|PUT)[ \t]+([^\s]+)[ \t]+[1-5][0-9][0-9][ \t]+(\d+)[ \t]+(\d+)[ \t]+"(?:[^"\\]|\\.)*"[ \t]+"(?:[^"\\]|\\.)*"[ \t]+"(?:[^"\\]|\\.)*"' s3 = session.resource('s3') bucket_name = s3.Bucket(bucket) data = [obj for obj in list(bucket_name.objects.filter(Prefix=prefix)) if obj.key != prefix] for obj in data: key = obj.key body = obj.get()['Body'].read() print(key) print(body) for line in body: print(line) </code></pre> <p>So I am able to see the correct file and able to read the whole body of the file (close to an IIS log). However when I try to iterate the lines, I get numbers. So the output of <code>print(line)</code> is</p> <pre><code>35 101 119 147 etc. </code></pre> <p>I have no idea where these numbers are coming from. Are they words, characters, something else?</p> <p>My goal is to apply my pattern once I am able to read the file line by line with the regular expression operator.</p> <p>EDIT: Here is one of my log lines</p> <pre><code>2016-06-14 14:03:42 1.1.1.1 GET /origin/ScriptResource.axd?=5f9d5645 200 26222 0 "site.com/en-US/CategoryPage.aspx" "Mozilla/5.0 (Linux; Android 4.4.4; SM-G318HZ Build/KTU84P)" "ASP.NET_SessionId=emfyTVRJNqgijw=; __SessionCookie=bQMfQzEtcnfMSQ==; __CSARedirectTags=ABOcOxWK/O5Rw==; dtCookie=B52435A514751459148783108ADF35D5|VVMrZVN1aXRlK1BXU3wx" </code></pre>
2
2016-09-01T15:03:44Z
39,285,136
<p>I used a text file with the following content in the solution below:</p> <pre><code>I love AWS. I love boto3. I love boto2. </code></pre> <p>I think the problem is with the line:</p> <pre><code>for line in body: </code></pre> <p>It iterates character by character instead of line by line.</p> <pre><code>C:\Users\Administrator\Desktop&gt;python bt.py I l o v e A W S . I l o v e b o t o 3 . I l o v e b o t o 2 . C:\Users\Administrator\Desktop&gt; </code></pre> <p>Instead, use:</p> <pre><code>for line in body.splitlines(): </code></pre> <p>Then the output looks like this:</p> <pre><code>C:\Users\Administrator\Desktop&gt;python bt.py I love AWS. I love boto3. I love boto2. C:\Users\Administrator\Desktop&gt; </code></pre> <p>Applying the above to the question's log format, the code below parses each line with the regex and prints the captured fields (note that each header lines up with capture group i + 1, since group 0 is the whole match):</p> <pre><code>import re header = ['Date', 'time', 'IP', 'method', 'request', 'status code', 'bytes', 'time taken', 'referrer', 'user agent', 'cookie'] s3 = session.resource('s3') bucket_name = s3.Bucket(bucket) data = [obj for obj in list(bucket_name.objects.filter(Prefix=prefix)) if obj.key != prefix] for obj in data: key = obj.key body = obj.get()['Body'].read() #print(key) #print(body) for line in body.splitlines(): m = re.search(r'(\d{4}-\d{2}-\d{2})\s+(\d{2}:\d{2}:\d{2})\s+([\d\.]+)\s+(GET|PUT|POST)\s+([\S]+)\s+(\d+)\s+(\d+)\s+(\d+)\s+([\S]+)\s+(\".*?\")\s+(.*)', line) if m is not None: for i in range(11): print header[i], " - ", m.group(i + 1) print "\n" print "------------------------------------" </code></pre>
1
2016-09-02T05:49:53Z
[ "python", "amazon-s3", "boto3" ]
NameError while converting tar.gz to zip
39,275,086
<p>I got the following code from my question on how to convert the tar.gz file to zip file. </p> <pre><code>import tarfile, zipfile tarf = tarfile.open(name='sample.tar.gz', mode='r|gz' ) zipf = zipfile.ZipFile.open( name='myzip.zip', mode='a', compress_type=ZIP_DEFLATED ) for m in tarf.getmembers(): f = tarf.extractfile( m ) fl = f.read() fn = m.name zipf.writestr( fn, fl ) tarf.close() zipf.close() </code></pre> <p>but when I run it I get the error.</p> <p>What should I change in the code to make it work? </p> <pre><code>NameError: name 'ZIP_DEFLATED' is not defined </code></pre>
-1
2016-09-01T15:05:10Z
39,275,142
<p><code>ZIP_DEFLATED</code> is a name <a href="https://docs.python.org/2/library/zipfile.html#zipfile.ZIP_DEFLATED" rel="nofollow">defined by the <code>zipfile</code> module</a>; reference it from there:</p> <pre><code>zipf = zipfile.ZipFile( 'myzip.zip', mode='a', compression=zipfile.ZIP_DEFLATED) </code></pre> <p>Note that you don't use the <code>ZipFile.open()</code> method here; you are not opening members in the archive, you are writing <em>to</em> the object.</p> <p>Also, the correct <a href="https://docs.python.org/2/library/zipfile.html#zipfile.ZipFile" rel="nofollow"><code>ZipFile</code> class signature</a> names the 3rd argument <code>compression</code>. <code>compress_type</code> is only used as an attribute on <code>ZipInfo</code> objects and for the <code>ZipFile.writestr()</code> method. The first argument is not named <code>name</code> either; it's <code>file</code>, but you normally would just pass in the value as a positional argument.</p> <p>Next, you can't seek in a gzip-compressed tarfile, so you'll have issues accessing members in order if you use <code>tarf.getmembers()</code>. This method has to do a full scan to find all members to build a list, and then you can't go back to read the file data anymore.</p> <p>Instead, iterate directly over the object, and you'll get member objects in order at a point you can still read the file data too:</p> <pre><code>for m in tarf: f = tarf.extractfile( m ) fl = f.read() fn = m.name zipf.writestr( fn, fl ) </code></pre>
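Putting the corrections together, the round trip can be sketched end to end like this; the file names are placeholders and the sample archive is generated on the fly purely so the sketch is runnable:

```python
import io
import os
import tarfile
import tempfile
import zipfile

def tgz_to_zip(tgz_path, zip_path):
    # mode='r|gz' is stream-only, so members must be read in iteration order.
    with tarfile.open(tgz_path, mode='r|gz') as tarf, \
         zipfile.ZipFile(zip_path, mode='w',
                         compression=zipfile.ZIP_DEFLATED) as zipf:
        for member in tarf:
            if member.isfile():
                zipf.writestr(member.name, tarf.extractfile(member).read())

# Build a small sample archive in a temp dir so the sketch runs end to end.
tmp = tempfile.mkdtemp()
tgz = os.path.join(tmp, 'sample.tar.gz')
zpath = os.path.join(tmp, 'myzip.zip')
with tarfile.open(tgz, mode='w:gz') as t:
    payload = b'hello'
    info = tarfile.TarInfo('hello.txt')
    info.size = len(payload)
    t.addfile(info, io.BytesIO(payload))

tgz_to_zip(tgz, zpath)
with zipfile.ZipFile(zpath) as z:
    extracted = z.read('hello.txt')
print(extracted)  # b'hello'
```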
1
2016-09-01T15:07:34Z
[ "python", "zip", "tar", "zipfile", "tarfile" ]
How to calculate each string length that belongs to a list of strings by python?
39,275,108
<p>Suppose I have a file with <code>n</code> DNA sequences, each one in a line. I need to turn them into a list and then calculate each sequence's length and then total length of all of them together. I am not sure how to do that before they are into a list. </p> <pre><code># open file and writing each sequences' length f= open('seq.txt' , 'r') for line in f: line= line.strip() print (line) print ('this is the length of the given sequence', len(line)) # turning into a list: lines = [line.strip() for line in open('seq.txt')] print (lines) </code></pre> <p>How can I do math calculations from the list? Ex. the total length of all sequences together? Standard deviation from their different lengths etc.</p>
1
2016-09-01T15:06:23Z
39,275,220
<p>Just a couple of remarks. Use <code>with</code> to handle files so you don't have to worry about closing them after you are done reading/writing, flushing, etc. Also, since you are looping through the file once, why not create the list too? You don't need to go through it again.</p> <pre><code># open file and writing each sequences' length with open('seq.txt', 'r') as f: sequences = [] total_len = 0 for line in f: new_seq = line.strip() sequences.append(new_seq) new_seq_len = len(new_seq) total_len += new_seq_len print('number of sequences: {}'.format(len(sequences))) print('total length: {}'.format(total_len)) print('biggest sequence: {}'.format(max(sequences, key=lambda x: len(x)))) print('\t with length {}'.format(len(sorted(sequences, key=lambda x: len(x))[-1]))) print('smallest sequence: {}'.format(min(sequences, key=lambda x: len(x)))) print('\t with length {}'.format(len(sorted(sequences, key=lambda x: len(x))[0]))) </code></pre> <p>I have included some post-processing info to give you an idea of how to go about it. If you have any questions just ask.</p>
0
2016-09-01T15:11:02Z
[ "python", "string", "list", "python-3.x", "math" ]
How to calculate each string length that belongs to a list of strings by python?
39,275,108
<p>Suppose I have a file with <code>n</code> DNA sequences, each one in a line. I need to turn them into a list and then calculate each sequence's length and then total length of all of them together. I am not sure how to do that before they are into a list. </p> <pre><code># open file and writing each sequences' length f= open('seq.txt' , 'r') for line in f: line= line.strip() print (line) print ('this is the length of the given sequence', len(line)) # turning into a list: lines = [line.strip() for line in open('seq.txt')] print (lines) </code></pre> <p>How can I do math calculations from the list? Ex. the total length of all sequences together? Standard deviation from their different lengths etc.</p>
1
2016-09-01T15:06:23Z
39,275,248
<p>Look into the <a href="https://docs.python.org/3/library/statistics.html" rel="nofollow"><code>statistics</code></a> module. You'll find all kinds of measures of averages and spreads.</p> <p>You'll get the length of any sequence using <code>len</code>.</p> <p>In your case, you'll want to map the sequences to their lengths:</p> <pre><code>from statistics import stdev with open("seq.txt") as f: lengths = [len(line.strip()) for line in f] print("Number of sequences:", len(lengths)) print("Standard deviation:", stdev(lengths)) </code></pre> <p><strong>edit:</strong> Because it was asked in the comments: Here's how to cluster the instances into different files depending on their lengths:</p> <pre><code>from statistics import stdev, mean with open("seq.txt") as f: sequences = [line.strip() for line in f] lengths = [len(sequence) for sequence in sequences] mean_ = mean(lengths) stdev_ = stdev(lengths) with open("below.txt", "w") as below, open("above.txt", "w") as above, open("normal.txt", "w") as normal: for sequence in sequences: if len(sequence) &gt; mean_ + stdev_: above.write(sequence + "\n") elif mean_ + stdev_ &gt;= len(sequence) &gt; mean_ - stdev_: # in between normal.write(sequence + "\n") else: below.write(sequence + "\n") </code></pre>
1
2016-09-01T15:12:38Z
[ "python", "string", "list", "python-3.x", "math" ]
How to calculate each string length that belongs to a list of strings by python?
39,275,108
<p>Suppose I have a file with <code>n</code> DNA sequences, each one in a line. I need to turn them into a list and then calculate each sequence's length and then total length of all of them together. I am not sure how to do that before they are into a list. </p> <pre><code># open file and writing each sequences' length f= open('seq.txt' , 'r') for line in f: line= line.strip() print (line) print ('this is the length of the given sequence', len(line)) # turning into a list: lines = [line.strip() for line in open('seq.txt')] print (lines) </code></pre> <p>How can I do math calculations from the list? Ex. the total length of all sequences together? Standard deviation from their different lengths etc.</p>
1
2016-09-01T15:06:23Z
39,275,317
<p>Try this to output the individual length and calculate the total length:</p> <pre><code> lines = [line.strip() for line in open('seq.txt')] total = 0 for line in lines: print 'this is the length of the given sequence: {}'.format(len(line)) total += len(line) print 'this is the total length: {}'.format(total) </code></pre>
2
2016-09-01T15:16:04Z
[ "python", "string", "list", "python-3.x", "math" ]
How to calculate each string length that belongs to a list of strings by python?
39,275,108
<p>Suppose I have a file with <code>n</code> DNA sequences, each one in a line. I need to turn them into a list and then calculate each sequence's length and then total length of all of them together. I am not sure how to do that before they are into a list. </p> <pre><code># open file and writing each sequences' length f= open('seq.txt' , 'r') for line in f: line= line.strip() print (line) print ('this is the length of the given sequence', len(line)) # turning into a list: lines = [line.strip() for line in open('seq.txt')] print (lines) </code></pre> <p>How can I do math calculations from the list? Ex. the total length of all sequences together? Standard deviation from their different lengths etc.</p>
1
2016-09-01T15:06:23Z
39,275,380
<p>You have already seen how to get the list of sequences and a list of the lengths using append.</p> <pre><code> lines = [line.strip() for line in open('seq.txt')] total = 0 sizes = [] for line in lines: mysize = len(line) total += mysize sizes.append(mysize) </code></pre> <p>Note that you can also use a for loop to read each line and append to the two lists rather than read every line into lists and then loop through lists. It is a matter of which you would prefer.</p> <p>You can use the statistics library (as of Python 3.4) for the statistics on the list of lengths.</p> <p><a href="https://docs.python.org/3/library/statistics.html" rel="nofollow">statistics — Mathematical statistics functions</a></p> <blockquote> <p>mean() Arithmetic mean (“average”) of data. median() Median (middle value) of data. median_low() Low median of data.<br> median_high() High median of data. median_grouped() Median, or 50th percentile, of grouped data. mode() Mode (most common value) of discrete data. pstdev() Population standard deviation of data.<br> pvariance() Population variance of data. stdev() Sample standard deviation of data. variance() Sample variance of data.</p> </blockquote> <p>You can also use the answers at <a href="http://stackoverflow.com/questions/15389768/standard-deviation-of-a-list">Standard deviation of a list</a></p> <p>Note that there is an answer that actually shows the code that was added to Python 3.4 for the statistics module. If you have an older version, you can use that code or get the statistics module code for your own system.</p>
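As a concrete sketch of the statistics functions applied to a list of sequence lengths (the lengths below are made up for illustration):

```python
import statistics

lengths = [12, 15, 15, 20, 33]  # hypothetical sequence lengths

print(statistics.mean(lengths))    # arithmetic mean of the lengths
print(statistics.median(lengths))  # middle value
print(statistics.pstdev(lengths))  # population standard deviation
```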
0
2016-09-01T15:18:53Z
[ "python", "string", "list", "python-3.x", "math" ]
How to calculate each string length that belongs to a list of strings by python?
39,275,108
<p>Suppose I have a file with <code>n</code> DNA sequences, each one in a line. I need to turn them into a list and then calculate each sequence's length and then total length of all of them together. I am not sure how to do that before they are into a list. </p> <pre><code># open file and writing each sequences' length f= open('seq.txt' , 'r') for line in f: line= line.strip() print (line) print ('this is the length of the given sequence', len(line)) # turning into a list: lines = [line.strip() for line in open('seq.txt')] print (lines) </code></pre> <p>How can I do math calculations from the list? Ex. the total length of all sequences together? Standard deviation from their different lengths etc.</p>
1
2016-09-01T15:06:23Z
39,275,472
<p>The <code>map</code> and <code>reduce</code> functions can be useful for working on collections. Note that in Python 3, <code>reduce</code> lives in <code>functools</code>, so it has to be imported.</p> <pre><code>import operator
from functools import reduce

f = open('seq.txt', 'r')
for line in f:
    line = line.strip()
    print(line)
    print('this is the length of the given sequence', len(line))

# turning into a list:
lines = [line.strip() for line in open('seq.txt')]
print(lines)

print('The total length is', reduce(operator.add, map(len, lines)))
</code></pre>
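<p>The same total can also be computed with the built-in <code>sum</code>, which avoids <code>reduce</code> entirely; a small self-contained sketch, with made-up sequences standing in for the stripped file lines:</p>

```python
lines = ['ACGT', 'ACGTACGT', 'ACG']  # made-up stand-ins for the stripped file lines

total = sum(map(len, lines))
print('The total length is', total)
```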
1
2016-09-01T15:23:44Z
[ "python", "string", "list", "python-3.x", "math" ]
How to calculate each string length that belongs to a list of strings by python?
39,275,108
<p>Suppose I have a file with <code>n</code> DNA sequences, each one in a line. I need to turn them into a list and then calculate each sequence's length and then total length of all of them together. I am not sure how to do that before they are into a list. </p> <pre><code># open file and writing each sequences' length f= open('seq.txt' , 'r') for line in f: line= line.strip() print (line) print ('this is the length of the given sequence', len(line)) # turning into a list: lines = [line.strip() for line in open('seq.txt')] print (lines) </code></pre> <p>How can I do math calculations from the list? Ex. the total length of all sequences together? Standard deviation from their different lengths etc.</p>
1
2016-09-01T15:06:23Z
39,275,492
<p>This will do what you require. To do additional calculations you may want to save your results from the text file into a list or set so you won't need to read from a file again.</p> <pre><code>total_length = 0 # Create a variable that will save our total length of lines read with open('filename.txt', 'r') as f: for line in f: line = line.strip() total_length += len(line) # Add the length to our total print("Line Length: {}".format(len(line))) print("Total Length: {}".format(total_length)) </code></pre>
0
2016-09-01T15:24:36Z
[ "python", "string", "list", "python-3.x", "math" ]
Sort by certain order (Situation: pandas DataFrame Groupby)
39,275,294
<p>I want to change the day of order presented by below code.<br> What I want is a result with the order (Mon, Tue, Wed, Thu, Fri, Sat, Sun) <br> - should I say, sort by key in certain predefined order?</p> <hr> <p>Here is my code which needs some tweak:</p> <pre><code>f8 = df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'].sum() print(f8) </code></pre> <p>Current result:</p> <pre><code>device_id day device_112 Thu 436518 Wed 636451 Fri 770307 Tue 792066 Mon 826862 Sat 953503 Sun 1019298 device_223 Mon 2534895 Thu 2857429 Tue 3303173 Fri 3548178 Wed 3822616 Sun 4213633 Sat 4475221 </code></pre> <p>Desired result:</p> <pre><code>device_id day device_112 Mon 826862 Tue 792066 Wed 636451 Thu 436518 Fri 770307 Sat 953503 Sun 1019298 device_223 Mon 2534895 Tue 3303173 Wed 3822616 Thu 2857429 Fri 3548178 Sat 4475221 Sun 4213633 </code></pre> <hr> <p>Here, <code>type(df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'])</code> is a class 'pandas.core.groupby.SeriesGroupBy'.</p> <p>I have found <code>.sort_values()</code> , but it is a built-in sort function by values. <br> I want to get some pointers to set some order to use it further data manipulation.<br> Thanks in advance.</p>
4
2016-09-01T15:14:26Z
39,275,559
<p>Probably not the best way, but as far as I know you cannot pass a function/mapping to <code>sort_values</code>. As a workaround, I generally use <code>assign</code> to add a new column and sort by that column. In your example, that also requires resetting the index first (and setting it back).</p> <pre><code>days = {'Mon': 1, 'Tue': 2, 'Wed': 3, 'Thu': 4, 'Fri': 5, 'Sat': 6, 'Sun': 7}

f8 = f8.reset_index()

(f8.assign(day_num=f8['day'].map(days))
   .sort_values(['device_id', 'day_num'])
   .set_index(['device_id', 'day'])
   .drop('day_num', axis=1))

Out: 
                                            0
device_id                        day         
0d4fd55bb363bf6f6f7f8b3342cd0467 Mon   826862
                                 Tue   792066
                                 Wed   636451
                                 Thu   436518
                                 Fri   770307
                                 Sat   953503
                                 Sun  1019298
f6258edf9145d1c0404e6f3d7a27a29d Mon  2534895
                                 Tue  3303173
                                 Wed  3822616
                                 Thu  2857429
                                 Fri  3548178
                                 Sat  4475221
                                 Sun  4213633
</code></pre>
1
2016-09-01T15:27:45Z
[ "python", "sorting", "pandas" ]
Sort by certain order (Situation: pandas DataFrame Groupby)
39,275,294
<p>I want to change the day of order presented by below code.<br> What I want is a result with the order (Mon, Tue, Wed, Thu, Fri, Sat, Sun) <br> - should I say, sort by key in certain predefined order?</p> <hr> <p>Here is my code which needs some tweak:</p> <pre><code>f8 = df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'].sum() print(f8) </code></pre> <p>Current result:</p> <pre><code>device_id day device_112 Thu 436518 Wed 636451 Fri 770307 Tue 792066 Mon 826862 Sat 953503 Sun 1019298 device_223 Mon 2534895 Thu 2857429 Tue 3303173 Fri 3548178 Wed 3822616 Sun 4213633 Sat 4475221 </code></pre> <p>Desired result:</p> <pre><code>device_id day device_112 Mon 826862 Tue 792066 Wed 636451 Thu 436518 Fri 770307 Sat 953503 Sun 1019298 device_223 Mon 2534895 Tue 3303173 Wed 3822616 Thu 2857429 Fri 3548178 Sat 4475221 Sun 4213633 </code></pre> <hr> <p>Here, <code>type(df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'])</code> is a class 'pandas.core.groupby.SeriesGroupBy'.</p> <p>I have found <code>.sort_values()</code> , but it is a built-in sort function by values. <br> I want to get some pointers to set some order to use it further data manipulation.<br> Thanks in advance.</p>
4
2016-09-01T15:14:26Z
39,275,671
<p>If you sort the dataframe prior to the <code>groupby</code>, pandas will maintain the order of your sort. The first thing you'll have to do is come up with a good way to sort the days of the week. One way of doing that is to assign an int representing the day of the week to each row, then sort on that column. For example:</p> <pre><code>import pandas

df = pandas.DataFrame(
    columns=['device_id', 'day', 'dwell_time'],
    data=[[1, 'Wed', 35],
          [1, 'Mon', 63],
          [2, 'Sat', 83],
          [2, 'Fri', 82]]
)

df['day_of_week'] = df.apply(
    lambda x: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'].index(x.day), 1
)

print(df.sort_values(['device_id', 'day_of_week'])
        .groupby(['device_id', 'day'])['dwell_time'].sum())
</code></pre> <p>yields:</p> <pre><code>device_id  day
1          Mon    63
           Wed    35
2          Fri    82
           Sat    83
</code></pre>
1
2016-09-01T15:33:57Z
[ "python", "sorting", "pandas" ]
Sort by certain order (Situation: pandas DataFrame Groupby)
39,275,294
<p>I want to change the day of order presented by below code.<br> What I want is a result with the order (Mon, Tue, Wed, Thu, Fri, Sat, Sun) <br> - should I say, sort by key in certain predefined order?</p> <hr> <p>Here is my code which needs some tweak:</p> <pre><code>f8 = df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'].sum() print(f8) </code></pre> <p>Current result:</p> <pre><code>device_id day device_112 Thu 436518 Wed 636451 Fri 770307 Tue 792066 Mon 826862 Sat 953503 Sun 1019298 device_223 Mon 2534895 Thu 2857429 Tue 3303173 Fri 3548178 Wed 3822616 Sun 4213633 Sat 4475221 </code></pre> <p>Desired result:</p> <pre><code>device_id day device_112 Mon 826862 Tue 792066 Wed 636451 Thu 436518 Fri 770307 Sat 953503 Sun 1019298 device_223 Mon 2534895 Tue 3303173 Wed 3822616 Thu 2857429 Fri 3548178 Sat 4475221 Sun 4213633 </code></pre> <hr> <p>Here, <code>type(df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'])</code> is a class 'pandas.core.groupby.SeriesGroupBy'.</p> <p>I have found <code>.sort_values()</code> , but it is a built-in sort function by values. <br> I want to get some pointers to set some order to use it further data manipulation.<br> Thanks in advance.</p>
4
2016-09-01T15:14:26Z
39,275,799
<p>Took me some time, but I found the solution. <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html#pandas.Series.reindex" rel="nofollow">reindex</a> does what you want. See my code example:</p> <pre><code>import calendar

import pandas as pd

a = [1, 2] * 2 + [2, 1] * 3 + [1, 2]
b = ['Mon', 'Wed', 'Thu', 'Fri'] * 3
c = list(range(12))
df = pd.DataFrame(data=[a, b, c]).T
df.columns = ['device', 'day', 'value']
df = df.groupby(['device', 'day']).sum()
</code></pre> <p>gives:</p> <pre><code>            value
device day       
1      Fri      7
       Mon      0
       Thu     12
       Wed     14
2      Fri     14
       Mon     12
       Thu      6
       Wed      1
</code></pre> <p>Then doing reindex:</p> <pre><code>df.reindex(['Mon', 'Wed', 'Thu', 'Fri'], level='day')
</code></pre> <p>or more conveniently (credits to burhan):</p> <pre><code>df.reindex(list(calendar.day_abbr), level='day')
</code></pre> <p>gives:</p> <pre><code>            value
device day       
1      Mon      0
       Wed     14
       Thu     12
       Fri      7
2      Mon     12
       Wed      1
       Thu      6
       Fri     14
</code></pre>
8
2016-09-01T15:40:33Z
[ "python", "sorting", "pandas" ]
Sort by certain order (Situation: pandas DataFrame Groupby)
39,275,294
<p>I want to change the day of order presented by below code.<br> What I want is a result with the order (Mon, Tue, Wed, Thu, Fri, Sat, Sun) <br> - should I say, sort by key in certain predefined order?</p> <hr> <p>Here is my code which needs some tweak:</p> <pre><code>f8 = df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'].sum() print(f8) </code></pre> <p>Current result:</p> <pre><code>device_id day device_112 Thu 436518 Wed 636451 Fri 770307 Tue 792066 Mon 826862 Sat 953503 Sun 1019298 device_223 Mon 2534895 Thu 2857429 Tue 3303173 Fri 3548178 Wed 3822616 Sun 4213633 Sat 4475221 </code></pre> <p>Desired result:</p> <pre><code>device_id day device_112 Mon 826862 Tue 792066 Wed 636451 Thu 436518 Fri 770307 Sat 953503 Sun 1019298 device_223 Mon 2534895 Tue 3303173 Wed 3822616 Thu 2857429 Fri 3548178 Sat 4475221 Sun 4213633 </code></pre> <hr> <p>Here, <code>type(df_toy_indoor2.groupby(['device_id', 'day'])['dwell_time'])</code> is a class 'pandas.core.groupby.SeriesGroupBy'.</p> <p>I have found <code>.sort_values()</code> , but it is a built-in sort function by values. <br> I want to get some pointers to set some order to use it further data manipulation.<br> Thanks in advance.</p>
4
2016-09-01T15:14:26Z
39,276,164
<p>Set the <code>'day'</code> column as <a href="http://pandas.pydata.org/pandas-docs/stable/categorical.html" rel="nofollow">categorical</a> dtype; just make sure when you set the category that your list of days is sorted as you'd like it to be. Performing the <code>groupby</code> will then automatically sort it for you, and if you otherwise tried to sort the column, it will sort in the order that you specified.</p> <pre><code>import numpy as np
import pandas as pd

# Initial setup.
np.random.seed([3, 1415])
n = 100
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
df = pd.DataFrame({
    'device_id': np.random.randint(1, 3, n),
    'day': np.random.choice(days, n),
    'dwell_time': np.random.random(n)
})

# Set as category, groupby, and sort.
df['day'] = df['day'].astype("category", categories=days, ordered=True)
df = df.groupby(['device_id', 'day']).sum()
</code></pre> <p>The resulting output:</p> <pre><code>               dwell_time
device_id day            
1         Mon    4.428626
          Tue    3.259319
          Wed    2.436024
          Thu    0.909724
          Fri    4.974137
          Sat    5.583778
          Sun    2.687258
2         Mon    3.117923
          Tue    2.427154
          Wed    1.943927
          Thu    4.599547
          Fri    2.628887
          Sat    6.247520
          Sun    2.716886
</code></pre> <p>Note that this method works for any type of customized sorting. For example, if you had a column with entries <code>'a', 'b', 'c'</code>, and wanted it to be sorted in a non-standard order, e.g. <code>'c', 'a', 'b'</code>, you'd just do the same type of procedure: specify the column as categorical with your categories being in the non-standard order you want.</p>
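<p>A minimal sketch of that non-standard case, using <code>pd.Categorical</code> (the letters and their order here are just for illustration):</p>

```python
import pandas as pd

s = pd.Series(pd.Categorical(['a', 'c', 'b', 'a', 'c'],
                             categories=['c', 'a', 'b'], ordered=True))

# sorting follows the declared category order, not alphabetical order
print(s.sort_values().tolist())  # ['c', 'c', 'a', 'a', 'b']
```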
2
2016-09-01T16:01:07Z
[ "python", "sorting", "pandas" ]
Converting integer tuple to float
39,275,304
<p>I have a tuple of integers. In reality, this is no tuple at all but a decimal number that all languages except English represent as <em>1,2</em> while English uses <em>1.2</em>. This "tuple" is read from a file and added into an array along with a series of integers, so the data type is automatically assigned by Python and I have no influence over it...</p> <pre><code>whatever[7] = 1,2 </code></pre> <p>Is there some clever way of converting this to 1.2? I found ways but they only work on strings but when I convert this to string, it is parenthesised and split by a space,</p> <pre><code>str(whatever[7]) '(1, 2)' </code></pre> <p>...so the task gets expanded to getting rid also of "(", ")" and space, which is certainly feasible:</p> <pre><code>str(str(str(str(whatever[7]).replace(",",".")).replace(" ","")).replace("(", "")).replace(")","") </code></pre> <p>...but it does not exactly strike me with its beauty and elegance.</p>
-2
2016-09-01T15:15:02Z
39,275,354
<p>You mean like</p> <pre><code>print("%s.%s" % whatever[7])
</code></pre> <p>And if you want to convert it to a float:</p> <pre><code>float("%s.%s" % whatever[7])
</code></pre>
1
2016-09-01T15:17:35Z
[ "python", "python-3.x", "data-type-conversion" ]
Converting integer tuple to float
39,275,304
<p>I have a tuple of integers. In reality, this is no tuple at all but a decimal number that all languages except English represent as <em>1,2</em> while English uses <em>1.2</em>. This "tuple" is read from a file and added into an array along with a series of integers, so the data type is automatically assigned by Python and I have no influence over it...</p> <pre><code>whatever[7] = 1,2 </code></pre> <p>Is there some clever way of converting this to 1.2? I found ways but they only work on strings but when I convert this to string, it is parenthesised and split by a space,</p> <pre><code>str(whatever[7]) '(1, 2)' </code></pre> <p>...so the task gets expanded to getting rid also of "(", ")" and space, which is certainly feasible:</p> <pre><code>str(str(str(str(whatever[7]).replace(",",".")).replace(" ","")).replace("(", "")).replace(")","") </code></pre> <p>...but it does not exactly strike me with its beauty and elegance.</p>
-2
2016-09-01T15:15:02Z
39,275,402
<p>You can convert your tuple to a float by first creating the string representation <code>'1.2'</code> and then feeding that to <code>float</code>.</p> <p>To do that you could use <code>map</code> to cast <code>int</code>s to <code>str</code>s, join them on <code>'.'</code> and feed that to <code>float()</code>.</p> <p>For example:</p> <pre><code>&gt;&gt;&gt; t = 1, 2 &gt;&gt;&gt; f = float(".".join(map(str,t))) &gt;&gt;&gt; print(f) 1.2 </code></pre>
1
2016-09-01T15:19:56Z
[ "python", "python-3.x", "data-type-conversion" ]
Converting integer tuple to float
39,275,304
<p>I have a tuple of integers. In reality, this is no tuple at all but a decimal number that all languages except English represent as <em>1,2</em> while English uses <em>1.2</em>. This "tuple" is read from a file and added into an array along with a series of integers, so the data type is automatically assigned by Python and I have no influence over it...</p> <pre><code>whatever[7] = 1,2 </code></pre> <p>Is there some clever way of converting this to 1.2? I found ways but they only work on strings but when I convert this to string, it is parenthesised and split by a space,</p> <pre><code>str(whatever[7]) '(1, 2)' </code></pre> <p>...so the task gets expanded to getting rid also of "(", ")" and space, which is certainly feasible:</p> <pre><code>str(str(str(str(whatever[7]).replace(",",".")).replace(" ","")).replace("(", "")).replace(")","") </code></pre> <p>...but it does not exactly strike me with its beauty and elegance.</p>
-2
2016-09-01T15:15:02Z
39,275,493
<p>How about this?</p> <pre><code>&gt;&gt;&gt; whatever = 1,2
&gt;&gt;&gt; float('.'.join(str(num) for num in whatever))
1.2
</code></pre> <p>Like Jim's answer, it creates a <code>str</code> representation of <code>'1.2'</code>, but instead of using <code>map</code> it uses a generator expression (basically what's behind list comprehensions), which converts each item in <code>whatever</code> to a string that the <code>join</code> method can iterate over. Lastly, that string gets converted to a <code>float</code> object.</p>
-1
2016-09-01T15:24:40Z
[ "python", "python-3.x", "data-type-conversion" ]
Converting integer tuple to float
39,275,304
<p>I have a tuple of integers. In reality, this is no tuple at all but a decimal number that all languages except English represent as <em>1,2</em> while English uses <em>1.2</em>. This "tuple" is read from a file and added into an array along with a series of integers, so the data type is automatically assigned by Python and I have no influence over it...</p> <pre><code>whatever[7] = 1,2 </code></pre> <p>Is there some clever way of converting this to 1.2? I found ways but they only work on strings but when I convert this to string, it is parenthesised and split by a space,</p> <pre><code>str(whatever[7]) '(1, 2)' </code></pre> <p>...so the task gets expanded to getting rid also of "(", ")" and space, which is certainly feasible:</p> <pre><code>str(str(str(str(whatever[7]).replace(",",".")).replace(" ","")).replace("(", "")).replace(")","") </code></pre> <p>...but it does not exactly strike me with its beauty and elegance.</p>
-2
2016-09-01T15:15:02Z
39,275,687
<p>The below format will work with your given example.</p> <pre><code>whatever[7] = 1, 2 new_num = float("{}.{}".format(whatever[7][0], whatever[7][1])) </code></pre>
0
2016-09-01T15:34:51Z
[ "python", "python-3.x", "data-type-conversion" ]
How to modify Django form before save?
39,275,350
<p>In my project each user can have multiple enemies, like this:</p> <p><strong>models</strong></p> <pre><code>class EnemyModel(models.Model): name = models.CharField(max_length=128) weapon = models.CharField(max_length=128) related_user = models.ForeignKey(UserProfile) class UserProfile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) </code></pre> <p>Only user can add enemies to his profile, so I made a form like this:</p> <p><strong>forms</strong></p> <pre><code>class AddEnemyForm(forms.ModelForm): class Meta: model = EnemyModel exclude = ['related_user'] # only current user </code></pre> <p>My idea was to modify the excluded field in a view, but it doesn't work:</p> <p><strong>views</strong></p> <pre><code>def add_enemy(request): args={} if request.method == "POST": form = AddEnemyForm(request.POST) form.related_user = request.user # error # form.related_user_id = request.user.id # also error if form.is_valid(): form.save() return HttpResponse("&lt;h1&gt;Done!&lt;/h1&gt;") else: args.update(csrf(request)) args["form"]=form return render_to_response("add_enemy.html",args) args.update(csrf(request)) args["form"]=AddEnemyForm() return render_to_response("add_enemy.html",args) </code></pre> <p>How to modify a form before save?</p>
0
2016-09-01T15:17:28Z
39,275,387
<p>Use <code>form.instance</code>:</p> <pre><code>def add_enemy(request): args={} if request.method == "POST": form = AddEnemyForm(request.POST) # form.instance is the instance to be saved form.instance.related_user = request.user.userprofile if form.is_valid(): form.save() return HttpResponse("&lt;h1&gt;Done!&lt;/h1&gt;") else: args.update(csrf(request)) args["form"]=form return render_to_response("add_enemy.html",args) args.update(csrf(request)) args["form"]=AddEnemyForm() return render_to_response("add_enemy.html",args) </code></pre> <p>Note that <code>enemymodel.related_user</code> should be a <code>UserProfile</code>, and <code>request.user</code> is a <code>User</code>. You need to use <code>request.user.userprofile</code>. </p>
1
2016-09-01T15:19:04Z
[ "python", "django" ]
django cannot fix integrity error
39,275,366
<p>I've decided to drop a row from a field from a database i'm setting up in django. I've deleted it in models/form and completely re-ran the database (makemigrations, migrate). However, no matter what I do i keep getting an integrity error (NOT NULL constraint failed: index_user.email). I'm not sure why i'm getting this, as the field doesn't even exist anymore and I cant find any trace of it in any files. Anyone know how to solve this error?</p> <p>models: </p> <pre><code>from django.db import models # Create your models here. class user(models.Model): username = models.CharField(max_length=20) password = models.CharField(max_length=15) def __str__(self): return self.username + ' - ' + self.password </code></pre> <p>views: </p> <pre><code>def login(request): form = LoginForm(request.POST) if form.is_valid(): username = form.cleaned_data["username"] password = form.cleaned_data["password"] user = authenticate(username=username, password=password) if user is not None: if user.is_active: login(request, user) return redirect('loggedin.html') else: return HttpResponse("Account deleted or disabled") else: return HttpResponseRedirect('/invalid') return render(request, "login_page.html", {'form': form}) </code></pre> <p>form:</p> <pre><code>class LoginForm(forms.Form): username = forms.CharField() password = forms.CharField(widget=forms.PasswordInput) def clean(self, *args, **kwargs): username = self.cleaned_data.get("username") password = self.cleaned_data.get("password") if username and password: user = authenticate(username=username, password=password) if not user: raise forms.ValidationError("User does not exist.") if not user.is_active: raise forms.ValidationError("User is no longer active.") return super(UserLoginForm, self).clean(*args, **kwargs) </code></pre>
1
2016-09-01T15:18:07Z
39,275,694
<p>If you are using MySQL as your database backend, then when you delete a field in the Django model, even after you run makemigrations and migrate, MySQL may still <strong>NOT</strong> actually delete that field or column inside the database.</p> <p>Therefore the column <strong>index_user.email</strong>, which you think you have deleted, is probably still inside the database, and that is what gives you the error.</p> <p>You have to get into the MySQL console or any MySQL client and drop that column yourself, rather than through a Django migration.</p> <p>If you are using SQLite, unfortunately, you can't drop a column from a table. You can check the reason at the link below:</p> <p><a href="http://stackoverflow.com/a/8442173/4151886">http://stackoverflow.com/a/8442173/4151886</a></p> <p>and you can find a solution at the link below:</p> <p><a href="http://stackoverflow.com/a/5987838/4151886">http://stackoverflow.com/a/5987838/4151886</a></p>
0
2016-09-01T15:35:15Z
[ "python", "django", null, "integrity" ]
django cannot fix integrity error
39,275,366
<p>I've decided to drop a row from a field from a database i'm setting up in django. I've deleted it in models/form and completely re-ran the database (makemigrations, migrate). However, no matter what I do i keep getting an integrity error (NOT NULL constraint failed: index_user.email). I'm not sure why i'm getting this, as the field doesn't even exist anymore and I cant find any trace of it in any files. Anyone know how to solve this error?</p> <p>models: </p> <pre><code>from django.db import models # Create your models here. class user(models.Model): username = models.CharField(max_length=20) password = models.CharField(max_length=15) def __str__(self): return self.username + ' - ' + self.password </code></pre> <p>views: </p> <pre><code>def login(request): form = LoginForm(request.POST) if form.is_valid(): username = form.cleaned_data["username"] password = form.cleaned_data["password"] user = authenticate(username=username, password=password) if user is not None: if user.is_active: login(request, user) return redirect('loggedin.html') else: return HttpResponse("Account deleted or disabled") else: return HttpResponseRedirect('/invalid') return render(request, "login_page.html", {'form': form}) </code></pre> <p>form:</p> <pre><code>class LoginForm(forms.Form): username = forms.CharField() password = forms.CharField(widget=forms.PasswordInput) def clean(self, *args, **kwargs): username = self.cleaned_data.get("username") password = self.cleaned_data.get("password") if username and password: user = authenticate(username=username, password=password) if not user: raise forms.ValidationError("User does not exist.") if not user.is_active: raise forms.ValidationError("User is no longer active.") return super(UserLoginForm, self).clean(*args, **kwargs) </code></pre>
1
2016-09-01T15:18:07Z
39,276,396
<p>You can run a fake migration. It will let you bypass the error for now. Also try deleting your migrations first; if that does not work, see if the below works:</p> <pre><code>python manage.py makemigrations
python manage.py migrate --fake
</code></pre>
0
2016-09-01T16:15:05Z
[ "python", "django", null, "integrity" ]
Regular expression to find text in specific section
39,275,503
<p>I have a regular expression problem. My data is as follows:</p> <pre><code>[Section 1] title = RegEx name = Joe color = blue [Section 2] height = 101 name = Gray </code></pre> <p>My question is can I write a regular expression to capture the 'name' key only from [Section 1]? Essentially, capture a key that may exist in multiple places, but only capture it from a specific section. I'll be implementing this in python. Thanks</p>
-2
2016-09-01T15:24:56Z
39,275,879
<p>Using ConfigParser is pretty simple, but you need to change your data format to be like:</p> <p>config_file.cfg</p> <pre><code>[Section 1]
title: RegEx
name: Joe
color: blue

[Section 2]
height: 101
name: Gray
</code></pre> <p>test_config.py</p> <pre><code>import ConfigParser

def get_config(section, prop_file_path):
    config = ConfigParser.ConfigParser()
    config.read(prop_file_path)
    options = config.options(section)
    data = {}
    for option in options:
        try:
            data[option] = config.get(section, option)
        except ConfigParser.Error:
            raise Exception("exception on %s!" % option)
    return data

data = get_config("Section 1", "path/to/file/config_file.cfg")
print data['name']
</code></pre>
0
2016-09-01T15:44:57Z
[ "python", "regex" ]
Regular expression to find text in specific section
39,275,503
<p>I have a regular expression problem. My data is as follows:</p> <pre><code>[Section 1] title = RegEx name = Joe color = blue [Section 2] height = 101 name = Gray </code></pre> <p>My question is can I write a regular expression to capture the 'name' key only from [Section 1]? Essentially, capture a key that may exist in multiple places, but only capture it from a specific section. I'll be implementing this in python. Thanks</p>
-2
2016-09-01T15:24:56Z
39,276,158
<p>While I wouldn't do this with regular expressions, since you asked:</p> <pre><code>\[Section 1\][^[]*name\s*=\s*(.*) </code></pre> <p>The <code>[^[]</code> bit prevents the regular expression from being too greedy and matching a "name" outside of the specified section (assuming no other fields/lines within a section include a <code>[</code>).</p> <p>The result will be in the captured group.</p> <p><a href="https://regex101.com/r/uC7xD1/1" rel="nofollow">https://regex101.com/r/uC7xD1/1</a></p>
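<p>In Python, that pattern could be applied with <code>re.search</code>; a small self-contained sketch using the sample data from the question:</p>

```python
import re

text = """[Section 1]
title = RegEx
name = Joe
color = blue

[Section 2]
height = 101
name = Gray"""

# [^[]* keeps the match inside Section 1; the captured group holds the value
match = re.search(r'\[Section 1\][^[]*name\s*=\s*(.*)', text)
print(match.group(1))  # Joe
```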
0
2016-09-01T16:00:49Z
[ "python", "regex" ]
Regular expression to find text in specific section
39,275,503
<p>I have a regular expression problem. My data is as follows:</p> <pre><code>[Section 1] title = RegEx name = Joe color = blue [Section 2] height = 101 name = Gray </code></pre> <p>My question is can I write a regular expression to capture the 'name' key only from [Section 1]? Essentially, capture a key that may exist in multiple places, but only capture it from a specific section. I'll be implementing this in python. Thanks</p>
-2
2016-09-01T15:24:56Z
39,279,468
<p>Just for reference, you could use the newer <code>regex</code> module and named capture groups:</p> <pre><code>import regex as re rx = re.compile(""" (?(DEFINE) (?&lt;section&gt;^\[Section\ \d+\]) ) (?&amp;section) (?:(?!(?&amp;section))[\s\S])* ^\s*name\s*=\s*\K(?P&lt;name&gt;.+)$ """, re.VERBOSE|re.MULTILINE) string = """ [Section 1] title = RegEx name = Joe color = blue [Section 2] height = 101 name = Gray """ names = [match.group('name') for match in rx.finditer(string)] print(names) # ['Joe', 'Gray'] </code></pre> <p>See <a href="https://regex101.com/r/sL2bK9/1" rel="nofollow"><strong>a demo on regex101.com</strong></a>.</p>
0
2016-09-01T19:27:52Z
[ "python", "regex" ]
Select row from a DataFrame based on the type of the object(i.e. str)
39,275,533
<p>So there's a DataFrame say:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({ ... 'A':[1,2,'Three',4], ... 'B':[1,'Two',3,4]}) &gt;&gt;&gt; df A B 0 1 1 1 2 Two 2 Three 3 3 4 4 </code></pre> <p>I want to select the rows whose datatype of particular row of a particular column is of type <code>str</code>.</p> <p>For example I want to select the row where <code>type</code> of data in the column <code>A</code> is a <code>str</code>. so it should print something like:</p> <pre><code> A B 2 Three 3 </code></pre> <p>Whose intuitive code would be like:</p> <pre><code>df[type(df.A) == str] </code></pre> <p>Which obviously doesn't works!</p> <p>Thanks please help!</p>
3
2016-09-01T15:26:20Z
39,275,726
<p>You can do something <em>similar</em> to what you're asking with</p> <pre><code>In [14]: df[pd.to_numeric(df.A, errors='coerce').isnull()] Out[14]: A B 2 Three 3 </code></pre> <p>Why only similar? Because Pandas stores things in homogeneous columns (all entries in a column are of the same type). Even though you constructed the DataFrame from heterogeneous types, they are all made into columns each of the lowest common denominator:</p> <pre><code>In [16]: df.A.dtype Out[16]: dtype('O') </code></pre> <p>Consequently, you can't ask which rows are of what type - they will all be of the same type. What you <em>can</em> do is to try to convert the entries to numbers, and check where the conversion failed (this is what the code above does).</p>
2
2016-09-01T15:36:46Z
[ "python", "pandas" ]
Select row from a DataFrame based on the type of the object(i.e. str)
39,275,533
<p>So there's a DataFrame say:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({ ... 'A':[1,2,'Three',4], ... 'B':[1,'Two',3,4]}) &gt;&gt;&gt; df A B 0 1 1 1 2 Two 2 Three 3 3 4 4 </code></pre> <p>I want to select the rows whose datatype of particular row of a particular column is of type <code>str</code>.</p> <p>For example I want to select the row where <code>type</code> of data in the column <code>A</code> is a <code>str</code>. so it should print something like:</p> <pre><code> A B 2 Three 3 </code></pre> <p>Whose intuitive code would be like:</p> <pre><code>df[type(df.A) == str] </code></pre> <p>Which obviously doesn't works!</p> <p>Thanks please help!</p>
3
2016-09-01T15:26:20Z
39,277,211
<p>This works:</p> <pre><code>df[df['A'].apply(lambda x: type(x)==str)] </code></pre>
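<p>For context, a self-contained run of that expression on the DataFrame from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 'Three', 4],
                   'B': [1, 'Two', 3, 4]})

# keep only the rows where the entry in column A is a str
mask = df['A'].apply(lambda x: type(x) == str)
print(df[mask])
```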
2
2016-09-01T17:03:33Z
[ "python", "pandas" ]
Tokenizing a corpus composed of articles into sentences Python
39,275,547
<p>I will like to analyze my first deep learning model using Python and in order to do so I have to first split my corpus (8807 articles) into sentences. My corpus is built as follows:</p> <pre><code>## Libraries to download from nltk.tokenize import RegexpTokenizer from nltk.corpus import stopwords from nltk.stem.porter import PorterStemmer from gensim import corpora, models import gensim import json import nltk import re import pandas appended_data = [] #for i in range(20014,2016): # df0 = pandas.DataFrame([json.loads(l) for l in open('SDM_%d.json' % i)]) # appended_data.append(df0) for i in range(2005,2016): if i &gt; 2013: df0 = pandas.DataFrame([json.loads(l) for l in open('SDM_%d.json' % i)]) appended_data.append(df0) df1 = pandas.DataFrame([json.loads(l) for l in open('Scot_%d.json' % i)]) df2 = pandas.DataFrame([json.loads(l) for l in open('APJ_%d.json' % i)]) df3 = pandas.DataFrame([json.loads(l) for l in open('TH500_%d.json' % i)]) df4 = pandas.DataFrame([json.loads(l) for l in open('DRSM_%d.json' % i)]) appended_data.append(df1) appended_data.append(df2) appended_data.append(df3) appended_data.append(df4) appended_data = pandas.concat(appended_data) # doc_set = df1.body doc_set = appended_data.body </code></pre> <p>I am trying to use the function <code>Word2Vec.load_word2vec_format</code> from the library <code>gensim.models</code> but I have to first split my corpus (<code>doc_set</code>) into sentences.</p> <pre><code>from gensim.models import word2vec model = Word2Vec.load_word2vec_format(doc_set, binary=False) </code></pre> <p>Any recommendations? </p> <p>cheers</p>
2
2016-09-01T15:27:12Z
39,277,615
<p>So, Gensim's <code>Word2Vec</code> requires this format for its training input: <code>sentences = [['first', 'sentence'], ['second', 'sentence']]</code>.</p> <p>I assume your documents contain more than one sentence. You should first split by sentences; you can do that with <a href="http://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.punkt" rel="nofollow">nltk</a> (you might need to download the model first). Then tokenize each sentence and put everything together in a list.</p> <pre><code>import itertools import nltk import gensim sent_detector = nltk.data.load('tokenizers/punkt/english.pickle') sentenized = doc_set.apply(sent_detector.tokenize) # doc_set is already the body Series sentences = itertools.chain.from_iterable(sentenized.tolist()) # just to flatten result = [] for sent in sentences: result += [nltk.word_tokenize(sent)] gensim.models.Word2Vec(result) </code></pre> <p>Unfortunately I am not good enough with Pandas to perform all the operations in a "pandastic" way.</p> <p>Pay a lot of attention to the parameters of <code>Word2Vec</code>; picking them right can make a huge difference.</p>
1
2016-09-01T17:29:38Z
[ "python", "deep-learning", "gensim", "word2vec" ]
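The nested-list input format the answer describes can be illustrated without nltk; the regex split below is only a crude stand-in for the punkt sentence tokenizer used in the answer:

```python
import re

docs = ["First sentence. Second one here!",
        "Another document? Yes indeed."]

sentences = []
for doc in docs:
    # Naive sentence split on ., ! or ? followed by whitespace
    for sent in re.split(r'(?<=[.!?])\s+', doc):
        tokens = re.findall(r'\w+', sent.lower())
        if tokens:
            sentences.append(tokens)
# sentences now has the [['first', 'sentence'], ...] shape Word2Vec expects
```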
Regrid numpy array based on cell area
39,275,552
<pre><code>import numpy as np from skimage.measure import block_reduce arr = np.random.random((6, 6)) area_cell = np.random.random((6, 6)) block_reduce(arr, block_size=(2, 2), func=np.ma.mean) </code></pre> <p>I would like to regrid a numpy array <code>arr</code> from 6 x 6 size to 3 x 3. Using the skimage function block_reduce for this.</p> <p>However, <code>block_reduce</code> assumes each grid cell has same size. How can I solve this problem, when each grid cell has a different size? In this case size of each grid cell is given by the numpy array <code>area_cell</code></p> <p>-- EDIT:</p> <p>An example:</p> <p><code>arr</code></p> <pre><code>0.25 0.58 0.69 0.74 0.49 0.11 0.10 0.41 0.43 0.76 0.65 0.79 0.72 0.97 0.92 0.09 </code></pre> <p>If all elements of <code>area_cell</code> were 1, and we were to convert 4 x 4 arr into 2 x 2, result would be:</p> <pre><code>0.36 0.48 0.72 0.61 </code></pre> <p>However, if <code>area_cell</code> is as follows:</p> <pre><code>0.00 1.00 1.00 0.00 0.00 1.00 0.00 0.50 0.20 1.00 0.80 0.80 0.00 0.00 1.00 1.00 </code></pre> <p>Then, result becomes:</p> <pre><code>0.17 0.22 0.21 0.54 </code></pre>
3
2016-09-01T15:27:22Z
39,284,287
<p>It seems you are still reducing by blocks, but after scaling <code>arr</code> with <code>area_cell</code>. So, you just need to perform element-wise multiplication between these two arrays and use the same <code>block_reduce</code> code on that product array, like so -</p> <pre><code>block_reduce(arr*area_cell, block_size=(2, 2), func=np.ma.mean) </code></pre> <p>Alternatively, we can simply use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html" rel="nofollow"><code>np.mean</code></a> after reshaping to a <code>4D</code> version of the product array, like so -</p> <pre><code>m,n = arr.shape out = (arr*area_cell).reshape(m//2,2,n//2,2).mean(axis=(1,3)) </code></pre> <p>Sample run -</p> <pre><code>In [21]: arr Out[21]: array([[ 0.25, 0.58, 0.69, 0.74], [ 0.49, 0.11, 0.1 , 0.41], [ 0.43, 0.76, 0.65, 0.79], [ 0.72, 0.97, 0.92, 0.09]]) In [22]: area_cell Out[22]: array([[ 0. , 1. , 1. , 0. ], [ 0. , 1. , 0. , 0.5], [ 0.2, 1. , 0.8, 0.8], [ 0. , 0. , 1. , 1. ]]) In [23]: block_reduce(arr*area_cell, block_size=(2, 2), func=np.ma.mean) Out[23]: array([[ 0.1725 , 0.22375], [ 0.2115 , 0.5405 ]]) In [24]: m,n = arr.shape In [25]: (arr*area_cell).reshape(m//2,2,n//2,2).mean(axis=(1,3)) Out[25]: array([[ 0.1725 , 0.22375], [ 0.2115 , 0.5405 ]]) </code></pre>
2
2016-09-02T04:25:03Z
[ "python", "numpy", "skimage" ]
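The reshape variant from the answer can be checked against the worked 4x4 example given in the question:

```python
import numpy as np

arr = np.array([[0.25, 0.58, 0.69, 0.74],
                [0.49, 0.11, 0.10, 0.41],
                [0.43, 0.76, 0.65, 0.79],
                [0.72, 0.97, 0.92, 0.09]])
area_cell = np.array([[0.0, 1.0, 1.0, 0.0],
                      [0.0, 1.0, 0.0, 0.5],
                      [0.2, 1.0, 0.8, 0.8],
                      [0.0, 0.0, 1.0, 1.0]])

m, n = arr.shape
# Scale by cell area, then average every 2x2 block via a 4D reshape
out = (arr * area_cell).reshape(m // 2, 2, n // 2, 2).mean(axis=(1, 3))
```

This reproduces the 2x2 result quoted in the question without needing skimage at all.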
TensorFlow: How to write multistep decay
39,275,641
<p>There is multistep decay in Caffe. It is calculated as <code>base_lr * gamma ^ (floor(step))</code> where <code>step</code> is incremented after each of your decay steps. For example with <code>[100, 200]</code> decay steps and <code>global step=101</code> I want get <code>base_lr * gamma ^ 1</code>, for <code>global step=201</code> and more I want get <code>base_lr * gamma ^ 2</code> and so on.</p> <p>I tried to implement it based on exponential decay sources but I can do nothing. Here is code of exponential decay (<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/learning_rate_decay.py#L27" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/learning_rate_decay.py#L27</a> ):</p> <pre><code>def exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None): with ops.name_scope(name, "ExponentialDecay", [learning_rate, global_step, decay_steps, decay_rate]) as name: learning_rate = ops.convert_to_tensor(learning_rate, name="learning_rate") dtype = learning_rate.dtype global_step = math_ops.cast(global_step, dtype) decay_steps = math_ops.cast(decay_steps, dtype) decay_rate = math_ops.cast(decay_rate, dtype) p = global_step / decay_steps if staircase: p = math_ops.floor(p) return math_ops.mul(learning_rate, math_ops.pow(decay_rate, p), name=name) </code></pre> <p>I must pass <code>decay_steps</code> as some sort of array - python array or Tensor. Also I must(?) pass <code>current_decay_step</code> (<code>step</code> in above formula).</p> <p><strong>First option:</strong> In pure python without tensors it is very simple:</p> <pre><code>decay_steps.append(global_step) p = sorted(decay_steps).index(global_step) # may be there must be `+1` or `-1`. I hope that main idea is clear </code></pre> <p>I cant' do it because there is no sort in TF. 
I don't know how many time it take to implement it.</p> <p><strong>Second option</strong>: something like code below. It doesn't work for many reasons. Firstly I don't know how to pass args to funtion in <code>tf.cond</code>. Secondly, it may not work even if I will pass args: <a href="http://stackoverflow.com/questions/34931121/can-cond-support-tf-ops-with-side-effects">Can cond support TF ops with side effects?</a></p> <pre><code>def new_decay_step(decay_steps): decay_steps = decay_steps[1:] current_decay_step.assign(current_decay_step + 1) return tf.no_op() tf.cond(tf.greater(tf.shape(decay_steps)[0], 0), tf.cond(tf.greater(global_step, decay_steps[0]), new_decay_step, tf.no_op()), tf.no_op()) p = current_decay_step </code></pre> <p><strong>Third option:</strong> It will not work because I can't get element with <code>tensor[another_tensor]</code>.</p> <pre><code> # if len(decay_steps) &gt; (current_step + 1): # if global_step &gt; decay_steps[current_step + 1]: # current_step += 1 current_decay_step = tf.cond(tf.greater(tf.shape(current_decay_step)[0], tf.add(current_decay_step,1)), tf.cond(tf.greater(global_step, decay_steps[tf.add(current_decay_step + 1]), tf.add(current_decay_step,1), tf.add(current_decay_step,0)), tf.add(current_decay_step, 0) </code></pre> <p>What can I do?</p> <p><strong>UPD:</strong> I almost can make it with second option. 
</p> <p>I can make </p> <pre><code> def nothing: return tf.no_op() tf.cond(tf.greater(global_step, decay_steps[0]), functools.partial(new_decay_step, decay_steps), nothing) </code></pre> <p>But for some reason inner <code>tf.cond</code> doesn't work</p> <p>For this code I get error <code>fn1 must be callable</code></p> <pre><code> def nothing: return tf.no_op() tf.cond(tf.greater(tf.shape(decay_steps)[0], 0), tf.cond(tf.greater(global_step, decay_steps[0]), functools.partial(new_decay_step, decay_steps), nothing), nothing) </code></pre> <p><strong>UPD2:</strong> Inner <code>tf.cond</code> will not work because they return tensor and args must be functions. </p> <p>I didn't check it but seems like it works (at least it doesn't crash with errors):</p> <pre><code> tf.cond(tf.logical_and(tf.greater(tf.shape(decay_steps)[0], 0), tf.greater(global_step, decay_steps[0])), functools.partial(new_decay_step, decay_steps), nothing) </code></pre> <p><strong>UPD3:</strong> I realized that code in <strong>UPD2</strong> wil not work because I can't change list inside the function.</p> <p>Also I don't know what parts of <code>tf.logical_and</code> are really executed.</p> <p>I made following code:</p> <pre><code>class ohmy: def __init__(self, decay_steps): self.decay_steps = decay_steps def multistep_decay(self, learning_rate, global_step, current_decay_step, decay_steps, decay_rate, staircase=False, name=None): learning_rate = tf.convert_to_tensor(learning_rate, name="learning_rate") dtype = learning_rate.dtype global_step = tf.cast(global_step, dtype) decay_rate = tf.cast(decay_rate, dtype) def new_step(): self.decay_steps = self.decay_steps[1:] current_decay_step.assign(current_decay_step + 1) return current_decay_step def curr_step(): return current_decay_step current_decay_step = tf.cond(tf.logical_and(tf.greater(tf.shape(self.decay_steps)[0], 0), tf.greater(global_step, self.decay_steps[0])), new_step, curr_step) a = tf.Print(global_step, [global_step], "global") b = 
tf.Print(self.decay_steps, [self.decay_steps], "decay_steps") c = tf.Print(current_decay_step, [current_decay_step], "step") with tf.control_dependencies([a, b, c, current_decay_step]): p = current_decay_step if staircase: p = tf.floor(p) return tf.mul(learning_rate, tf.pow(decay_rate, p), name=name) decay_steps = [3,4,5,6,7] decay_steps = tf.convert_to_tensor(decay_steps, dtype=tf.float32) current_decay_step = tf.Variable(0.0, trainable=False) global_step = tf.Variable(0, trainable=False) decay_rate = 0.5 c=ohmy(decay_steps) lr = ohmy.multistep_decay(c, 0.010, global_step, current_decay_step, decay_steps, decay_rate) #lr = tf.train.exponential_decay(0.001, global_step=global_step, decay_steps=2, decay_rate=0.5, staircase=True) tf.scalar_summary('learning_rate', lr) opt = tf.train.AdamOptimizer(lr) #...train loop and so on </code></pre> <p>It doesn't work at all. Here is output :</p> <pre><code>I tensorflow/core/kernels/logging_ops.cc:79] step[0] I tensorflow/core/kernels/logging_ops.cc:79] global[0] E tensorflow/core/client/tensor_c_api.cc:485] The tensor returned for MergeSummary/MergeSummary:0 was not valid. Traceback (most recent call last): File "flownet_new.py", line 528, in &lt;module&gt; summary_str = sess.run(summary_op) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 382, in run run_metadata_ptr) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 655, in _run feed_dict_string, options, run_metadata) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 723, in _do_run target_list, options, run_metadata) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 743, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors.InvalidArgumentError: The tensor returned for MergeSummary/MergeSummary:0 was not valid. </code></pre> <p>As you can see there is no output of decay steps. 
I can't even debug it!</p> <p>Now I definitely don't know how to make it with one function. Btw, either I am doing something wrong, or <code>tf.contrib.slim</code> doesn't work with learning rate decay.</p> <p>For now the simplest solution is to do what you want in the train loop, as <a href="http://stackoverflow.com/users/4385912/cleros">cleros</a> said.</p>
0
2016-09-01T15:31:40Z
39,278,807
<p>Use <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/train.html#exponential_decay" rel="nofollow"><code>tf.train.exponential_decay()</code></a>, it's exactly what you're looking for. The decayed learning rate is computed as follows:</p> <pre><code>decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps) </code></pre> <p>Note that <code>decay_steps</code> parameter is an integer (not an array nor tensor) holding the period of iterations in which the learning rate changes. In your example <code>decay_steps=100</code>.</p>
0
2016-09-01T18:46:23Z
[ "python", "tensorflow" ]
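The multistep schedule the question asks for is plain arithmetic; here is a framework-free sketch (the function name and the idea of driving it from the training loop are editorial, not from the posts — inside a TF graph the same piecewise logic could be expressed with a piecewise-constant schedule op, where available):

```python
import bisect

def multistep_lr(base_lr, gamma, boundaries, global_step):
    # Caffe-style multistep decay: the exponent is the number of
    # boundaries the current global_step has already passed.
    step = bisect.bisect_right(boundaries, global_step)
    return base_lr * gamma ** step
```

With boundaries [100, 200], step 50 keeps the base rate, step 101 applies gamma once, and step 201 applies it twice, matching the behaviour described in the question.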
Python Tools for Visual Studio and Unit Tests (PTVS)
39,275,712
<p>Hopefully someone can give me a hand/few pointers.</p> <p>So I am currently working on some python scripts and wanted to get some tests written.</p> <p>My environment is as follows: MS Visual Studio Community 2015, v.14 Update 3 PTVS v.2.2.4 (2.2.40623.00-14.0) Python 3.5 64-bit Environment</p> <p>I have some demo tests written in a test class, which have been appearing and disappearing in test explorer under their own will. Currently, I have none showing in Test Explorer, as per the screenshot below...</p> <p><a href="http://i.stack.imgur.com/BVEMF.png" rel="nofollow"><img src="http://i.stack.imgur.com/BVEMF.png" alt="Screenshot"></a> Several other people seem to be having issues, and the reasons vary from the test settings processor architecture selected, through to clearing the files in the temp folder etc.</p> <p>I have been pulling out my hair for a few days now, and am looking for your help, cos i'm now bald.</p> <p>I've tried removing PTVS and reinstalling, updating to latest, changing the test environment, adding and removing Nunit, incase that was conflicting, etc etc etc.</p> <p>Any help would be greatly appreciated.</p> <p>Just managed to get this error message to appear, but I do not think it's correct:</p> <p><a href="http://i.stack.imgur.com/ZLO6a.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZLO6a.png" alt="Test Error Msg"></a></p> <p>OK, so another update:</p> <p>I decided that I would download VS CE 2013, and then install PTVS 2.2.3.</p> <p>I opened the solution there, and the tests loaded instantly, and ran first time...</p> <p><a href="http://i.stack.imgur.com/I0Y8I.png" rel="nofollow"><img src="http://i.stack.imgur.com/I0Y8I.png" alt="enter image description here"></a></p> <p>I know that is not a solution to the problem, but at least I can now run my tests and continue working.</p> <p>Kindest Regards,</p>
0
2016-09-01T15:36:08Z
39,348,631
<p>Ok, so it seems that PTVS does not fully work with VS2015 Community Edition.</p> <p>You can run your scripts etc, but it does not integrate with the test explorer properly.</p> <p>You will need to download VS2013 CE, and PTVS 2.2.2. Then you can run the test explorer and click run all, which will find all your tests, and run properly. Hope this helps someone, as I spent days trying to get VS2015 and PTVS 2.2.4 working with no luck.</p>
0
2016-09-06T12:04:13Z
[ "python", "python-3.x", "visual-studio-2015", "ptvs" ]
Check if object attribute name appears in a string python
39,275,713
<p>I want to:</p> <ul> <li>check whether a string contains an object property</li> <li>if it does then access the attribute</li> </ul> <p>So for an object of class </p> <pre><code>class Person(object): name = "" age = 0 major = "" def __init__(self, name="", surname="", father="", age =0): self.name = name self.surname = surname self.father = father self.age = age self.identity = name +" "+ surname def __str__(self): return self.identity __repr__ = __str__ </code></pre> <p>and object</p> <pre><code>person = Person("Earl", "Martin", "Jason", 40) </code></pre> <p>I would like for string "What is the name" to return person.name (I already know which object the string is about)</p> <p>The most basic solution would be to do cases for each property being there but the actual code has quite a few and I am sure I don't manually have to write them out, I am just new to programming so I am not sure what syntax is used for this</p> <p>Any help appreciated </p>
1
2016-09-01T15:36:17Z
39,275,857
<p>You are looking for the functions <code>hasattr()</code> and <code>getattr()</code>.</p> <p>To check whether the attribute exists:</p> <pre><code>hasattr(Person(), 'string') </code></pre> <p>And to access the attribute:</p> <pre><code>getattr(Person(), 'string') </code></pre>
3
2016-09-01T15:43:50Z
[ "python", "string", "python-2.7", "class", "object" ]
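Applied to the Person class from the question, the hasattr/getattr pair looks like this; the word-scanning helper is illustrative, not part of the answer:

```python
import re

class Person(object):
    def __init__(self, name="", surname="", father="", age=0):
        self.name = name
        self.surname = surname
        self.father = father
        self.age = age

def answer(person, question):
    # Return the value of the first attribute whose name appears
    # as a word in the question string, or None if there is none.
    for word in re.findall(r'\w+', question.lower()):
        if hasattr(person, word):
            return getattr(person, word)
    return None

person = Person("Earl", "Martin", "Jason", 40)
```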
Check if object attribute name appears in a string python
39,275,713
<p>I want to:</p> <ul> <li>check whether a string contains an object property</li> <li>if it does then access the attribute</li> </ul> <p>So for an object of class </p> <pre><code>class Person(object): name = "" age = 0 major = "" def __init__(self, name="", surname="", father="", age =0): self.name = name self.surname = surname self.father = father self.age = age self.identity = name +" "+ surname def __str__(self): return self.identity __repr__ = __str__ </code></pre> <p>and object</p> <pre><code>person = Person("Earl", "Martin", "Jason", 40) </code></pre> <p>I would like for string "What is the name" to return person.name (I already know which object the string is about)</p> <p>The most basic solution would be to do cases for each property being there but the actual code has quite a few and I am sure I don't manually have to write them out, I am just new to programming so I am not sure what syntax is used for this</p> <p>Any help appreciated </p>
1
2016-09-01T15:36:17Z
39,276,801
<p>As others have noted, <a href="https://docs.python.org/3/library/functions.html#getattr" rel="nofollow"><code>getattr</code></a> is generally useful.</p> <p><a href="https://docs.python.org/3/library/functions.html#hasattr" rel="nofollow"><code>hasattr</code></a> is of lesser utility; internally, it's basically a <code>getattr</code> call in a <code>try</code>/<code>except AttributeError:</code> block (if <code>AttributeError</code> occurs, it returns <code>False</code>, no exception means <code>True</code>), so if you're considering code like:</p> <pre><code>if hasattr(myobj, attrname): attr = getattr(myobj, attrname) ... </code></pre> <p>just use:</p> <pre><code>try: attr = getattr(myobj, attrname) except AttributeError: pass else: ... </code></pre> <p>to avoid doubling the number of LEG<strong>B</strong> lookups, function calls and attribute lookups.</p> <p>Alternatively, for repeatedly pulling named attribute(s), <a href="https://docs.python.org/3/library/operator.html#operator.attrgetter" rel="nofollow"><code>operator.attrgetter</code></a> basically lets you make an optimized version of <code>getattr</code> that pre-binds the attribute name to lookup (making it ideal for use with stuff like the <code>map</code> and <code>filter</code> functions, as it makes them more efficient than their equivalent listcomps/genexprs).</p> <p>On top of those, depending on what your goal is, <a href="https://docs.python.org/3/library/functions.html#dir" rel="nofollow">the <code>dir</code></a> and (slightly less reliably, due to issues with classes <a href="https://docs.python.org/3/reference/datamodel.html?highlight=__slots__#object.__slots__" rel="nofollow">that use <code>__slots__</code> to define a known set of variables to reduce memory usage and prevent auto-vivification</a>) <a href="https://docs.python.org/3/library/functions.html#vars" rel="nofollow"><code>vars</code> function</a>s may be useful.</p> <p>For example, in your example case of pulling any attributes 
corresponding to a word from a string, you could do a bulk identification of legal attribute names using <code>vars()</code>/<code>dir()</code> and your choice of <code>filter</code> or <code>set</code> operations (or a mix) depending on the importance of order, uniqueness, etc.:</p> <pre><code>from future_builtins import filter # Only on Py2, not Py3 import operator import re def query_obj(obj, querystr): # Extract list of legal attribute names from string words = re.findall(r'\w+', querystr) # Reduce to names present on object's __dict__; no need to construct temporaries attrnames = filter(vars(obj).__contains__, words) # Alternate if __slots__ might be an issue (temp list &amp; frozenset): attrnames = filter(frozenset(dir(obj)).__contains__, words) # Or combine the two to be sure (on Py3, use .keys() instead of .viewkeys()) # (temp list and set): attrnames = filter((vars(obj).viewkeys() | dir(obj)).__contains__, words) # Convenient way to get all names discovered at once; returns single object # for single attr, tuple of objects for multiple attrs: return operator.attrgetter(*attrnames)(obj) # If you want a tuple unconditionally, use this instead: return tuple(getattr(obj, name) for name in attrnames) # Or to only return the first attribute encountered, raising StopIteration # if no attributes are found: return next(getattr(obj, name) for name in attrnames) </code></pre> <p>Then usage is:</p> <pre><code>&gt;&gt;&gt; person = Person("Earl", "Martin", "Jason", 40) &gt;&gt;&gt; query_obj(person, "What is the name?") 'Earl' # Would be ('Earl',) in unconditional tuple case &gt;&gt;&gt; query_obj(person, "What is the name and surname?") ('Earl', 'Martin') # Would be 'Earl' in single return case </code></pre>
1
2016-09-01T16:41:23Z
[ "python", "string", "python-2.7", "class", "object" ]
TypeError: in method 'new_Dialog', expected argument 1 of type 'wxWindow *'
39,275,718
<p>I am trying to build a file browser that uses SSH to browse files in a remote location. My GUI code keeps throwing one error or another at me so I can't even test the SSH portion (not included in below code). My current error seems to be a problem with either my class constructor or the way I call it [SSHFileDialog]. I'd greatly appreciate it if someone could point out where I've gone wrong here. My knowledge of Python is self-taught and I've only recently started coding GUIs with wxPython.</p> <p>Code:</p> <pre><code>import wx,os class SSHFileDialog(wx.Dialog): #, hostname = 'DefaultHost', username = 'DefaultUser', password = 'Password' def __init__(self, parent): super(SSHFileDialog, self).__init__(self, parent, -1, style = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER) # self.hostname = kwargs['hostname'] # self.username = kwargs['username'] # self.password = kwargs['password'] hostname = "Host" username = "User" self.SetMinSize((512,512)) self.Centre() self.SetTitle("Remote File Browser: Connection established to %s as %s"% (hostname,username)) #print password self.InitUI() def InitUI(self): currentDir = os.getcwd() #For Testing fileAttr = [("Test","Test","Test","Test")] #Need to get file attributes from all files/folders in directory: Name, Type, Size, ModDate pnl = wx.Panel(self) vbox = wx.BoxSizer(wx.VERTICAL) h_Dir_Box = wx.BoxSizer(wx.HORIZONTAL) st_Dir = wx.StaticText(self, label = currentDir,style=wx.ALIGN_LEFT) btn_Back = wx.Button(self,label = 'Back') h_Dir_Box.Add(st_Dir,flag=wx.ALL, border = 5) h_Dir_Box.Add(btn_Back) stbox = wx.StaticBox(pnl, wx.ID_ANY, "Directory Contents") stboxS = wx.StaticBoxSizer(stbox, orient = wx.HORIZONTAL) list = wx.ListCtrl(stbox, style = wx.LC_REPORT|wx.LC_VRULES|wx.LC_SINGLE_SEL) list.InsertColumn(0,'Filename',width = 175) list.InsertColumn(1,'Type', width = 100) list.InsertColumn(2,'Size', width = 75) list.InsertColumn(3,'Date Modified',wx.LIST_FORMAT_RIGHT, 90) for i in fileAttr: index = 
list.InsertStringItem(len(fileAttr)+10,i[0]) list.SetStringItem(index,1,i[1]) list.SetStringItem(index,2,i[2]) list.SetStringItem(index,3,i[3]) pnl.SetSizer(stboxS) h_Open_Box = wx.BoxSizer(wx.HORIZONTAL) btn_Open = wx.Button(self, label = 'Open') btn_Can = wx.Button(self, label = 'Cancel') h_Open_Box.Add(btn_Open) h_Open_Box.Add(btn_Can,flag = wx.LEFT,border=10) vbox.Add(h_Dir_Box, flag=wx.ALL|wx.EXPAND, border = 10) vbox.Add(pnl, proportion =1 , flag=wx.ALL|wx.EXPAND|wx.ALIGN_CENTER, border = 20) vbox.Add(h_Open_Box, flag = wx.ALIGN_RIGHT) self.SetSizer(vbox) btn_Open.Bind(wx.EVT_BUTTON, self.OnClose) btn_Can.Bind(wx.EVT_BUTTON, self.OnClose) def OnClose(self,e): self.Destroy() class TestGui(wx.Frame): def __init__(self,*args,**kwargs): super(TestGui,self).__init__(*args,**kwargs) self.InitUI() def InitUI(self): menubar = wx.MenuBar() fileMenu = wx.Menu() openFileItem = fileMenu.Append(wx.ID_OPEN,'&amp;Open') fileMenu.AppendSeparator() quitApp = fileMenu.Append(wx.ID_EXIT, "&amp;Quit\tCtrl+Q") menubar.Append(fileMenu, '&amp;File') self.SetMenuBar(menubar) self.Bind(wx.EVT_MENU,self.OnQuit,quitApp) self.Bind(wx.EVT_MENU,self.OnOpen,openFileItem) self.SetSize((500,500)) self.SetTitle('File Manager example') self.Centre() self.Show(True) def OnQuit(self,e): self.Close() def OnOpen(self,e): args = {'hostname':'Host','username':'user','password':'password'} fileDialog = SSHFileDialog(None) fileDialog.ShowModal() fileDialog.Destroy() def main(): app = wx.App() TestGui(None) app.MainLoop() if __name__ == '__main__': main() </code></pre> <p>Traceback:</p> <pre><code>Traceback (most recent call last): File "C:\Users\matthersa\Desktop\XML-Python Testing\SSHFileDialog.py", line 103, in OnOpen fileDialog = SSHFileDialog(None) File "C:\Users\matthersa\Desktop\XML-Python Testing\SSHFileDialog.py", line 6, in __init__ super(SSHFileDialog, self).__init__(self, parent, -1, style = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER) File 
"C:\Python27\lib\site-packages\wx-3.0-msw\wx\_windows.py", line 734, in __init__ _windows_.Dialog_swiginit(self,_windows_.new_Dialog(*args, **kwargs)) TypeError: in method 'new_Dialog', expected argument 1 of type 'wxWindow *' </code></pre>
0
2016-09-01T15:36:25Z
39,275,778
<p>You are likely working off an old tutorial.</p> <pre><code>wx.Dialog.__init__(self,*args,**kwargs) #here you need self, as this does not pass self implicitly </code></pre> <p>whereas</p> <pre><code>super(MyDialogClass,self).__init__(*args,**kwargs) # here self is passed implicitly (eg you do not pass self as the first arg) </code></pre> <p>However, you should be a little careful with <code>super</code> and wxPython; IIRC there are some base classes that do not inherit from <code>object</code>, which causes the MRO to break (it's probably fixed by now).</p> <p><strong>TL;DR</strong></p> <p>change</p> <pre><code>super(SSHFileDialog, self).__init__(self, parent, -1, style = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER) </code></pre> <p>to</p> <pre><code>super(SSHFileDialog, self).__init__( parent, -1, style = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER) </code></pre> <hr> <p>To answer your other question from the comments (note the fixed <code>super</code> call here too):</p> <pre><code>class SSHFileDialog(wx.Dialog): #, hostname = 'DefaultHost', username = 'DefaultUser', password = 'Password' def __init__(self, parent,host,username,password): self.ssh_thing = SSHClient(host,username,password) super(SSHFileDialog, self).__init__(parent, -1, style = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER) </code></pre>
1
2016-09-01T15:39:41Z
[ "python", "user-interface", "inheritance", "wxpython" ]
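The extra-self mistake behind this traceback can be reproduced without wxPython; any base class with a fixed-arity __init__ fails the same way (class names here are illustrative):

```python
class Window(object):
    def __init__(self, parent):
        self.parent = parent

class GoodDialog(Window):
    def __init__(self, parent):
        # super() already binds self -- do not pass it again
        super(GoodDialog, self).__init__(parent)

class BadDialog(Window):
    def __init__(self, parent):
        # Passing self explicitly hands Window.__init__ three arguments
        super(BadDialog, self).__init__(self, parent)

dlg = GoodDialog(None)
try:
    BadDialog(None)
    raised = False
except TypeError:
    raised = True
```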
TypeError: in method 'new_Dialog', expected argument 1 of type 'wxWindow *'
39,275,718
<p>I am trying to build a file browser that uses SSH to browse files in a remote location. My GUI code keeps throwing one error or another at me so I can't even test the SSH portion (not included in below code). My current error seems to be a problem with either my class constructor or the way I call it [SSHFileDialog]. I'd greatly appreciate it if someone could point out where I've gone wrong here. My knowledge of Python is self-taught and I've only recently started coding GUIs with wxPython.</p> <p>Code:</p> <pre><code>import wx,os class SSHFileDialog(wx.Dialog): #, hostname = 'DefaultHost', username = 'DefaultUser', password = 'Password' def __init__(self, parent): super(SSHFileDialog, self).__init__(self, parent, -1, style = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER) # self.hostname = kwargs['hostname'] # self.username = kwargs['username'] # self.password = kwargs['password'] hostname = "Host" username = "User" self.SetMinSize((512,512)) self.Centre() self.SetTitle("Remote File Browser: Connection established to %s as %s"% (hostname,username)) #print password self.InitUI() def InitUI(self): currentDir = os.getcwd() #For Testing fileAttr = [("Test","Test","Test","Test")] #Need to get file attributes from all files/folders in directory: Name, Type, Size, ModDate pnl = wx.Panel(self) vbox = wx.BoxSizer(wx.VERTICAL) h_Dir_Box = wx.BoxSizer(wx.HORIZONTAL) st_Dir = wx.StaticText(self, label = currentDir,style=wx.ALIGN_LEFT) btn_Back = wx.Button(self,label = 'Back') h_Dir_Box.Add(st_Dir,flag=wx.ALL, border = 5) h_Dir_Box.Add(btn_Back) stbox = wx.StaticBox(pnl, wx.ID_ANY, "Directory Contents") stboxS = wx.StaticBoxSizer(stbox, orient = wx.HORIZONTAL) list = wx.ListCtrl(stbox, style = wx.LC_REPORT|wx.LC_VRULES|wx.LC_SINGLE_SEL) list.InsertColumn(0,'Filename',width = 175) list.InsertColumn(1,'Type', width = 100) list.InsertColumn(2,'Size', width = 75) list.InsertColumn(3,'Date Modified',wx.LIST_FORMAT_RIGHT, 90) for i in fileAttr: index = 
list.InsertStringItem(len(fileAttr)+10,i[0]) list.SetStringItem(index,1,i[1]) list.SetStringItem(index,2,i[2]) list.SetStringItem(index,3,i[3]) pnl.SetSizer(stboxS) h_Open_Box = wx.BoxSizer(wx.HORIZONTAL) btn_Open = wx.Button(self, label = 'Open') btn_Can = wx.Button(self, label = 'Cancel') h_Open_Box.Add(btn_Open) h_Open_Box.Add(btn_Can,flag = wx.LEFT,border=10) vbox.Add(h_Dir_Box, flag=wx.ALL|wx.EXPAND, border = 10) vbox.Add(pnl, proportion =1 , flag=wx.ALL|wx.EXPAND|wx.ALIGN_CENTER, border = 20) vbox.Add(h_Open_Box, flag = wx.ALIGN_RIGHT) self.SetSizer(vbox) btn_Open.Bind(wx.EVT_BUTTON, self.OnClose) btn_Can.Bind(wx.EVT_BUTTON, self.OnClose) def OnClose(self,e): self.Destroy() class TestGui(wx.Frame): def __init__(self,*args,**kwargs): super(TestGui,self).__init__(*args,**kwargs) self.InitUI() def InitUI(self): menubar = wx.MenuBar() fileMenu = wx.Menu() openFileItem = fileMenu.Append(wx.ID_OPEN,'&amp;Open') fileMenu.AppendSeparator() quitApp = fileMenu.Append(wx.ID_EXIT, "&amp;Quit\tCtrl+Q") menubar.Append(fileMenu, '&amp;File') self.SetMenuBar(menubar) self.Bind(wx.EVT_MENU,self.OnQuit,quitApp) self.Bind(wx.EVT_MENU,self.OnOpen,openFileItem) self.SetSize((500,500)) self.SetTitle('File Manager example') self.Centre() self.Show(True) def OnQuit(self,e): self.Close() def OnOpen(self,e): args = {'hostname':'Host','username':'user','password':'password'} fileDialog = SSHFileDialog(None) fileDialog.ShowModal() fileDialog.Destroy() def main(): app = wx.App() TestGui(None) app.MainLoop() if __name__ == '__main__': main() </code></pre> <p>Traceback:</p> <pre><code>Traceback (most recent call last): File "C:\Users\matthersa\Desktop\XML-Python Testing\SSHFileDialog.py", line 103, in OnOpen fileDialog = SSHFileDialog(None) File "C:\Users\matthersa\Desktop\XML-Python Testing\SSHFileDialog.py", line 6, in __init__ super(SSHFileDialog, self).__init__(self, parent, -1, style = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER) File 
"C:\Python27\lib\site-packages\wx-3.0-msw\wx\_windows.py", line 734, in __init__ _windows_.Dialog_swiginit(self,_windows_.new_Dialog(*args, **kwargs)) TypeError: in method 'new_Dialog', expected argument 1 of type 'wxWindow *' </code></pre>
0
2016-09-01T15:36:25Z
39,277,564
<p>You are passing an extra <code>self</code> to <code>wx.Dialog.__init__</code>. The call to <code>super</code> is essentially creating the bound method for you so you don't need to pass <code>self</code> again.</p> <pre><code>super(SSHFileDialog, self).__init__(parent, -1, style = wx.DEFAULT_DIALOG_STYLE|wx.RESIZE_BORDER) </code></pre>
0
2016-09-01T17:26:31Z
[ "python", "user-interface", "inheritance", "wxpython" ]
Python Check for decimals
39,275,730
<p>I'm making a program which divides a lot of numbers and I want to check if the number gets decimals or not. I also want it to print those decimals. Example:</p> <pre><code>foo = 7/3 if foo has a 3 in the decimals: (Just an example of what I want to do there) print("It works!") elif foo has no decimals: (another example) print("it has no decimals") </code></pre> <p>EDIT: Ok, so since "check which decimal is afterwards" brought some confusion, let me explain. I want to be able to check IF a number has decimals. For example, 7/3 (foo) gives me decimals, but I want Python to tell me that without me having to do the math. Forget the "which decimal" part.</p>
1
2016-09-01T15:36:50Z
39,275,765
<p>Based on the wording of your question I am assuming that you are looking to see if a certain number comes after the decimal point? If that is the case...</p> <p>What you can do is split <code>foo</code> using <code>.</code> as the delimiter and check to see if the number (in this case you're looking for 3) is in the portion after the <code>.</code></p> <pre><code>foo = 7/float(3) foo = str(foo) after_decimal = foo.split('.')[1] if '3' in after_decimal: print 'Yes!' else: print 'No!' </code></pre> <p>In this case the output is: <code>Yes!</code></p>
0
2016-09-01T15:39:15Z
[ "python" ]
Python Check for decimals
39,275,730
<p>I'm making a program which divides a lot of numbers and I want to check if the number gets decimals or not. I also want it to print those decimals. Example:</p> <pre><code>foo = 7/3 if foo has a 3 in the decimals: (Just an example of what I want to do there) print("It works!") elif foo has no decimals: (another example) print("it has no decimals") </code></pre> <p>EDIT: Ok, so since "check which decimal is afterwards" brought some confusion, let me explain. I want to be able to check IF a number has decimals. For example, 7/3 (foo) gives me decimals, but I want Python to tell me that without me having to do the math. Forget the "which decimal" part.</p>
1
2016-09-01T15:36:50Z
39,275,808
<p>If you just want to test whether a division has decimals, just check the modulo:</p> <pre><code>a, b = 7, 3 # the two numbers being divided (example values) foo = a % b if foo != 0: # Then a / b has a fractional part pass else: # Then a / b does NOT have a fractional part pass </code></pre> <p>However (since your question is a bit unclear) if you want to split the integer and decimal parts then use the <a href="https://docs.python.org/2/library/math.html#math.modf" rel="nofollow"><code>math.modf()</code></a> function:</p> <pre><code>import math x = 1234.5678 math.modf(x) # (0.5678000000000338, 1234.0) </code></pre>
2
2016-09-01T15:41:02Z
[ "python" ]
Python Check for decimals
39,275,730
<p>I'm making a program which divides a lot of numbers and I want to check if the number gets decimals or not. I also want it to print those decimals. Example:</p> <pre><code>foo = 7/3 if foo has a 3 in the decimals: (Just an example of what I want to do there) print("It works!") elif foo has no decimals: (another example) print("it has no decimals") </code></pre> <p>EDIT: Ok, so since "check which decimal is afterwards" brought some confusion, let me explain. I want to be able to check IF a number has decimals. For example, 7/3 (foo) gives me decimals, but I want Python to tell me that without me having to do the math. Forget the "which decimal" part.</p>
1
2016-09-01T15:36:50Z
39,275,846
<p>Your question is slightly unclear. However, this bit of code might get you in the right direction. Just call <code>Decimal_Checker</code> and input the numbers you want.</p> <pre><code>def Decimal_Checker(x,y): if x % y != 0: print("it has decimals") else: print("it does not have decimals") Decimal_Checker(3,2) </code></pre>
0
2016-09-01T15:42:32Z
[ "python" ]
Change components properties in large scale
39,275,731
<p>I have a reasonable number of components attached to a window. I want to change some properties of these components when a button is pressed. But doing this one component at a time is a boring job and will involve many lines of code. Is it possible to make a component listen to another component's signal, so it performs a task when the signal is emitted? That is, something like the observer pattern. If this is not possible, what would be the best way to do a mass change of these components?</p> <p>Example:</p> <p><a href="http://i.stack.imgur.com/mKkp3.png" rel="nofollow"><img src="http://i.stack.imgur.com/mKkp3.png" alt="enter image description here"></a></p> <p><a href="https://paste.gnome.org/psstffmkw" rel="nofollow">Code UI</a></p> <p>And what if I want to reset all the other components to the initial state when the "Reset" button is triggered? Creating a handler for the button signal and changing the components (Entries, Switches, CheckBoxes, ...) one at a time is very tiring.</p>
0
2016-09-01T15:36:50Z
39,286,333
<p>This may not be the answer you wanted but...</p> <p>You are probably optimizing in the wrong place. The window in your example takes about 6 lines of very straight-forward code to 'reset'. This is insignificant compared to the benefits of the solution:</p> <ul> <li>the code is very easy to understand later on</li> <li>it's also easy to modify later on (when e.g. a default value changes)</li> </ul> <p>These are far more important things than the two minutes you spend on writing the code originally -- and this is unlikely to change even if you make the form more complex (although I would question the basic design if you end up with a screen full of GtkEntries).</p>
0
2016-09-02T07:10:41Z
[ "python", "gtk", "gtk3" ]
QTreeView checkbox has empty focus
39,275,876
<p>Why does my QTreeView, using PySide, have this little small empty boxed area that the user can click and get a dotted focus box around? How can I remove it? I only want a simple checkbox in the first column.</p> <p><a href="http://i.stack.imgur.com/6nsWd.png" rel="nofollow"><img src="http://i.stack.imgur.com/6nsWd.png" alt="enter image description here"></a></p> <p>Code:</p> <pre><code>import sys from PySide import QtGui, QtCore class SortModel(QtGui.QSortFilterProxyModel): def __init__(self, *args, **kwargs): super(SortModel, self).__init__(*args, **kwargs) def lessThan(self, left, right): leftData = self.sourceModel().data(left) rightData = self.sourceModel().data(right) if leftData: leftData = leftData.lower() if rightData: rightData = rightData.lower() print('L:', leftData, 'R:', rightData) return leftData &lt; rightData class Browser(QtGui.QDialog): def __init__(self, parent=None): super(Browser, self).__init__(parent) self.initUI() def initUI(self): self.resize(200, 300) self.setWindowTitle('Assets') self.setModal(True) self.results = "" self.uiItems = QtGui.QTreeView() self.uiItems.setAlternatingRowColors(True) self.uiItems.setSortingEnabled(True) self.uiItems.sortByColumn(1, QtCore.Qt.AscendingOrder) self.uiItems.setEditTriggers(QtGui.QAbstractItemView.NoEditTriggers) self.uiItems.header().setResizeMode(QtGui.QHeaderView.ResizeToContents) self.uiItems.setSelectionBehavior(QtGui.QAbstractItemView.SelectRows) self.uiItems.setSelectionMode(QtGui.QAbstractItemView.ExtendedSelection) self._model = self.create_model(self) self._spmodel = SortModel(self) self._spmodel.setSourceModel(self._model) self._spmodel.setDynamicSortFilter(True) self.uiItems.setModel(self._spmodel) grid = QtGui.QGridLayout() grid.setContentsMargins(0, 0, 0, 0) grid.addWidget(self.uiItems, 0, 0) self.setLayout(grid) self.setLayout(grid) self.update_asset_model() self.uiItems.doubleClicked.connect(self.doubleClickedItem) self.show() def doubleClickedItem(self, idx): if not idx.isValid(): 
return print idx.parent(), idx.parent().isValid() model = idx.model() print model.index(idx.row(), 0, parent=idx.parent()).data() def create_model(self, parent): model = QtGui.QStandardItemModel() model.setHorizontalHeaderLabels([]) return model def add_row_item(self, model, name, _type): model.insertRow(0) model.setData(model.index(0, 0), '') model.setData(model.index(0, 0), QtCore.Qt.Checked, role = QtCore.Qt.CheckStateRole ) model.setData(model.index(0, 1), name) model.setData(model.index(0, 2), _type) model.setData(model.index(0, 2), QtGui.QBrush(QtGui.QColor(200, 140, 70, 255)), role=QtCore.Qt.ForegroundRole ) def update_asset_model(self): model = self.uiItems.model().sourceModel() model.clear() model.setHorizontalHeaderLabels(['', 'Name', "Output"]) items = { 'Doug' : "C:/fire/cache_.jpeg", 'Mike' : "C:/smoke/cache_.tga", 'Kevin' : "C:/water/cache_.tif", 'Curt' : "C:/steam/cache_.jpg", 'Corey' : "C:/blood/cache_.png" } for n in items.keys(): self.add_row_item(model, n, items[n]) def showEvent(self, event): geom = self.frameGeometry() geom.moveCenter(QtGui.QCursor.pos()) self.setGeometry(geom) super(Browser, self).showEvent(event) def keyPressEvent(self, event): if event.key() == QtCore.Qt.Key_Escape: # self.hide() self.close() event.accept() else: super(Browser, self).keyPressEvent(event) def main(): app = QtGui.QApplication(sys.argv) ex = Browser() sys.exit(app.exec_()) if __name__ == '__main__': main() </code></pre>
1
2016-09-01T15:44:49Z
39,278,382
<p>The first column shows a focus-rectangle because you set its text to an empty string. So if you don't want that to happen, don't set any text at all.</p> <p>Alternatively, you could make the view show a focus-rectangle for the whole row, rather than separately for each column:</p> <pre><code> self.uiItems.setAllColumnsShowFocus(True) </code></pre>
1
2016-09-01T18:16:55Z
[ "python", "checkbox", "pyside", "qtreeview" ]
Setting label_suffix for a Django model formset
39,275,925
<p>I have a Product model that I use to create ProductFormSet. How do I specify the label_suffix to be something other than the default colon? I want it to be blank. Solutions I've seen only seem to apply when instantiating a form - <a href="http://stackoverflow.com/questions/23973954/adding-a-label-suffix-to-modelform">here</a>.</p> <pre><code>ProductFormSet = modelformset_factory(Product, exclude=('abc',)) products = Product.objects.order_by('product_name') pformset = ProductFormSet(queryset=products) </code></pre>
2
2016-09-01T15:47:38Z
39,276,286
<p>In Django 1.9+, you can use the <a href="https://docs.djangoproject.com/en/1.10/topics/forms/formsets/#passing-custom-parameters-to-formset-forms" rel="nofollow"><code>form_kwargs</code></a> option.</p> <pre><code>ProductFormSet = modelformset_factory(Product, exclude=('abc',)) products = Product.objects.order_by('product_name') pformset = ProductFormSet(queryset=products, form_kwargs={'label_suffix': ''}) </code></pre> <p>In earlier Django versions, you could define a ProductForm class that sets the <code>label_suffix</code> to blank in the <code>__init__</code> method, and then pass that form class to <code>modelformset_factory</code>.</p> <pre><code>class ProductForm(forms.ModelForm): ... def __init__(self, *args, **kwargs): super(ProductForm, self).__init__(*args, **kwargs) self.label_suffix = '' ProductFormSet = modelformset_factory(Product, form=ProductForm, exclude=('abc',)) </code></pre>
1
2016-09-01T16:08:28Z
[ "python", "django" ]
Python detect linux shutdown and run a command before shutting down
39,275,948
<p>Is it possible to detect and intercept the Linux (Ubuntu 16.04) shutdown signal (e.g. when the power button is clicked or the battery runs out)? I have a Python application that is always recording a video and I want to detect such a signal so I can close the recording properly before OS shutdown.</p>
2
2016-09-01T15:49:07Z
39,275,997
<p>When Linux is shut down, all processes receive <code>SIGTERM</code> and if they won't terminate after some timeout they are killed with <code>SIGKILL</code>. You can implement a signal handler to properly shut down your application using the <a href="https://docs.python.org/2.7/library/signal.html" rel="nofollow"><code>signal</code></a> module. <code>systemd</code> (as opposed to <code>upstart</code> in earlier Ubuntu versions) additionally sends <code>SIGHUP</code> on shutdown.</p> <p>To verify that this actually works, I tried the following script on two Ubuntu VMs (12.04 and 16.04). The system waits for 10s (12.04/upstart) or 90s (16.04/systemd) before issuing <code>SIGKILL</code>.</p> <p>The script ignores <code>SIGHUP</code> (which would otherwise also kill the process ungracefully) and will continuously print the time since the <code>SIGTERM</code> signal has been received to a text file.</p> <p><strong>Note</strong> I used <code>disown</code> (a built-in bash command) to detach the process from the terminal.</p> <pre><code>python signaltest.py &amp; disown </code></pre> <p><strong>signaltest.py</strong></p> <pre><code>import signal import time stopped = False out = open('log.txt', 'w') def stop(sig, frame): global stopped stopped = True out.write('caught SIGTERM\n') out.flush() def ignore(sig, frame): out.write('ignoring signal %d\n' % sig) out.flush() signal.signal(signal.SIGTERM, stop) signal.signal(signal.SIGHUP, ignore) while not stopped: out.write('running\n') out.flush() time.sleep(1) stop_time = time.time() while True: out.write('%.4fs after stop\n' % (time.time() - stop_time)) out.flush() time.sleep(0.1) </code></pre> <p>The last line printed into <code>log.txt</code> was:</p> <pre><code>10.1990s after stop </code></pre> <p>for 12.04 and</p> <pre><code>90.2448s after stop </code></pre> <p>for 16.04.</p>
1
2016-09-01T15:51:59Z
[ "python", "linux", "python-2.7", "ubuntu" ]
Python detect linux shutdown and run a command before shutting down
39,275,948
<p>Is it possible to detect and intercept the Linux (Ubuntu 16.04) shutdown signal (e.g. when the power button is clicked or the battery runs out)? I have a Python application that is always recording a video and I want to detect such a signal so I can close the recording properly before OS shutdown.</p>
2
2016-09-01T15:49:07Z
39,276,283
<p>Look into <a href="http://askubuntu.com/questions/1175/execute-script-before-shutting-down">this</a>. Basically, put a script in /etc/rc0.d/ with the right name and execute a call to your Python script.</p>
0
2016-09-01T16:08:09Z
[ "python", "linux", "python-2.7", "ubuntu" ]
Accessing the most recent line in a continuously updating file using python
39,276,062
<p>I am reading data from a file 'aisha.txt'. The data is read line by line and the file is continuously updating; now I want to access the most recent line at the end of the file. How can I do that?</p> <p>The code used for writing to the file is: </p> <pre><code>import time i = 0 a = open('c:/test/aisha.txt','w') while(1): a.write(str(i)) a.write("\n") i+=1 a.close() </code></pre>
0
2016-09-01T15:55:33Z
39,276,656
<p>To read the last line in a file, it is easiest if you know the maximum length of any line in the file. Then you can seek back that number of characters from the end of the file and read forward. The last line you read before end of file is the one you want.</p> <pre><code>import os MAXLEN = 132 file = open('testfile','r') try: file.seek(-MAXLEN, os.SEEK_END) except IOError: pass last = "No lines in file" while True: s = file.readline() if len(s) == 0: break last = s file.close() print last </code></pre> <p>Note that if you are running this program on Windows, and the original file is still open, the program will fail because on Windows a file opened for write is not readable until it is closed.</p>
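For completeness, a shorter sketch of the same task using `collections.deque`, which keeps only the final line without needing to know a maximum line length. It still reads the whole file once, so the seek-based approach stays faster on very large files:

```python
from collections import deque

def last_line(path):
    """Return the last line of a file, or a placeholder if the file is empty."""
    with open(path, 'r') as f:
        # A deque with maxlen=1 discards every line except the final one.
        tail = deque(f, maxlen=1)
    return tail[0].rstrip('\n') if tail else "No lines in file"
```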
0
2016-09-01T16:32:31Z
[ "python", "file" ]
Creating an MD5 Hash of A ZipFile
39,276,248
<p>I want to create an MD5 hash of a <code>ZipFile</code>, not of one of the files inside it. However, <code>ZipFile</code> objects aren't easily convertible to streams.</p> <pre><code>from hashlib import md5 from zipfile import ZipFile zipped = ZipFile(r'/Foo/Bar/Filename.zip') hasher = md5() hasher.update(zipped) return hasher.hexdigest() </code></pre> <p>The above code generates the error: <code>TypeError: must be convertible to a buffer, not ZipFile</code>.</p> <p>Is there a straightforward way to turn a <code>ZipFile</code> into a stream?</p> <p>There are no security issues here; I just need a quick and easy way to determine if I've seen a file before. <code>hash(zipped)</code> works fine, but I'd like something a little more robust if possible.</p>
0
2016-09-01T16:05:48Z
39,276,476
<p>This function should return the MD5 hash of any file, provided its path (requires the <code>pycrypto</code> module):</p> <pre><code>from Crypto.Hash import MD5 def get_MD5(file_path): chunk_size = 8192 h = MD5.new() with open(file_path, 'rb') as f: while True: chunk = f.read(chunk_size) if len(chunk): h.update(chunk) else: break return h.hexdigest() print get_MD5('pics.zip') # example </code></pre> <p><strong>output:</strong></p> <pre><code>6a690fa3e5b34e30be0e7f4216544365 </code></pre> <p><a href="https://pypi.python.org/pypi/pycrypto" rel="nofollow">Info on pycrypto</a></p>
3
2016-09-01T16:20:52Z
[ "python" ]
Creating an MD5 Hash of A ZipFile
39,276,248
<p>I want to create an MD5 hash of a <code>ZipFile</code>, not of one of the files inside it. However, <code>ZipFile</code> objects aren't easily convertible to streams.</p> <pre><code>from hashlib import md5 from zipfile import ZipFile zipped = ZipFile(r'/Foo/Bar/Filename.zip') hasher = md5() hasher.update(zipped) return hasher.hexdigest() </code></pre> <p>The above code generates the error: <code>TypeError: must be convertible to a buffer, not ZipFile</code>.</p> <p>Is there a straightforward way to turn a <code>ZipFile</code> into a stream?</p> <p>There are no security issues here; I just need a quick and easy way to determine if I've seen a file before. <code>hash(zipped)</code> works fine, but I'd like something a little more robust if possible.</p>
0
2016-09-01T16:05:48Z
39,276,477
<p>Just open the ZipFile as a regular file. Following code works on my machine.</p> <pre><code>from hashlib import md5 m = md5() with open("/Foo/Bar/Filename.zip", "rb") as f: data = f.read() #read file in chunk and call update on each chunk if file is large. m.update(data) print m.hexdigest() </code></pre>
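The chunked reading hinted at in the comment can be sketched with only the standard library's `hashlib`, so large archives never have to fit in memory:

```python
from hashlib import md5

def md5_of_file(path, chunk_size=8192):
    """Hash a file (zip or anything else) in fixed-size chunks."""
    h = md5()
    with open(path, "rb") as f:
        # iter() with a sentinel keeps calling read() until it returns b"".
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```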
3
2016-09-01T16:21:01Z
[ "python" ]
merge two dataframe columns into 1 in pandas
39,276,249
<p>I have 2 columns in my data frame and I need to merge them into a single column</p> <pre><code>Index A Index B 0 A 0 NAN 1 NAN 1 D 2 B 2 NAN 3 NAN 3 E 4 C 4 NAN </code></pre> <p>There will always be only one value across the columns; I want my result to look like</p> <pre><code>0 A 1 B 2 C 3 D 4 E </code></pre>
3
2016-09-01T16:05:50Z
39,276,308
<p><strong><em>Option 1</em></strong></p> <pre><code>df.stack().dropna().reset_index(drop=True) 0 A 1 D 2 B 3 E 4 C dtype: object </code></pre> <p><strong><em>Option 2</em></strong> If missing values are always alternating</p> <pre><code>df.A.combine_first(df.B) Index 0 A 1 D 2 B 3 E 4 C Name: A, dtype: object </code></pre> <p><strong><em>Option 3</em></strong> What you asked for</p> <pre><code>df.A.append(df.B).dropna().reset_index(drop=True) 0 A 1 B 2 C 3 D 4 E dtype: object </code></pre> <p><strong><em>Option 4</em></strong> Similar to 3 but over an arbitrary number of columns</p> <pre><code>pd.concat([i for _, i in df.iteritems()]).dropna().reset_index(drop=True) 0 A 1 B 2 C 3 D 4 E dtype: object </code></pre>
4
2016-09-01T16:09:26Z
[ "python", "pandas" ]
apply 1-to-group tranformations in pandas - python
39,276,293
<p>I have a dataframe like the following</p> <pre><code>import pandas as pd df = pd.DataFrame({"id": ["a", "b", "c", "d"], "v": [1,2,3,4], "type": ["X", "Y", "Y", "Y"]}).set_index("id") print(df) </code></pre> <p>which yields:</p> <pre><code> type v id a X 1 b Y 2 c Y 3 d Y 4 </code></pre> <p>and I want to subtract the mean from each value <strong>BY GROUP</strong>. After the operation I still want to have <strong>my SINGLE values</strong>. In other words I want to have</p> <pre><code> type v id a X 0 b Y -1 c Y 0 d Y 1 </code></pre> <p>So, the very useful <code>transform</code> function applied to a <code>groupby</code> object (as detailed here <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/groupby.html</a>) is not very useful in my case and I was wondering how to approach the problem.</p> <p>This is not to say I cannot solve this problem when dealing with simple functions <em>(division/subtraction => I can use an auxiliary dataframe resulting from a groupby and then cross the results)</em>, but when dealing with more complex stuff this is no longer the case.</p>
2
2016-09-01T16:08:41Z
39,276,392
<p>Transform will actually get you what you want (if I understand correctly):</p> <pre><code>df['v'] = df['v'] - df.groupby('type')['v'].transform('mean') </code></pre> <p>Transform calculates the applied function by group, but broadcasts the result on the original index.</p> <hr> <p><strong>Edit</strong>: timing comparisons</p> <pre><code>%timeit df.groupby("type")['v'].apply(lambda x: x-x.mean()) 100 loops, best of 3: 2.95 ms per loop %timeit df['v'] - df.groupby('type')['v'].transform('mean') 1000 loops, best of 3: 922 µs per loop </code></pre>
4
2016-09-01T16:14:51Z
[ "python", "function", "pandas", "grouping" ]
apply 1-to-group tranformations in pandas - python
39,276,293
<p>I have a dataframe like the following</p> <pre><code>import pandas as pd df = pd.DataFrame({"id": ["a", "b", "c", "d"], "v": [1,2,3,4], "type": ["X", "Y", "Y", "Y"]}).set_index("id") print(df) </code></pre> <p>which yields:</p> <pre><code> type v id a X 1 b Y 2 c Y 3 d Y 4 </code></pre> <p>and I want to subtract the mean from each value <strong>BY GROUP</strong>. After the operation I still want to have <strong>my SINGLE values</strong>. In other words I want to have</p> <pre><code> type v id a X 0 b Y -1 c Y 0 d Y 1 </code></pre> <p>So, the very useful <code>transform</code> function applied to a <code>groupby</code> object (as detailed here <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/groupby.html</a>) is not very useful in my case and I was wondering how to approach the problem.</p> <p>This is not to say I cannot solve this problem when dealing with simple functions <em>(division/subtraction => I can use an auxiliary dataframe resulting from a groupby and then cross the results)</em>, but when dealing with more complex stuff this is no longer the case.</p>
2
2016-09-01T16:08:41Z
39,278,439
<p>IIUC try this:</p> <pre><code>df['v'] = df.groupby("type")['v'].apply(lambda x: x-x.mean()) df type v id a X 0.0 b Y -1.0 c Y 0.0 d Y 1.0 </code></pre>
1
2016-09-01T18:21:16Z
[ "python", "function", "pandas", "grouping" ]
Unable to use filter on an iterable RDD in pyspark
39,276,344
<p>I'm trying to apply a function that filters out certain values in a dataset based on ranges of data in another dataset. I've performed a few groupBys and joins, so the format of the parameter I'm passing into the function has two Iterables, and goes as follows:</p> <pre><code>g1 = g0.map(lambda x: timefilter(x[0])) </code></pre> <p>where x[0] is <code>&lt;pyspark.resultiterable.ResultIterable object at 0x23b6610&gt;, &lt;pyspark.resultiterable.ResultIterable object at 0x23b6310&gt;)</code></p> <p>When I enter the function <code>timefilter</code> I need to now be able to filter out the values in x[1] based on values in x[0]. But when I try the following (on both <code>twoList</code> and <code>twoRDD</code>, although I just show twoList here):</p> <pre><code>def timefilter(RDDList): oneList = list(RDDList[0]) twoList = list(RDDList[1]) twoRDD = RDDList[1] test = twoList.filter(lambda x: x[4]=='helloworld') return test </code></pre> <p>It gives me the following errors: <code>AttributeError: 'ResultIterable' object has no attribute 'filter'</code> and then a bunch of errors after. </p> <p>It seems as though I can't use filter on any format of the iterables, but feel like I'm missing something very simple. Is there a transform I'm missing in the function?</p>
0
2016-09-01T16:11:59Z
39,278,814
<p>Turns out that a <code>ResultIterable</code> has no <code>filter</code> method, so I just used Python's built-in <code>filter</code> function. Something along the lines of the following: <code>filter(lambda x: x[1] in oneList, twoList)</code>.</p>
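The same substitution can be seen with plain Python lists, no Spark required — the built-in `filter` accepts any iterable, which is why it works where the missing `.filter` method does not (the sample values below are made up for illustration):

```python
# Stand-ins for the two grouped iterables from the joined RDD (made-up values).
one_list = [1, 3]
two_list = [("a", 1), ("b", 2), ("c", 3)]

# The built-in filter works on any iterable, including a ResultIterable.
kept = list(filter(lambda x: x[1] in one_list, two_list))
print(kept)
```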
0
2016-09-01T18:46:40Z
[ "python", "list", "filter", "pyspark", "iterable" ]
How to get a specific element from a div with the same id and class in Python
39,276,346
<p>I want to print "United States" and "California" on separate lines like</p> <pre><code>Country is : United States State is : California </code></pre> <p>The problem is each list item has the same class and id so when I loop through the list items it gives United States and California together.</p> <p>I hope you guys will understand what I'm trying to say.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;ul class="breadcrumbs" id="BREADCRUMBS"&gt; &lt;li class="breadcrumb_item " itemscope="" itemtype="http://data- vocabulary.org/Breadcrumb"&gt; &lt;a class="breadcrumb_link" href="/Tourism-g191-United_States-Vacations.html" itemprop="url" onclick="ta.setEvtCookie('Breadcrumbs', 'click', 'Country', 1, this.href); "&gt; &lt;span itemprop="title"&gt;United States&lt;/span&gt; &lt;/a&gt; &lt;span class="separator"&gt;›&lt;/span&gt; &lt;/li&gt; &lt;li class="breadcrumb_item " itemscope="" itemtype="http://data-vocabulary.org/Breadcrumb"&gt; &lt;a class="breadcrumb_link" href="/Tourism-g28926-California-Vacations.html" itemprop="url" onclick="ta.setEvtCookie('Breadcrumbs', 'click', 'State', 2, this.href); "&gt; &lt;span itemprop="title"&gt;California (CA)&lt;/span&gt; &lt;/a&gt; &lt;span class="separator"&gt;›&lt;/span&gt; &lt;/li&gt; &lt;li class="breadcrumb_item " itemscope="" itemtype="http://data-vocabulary.org/Breadcrumb"&gt;&lt;a class="breadcrumb_link" href="/Tourism-g32655-Los_Angeles_California-Vacations.html" itemprop="url" onclick="ta.setEvtCookie('Breadcrumbs', 'click', 'City', 3, this.href); "&gt;&lt;span itemprop="title"&gt;Los Angeles&lt;/span&gt;&lt;/a&gt;&lt;span class="separator"&gt;›&lt;/span&gt; &lt;/li&gt; &lt;li class="breadcrumb_item " itemscope="" itemtype="http://data-vocabulary.org/Breadcrumb"&gt;&lt;a class="breadcrumb_link" href="/Restaurants-g32655-Los_Angeles_California.html" itemprop="url"
onclick="ta.setEvtCookie('Breadcrumbs', 'click', '', 4, this.href); return setOneTimeCookie('mcreset','true');"&gt;&lt;span itemprop="title"&gt;Los Angeles Restaurants&lt;/span&gt;&lt;/a&gt; &lt;span class="separator"&gt;›&lt;/span&gt; &lt;/li&gt; &lt;li class="breadcrumb_item "&gt;Providence&lt;/li&gt; &lt;/ul&gt;</code></pre> </div> </div> </p> <p>Here is my scraping script with Python BeautifulSoup:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>import sys,io,csv,requests from urllib.parse import urljoin from bs4 import BeautifulSoup url = "https://www.tripadvisor.com/Restaurant_Review-g32655-d594024-Reviews-Providence-Los_Angeles_California.html" r = requests.get(url) r.content soup = BeautifulSoup(r.content, "html.parser") maindiv = soup.find_all("body", {"class": "ltr domn_en_US lang_en globalNav2011_reset hr_tabs_placement_test tabs_below_meta scroll_tabs full_width_page content_blocks css_commerce_buttons flat_buttons sitewide xo_pin_user_review_to_top track_back"}) for div in maindiv: divone = soup.find_all("div", {"id": "PAGE"}) for listitem in divone: div12 = soup.find_all("div", {"class": "breadCrumbBackground blue bgwhite "}) for listitem in div12: ulpart = soup.find_all("ul", {"class": "breadcrumbs"}) for unorder in ulpart[0]: div2 = soup.find_all("li", {"class": "breadcrumb_item "}) for listitem in div2: tag = soup.find_all("a", {"class": "breadcrumb_link"}) for spandiv in tag: span = soup.find_all("span", {"itemprop": "title"}) for country_name in span: print(country_name.text) </code></pre> </div> </div> </p>
0
2016-09-01T16:12:08Z
39,276,398
<p>You have the relevant parts of the <code>onclick</code> attribute defining which breadcrumb is the country and which is the state. I would use a partial match via the <code>*=</code> <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors" rel="nofollow">CSS selector</a> to implement that:</p> <pre><code># -*- coding: utf-8 -*- from bs4 import BeautifulSoup data = u"""Your HTML""" soup = BeautifulSoup(data, "html.parser") country = soup.select_one("li.breadcrumb_item a[onclick*=Country]").get_text(strip=True) state = soup.select_one("li.breadcrumb_item a[onclick*=State]").get_text(strip=True) print("The country is: '%s'" % country) print("The state is: '%s'" % state) </code></pre> <p>Prints:</p> <pre><code>The country is: 'United States' The state is: 'California (CA)' </code></pre>
1
2016-09-01T16:15:19Z
[ "python", "html5", "python-3.x", "beautifulsoup" ]
Iteratively count elements in list and store count in dictionary
39,276,375
<p>I have a piece of code that loops through a set of nodes and counts the path length connecting the given node to each other node in my network. For each node my code returns me a list, <code>b</code>, containing integer values giving me the path length for every possible connection. I want to count the number of occurrences of given path lengths so I can create a histogram.</p> <pre><code>local_path_length_hist = {} for ver in vertices: dist = gt.shortest_distance(g, source=g.vertex(ver)) a = dist.a #Delete some erroneous entries b = a[a!=2147483647] for dist in b: if dist in local_path_length_hist: local_path_length_hist[dist]+=1 else: local_path_length_hist[dist]=1 </code></pre> <p>This presumably is very crude coding as far as the dictionary update is concerned. Is there a better way of doing this? What is the most efficient way of creating this histogram?</p>
2
2016-09-01T16:14:09Z
39,276,738
<p>The check that the element exists in the <code>dict</code> is not really necessary. You can just use <code>collections.defaultdict</code>. Its initializer accepts a callable object (like a function) that will be called to generate a default value whenever you access an element that does not exist. For your case, it can be just <code>int</code>. I.e.</p> <pre><code>import collections local_path_length_hist = collections.defaultdict(int) # you could say collections.defaultdict(lambda : 0) instead for ver in vertices: dist = gt.shortest_distance(g, source=g.vertex(ver)) a = dist.a #Delete some erroneous entries b = a[a!=2147483647] for dist in b: local_path_length_hist[dist] += 1 </code></pre> <p>You could turn the last two lines into one, but there is really no point.</p>
1
2016-09-01T16:37:40Z
[ "python", "performance", "list", "dictionary", "graph-tool" ]
Iteratively count elements in list and store count in dictionary
39,276,375
<p>I have a piece of code that loops through a set of nodes and counts the path length connecting the given node to each other node in my network. For each node my code returns me a list, <code>b</code>, containing integer values giving me the path length for every possible connection. I want to count the number of occurrences of given path lengths so I can create a histogram.</p> <pre><code>local_path_length_hist = {} for ver in vertices: dist = gt.shortest_distance(g, source=g.vertex(ver)) a = dist.a #Delete some erroneous entries b = a[a!=2147483647] for dist in b: if dist in local_path_length_hist: local_path_length_hist[dist]+=1 else: local_path_length_hist[dist]=1 </code></pre> <p>This presumably is very crude coding as far as the dictionary update is concerned. Is there a better way of doing this? What is the most efficient way of creating this histogram?</p>
2
2016-09-01T16:14:09Z
39,276,773
<p>Since <code>gt.shortest_distance</code> returns an <code>ndarray</code>, <code>numpy</code> math is fastest:</p> <pre><code>max_dist = len(vertices) - 1
hist_length = max_dist + 2
no_path_dist = max_dist + 1
hist = np.zeros(hist_length)

for ver in vertices:
    dist = gt.shortest_distance(g, source=g.vertex(ver))
    hist += np.bincount(dist.a.clip(max=no_path_dist))
</code></pre> <p>I use the <code>ndarray</code> method <code>clip</code> to bin the <code>2147483647</code> values returned by <code>gt.shortest_distance</code> at the last position of <code>hist</code>. Without use of <code>clip</code>, <code>hist's</code> <code>size</code> would have to be <code>2147483647 + 1</code> on 64-bit Python, or <code>bincount</code> would produce a <code>ValueError</code> on 32-bit Python. So the last position of <code>hist</code> will contain a count of all non-paths; you can ignore this value in your histogram analysis.</p> <hr> <p>As the below timings indicate, using <code>numpy</code> math to obtain a histogram is well over an order of magnitude faster than using either <code>defaultdicts</code> or <code>counters</code> (Python 3.4):</p> <pre><code># vertices      numpy    defaultdict      counter
     9000     0.83639       38.48990     33.56569
    25000     8.57003      314.24265    262.76025
    50000    26.46427     1303.50843   1111.93898
</code></pre> <p>My computer is too slow to test with <code>9 * (10**6)</code> vertices, but relative timings seem pretty consistent for varying number of vertices (as we would expect).</p> <hr> <p><em>timing code</em>:</p> <pre><code>from collections import defaultdict, Counter
import numpy as np
from random import randint, choice
from timeit import repeat

# construct distance ndarray such that:
# a) 1/3 of values represent no path
# b) 2/3 of values are a random integer value [0, (num_vertices - 1)]
num_vertices = 50000
no_path_length = 2147483647
distances = []
for _ in range(num_vertices):
    rand_dist = randint(0, (num_vertices - 1))
    distances.append(choice((no_path_length, rand_dist, rand_dist)))
dist_a = np.array(distances)

def use_numpy_math():
    max_dist = num_vertices - 1
    hist_length = max_dist + 2
    no_path_dist = max_dist + 1
    hist = np.zeros(hist_length, dtype=np.int)
    for _ in range(num_vertices):
        hist += np.bincount(dist_a.clip(max=no_path_dist))

def use_default_dict():
    d = defaultdict(int)
    for _ in range(num_vertices):
        for dist in dist_a:
            d[dist] += 1

def use_counter():
    hist = Counter()
    for _ in range(num_vertices):
        hist.update(dist_a)

t1 = min(repeat(stmt='use_numpy_math()',
                setup='from __main__ import use_numpy_math',
                repeat=3, number=1))
t2 = min(repeat(stmt='use_default_dict()',
                setup='from __main__ import use_default_dict',
                repeat=3, number=1))
t3 = min(repeat(stmt='use_counter()',
                setup='from __main__ import use_counter',
                repeat=3, number=1))
print('%0.5f, %0.5f. %0.5f' % (t1, t2, t3))
</code></pre>
1
2016-09-01T16:39:52Z
[ "python", "performance", "list", "dictionary", "graph-tool" ]
Iteratively count elements in list and store count in dictionary
39,276,375
<p>I have a piece of code that loops through a set of nodes and counts the path length connecting the given node to each other node in my network. For each node my code returns me a list, <code>b</code>, containing integer values giving me the path length for every possible connection. I want to count the number of occurrences of given path lengths so I can create a histogram.</p> <pre><code>local_path_length_hist = {}
for ver in vertices:
    dist = gt.shortest_distance(g, source=g.vertex(ver))
    a = dist.a
    #Delete some erroneous entries
    b = a[a!=2147483647]
    for dist in b:
        if dist in local_path_length_hist:
            local_path_length_hist[dist]+=1
        else:
            local_path_length_hist[dist]=1
</code></pre> <p>This presumably is very crude coding as far as the dictionary update is concerned. Is there a better way of doing this? What is the most efficient way of creating this histogram?</p>
2
2016-09-01T16:14:09Z
39,282,221
<p>I think you can bypass this code entirely. Your question is tagged with <a href="/questions/tagged/graph-tool" class="post-tag" title="show questions tagged &#39;graph-tool&#39;" rel="tag">graph-tool</a>. Take a look at this section of their documentation: <a href="https://graph-tool.skewed.de/static/doc/stats.html?highlight=degree%20histogram#graph_tool.stats.vertex_hist" rel="nofollow">graph_tool.stats.vertex_hist</a>. </p> <p>Excerpt from linked documentation:</p> <blockquote> <p><strong>graph_tool.stats.vertex_hist(g, deg, bins=[0, 1], float_count=True)</strong><br> Return the vertex histogram of the given degree type or property.</p> <p><em>Parameters:</em><br> g : Graph &emsp;Graph to be used.<br> deg : string or PropertyMap<br> &emsp;Degree or property to be used for the histogram. It can be either “in”, “out” or “total”, for in-,<br> &emsp;out-, or total degree of the vertices. It can also be a vertex property map.<br> bins : list of bins (optional, default: [0, 1])<br> &emsp;List of bins to be used for the histogram. The values given represent the edges of the bins<br> &emsp;(i.e. lower and upper bounds). If the list contains two values, this will be used to automatically<br> &emsp;create an appropriate bin range, with a constant width given by the second value, and starting<br> &emsp;from the first value.<br> float_count : bool (optional, default: True)<br> &emsp;If True, the counts in each histogram bin will be returned as floats. If False, they will be<br> &emsp;returned as integers. </p> <p><em>Returns:</em> counts : ndarray<br> &emsp;The bin counts.<br> bins : ndarray<br> &emsp;The bin edges. </p> </blockquote> <p>This will return the edges grouped like a histogram in an <code>ndarray</code>. You can then just get the length of the <code>ndarray</code> columns to get your counts to generate the histogram.</p>
0
2016-09-01T23:20:22Z
[ "python", "performance", "list", "dictionary", "graph-tool" ]
Iteratively count elements in list and store count in dictionary
39,276,375
<p>I have a piece of code that loops through a set of nodes and counts the path length connecting the given node to each other node in my network. For each node my code returns me a list, <code>b</code>, containing integer values giving me the path length for every possible connection. I want to count the number of occurrences of given path lengths so I can create a histogram.</p> <pre><code>local_path_length_hist = {}
for ver in vertices:
    dist = gt.shortest_distance(g, source=g.vertex(ver))
    a = dist.a
    #Delete some erroneous entries
    b = a[a!=2147483647]
    for dist in b:
        if dist in local_path_length_hist:
            local_path_length_hist[dist]+=1
        else:
            local_path_length_hist[dist]=1
</code></pre> <p>This presumably is very crude coding as far as the dictionary update is concerned. Is there a better way of doing this? What is the most efficient way of creating this histogram?</p>
2
2016-09-01T16:14:09Z
39,292,803
<p>There is a utility in the <code>collections</code> module called <code>Counter</code>. This is even cleaner than using a <code>defaultdict(int)</code>:</p> <pre><code>from collections import Counter

hist = Counter()
for ver in vertices:
    dist = gt.shortest_distance(g, source=g.vertex(ver))
    a = dist.a
    #Delete some erroneous entries
    b = a[a!=2147483647]
    hist.update(b)
</code></pre>
1
2016-09-02T12:46:58Z
[ "python", "performance", "list", "dictionary", "graph-tool" ]
How to create a csv file of data created in python
39,276,423
<p>I am new to programming. I was wondering if anyone can help me create a csv file for the data that I created in python. My data looks like this:</p> <pre><code>import numpy as np
print np.__version__

a = 0.75 + (1.25 - 0.75)*np.random.sample(10000)
print a
b = 8 + (12 - 8)*np.random.sample(10000)
print b
c = -12 + 2*np.random.sample(10000)
print c
x0 = (-b - np.sqrt(b**2 - (4*a*c)))/(2*a)
print x0
</code></pre> <p>The csv file format I am looking to create is 1 column each for a, b, c and x0 (see example below).</p> <p><a href="http://i.stack.imgur.com/AnAcF.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/AnAcF.jpg" alt="enter image description here"></a></p> <p>Your expert assistance will be highly appreciated.</p> <p>Thanks in advance :-)</p> <p>Edit 1&gt;&gt; Input code:</p> <pre><code>import numpy as np
print np.__version__
import csv

a = 0.75 + (1.25 - 0.75)*np.random.sample(10000)
##print a
b = 8 + (12 - 8)*np.random.sample(10000)
##print b
c = -12 + 2*np.random.sample(10000)
##print c
x0 = (-b - np.sqrt(b**2 - (4*a*c)))/(2*a)
##print x0

with open("file.csv",'w') as f:
    f.write('a,b,c,x0\n')
    for val in a,b,c,x0:
        print val
        f.write(','.join(map(str,[a,b,c,x0]))+ '\n')
</code></pre> <p>Output <a href="http://i.stack.imgur.com/M6rcv.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/M6rcv.jpg" alt="enter image description here"></a></p> <p>I am able to generate the data using the for loop (see pic below), but the csv file is not output in the expected format.</p> <p><a href="http://i.stack.imgur.com/mhplU.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/mhplU.jpg" alt="enter image description here"></a></p>
2
2016-09-01T16:17:31Z
39,276,585
<pre><code>with open("file.csv",'w') as f: f.write('a,b,c,x0\n') --forloop where you generate a,b,c,x0: f.write(','.join(map(str,[a,b,c,x0])) + '\n') </code></pre>
2
2016-09-01T16:28:04Z
[ "python", "python-2.7", "python-3.x", "csv", "export-to-csv" ]
How to create a csv file of data created in python
39,276,423
<p>I am new to programming. I was wondering if anyone can help me create a csv file for the data that I created in python. My data looks like this:</p> <pre><code>import numpy as np
print np.__version__

a = 0.75 + (1.25 - 0.75)*np.random.sample(10000)
print a
b = 8 + (12 - 8)*np.random.sample(10000)
print b
c = -12 + 2*np.random.sample(10000)
print c
x0 = (-b - np.sqrt(b**2 - (4*a*c)))/(2*a)
print x0
</code></pre> <p>The csv file format I am looking to create is 1 column each for a, b, c and x0 (see example below).</p> <p><a href="http://i.stack.imgur.com/AnAcF.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/AnAcF.jpg" alt="enter image description here"></a></p> <p>Your expert assistance will be highly appreciated.</p> <p>Thanks in advance :-)</p> <p>Edit 1&gt;&gt; Input code:</p> <pre><code>import numpy as np
print np.__version__
import csv

a = 0.75 + (1.25 - 0.75)*np.random.sample(10000)
##print a
b = 8 + (12 - 8)*np.random.sample(10000)
##print b
c = -12 + 2*np.random.sample(10000)
##print c
x0 = (-b - np.sqrt(b**2 - (4*a*c)))/(2*a)
##print x0

with open("file.csv",'w') as f:
    f.write('a,b,c,x0\n')
    for val in a,b,c,x0:
        print val
        f.write(','.join(map(str,[a,b,c,x0]))+ '\n')
</code></pre> <p>Output <a href="http://i.stack.imgur.com/M6rcv.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/M6rcv.jpg" alt="enter image description here"></a></p> <p>I am able to generate the data using the for loop (see pic below), but the csv file is not output in the expected format.</p> <p><a href="http://i.stack.imgur.com/mhplU.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/mhplU.jpg" alt="enter image description here"></a></p>
2
2016-09-01T16:17:31Z
39,278,929
<p>There are four ranges of values you need to iterate over. Every iteration should correspond to one new line written.</p> <p>Try this:</p> <pre><code>import numpy as np
print np.__version__
import csv

a_range = 0.75 + (1.25 - 0.75)*np.random.sample(10000)
b_range = 8 + (12 - 8)*np.random.sample(10000)
c_range = -12 + 2*np.random.sample(10000)
x0_range = (-b_range - np.sqrt(b_range**2 - (4*a_range*c_range)))/(2*a_range)

with open("file.csv",'w') as f:
    f.write('a,b,c,x0\n')
    for a,b,c,x0 in zip(a_range, b_range, c_range, x0_range):
        f.write(','.join(map(str,[a,b,c,x0]))+ '\n')
</code></pre>
1
2016-09-01T18:54:08Z
[ "python", "python-2.7", "python-3.x", "csv", "export-to-csv" ]
Read a column then write in another column from a CSV file with Python
39,276,431
<p>I have a CSV file which contains 10 lines and 5 columns.</p> <p>For each line, except the first line, I need to read column 2 and to recognize the beginning or the end of the cell. Depending on what the script reads in each column 2, it writes a letter in column 6 of the same line.</p> <p>Thanks a lot in advance if you can help me!</p> <p>Here is my code:</p> <pre><code>with open('test.csv', 'rb') as u:
    read = csv.reader(u)
    for line in read:
        for row[2] in rows:
            if line.endswith('this_is_a_text'):
                writer.writerow(row[6]+["WIN"])
</code></pre>
-1
2016-09-01T16:17:49Z
39,277,149
<p>If that is your actual code, you do not have the variable "rows" available at the current scope. You have 'line'. Perhaps you mean something to the effect of </p> <pre><code>for row in reader </code></pre> <p>?? The documentation I'm reading says that a line is a row, there should be columns in your line/row.</p>
1
2016-09-01T17:00:24Z
[ "python", "csv", "pandas", "data-science" ]
Transform dictionary object
39,276,493
<p>This dict structure:</p> <pre class="lang-py prettyprint-override"><code>data = {
    'a': {
        'category': ['c', 'd']
    },
    'b': {
        'category': ['c', 'd']
    }
}
</code></pre> <p>should become this dict structure:</p> <pre class="lang-py prettyprint-override"><code>data = {
    'c': ['a', 'b'],
    'd': ['a', 'b']
}
</code></pre> <p>I have the following approach:</p> <pre class="lang-py prettyprint-override"><code>for key, value in data.items():
    if isinstance(value, dict):
        if 'category' in value:
            for cat in value['category']:
                if cat in categories:
                    categories[cat].append(key)
                else:
                    categories[cat] = [key]
</code></pre> <p>I want to know if there is any way to simplify my approach. I am using python 3.5.</p>
4
2016-09-01T16:22:03Z
39,276,521
<p>You can solve it with a <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow"><code>defaultdict(list)</code></a>, iterating over the categories and appending keys:</p> <pre><code>&gt;&gt;&gt; from collections import defaultdict
&gt;&gt;&gt;
&gt;&gt;&gt; d = defaultdict(list)
&gt;&gt;&gt; for key, value in data.items():
...     for item in value['category']:
...         d[item].append(key)
...
&gt;&gt;&gt; dict(d)
{'c': ['a', 'b'], 'd': ['a', 'b']}
</code></pre>
4
2016-09-01T16:24:03Z
[ "python", "python-3.x", "dictionary" ]
Transform dictionary object
39,276,493
<p>This dict structure:</p> <pre class="lang-py prettyprint-override"><code>data = {
    'a': {
        'category': ['c', 'd']
    },
    'b': {
        'category': ['c', 'd']
    }
}
</code></pre> <p>should become this dict structure:</p> <pre class="lang-py prettyprint-override"><code>data = {
    'c': ['a', 'b'],
    'd': ['a', 'b']
}
</code></pre> <p>I have the following approach:</p> <pre class="lang-py prettyprint-override"><code>for key, value in data.items():
    if isinstance(value, dict):
        if 'category' in value:
            for cat in value['category']:
                if cat in categories:
                    categories[cat].append(key)
                else:
                    categories[cat] = [key]
</code></pre> <p>I want to know if there is any way to simplify my approach. I am using python 3.5.</p>
4
2016-09-01T16:22:03Z
39,276,624
<pre><code>if 'category' in value:
</code></pre> <p>This line makes sure that the key <code>category</code> is in the dictionary before using it. Then you get the value from the dictionary corresponding to <code>category</code> and iterate it. You can simplify this with the <code>dict.get</code> method, which returns a default value if the key is not in the dictionary. So you can do something like this:</p> <pre><code>for cat in value.get('category', []):
</code></pre> <p>If <code>category</code> doesn't exist in the dictionary, then an empty list is returned and the loop will have nothing to iterate.</p> <hr> <p>Similarly, you are checking if the key exists in the <code>categories</code> dictionary, and if not, setting a default value (which is a list). You can avoid that <code>if...else</code> as well, with <code>dict.setdefault</code>, like this:</p> <pre><code>categories = {}
for key, value in data.items():
    if isinstance(value, dict):
        for cat in value.get('category', []):
            categories.setdefault(cat, []).append(key)
</code></pre> <p><code>dict.setdefault</code> is similar to <code>dict.get</code>, except that it sets the default value against the key and returns the value corresponding to the key.</p>
2
2016-09-01T16:30:38Z
[ "python", "python-3.x", "dictionary" ]