| title (string) | question_id (int64) | question_body (string) | question_score (int64) | question_date (string) | answer_id (int64) | answer_body (string) | answer_score (int64) | answer_date (string) | tags (list) |
|---|---|---|---|---|---|---|---|---|---|
Removing white space and colon
| 39,359,816
|
<p>I have a file with a bunch of numbers separated by white space and colons, and I am trying to remove those characters. As I have seen on this forum, <code>line.strip().split()</code> works well for this. Is there a way of removing the white space and the colons all in one go? Using the method posted by Lorenzo I have this:</p>
<pre><code>train = []
with open('C:/Users/Morgan Weiss/Desktop/STA5635/DataSets/dexter/dexter_train.data') as train_data:
    train.append(train_data.read().replace(' ','').replace(':',''))

size_of_train = np.shape(train)
for i in range(size_of_train[0]):
    for j in range(size_of_train[1]):
        train[i][j] = int(train[i][j])
print(train)
</code></pre>
<p>However, I get this error:</p>
<pre class="lang-none prettyprint-override"><code>File "C:/Users/Morgan Weiss/Desktop/STA5635/Homework/Homework_1/HW1_Dexter.py", line 11, in <module>
for j in range(size_of_train[1]):
IndexError: tuple index out of range
</code></pre>
| 0
|
2016-09-07T00:58:10Z
| 39,360,725
|
<p>You did not provide an example of what your input file looks like so we can only speculate what solution you need. I'm going to suppose that you need to extract integers from your input text file and print their values.</p>
<p>Here's how I would do it:</p>
<ul>
<li>Instead of trying to eliminate whitespace characters and colons, I will be searching for digits using a <a href="https://docs.python.org/3/library/re.html" rel="nofollow">regular expression</a></li>
<li>Consecutive digits would constitute a number</li>
<li>I would convert this number to an integer form.</li>
</ul>
<p>And here's what it would look like:</p>
<pre><code>import re

input_filename = "/home/evens/Temporaire/Stack Exchange/StackOverflow/Input_file-39359816.txt"
matcher = re.compile(r"\d+")
with open(input_filename) as input_file:
    for line in input_file:
        for digits_found in matcher.finditer(line):
            number_in_string_form = digits_found.group()
            number = int(number_in_string_form)
            print(number)
</code></pre>
<p>But before you run away with this code, you should continue to learn Python because you don't seem to grasp its basic elements yet.</p>
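<p>As a quick illustration of the regex approach above, applied to a hypothetical line of space- and colon-separated numbers (the sample string is an assumption, since the question never showed the input file):</p>

```python
import re

# Hypothetical sample line in the format the question describes:
# numbers separated by spaces and colons.
sample_line = "12: 345 6:78"

matcher = re.compile(r"\d+")
numbers = [int(m.group()) for m in matcher.finditer(sample_line)]
print(numbers)  # -> [12, 345, 6, 78]
```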
| 1
|
2016-09-07T03:21:25Z
|
[
"python",
"numpy"
] |
how to convert string to numeric in python
| 39,359,843
|
<p>I would like to convert a percentage string to a float, but my method didn't work well;
the result differs slightly from the correct number.
For example:</p>
<pre><code>a=pd.Series(data=["0.1%","0.2%"])
0 0.1%
1 0.2%
dtype: object
</code></pre>
<p>first, I strip "%"</p>
<pre><code>a.str.rstrip("%")
0 0.1
1 0.2
dtype: object
</code></pre>
<p>I tried to convert it to numeric, but the result is strange.</p>
<p>I guess this phenomenon comes from the binary number system...</p>
<pre><code>pd.to_numeric(a.str.rstrip("%"))
0 0.10000000000000000555
1 0.20000000000000001110
dtype: float64
</code></pre>
<p>and of course I couldn't convert the % to a numeric value cleanly:</p>
<pre><code>pd.to_numeric(a.str.rstrip("%"))/100
0 0.00100000000000000002
1 0.00200000000000000004
dtype: float64
</code></pre>
<p>I also tried the .astype(float) method, but the result was the same.</p>
<p>Why does this happen, and how can I avoid it?</p>
| 2
|
2016-09-07T01:02:57Z
| 39,359,882
|
<p>Many rational numbers can't be represented exactly as a floating-point number. In particular, any number that has to have a five as a factor in the denominator, like 1/(2*5), can't be represented exactly. There isn't much you can do about this: either round the displayed number so it looks right, or use an infinite-precision library or a rational-numbers library. Here's a basic way to round the displayed number:</p>
<p><code>>>> print "%.20f" % 0.1</code><br>
<code>0.10000000000000000555</code><br>
<code>>>> print "%.4f" % 0.1</code><br>
<code>0.1000</code> </p>
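<p>If exact decimal values matter, as in the question's percent-to-fraction conversion, the standard-library <code>decimal</code> module avoids binary rounding entirely. A small sketch (not part of the original answer):</p>

```python
from decimal import Decimal

s = "0.1%"
# Strip the percent sign and divide by 100 using exact decimal arithmetic.
value = Decimal(s.rstrip("%")) / 100
print(value)         # -> 0.001
print(float(value))  # converted to a float only at the very end
```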
| 2
|
2016-09-07T01:10:14Z
|
[
"python",
"pandas",
"numpy",
"floating-point"
] |
how to convert string to numeric in python
| 39,359,843
|
<p>I would like to convert a percentage string to a float, but my method didn't work well;
the result differs slightly from the correct number.
For example:</p>
<pre><code>a=pd.Series(data=["0.1%","0.2%"])
0 0.1%
1 0.2%
dtype: object
</code></pre>
<p>first, I strip "%"</p>
<pre><code>a.str.rstrip("%")
0 0.1
1 0.2
dtype: object
</code></pre>
<p>I tried to convert it to numeric, but the result is strange.</p>
<p>I guess this phenomenon comes from the binary number system...</p>
<pre><code>pd.to_numeric(a.str.rstrip("%"))
0 0.10000000000000000555
1 0.20000000000000001110
dtype: float64
</code></pre>
<p>and of course I couldn't convert the % to a numeric value cleanly:</p>
<pre><code>pd.to_numeric(a.str.rstrip("%"))/100
0 0.00100000000000000002
1 0.00200000000000000004
dtype: float64
</code></pre>
<p>I also tried the .astype(float) method, but the result was the same.</p>
<p>Why does this happen, and how can I avoid it?</p>
| 2
|
2016-09-07T01:02:57Z
| 39,367,499
|
<p>As a follow-up to the suggestion by @D-Von, the following Python packages can be useful to you: <a href="https://docs.python.org/3/library/decimal.html" rel="nofollow">decimal</a> and <a href="https://docs.python.org/3/library/fractions.html" rel="nofollow">fractions</a>.</p>
<p>Then you can do some things like:</p>
<pre><code>from fractions import Fraction
from decimal import Decimal
f = Fraction(1, 10)
d = Decimal('0.1')
f = f/100
d = d/100
str(d)
</code></pre>
<p>That way you are never working with floats, but with exact decimal or rational numbers. See the documentation for more examples.</p>
| 1
|
2016-09-07T10:32:12Z
|
[
"python",
"pandas",
"numpy",
"floating-point"
] |
how to connect an app to a database with python?
| 39,359,845
|
<p>I'm new to python, so I want to know how to store the data in a database! </p>
<p>For a simple example, I want to store information about users:</p>
<pre><code>ent1 = input('Enter your name: ')
ent2 = input('Enter your adress: ')
ent3 = int(input('Enter your number phone: '))
</code></pre>
<p>The question is: how do I connect the input with the database?</p>
<p>We can take the example of sqlite3, which comes preinstalled with Python, and suppose we have a database called user.db connected to a cursor, and a table called users that contains three columns (name, adress, number_phone).</p>
<p>thanks for the help!</p>
| 0
|
2016-09-07T01:03:32Z
| 39,359,941
|
<p>Use the <code>sqlite3</code> library to perform a SQL <code>INSERT</code> query to insert data into the database. For example:</p>
<pre><code>import sqlite3

conn = sqlite3.connect('user.db')

## Create table. Skip if table already exists
conn.execute('''CREATE TABLE IF NOT EXISTS users
               (name TEXT NOT NULL,
                adress TEXT,
                number_phone TEXT);''')

ent1 = input('Enter your name: ')
ent2 = input('Enter your adress: ')
ent3 = int(input('Enter your number phone: '))

# Insert into users table
conn.execute("INSERT INTO users (name, adress, number_phone) \
              VALUES (?,?,?)", [ent1, ent2, ent3])
conn.commit()  # don't forget to commit to save the data
conn.close()
</code></pre>
<p>You can also have a look at the <a href="https://docs.python.org/3/library/sqlite3.html" rel="nofollow">official documentation</a> for detailed info and API references.</p>
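<p>To verify the insert worked, you can read the rows back with a <code>SELECT</code>. A minimal self-contained sketch using an in-memory database and made-up sample values:</p>

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # in-memory database, for demonstration
conn.execute('''CREATE TABLE IF NOT EXISTS users
                (name TEXT NOT NULL,
                 adress TEXT,
                 number_phone TEXT);''')
conn.execute("INSERT INTO users (name, adress, number_phone) VALUES (?,?,?)",
             ["Alice", "1 Main St", "5551234"])
conn.commit()

# Read the rows back
rows = list(conn.execute("SELECT name, adress, number_phone FROM users"))
print(rows)  # -> [('Alice', '1 Main St', '5551234')]
conn.close()
```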
| 1
|
2016-09-07T01:19:34Z
|
[
"python",
"database",
"sqlite3"
] |
how to connect an app to a database with python?
| 39,359,845
|
<p>I'm new to python, so I want to know how to store the data in a database! </p>
<p>For a simple example, I want to store information about users:</p>
<pre><code>ent1 = input('Enter your name: ')
ent2 = input('Enter your adress: ')
ent3 = int(input('Enter your number phone: '))
</code></pre>
<p>The question is: how do I connect the input with the database?</p>
<p>We can take the example of sqlite3, which comes preinstalled with Python, and suppose we have a database called user.db connected to a cursor, and a table called users that contains three columns (name, adress, number_phone).</p>
<p>thanks for the help!</p>
| 0
|
2016-09-07T01:03:32Z
| 39,377,519
|
<p>You can emulate a database with Python objects.</p>
<p>Create a <code>dict</code> and store all your data in it.</p>
<p>Then serialise (<a href="https://docs.python.org/3/library/pickle.html" rel="nofollow"><code>pickle</code></a>) the data to disk with <code>pickle.dump</code>, and the next time you need to access or manipulate the data, use <code>pickle.load</code>.</p>
| 0
|
2016-09-07T19:14:47Z
|
[
"python",
"database",
"sqlite3"
] |
Will the file be closed after returning from function?
| 39,359,871
|
<p>Here is a python function:</p>
<pre><code>def read_data(filename):
    f = zipfile.ZipFile(filename)
    for name in f.namelist():
        return tf.compat.as_str(f.read(name))
    f.close()
</code></pre>
<p>Will the file be closed? There is no error when calling it.</p>
| 0
|
2016-09-07T01:08:15Z
| 39,359,994
|
<p>The file won't be closed. If you want to close the file, you can write it like:</p>
<pre><code>def read_data(filename):
    with zipfile.ZipFile(filename) as f:
        for name in f.namelist():
            return tf.compat.as_str(f.read(name))
</code></pre>
<p>Testing your code:
<a href="http://i.stack.imgur.com/JrFKP.png" rel="nofollow"><img src="http://i.stack.imgur.com/JrFKP.png" alt="picture shows that the file wasn't closed"></a></p>
<p>Testing the code with <code>with</code>:
<a href="http://i.stack.imgur.com/L09WK.png" rel="nofollow"><img src="http://i.stack.imgur.com/L09WK.png" alt="picture shows that the file has been closed"></a></p>
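<p>The early <code>return</code> is exactly why the <code>with</code> form matters: it guarantees the archive is closed on any exit path. A self-contained sketch that builds a throwaway in-memory zip (since the original file isn't available) and reads it back through the <code>with</code> version:</p>

```python
import io
import zipfile

# Build a small in-memory zip so the example is self-contained.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('hello.txt', 'hello')

def read_with_with(source):
    # The with-block closes the archive even when we return from inside it.
    with zipfile.ZipFile(source) as f:
        for name in f.namelist():
            return f.read(name)

data = read_with_with(io.BytesIO(buf.getvalue()))
print(data)  # -> b'hello'
```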
| 1
|
2016-09-07T01:27:34Z
|
[
"python",
"file",
"zipfile"
] |
How do I have multiple windows in Kivy?
| 39,359,950
|
<p>I am trying to open one GUI from a completely different GUI. I am developing on a desktop and the windows have different sizes from each other. I looked at screen manager but I feel as if there is an easier way to do this.</p>
<p>Thanks in advance!</p>
| 0
|
2016-09-07T01:21:44Z
| 39,360,654
|
<p>It's possible, but kinda inconvenient. The issue is that kivy supports only one window per app, so you need to work around it somehow. I personally just use multiple *Layouts (which are different GUIs with different functions) in a single window, showing and hiding them as necessary. Obviously this approach has its restrictions, <em>eg</em> it doesn't support multiple monitors, but it's as simple as it gets.</p>
<p>Then there is <a href="http://stackoverflow.com/questions/31458331/running-multiple-kivy-apps-at-same-time-that-communicate-with-each-other">a question</a> here on SO where people spawn separate kivy apps for every window, thus getting windows that can be dragged and resized relatively. It requires some fiddling with subprocesses and communicating between apps, but this method is more powerful.</p>
<p>ScreenManager, as I understand, doesn't help you: it allows just to define multiple widget trees for the same window and switch between them on the fly. It's a normal use case on touchscreens, but makes pretty little sense on desktop. Which is true for quite a few things in kivy, to be honest. If you don't plan to move to mobiles later, Tkinter or PyQT may be a better choice than kivy.</p>
| 0
|
2016-09-07T03:09:20Z
|
[
"python",
"user-interface",
"kivy",
"kivy-language"
] |
In scrapy, I use XPATH to pick HTML and got many unnecessary "" and ,?
| 39,360,013
|
<p>I am facing a problem parsing <a href="http://so.gushiwen.org/view_20788.aspx" rel="nofollow">http://so.gushiwen.org/view_20788.aspx</a>.</p>
<p><img src="http://i.stack.imgur.com/W3jfH.png" alt="Inspector"></p>
<p>This is what I want:</p>
<pre><code>"detail_text": ["
寥落古行宫,宫花寂寞红。白头宫女在,闲坐说玄宗。
"],
</code></pre>
<p>but I got this:</p>
<pre><code>"detail_text": ["
", "
", "
", "
", "
寥落古行宫,宫花寂寞红。", "白头宫女在,闲坐说玄宗。
"],
</code></pre>
</code></pre>
<p>and this is my code :</p>
<pre><code>#spider
class Tangshi3Spide(scrapy.Spider):
    name = "tangshi3"
    allowed_domains = ["gushiwen.org"]
    start_urls = [
        "http://so.gushiwen.org/view_20788.aspx"
    ]

    def __init__(self):
        self.items = []

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//div[@class="main3"]/div[@class="shileft"]')
        domain = 'http://so.gushiwen.org'
        for site in sites:
            item = Tangshi3Item()
            item['detail_title'] = site.xpath('div[@class="son1"]/h1/text()').extract()
            item['detail_dynasty'] = site.xpath(
                u'div[@class="son2"]/p/span[contains(text(),"朝代:")]/parent::p/text()').extract()
            item['detail_translate_note_url'] = site.xpath('div[@id="fanyiShort676"]/p/a/u/parent::a/@href').extract()
            item['detail_appreciation_url'] = site.xpath('div[@id="shangxiShort787"]/p/a/u/parent::a/@href').extract()
            item['detail_background_url'] = site.xpath('div[@id="shangxiShort24492"]/p/a/u/parent::a/@href').extract()
            # question line
            item['detail_text'] = site.xpath('div[@class="son2"]/text()').extract()
            self.items.append(item)
        return self.items

#pipeline
class Tangshi3Pipeline(object):
    def __init__(self):
        self.file = codecs.open('tangshi_detail.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item))
        self.file.write(line.decode("unicode_escape"))
        return item
</code></pre>
<p>how can I get the right text?</p>
| 2
|
2016-09-07T01:30:14Z
| 39,360,458
|
<p>You can add the predicate <code>[normalize-space()]</code> to avoid picking up empty text nodes, i.e. those containing whitespace only:</p>
<pre><code>item['detail_text'] = site.xpath('div[@class="son2"]/text()[normalize-space()]').extract()
</code></pre>
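<p>If for some reason you cannot change the XPath, an equivalent cleanup can be done in plain Python after extraction by dropping whitespace-only strings. A small sketch (the sample list is a made-up stand-in for the extracted output shown in the question):</p>

```python
# Hypothetical extracted list shaped like the one in the question.
extracted = ["\n", "\n", "\n", "first verse line", "second verse line\n"]

# Drop whitespace-only entries and trim the rest.
detail_text = [t.strip() for t in extracted if t.strip()]
print(detail_text)  # -> ['first verse line', 'second verse line']
```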
| 3
|
2016-09-07T02:41:13Z
|
[
"python",
"xpath",
"scrapy"
] |
How to load multiple csv into pandas without concatenate?
| 39,360,043
|
<p>I would like to apply two separate graphs to each text file in a folder with subdirectories; however, I don't want the files joined into one data frame.
I am currently only able to load one file into pandas at a time. If I put the root directory, I get an error that the file doesn't exist.</p>
<pre><code>data = pd.read_csv(r'/Users/work/DexterStudio/DataFolder/*', sep=" ", header = None, na_values='NaN')
# organize data
data.drop(data.columns[[4]], axis=1, inplace=True)
data.columns = ["timestamp", "x", "y", "z"]
#get current axes object
frame1 = plt.gca()
#draw two graphs
plt.plot(data['timestamp'],data['x'],color='r', label='x-axis')
plt.plot(data['timestamp'],data['y'], color='b', label='y-axis')
# hide axes
frame1.axes.get_xaxis().set_visible(False)
plt.legend(loc='upper right')
plt.show()
plt.plot(data['timestamp'],data['z'],color='g', label='z-axis')
plt.legend(loc='upper right')
plt.show()
</code></pre>
| 0
|
2016-09-07T01:35:58Z
| 39,360,104
|
<p>Just do two read statements into two variables and go from there:</p>
<pre><code>data1 = pd.read_csv(r'/Users/work/DexterStudio/DataFolder/file1', sep=" ", header = None, na_values='NaN')
data2 = pd.read_csv(r'/Users/work/DexterStudio/DataFolder/file2', sep=" ", header = None, na_values='NaN')
</code></pre>
<p>Note that each file is named explicitly in its read statement, and that you now have <code>data1</code> and <code>data2</code>.</p>
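<p>If there are many files, the same idea generalizes with <code>glob</code>: keep one DataFrame per file in a dict instead of concatenating anything. A sketch (the folder path and <code>.txt</code> extension are assumptions based on the question):</p>

```python
import glob
import os
import pandas as pd

def load_frames(folder):
    """Load every .txt file under `folder` (recursively) into its own DataFrame."""
    frames = {}
    for path in glob.glob(os.path.join(folder, '**', '*.txt'), recursive=True):
        # One DataFrame per file, keyed by filename; nothing is concatenated.
        frames[os.path.basename(path)] = pd.read_csv(path, sep=" ", header=None,
                                                     na_values='NaN')
    return frames

# Usage with the question's folder (adjust as needed):
# frames = load_frames('/Users/work/DexterStudio/DataFolder')
# for name, df in frames.items():
#     print(name, df.shape)
```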
| 0
|
2016-09-07T01:47:04Z
|
[
"python",
"csv",
"pandas"
] |
What is the fast way to get a list with the corresponding serial number with pandas?
| 39,360,107
|
<p>I have to drop some columns, and sometimes the column names are hard to type, so I want to get a list (or tuple, or array) with the corresponding serial numbers; then I can drop them with <code>df1.drop(df1.columns[[0, 1, 3]], axis=1)</code>.</p>
<p>What is a fast way to do this with pandas?</p>
<pre><code>In [2]: df1.columns
Out[2]: Index(['Unnamed: 0', 'Unnamed: 1', 'Unnamed: 2', 'Unnamed: 3', 'ä¸ç', 'ä¸ç',
'ä¸ç.1', 'ä¸ç.1', 'Unnamed: 8', 'Unnamed: 9', 'Unnamed: 10',
'Unnamed: 11'],
dtype='object'
In [3]: a = df1.columns.tolist()
b = list(range(len(df1.columns)))
tuple(zip(a, b))
Out[3]: (('Unnamed: 0', 0),
('Unnamed: 1', 1),
('Unnamed: 2', 2),
('Unnamed: 3', 3),
('ä¸ç', 4),
('ä¸ç', 5),
('ä¸ç.1', 6),
('ä¸ç.1', 7),
('Unnamed: 8', 8),
('Unnamed: 9', 9),
('Unnamed: 10', 10),
('Unnamed: 11', 11))
In [4]: df1.drop(df1.columns[6:], axis=1)
</code></pre>
| 0
|
2016-09-07T01:47:33Z
| 39,788,263
|
<p>Finally, I found a way to do it quickly and easily:</p>
<pre><code>for i, element in enumerate(df.columns):
print(i, element)
</code></pre>
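<p>Building on the same idea, a dict comprehension gives a name-to-position lookup in one line. The sketch below uses a plain list of made-up names standing in for <code>df.columns</code>, so it runs without pandas:</p>

```python
# Stand-in for df.columns; with pandas you would write list(df.columns).
columns = ['Unnamed: 0', 'Unnamed: 1', 'col_a', 'Unnamed: 3']

# Name -> position lookup.
# Note: duplicate names keep only the last position, so for duplicated
# columns the enumerate-and-print loop above is still the safer tool.
positions = {name: i for i, name in enumerate(columns)}
print(positions['Unnamed: 3'])  # -> 3
```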
| 0
|
2016-09-30T09:33:59Z
|
[
"python",
"pandas"
] |
How do I get astropy Column to store string of any length?
| 39,360,168
|
<p>I'm generating a few catalogues, and would like to have a column for comments. For some reason, when I generate the column and try to store a comment it only takes the first character. </p>
<pre><code>from astropy.table import Column
C1 = Column(['']*12, name = 'ID')
C1[4] = 'test comment'
</code></pre>
<p>Then </p>
<pre><code>print C1[4]
>> t
</code></pre>
<p>Looking at C1, I see that <code><Column name='ID' dtype='str1' length=12></code>
so it's obviously only storing a 1 char string. </p>
<p>if I try </p>
<pre><code>C2 = Column(['some really long silly string']*12, name = 'ID')
C2[4] = 'test comment'
</code></pre>
<p>then</p>
<pre><code>print C1[4]
>> test comment
</code></pre>
<p>but again, I can only store up to a 29 char string because <code><Column name='ID' dtype='str29' length=12></code> and this is a terrible solution anyway. </p>
<p>How do I tell Column to store any length string?</p>
| 0
|
2016-09-07T01:56:43Z
| 39,363,664
|
<p>For this use case I usually first collect the data as a Python list of strings and then call the <code>astropy.table.Column</code> constructor.</p>
<pre><code>>>> from astropy.table import Column
>>> data = ['a', 'bbb']
>>> Column(data=data, name='spam')
<Column name='spam' dtype='str3' length=2>
  a
bbb
</code></pre>
<p>The <code>Column</code> will convert your data to a Numpy array with fixed width <code>dtype</code> for strings of the appropriate length (and left pad shorter strings with spaces).</p>
<p>Similarly, when constructing <code>astropy.table.Table</code> objects, I usually first collect the data as a Python list of dicts of row data, and then let the <code>Table</code> constructor figure out the appropriate <code>dtype</code> automatically.</p>
<pre><code>>>> from astropy.table import Table
>>> rows = [{'ham': 42, 'spam': 'a'}, {'ham': 99, 'spam': 'bbb'}]
>>> table = Table(rows=rows, names=['spam', 'ham'])
>>> table
<Table length=2>
spam ham
str3 int64
---- -----
a 42
bbb 99
</code></pre>
<p>Of course this isn't super fast or memory-efficient, but for my applications it's good enough.</p>
<p>More generally, note that working with strings stored in Numpy arrays (which is what <code>astropy.table.Column</code> is doing) simply is painful (in my opinion, no offense intended to Numpy developers or people that like it). The best support I'm aware of for this comes from <code>pandas</code>, so you could use <code>pandas</code> to work with your data and use the <code>to_pandas</code> and <code>from_pandas</code> method of <code>astropy.table.Table</code> if you need an Astropy table, e.g. to read / write to FITS files or do something else that <code>pandas.DataFrame</code> doesn't support.</p>
| 0
|
2016-09-07T07:23:35Z
|
[
"python",
"string",
"ascii",
"astropy"
] |
Corrupted GZIP file downloaded from Google Cloud Storage
| 39,360,222
|
<p>When I download a GZIP file stored in a bucket in Google Cloud Storage from the Storage platform web UI, everything goes well and I can unzip the file without any problem.</p>
<p>However, when I use googleapiclient with Python in order to download the file, I cannot unzip it. 7-Zip says that the file is broken.
My Code:</p>
<pre><code>import io
import os  # needed for os.path.join below

from apiclient.http import MediaIoBaseDownload
from googleapiclient import http

bucket = 'bqtoredshiftdaily'
out_file = os.path.join(current_dir, process_name, "Upload", gcsfile.replace("/", "_"))
with open(out_file, 'w') as f:
    req = gcs_service.objects().get_media(bucket=bucket, object=gcsfile)
    downloader = http.MediaIoBaseDownload(f, req)
    done = False
    while done is False:
        status, done = downloader.next_chunk()
        print("Download {}%.".format(int(status.progress() * 100)))
</code></pre>
</code></pre>
<p>The download succeeds but as I said, I cannot unzip the downloaded GZIP file.
Any idea why?</p>
| 0
|
2016-09-07T02:05:57Z
| 39,364,805
|
<p>I changed the output file to binary and it solved it:</p>
<pre><code>with open(out_file, 'wb') as f:
</code></pre>
<p>instead of:</p>
<pre><code>with open(out_file, 'w') as f:
</code></pre>
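<p>The underlying issue is that a gzip archive is binary data, and text mode can mangle the bytes (for example through newline translation on Windows). A quick standard-library sketch showing a binary-mode round trip staying intact:</p>

```python
import gzip
import os
import tempfile

payload = b"line1\nline2\n" * 100
compressed = gzip.compress(payload)

path = os.path.join(tempfile.mkdtemp(), "data.gz")
with open(path, 'wb') as f:  # binary mode, as in the fix above
    f.write(compressed)

with open(path, 'rb') as f:
    restored = gzip.decompress(f.read())
print(restored == payload)  # -> True
```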
| 1
|
2016-09-07T08:25:10Z
|
[
"python",
"gzip",
"google-cloud-storage"
] |
Access files relative to imported python module
| 39,360,242
|
<p>Preliminary:</p>
<p>I have Anaconda 3 on Windows 10, and a folder, <code>folder_default</code>, that I have put on the Python path. I'm not actually sure whether that's the right terminology, so to be clear: regardless to where my Python script is, if I have a line of code that says <code>import myFile</code>, that line of code will succeed if <code>myFile.py</code> is in <code>folder_default</code>.</p>
<p>My issue:</p>
<p>In <code>folder_default</code>, I have:</p>
<ol>
<li>A subfolder called <code>useful_files</code> which contains a text file called <code>useful_file_1.txt</code>.</li>
<li>A python script called <code>my_misc.py</code>.</li>
</ol>
<p><code>my_misc.py</code> has a line similar to: <code>np.loadtxt('useful_files/useful_file_1.txt')</code>. This line does not work if I use <code>import my_misc</code> in a Python file in a location other than <code>folder_default</code>, since <code>useful_files/useful_file_1.txt</code> is not the folder path relative to the Python file that imports <code>my_misc.py</code>. I don't want to start using global file paths if I can avoid it.</p>
<p>How can I access files using file paths relative to the imported python module, rather than relative to the python script that imports that module?</p>
<p>Please let me know if the question is unclear - I tried to write a fake, minimal version of the setup that's actually on my computer in the hopes that that would simplify things, but I can change it if that actually makes things more confusing.</p>
<p>Thanks.</p>
| 0
|
2016-09-07T02:08:13Z
| 39,360,322
|
<p>You can get the path to the current module using the <code>getfile</code> method of the <code>inspect</code> module, as <code>inspect.getfile(inspect.currentframe())</code>.
For example:</p>
<pre><code># File: my_misc.py
import os, inspect

# get the path to the directory of the current module
module_dir_path = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
useful_file_path = os.path.join(module_dir_path, 'useful_files', 'useful_file_1.txt')

# the path is stored in useful_file_path. do whatever you want!
np.loadtxt(useful_file_path)
</code></pre>
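<p>For completeness, the same directory can usually be obtained more simply from a module's <code>__file__</code> attribute, which ordinary imported modules have. The sketch below demonstrates the pattern on the standard-library <code>json</code> package only so that it is runnable anywhere:</p>

```python
import os
import json  # stand-in for any imported module

# Directory containing a module's source file, via its __file__ attribute.
module_dir = os.path.dirname(os.path.abspath(json.__file__))
print(os.path.isdir(module_dir))  # -> True

# Inside my_misc.py itself you would write:
# module_dir_path = os.path.dirname(os.path.abspath(__file__))
# useful_file_path = os.path.join(module_dir_path, 'useful_files', 'useful_file_1.txt')
```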
| 0
|
2016-09-07T02:20:29Z
|
[
"python",
"import",
"relative-path",
"python-import"
] |
How to specify the level of json.loads()?
| 39,360,261
|
<p>I have a JSON string to be loaded:</p>
<pre><code>{"a": 1, "b": {"c": 123} }
</code></pre>
<p>If I use <code>json.loads</code>, it will load <code>{"c": 123}</code> as a <code>dict</code>. However, I don't want that to happen.</p>
<p>I want to be able to access the string directly, without interpreting the internal object in to a dict, like so:</p>
<p><code>json_dict['b'] == '{"c": 123}'</code></p>
<p>rather than,</p>
<p><code>json_dict['b'] == {"c": 123}</code></p>
<p>since the format (like spaces) and order (which will be random in a <code>dict</code>) may change and the exact string is needed to do an RSA verification.</p>
<p>The actual code follows:</p>
<pre><code>r = requests.get('https://openapi.alipaydev.com/gateway.do', params)
print(r.text)
response = json.loads(r.text)
print(response)
alipay_response = response['alipay_trade_precreate_response']
print(alipay_response['code'])
</code></pre>
<p>The output is:</p>
<pre><code>{"alipay_trade_precreate_response":{"code":"10000","msg":"Success","out_trade_no":"123","qr_code":"https:\/\/qr.alipay.com\/bax08377e7kxveupoqnt001c"},"sign":"EqocoROqXbpkGdkFZakEkoOymGS7+UcvNi1YmcQffF4wtyQcj/RTO1sLHY8tWZFx0rxQAPjkX+7Hrszn4pNWkuBbM/c88oEbxYc+pCvnF49SHZmfkBqY6eJlLIHgPHXus5KFtvlMmkzANNHmD7c72FLDAbvMHKVyEcRPkU9ANIk="}
{'sign': 'EqocoROqXbpkGdkFZakEkoOymGS7+UcvNi1YmcQffF4wtyQcj/RTO1sLHY8tWZFx0rxQAPjkX+7Hrszn4pNWkuBbM/c88oEbxYc+pCvnF49SHZmfkBqY6eJlLIHgPHXus5KFtvlMmkzANNHmD7c72FLDAbvMHKVyEcRPkU9ANIk=', 'alipay_trade_precreate_response': {'out_trade_no': '123', 'qr_code': 'https://qr.alipay.com/bax08377e7kxveupoqnt001c', 'msg': 'Success', 'code': '10000'}}
10000
</code></pre>
<p>We can see that <code>response['alipay_trade_precreate_response']</code> is a <code>dict</code>, and the order is different from the string. I need the string so I can verify the RSA signature.</p>
| -1
|
2016-09-07T02:10:15Z
| 39,360,531
|
<p>One solution is the <code>object_hook</code> parameter of <code>json.loads</code>.</p>
<p><code>object_hook</code> is an optional function that will be called with the result of any object literal decoded (a <code>dict</code>). The return value of <code>object_hook</code> will be used instead of the <code>dict</code>. This feature can be used to implement custom decoders (e.g. JSON-RPC class hinting).</p>
<pre><code>def phook(x):
    for item in x:
        if isinstance(x[item], dict):
            x[item] = json.dumps(x[item])
    return x

x = '{"a": 1, "b": {"c": 123}}'
json.loads(x, object_hook=phook)
# the result will be
# {u'a': 1, u'b': '{"c": 123}'}
</code></pre>
| -1
|
2016-09-07T02:51:56Z
|
[
"python",
"json"
] |
How to specify the level of json.loads()?
| 39,360,261
|
<p>I have a JSON string to be loaded:</p>
<pre><code>{"a": 1, "b": {"c": 123} }
</code></pre>
<p>If I use <code>json.loads</code>, it will load <code>{"c": 123}</code> as a <code>dict</code>. However, I don't want that to happen.</p>
<p>I want to be able to access the string directly, without interpreting the internal object in to a dict, like so:</p>
<p><code>json_dict['b'] == '{"c": 123}'</code></p>
<p>rather than,</p>
<p><code>json_dict['b'] == {"c": 123}</code></p>
<p>since the format (like spaces) and order (which will be random in a <code>dict</code>) may change and the exact string is needed to do an RSA verification.</p>
<p>The actual code follows:</p>
<pre><code>r = requests.get('https://openapi.alipaydev.com/gateway.do', params)
print(r.text)
response = json.loads(r.text)
print(response)
alipay_response = response['alipay_trade_precreate_response']
print(alipay_response['code'])
</code></pre>
<p>The output is:</p>
<pre><code>{"alipay_trade_precreate_response":{"code":"10000","msg":"Success","out_trade_no":"123","qr_code":"https:\/\/qr.alipay.com\/bax08377e7kxveupoqnt001c"},"sign":"EqocoROqXbpkGdkFZakEkoOymGS7+UcvNi1YmcQffF4wtyQcj/RTO1sLHY8tWZFx0rxQAPjkX+7Hrszn4pNWkuBbM/c88oEbxYc+pCvnF49SHZmfkBqY6eJlLIHgPHXus5KFtvlMmkzANNHmD7c72FLDAbvMHKVyEcRPkU9ANIk="}
{'sign': 'EqocoROqXbpkGdkFZakEkoOymGS7+UcvNi1YmcQffF4wtyQcj/RTO1sLHY8tWZFx0rxQAPjkX+7Hrszn4pNWkuBbM/c88oEbxYc+pCvnF49SHZmfkBqY6eJlLIHgPHXus5KFtvlMmkzANNHmD7c72FLDAbvMHKVyEcRPkU9ANIk=', 'alipay_trade_precreate_response': {'out_trade_no': '123', 'qr_code': 'https://qr.alipay.com/bax08377e7kxveupoqnt001c', 'msg': 'Success', 'code': '10000'}}
10000
</code></pre>
<p>We can see that <code>response['alipay_trade_precreate_response']</code> is a <code>dict</code>, and the order is different from the string. I need the string so I can verify the RSA signature.</p>
| -1
|
2016-09-07T02:10:15Z
| 39,362,865
|
<p>To directly answer your question: no, <code>json.loads</code> doesn't support this.</p>
<p>I've included two ways to accomplish your goal below. The first is very robust and relies on the lexer from <a href="https://pypi.python.org/pypi/ijson/" rel="nofollow"><code>ijson</code></a>. The second method is much less robust and relies on a regular expression.</p>
<pre><code>json_string = '{"alipay_trade_precreate_response":{"code":"10000","msg":"Success","out_trade_no":"123","qr_code":"https:\/\/qr.alipay.com\/bax08377e7kxveupoqnt001c"},"sign":"EqocoROqXbpkGdkFZakEkoOymGS7+UcvNi1YmcQffF4wtyQcj/RTO1sLHY8tWZFx0rxQAPjkX+7Hrszn4pNWkuBbM/c88oEbxYc+pCvnF49SHZmfkBqY6eJlLIHgPHXus5KFtvlMmkzANNHmD7c72FLDAbvMHKVyEcRPkU9ANIk="}'

# METHOD 1:
# ---------
import ijson
import io

def get_raw_dict_string(json_string, key='alipay_trade_precreate_response'):
    tokens = [[pos, symbol] for pos, symbol in ijson.backends.python.Lexer(io.StringIO(json_string))]
    if tokens[0][1] != '{':
        raise Exception("not a dictionary")
    start = None
    end = None
    i = 0
    level = 0
    while i < len(tokens):
        pos, symbol = tokens[i]
        if symbol == '{':
            level += 1
        elif symbol == '}':
            level -= 1
            if level == 1:
                end = pos
                break
        elif level == 1 and symbol[0] == '"':
            if ijson.backends.python.unescape(symbol[1:-1]) == key:
                if tokens[i+1][1] == ':' and tokens[i+2][1] == '{':
                    start = tokens[i+2][0]
                    i += 2
                    level += 1
        i += 1
    return json_string[start:end+1]

print(get_raw_dict_string(json_string))
# Output:
# {"code":"10000","msg":"Success","out_trade_no":"123","qr_code":"https:\/\/qr.alipay.com\/bax08377e7kxveupoqnt001c"}

# METHOD 2:
# ---------
import re
print(re.search(r'"alipay_trade_precreate_response"\s*:\s*({[^}]*})', json_string).group(1))
# Output:
# {"code":"10000","msg":"Success","out_trade_no":"123","qr_code":"https:\/\/qr.alipay.com\/bax08377e7kxveupoqnt001c"}
</code></pre>
| 0
|
2016-09-07T06:44:26Z
|
[
"python",
"json"
] |
Regex for format ( HHh MMm SSs ) with optional hours
| 39,360,329
|
<p>I am struggling to get a regex working for the below format. Pointers appreciated.</p>
<pre><code>( 43m 12s )
( 13m 11s )
( 11h 43m 12s )
( 1h 43m 12s )
</code></pre>
<p>Edit:</p>
<p>The above examples are part of longer strings.</p>
<p>Edit2:</p>
<p>This is what I have now:</p>
<pre><code> \s\(\s\d{1,2}[a-z]\s.*\)
</code></pre>
| 0
|
2016-09-07T02:21:17Z
| 39,360,354
|
<p>You don't necessarily need to approach it with regular expressions.</p>
<p>Here is another option: use the <a href="https://labix.org/python-dateutil" rel="nofollow"><code>dateutil</code></a> datetime parser:</p>
<pre><code>>>> from dateutil.parser import parse
>>> l = ["43m 12s", "13m 11s", "11h 43m 12s", "1h 43m 12s"]
>>> for item in l:
... dt = parse(item)
... print(item, dt.hour, dt.minute, dt.second)
...
('43m 12s', 0, 43, 12)
('13m 11s', 0, 13, 11)
('11h 43m 12s', 11, 43, 12)
('1h 43m 12s', 1, 43, 12)
</code></pre>
<p>Or, you can use <a href="https://docs.python.org/2/library/time.html#time.strptime" rel="nofollow"><code>time.strptime()</code></a> trying out <code>%Hh %Mm %Ss</code> and, if it fails, trying <code>%Mm %Ss</code> format.</p>
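<p>The <code>time.strptime()</code> fallback mentioned above can be sketched like this (the helper name is mine, and the two format strings are taken from the answer's suggestion):</p>

```python
import time

def parse_duration(s):
    """Parse '11h 43m 12s' or '43m 12s' into (hours, minutes, seconds)."""
    for fmt in ("%Hh %Mm %Ss", "%Mm %Ss"):
        try:
            t = time.strptime(s, fmt)
            return t.tm_hour, t.tm_min, t.tm_sec
        except ValueError:
            continue  # wrong format, try the next one
    raise ValueError("unrecognized duration: %r" % s)

print(parse_duration("11h 43m 12s"))  # -> (11, 43, 12)
print(parse_duration("43m 12s"))      # -> (0, 43, 12)
```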
| 2
|
2016-09-07T02:25:16Z
|
[
"python",
"regex"
] |
Regex for format ( HHh MMm SSs ) with optional hours
| 39,360,329
|
<p>I am struggling to get a regex working for the below format. Pointers appreciated.</p>
<pre><code>( 43m 12s )
( 13m 11s )
( 11h 43m 12s )
( 1h 43m 12s )
</code></pre>
<p>Edit:</p>
<p>The above examples are part of longer strings.</p>
<p>Edit2:</p>
<p>This is what I have now:</p>
<pre><code> \s\(\s\d{1,2}[a-z]\s.*\)
</code></pre>
| 0
|
2016-09-07T02:21:17Z
| 39,361,962
|
<p>If you don't need to capture hours minutes and seconds, this will work: <code>\(\s?(?:\d{1,2}\w )+\s?\)</code> you can see it working here: <a href="https://regex101.com/r/yC8iH6/1" rel="nofollow">https://regex101.com/r/yC8iH6/1</a></p>
<p><strong>[EDIT]</strong>: Add capturing if needed:</p>
<p>If you need to capture you can do this:
<code>\(\s?(?:(\d{1,2})\w\s?)?(?:(\d{1,2})\w\s?)(?:(\d{1,2})\w\s?)\s?\)</code>. Notice that the first grouping is optional. </p>
<p>You can see this working version here: <a href="https://regex101.com/r/yC8iH6/2" rel="nofollow">https://regex101.com/r/yC8iH6/2</a>.</p>
<p>Also note that the first non-capturing regex can be written like this for more accuracy: <code>\(\s?(?:\d{1,2}\w ){2,3}\s?\)</code>.</p>
<p>Hope this helps :)</p>
| 1
|
2016-09-07T05:41:59Z
|
[
"python",
"regex"
] |
How do I use raw input inside a function
| 39,360,336
|
<p>I have scoured the forum trying to find a good way to make a function that takes raw input and uses it.</p>
<pre><code>print "Roll for Agility"

def Rolling(a, b, value):
    in1 = raw_input()
    if in1 == 'roll':
        irand = randrange(a, b)
    elif in1 == 'Roll':
        irand = randrange(a, b)
    else:
        print "Please Type <roll> in order to roll the dice."
        Rolling ()
    print "Your %d is %d" % (value, irand)

Rolling(1, 10, Agility)
</code></pre>
<p>It's supposed to take the numbers for the rolling range, and the number in the roll is inserted into a value (Agility in this case).</p>
<p>The code doesn't work because there's a problem with the raw input and the arguments put inside "Rolling function". I want the function not only to take raw input but also process it. I don't want to make raw input before the function and later add it manually into the function by putting the raw input into a string or int.</p>
<p>Thanks in advance!</p>
| -3
|
2016-09-07T02:22:45Z
| 39,360,487
|
<p>The code has some typos. Error message <code>NameError: name 'value' is not defined</code> is generated because that statement has been mistakenly put outside the <code>Rolling</code> function body where <code>value</code> is not defined.</p>
<p>Corrected code should look like:</p>
<pre><code>#Rolling for Agility
from random import randrange
print "Roll for Agility"
def Rolling(a, b, value):
in1 = raw_input()
if in1 == 'roll' or in1 == 'Roll':
irand = randrange(a, b)
print "Your %s is %d" % (value, irand)
else:
print "Please Type <roll> in order to roll the dice."
Rolling(a,b,value) # using recursion to call again incase of erroneous input
Rolling(1, 10, "Agility")
</code></pre>
| 2
|
2016-09-07T02:44:36Z
|
[
"python"
] |
Data won't load from text file - ValueError: not enough values to unpack (expected 3, got 1)
| 39,360,345
|
<p>Can't get this text data to load:</p>
<pre class="lang-none prettyprint-override"><code>Al,95191,619851,
Joe,651651,616951,
</code></pre>
<p>The load module:</p>
<pre><code>def loadPlayers():
Roster = {}
filename = input("Filename to load: ")
inFile = open(filename, "rt")
print("Loading data...")
while True:
inLine = inFile.readline()
if not inLine:
break
inLine = inLine[:-1]
name, phone, jersey = inLine.split(",")
Roster[name] = Players(name, phone, jersey)
print("Data Loaded Successfully.")
inFile.close()
return Roster
</code></pre>
<p>I get this error:</p>
<pre class="lang-none prettyprint-override"><code>line 103, in loadPlayers name, phone, jersey = inLine.split(",")
ValueError: not enough values to unpack (expected 3, got 1)
</code></pre>
| 0
|
2016-09-07T02:23:58Z
| 39,360,840
|
<p>You can improve the script a little so it copes with lines that are not formatted as expected, such as the trailing comma in your sample data, which leaves an empty fourth field after the split.</p>
<pre><code>def loadPlayers():
Roster = {}
filename = input("Filename to load: ")
inFile = open(filename, "rt")
print("Loading data...")
while True:
inLine = inFile.readline()
if not inLine:
break
        inLine = inLine[:-1].split(",")
        if len(inLine) < 3:
            continue  # skip blank or malformed lines
        name, phone, jersey = inLine[:3]  # drop the empty field left by the trailing comma
Roster[name] = Players(name, phone, jersey)
print("Data Loaded Successfully.")
inFile.close()
return Roster
</code></pre>
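As an aside, the stdlib <code>csv</code> module handles this kind of comma-separated file for you. A hypothetical variant (a plain tuple stands in for the <code>Players</code> class used in the question, and <code>load_players</code> is just an illustrative name):

```python
import csv

def load_players(filename):
    roster = {}
    with open(filename, "rt") as in_file:
        for row in csv.reader(in_file):
            if len(row) < 3:
                continue  # skip blank or malformed lines
            name, phone, jersey = row[:3]
            roster[name] = (name, phone, jersey)
    return roster
```

The reader also spares you the manual <code>readline()</code> loop and the <code>[:-1]</code> newline stripping.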
| 0
|
2016-09-07T03:36:28Z
|
[
"python"
] |
How to aggregate values of pandas series
| 39,360,479
|
<h3>Data manipulation using pandas</h3>
<p>Anyone having bright ways to manipulate the values of concatenated pandas series to find total counts? </p>
<hr>
<p>Current data (type: <code>pandas.core.series.Series</code>)
FYI, this data is generated by using 'groupby' function from the raw data.</p>
<pre><code>date device
2015-07-08 a 0
b 0
c 0
d 1
2015-07-09 a 0
c 1
d 1
2015-07-10 a 1
b 1
c 1
</code></pre>
<p>Expected result (type: <code>pandas.core.series.Series</code>)<br>
Value of each device denotes the total number of counts up to date A.<br>
As an example, total(2015-07-10, c) = 2 because (2015-07-09, c) = 1 and (2015-07-10, c) = 1</p>
<pre><code>date device
2015-07-08 a 0
b 0
c 0
d 1
2015-07-09 a 0
c 1
d 2
2015-07-10 a 1
b 1
c 2
</code></pre>
| 1
|
2016-09-07T02:43:41Z
| 39,361,527
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.cumsum.html" rel="nofollow"><code>DataFrameGroupBy.cumsum</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.groupby.html" rel="nofollow"><code>groupby</code></a> by second level:</p>
<pre><code>dates = pd.DatetimeIndex(['2015-07-08','2015-07-08','2015-07-08','2015-07-08',
'2015-07-09','2015-07-09','2015-07-09',
'2015-07-10','2015-07-10','2015-07-10'])
devices = ['a','b','c','d','a','c','d','a','b','c']
idx = pd.MultiIndex.from_tuples(list(zip(dates, devices)), names=['date', 'device'])
s = pd.Series([0,0,0,1,0,1,1,1,1,1], index= idx)
print (s)
date device
2015-07-08 a 0
b 0
c 0
d 1
2015-07-09 a 0
c 1
d 1
2015-07-10 a 1
b 1
c 1
dtype: int64
print (s.groupby(level=1).cumsum())
date device
2015-07-08 a 0
b 0
c 0
d 1
2015-07-09 a 0
c 1
d 2
2015-07-10 a 1
b 1
c 2
dtype: int64
</code></pre>
| 2
|
2016-09-07T05:01:38Z
|
[
"python",
"pandas",
"series",
"multi-index",
"cumsum"
] |
Trying to scrape this website using python but unable to get the required data
| 39,360,498
|
<p>I am trying to get the company names in this website <a href="https://siftery.com/microsoft-outlook" rel="nofollow">https://siftery.com/microsoft-outlook</a>
Basically it lists some companies that use Microsoft Outlook.
I used BeautifulSoup,requests,urllib and urllib2 but I still am not getting the names of the companies that use Microsoft Outlook not even in the first page of the website. </p>
<p>The code I wrote is below - </p>
<pre><code>r = requests.get('http://siftery.com/microsoft-outlook')
print(str(r.content))
f=open('abc.txt','w')
f.write(r.content)
f.close()
</code></pre>
<p>and part of the output that looks interesting is this -</p>
<p>({"name":"Marketing","handle":"marketing","categories":[{"name":"Marketing Automation","handle":"marketing-automation","external_id":"tgJ_49k7v4J-wV","parent_handle":null,"categories":[{"name":"Marketing Automation Platforms","handle":"marketing-automation-platforms","external_id":"tgJLE9aHoLdneT","parent_handle":"marketing-automation"},</p>
<p>BeautifulSoup also gives me the same output, so do the other libraries.
It seems like "external_id" is where the company name is ? I'm not sure. I also tried to manually find the name of a company for example Acxiom using gedit but couldn't find any occurrence.</p>
| -1
|
2016-09-07T02:46:49Z
| 39,360,846
|
<p>That site loads its content with JavaScript, which means that when you make the request the DOM is returned without the information, because it is loaded asynchronously. For sites like that you would normally use Selenium.</p>
<p>Note:
Before you build a scraper you should check whether the site has an API or end-points with CORS disabled. In your case you can get the information by making a POST request to <code>https://siftery.com/product-json/<product_name></code></p>
| 1
|
2016-09-07T03:36:57Z
|
[
"python",
"web-scraping"
] |
Trying to scrape this website using python but unable to get the required data
| 39,360,498
|
<p>I am trying to get the company names in this website <a href="https://siftery.com/microsoft-outlook" rel="nofollow">https://siftery.com/microsoft-outlook</a>
Basically it lists some companies that use Microsoft Outlook.
I used BeautifulSoup,requests,urllib and urllib2 but I still am not getting the names of the companies that use Microsoft Outlook not even in the first page of the website. </p>
<p>The code I wrote is below - </p>
<pre><code>r = requests.get('http://siftery.com/microsoft-outlook')
print(str(r.content))
f=open('abc.txt','w')
f.write(r.content)
f.close()
</code></pre>
<p>and part of the output that looks interesting is this -</p>
<p>({"name":"Marketing","handle":"marketing","categories":[{"name":"Marketing Automation","handle":"marketing-automation","external_id":"tgJ_49k7v4J-wV","parent_handle":null,"categories":[{"name":"Marketing Automation Platforms","handle":"marketing-automation-platforms","external_id":"tgJLE9aHoLdneT","parent_handle":"marketing-automation"},</p>
<p>BeautifulSoup also gives me the same output, so do the other libraries.
It seems like "external_id" is where the company name is ? I'm not sure. I also tried to manually find the name of a company for example Acxiom using gedit but couldn't find any occurrence.</p>
| -1
|
2016-09-07T02:46:49Z
| 39,361,131
|
<p>The data is available directly as JSON. You can use requests to get it like this:</p>
<pre><code>import requests
r = requests.post('https://siftery.com/product-json/microsoft-outlook')
data = r.json()['content']
companies = data['companies']
for company in companies:
print(companies[company]['name'])
</code></pre>
<p><strong>Output</strong></p>
<pre>
Public Technologies
Consalta
PagesJaunes.ca
Chumbak
Media Classified
P.I. Works
Saatchi & Saatchi Pro
Tribeck Strategies
Marketecture Solutions, LLC
Trinity Ventures
ARGOS
CFN Services
Last.Backend
Saatchi & Saatchi USA
Netcad
Central Element
NextGear Capital
Masao
Avalon
Motiwe
Bilge Adam
Impakt Athletics
SOZO Design
ThroughTek
Abovo42
Acxiom
ICEPAY
Connexta
Clearview
Mortgage Coach
</pre>
<p>There are other categories of information which you might want to investigate:</p>
<pre><code>>>> data.keys()
[u'product', u'vendor', u'users', u'group_members', u'companies', u'customers', u'other_categories', u'current_user', u'page_info', u'portfolio_products', u'primary_category', u'metadata']
</code></pre>
| 0
|
2016-09-07T04:19:31Z
|
[
"python",
"web-scraping"
] |
Using zipped queryset many times in template
| 39,360,519
|
<p>I have a model that is only a string :</p>
<pre><code>class Data(models.Model):
string = models.CharField(max_length=200);
</code></pre>
<p>There are <strong>2</strong> registered instances of the model in my database.</p>
<p>It is rendered by this view, which zips the queryset which another list: </p>
<pre><code>def index(request):
data = Data.objects.all();
data2 = [];
for x in data:
data2.append(0);
return render(request, 'testApp/index.html', {"data": zip(data, data2)})
</code></pre>
<p>and here's the template code: </p>
<pre><code>{% for element, e in data %}
{{ element.string }} {{ e }} <br/>
{% endfor %}
{% for element, e in data %}
{{ element.string }} {{ e }} <br/>
{% endfor %}
</code></pre>
<p>This template iterates over the data twice, printing out the elements in the zipped list.</p>
<p>Here's my output:</p>
<pre><code>hello there 0
i am a string 0
</code></pre>
<p><strong>I am expecting 4 lines of output, because the 2 instances are looped over twice.</strong> However it's only printing them out once. What am I doing wrong?</p>
| 0
|
2016-09-07T02:50:36Z
| 39,360,591
|
<p>In Python 3, <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow"><code>zip</code></a> will give you an iterator, meaning it will be consumed on the first loop and therefore not print anything on the second loop.</p>
<p>You can fix this by casting the iterator to a list, replacing <code>zip(data, data2)</code> with <code>list(zip(data, data2))</code>.</p>
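A quick demonstration of the difference, using stand-in data for the two querysets:

```python
data = ["hello there", "i am a string"]
data2 = [0, 0]

pairs = zip(data, data2)           # an iterator in Python 3
print(list(pairs))  # [('hello there', 0), ('i am a string', 0)]
print(list(pairs))  # [] -- the iterator was consumed by the first list()

pairs = list(zip(data, data2))     # materialise it once ...
print(list(pairs))  # ... and it survives any number of passes
print(list(pairs))
```

This is exactly why the Django template prints the rows on the first loop only: the template's second <code>{% for %}</code> finds the zip object already exhausted.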
| 2
|
2016-09-07T02:59:54Z
|
[
"python",
"django"
] |
Identification of rows containing column median in numpy matrix of cum percentiles
| 39,360,535
|
<p>Consider the matrix <code>quantiles</code> that's a subset <code>[:8,:3,0]</code> of a 3D matrix with shape <code>(10,355,8)</code>. </p>
<pre><code>quantiles = np.array([
[ 1. , 1. , 1. ],
[ 0.63763978, 0.61848863, 0.75348137],
[ 0.43439645, 0.42485407, 0.5341457 ],
[ 0.22682343, 0.18878366, 0.25253915],
[ 0.16229408, 0.12541476, 0.15263742],
[ 0.12306046, 0.10372971, 0.09832783],
[ 0.09271845, 0.08209844, 0.05982584],
[ 0.06363636, 0.05471266, 0.03855727]])
</code></pre>
<p>I want a boolean output of the same shape as the <code>quantiles</code> matrix where <code>True</code> marks the row in which the median is located:</p>
<pre><code>In [21]: medians
Out[21]:
array([[False, False, False],
[ True, True, False],
[False, False, True],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False]], dtype=bool)
</code></pre>
<p>To achieve this, I have the following algorithm in mind:</p>
<p>1) Identify the entries that are greater than <code>.5</code>:</p>
<pre><code>In [22]: quantiles>.5
Out[22]:
array([[ True, True, True],
[ True, True, True],
[False, False, True],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False]], dtype=bool)
</code></pre>
<p>2) Considering only the values subset by the <code>quantiles>.5</code> operation, mark the row that minimizes the <code>np.abs</code> distance between the entry and <code>.5</code>. Torturing the terminology a bit, I wish to intersect the two matrices of <code>np.argmin(np.abs(quantiles-.5),axis=0)</code> and <code>quantiles>.5</code> to get the above result. However, I cannot for my life figure out a way to perform the <code>np.argmin</code> on the subset and retain the shape of the <code>quantile</code> matrix. </p>
<p>PS. Yes, there is a similar question <a href="http://stackoverflow.com/questions/33658580/find-the-median-of-each-row-of-a-2-dimensional-array-in-python">here</a> but it doesn't implement my algorithm which could be, I think, more efficient on a larger scale</p>
| 1
|
2016-09-07T02:52:15Z
| 39,360,536
|
<p>Bumping into the old <code>mask</code> operation in <code>Numpy</code>, I found the following solution</p>
<pre><code>#mask quantities that are less than .5
masked_quantiles = ma.masked_where(quantiles<.5,quantiles)
#identify the minimum in column of the masked array
median_idx = np.where(masked_quantiles == masked_quantiles.min(axis=0))
#make a matrix of all False values
median_mat = np.zeros(quantiles.shape, dtype=bool)
#assign True value to corresponding rows
In [86]: median_mat[median_idx] = True
In [87]: median_mat
Out[87]:
array([[False, False, False],
[ True, True, False],
[False, False, True],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False]], dtype=bool)
</code></pre>
<h1>Update: comparison of my answer to that of Divakar's:</h1>
<p>I ran two comparisons, one on the sample 2D matrix provided for this question and one on my 3D <code>(10,380,8)</code> dataset (not large data by any means).</p>
<h3>Sample dataset:</h3>
<p>My code</p>
<pre><code>%%timeit
masked_quantiles = ma.masked_where(quantiles<=.5,quantiles)
median_idx = masked_quantiles.argmin(0)
10000 loops, best of 3: 65.1 µs per loop
</code></pre>
<p>Divakar's code</p>
<pre><code>%%timeit
mask1 = quantiles<=0.5
min_idx = (quantiles+mask1).argmin(0)
The slowest run took 17.49 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 5.92 µs per loop
</code></pre>
<h3>Full dataset</h3>
<p>My code:</p>
<pre><code>%%timeit
masked_quantiles = ma.masked_where(quantiles<=.5,quantiles)
median_idx = masked_quantiles.argmin(0)
1000 loops, best of 3: 490 µs per loop
</code></pre>
<p>Divakar's code:</p>
<pre><code>%%timeit
mask1 = quantiles<=0.5
min_idx = (quantiles+mask1).argmin(0)
10000 loops, best of 3: 172 µs per loop
</code></pre>
<h3>Conclusion:</h3>
<p>Divakar's answer seems about 3-12 times faster than mine. I presume that the <code>ma.masked_where</code> masking operation takes longer than matrix addition. However, the result of the addition needs to be stored, whereas masking may be more efficient on larger datasets. I wonder how it would compare on something that doesn't, or nearly doesn't, fit into memory. </p>
| 1
|
2016-09-07T02:52:15Z
|
[
"python",
"arrays",
"numpy",
"boolean",
"median"
] |
Identification of rows containing column median in numpy matrix of cum percentiles
| 39,360,535
|
<p>Consider the matrix <code>quantiles</code> that's a subset <code>[:8,:3,0]</code> of a 3D matrix with shape <code>(10,355,8)</code>. </p>
<pre><code>quantiles = np.array([
[ 1. , 1. , 1. ],
[ 0.63763978, 0.61848863, 0.75348137],
[ 0.43439645, 0.42485407, 0.5341457 ],
[ 0.22682343, 0.18878366, 0.25253915],
[ 0.16229408, 0.12541476, 0.15263742],
[ 0.12306046, 0.10372971, 0.09832783],
[ 0.09271845, 0.08209844, 0.05982584],
[ 0.06363636, 0.05471266, 0.03855727]])
</code></pre>
<p>I want a boolean output of the same shape as the <code>quantiles</code> matrix where <code>True</code> marks the row in which the median is located:</p>
<pre><code>In [21]: medians
Out[21]:
array([[False, False, False],
[ True, True, False],
[False, False, True],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False]], dtype=bool)
</code></pre>
<p>To achieve this, I have the following algorithm in mind:</p>
<p>1) Identify the entries that are greater than <code>.5</code>:</p>
<pre><code>In [22]: quantiles>.5
Out[22]:
array([[ True, True, True],
[ True, True, True],
[False, False, True],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False]], dtype=bool)
</code></pre>
<p>2) Considering only the values subset by the <code>quantiles>.5</code> operation, mark the row that minimizes the <code>np.abs</code> distance between the entry and <code>.5</code>. Torturing the terminology a bit, I wish to intersect the two matrices of <code>np.argmin(np.abs(quantiles-.5),axis=0)</code> and <code>quantiles>.5</code> to get the above result. However, I cannot for my life figure out a way to perform the <code>np.argmin</code> on the subset and retain the shape of the <code>quantile</code> matrix. </p>
<p>PS. Yes, there is a similar question <a href="http://stackoverflow.com/questions/33658580/find-the-median-of-each-row-of-a-2-dimensional-array-in-python">here</a> but it doesn't implement my algorithm which could be, I think, more efficient on a larger scale</p>
| 1
|
2016-09-07T02:52:15Z
| 39,361,277
|
<h2>Approach #1</h2>
<p>Here's an approach using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> and some masking trick -</p>
<pre><code># Mask of quantiles lesser than or equal to 0.5 to select the invalid ones
mask1 = quantiles<=0.5
# Since we are dealing with quantiles, the elems won't be > 1,
# which can be leveraged here as we will add 1s to invalid elems, and
# then look for argmin across each col
min_idx = (np.abs(quantiles-0.5)+mask1).argmin(0)
# Let some broadcasting magic happen here!
out = min_idx == np.arange(quantiles.shape[0])[:,None]
</code></pre>
<p><strong>Step-by-step run</strong></p>
<p>1) Input :</p>
<pre><code>In [37]: quantiles
Out[37]:
array([[ 1. , 1. , 1. ],
[ 0.63763978, 0.61848863, 0.75348137],
[ 0.43439645, 0.42485407, 0.5341457 ],
[ 0.22682343, 0.18878366, 0.25253915],
[ 0.16229408, 0.12541476, 0.15263742],
[ 0.12306046, 0.10372971, 0.09832783],
[ 0.09271845, 0.08209844, 0.05982584],
[ 0.06363636, 0.05471266, 0.03855727]])
</code></pre>
<p>2) Run the code :</p>
<pre><code>In [38]: mask1 = quantiles<=0.5
...: min_idx = (np.abs(quantiles-0.5)+mask1).argmin(0)
...: out = min_idx == np.arange(quantiles.shape[0])[:,None]
...:
</code></pre>
<p>3) Analyze output at each step :</p>
<pre><code>In [39]: mask1
Out[39]:
array([[False, False, False],
[False, False, False],
[ True, True, False],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[ True, True, True],
[ True, True, True]], dtype=bool)
In [40]: np.abs(quantiles-0.5)+mask1
Out[40]:
array([[ 0.5 , 0.5 , 0.5 ],
[ 0.13763978, 0.11848863, 0.25348137],
[ 1.06560355, 1.07514593, 0.0341457 ],
[ 1.27317657, 1.31121634, 1.24746085],
[ 1.33770592, 1.37458524, 1.34736258],
[ 1.37693954, 1.39627029, 1.40167217],
[ 1.40728155, 1.41790156, 1.44017416],
[ 1.43636364, 1.44528734, 1.46144273]])
In [41]: (np.abs(quantiles-0.5)+mask1).argmin(0)
Out[41]: array([1, 1, 2])
In [42]: min_idx == np.arange(quantiles.shape[0])[:,None]
Out[42]:
array([[False, False, False],
[ True, True, False],
[False, False, True],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False],
[False, False, False]], dtype=bool)
</code></pre>
<p><strong>Performance boost</strong>: Following the comments, it seems that to get <code>min_idx</code> we can simply do :</p>
<pre><code>min_idx = (quantiles+mask1).argmin(0)
</code></pre>
<h2>Approach #2</h2>
<p>This is focused on memory efficiency.</p>
<pre><code># Mask of quantiles greater than 0.5 to select the valid ones
mask = quantiles>0.5
# Select valid elems
vals = quantiles.T[mask.T]
# Get vald count per col
count = mask.sum(0)
# Get the min val per col given the mask
minval = np.minimum.reduceat(vals,np.append(0,count[:-1].cumsum()))
# Get final boolean array by just comparing the min vals across each col
out = np.isclose(quantiles,minval)
</code></pre>
| 1
|
2016-09-07T04:35:46Z
|
[
"python",
"arrays",
"numpy",
"boolean",
"median"
] |
matplotlib multiple values under cursor
| 39,360,740
|
<p>This question is very similar to those answered here,</p>
<p><a href="http://stackoverflow.com/questions/14754931/matplotlib-values-under-cursor">matplotlib values under cursor</a></p>
<p><a href="http://stackoverflow.com/questions/14349289/in-a-matplotlib-figure-window-with-imshow-how-can-i-remove-hide-or-redefine">In a matplotlib figure window (with imshow), how can I remove, hide, or redefine the displayed position of the mouse?</a></p>
<p><a href="http://stackoverflow.com/questions/27704490/interactive-pixel-information-of-an-image-in-python">Interactive pixel information of an image in Python?</a></p>
<p>except that instead of pixel data (x,y,z) I have various measurements associated with (x,y) coordinates that I'd like to portray on a line plot. Specifically, the (x,y) are spatial positions (lat, lon) and at each (lat,lon) point there is a collection of data (speed, RPM, temp, etc.). I just sketched up something quickly to illustrate, a scatter plot with connecting lines, and then when you hover over a data point it displays all of the "z" values associated with that data point.</p>
<p>Is there an easy way to do something like this?</p>
<p><a href="http://i.stack.imgur.com/rJvn9.png" rel="nofollow"><img src="http://i.stack.imgur.com/rJvn9.png" alt="enter image description here"></a></p>
| 0
|
2016-09-07T03:23:27Z
| 39,364,042
|
<p>You could probably build on something like this example. It doesn't display the information inside the figure (for now only using a <code>print()</code> statement), but it demonstrates a simple method of capturing clicks on <code>scatter</code> points and showing information for those points:</p>
<pre><code>import numpy as np
import matplotlib.pylab as pl
pl.close('all')
n = 10
lat = np.random.random(n)
lon = np.random.random(n)
speed = np.random.random(n)
rpm = np.random.random(n)
temp = np.random.random(n)
def on_click(event):
i = event.ind[0]
print('lon={0:.2f}, lat={1:.2f}: V={2:.2f}, RPM={3:.2f}, T={4:.2f}'\
.format(lon[i], lat[i], speed[i], rpm[i], temp[i]))
fig=pl.figure()
pl.plot(lon, lat, '-')
pl.scatter(lon, lat, picker=True)
fig.canvas.mpl_connect('pick_event', on_click)
</code></pre>
<p>Clicking around a bit gives me:</p>
<pre><code>lon=0.63, lat=0.58: V=0.51, RPM=0.00, T=0.43
lon=0.41, lat=0.07: V=0.95, RPM=0.59, T=0.98
lon=0.86, lat=0.13: V=0.33, RPM=0.27, T=0.85
</code></pre>
| 1
|
2016-09-07T07:44:07Z
|
[
"python",
"matplotlib"
] |
Matching the structure of a list?
| 39,360,926
|
<p>For example:</p>
<pre><code>A=[1,[2,3],[4,[5,6]],7]
B=[2,3,4,5,6,7,8]
</code></pre>
<p>How can I get <code>[2,[3,4],[5,[6,7]],8]</code>?</p>
| 0
|
2016-09-07T03:49:45Z
| 39,361,013
|
<p>You could use a pretty simple recursive function:</p>
<pre><code>def match(struct, source):
try:
return [match(i, source) for i in struct]
except TypeError:
return next(source)
A=[1,[2,3],[4,[5,6]],7]
B=[2,3,4,5,6,7,8]
match(A, iter(B))
# [2, [3, 4], [5, [6, 7]], 8]
</code></pre>
<p>Here is a version of the function that might be a little easier for some people to understand:</p>
<pre><code>def match(struct, source, index=0):
if isinstance(struct, list):
r = []
for item in struct:
next, index = match(item, source, index)
r.append(next)
return r, index
else:
return source[index], index + 1
A=[1,[2,3],[4,[5,6]],7]
B=[2,3,4,5,6,7,8]
match(A, B)
</code></pre>
<p>The basic idea is to loop over the input structure depth first, and consume values from source accordingly. When we hit a number, we can simply take one number from source. If we hit a list we need to apply this algorithm to that list. Along the way we need to keep track of how many items we've consumed.</p>
<p>The first version of the algorithm does all this, but in a slightly different way. <code>iter(B)</code> creates an iterator that tracks how many items from B have been consumed and provides the next item when I call <code>next(source)</code>, so I don't have to track the index explicitly. The try/except checks whether I can loop over <code>struct</code>. If I can, a list is returned; if I cannot, the except block gets executed and <code>next(source)</code> is returned.</p>
| 9
|
2016-09-07T04:03:10Z
|
[
"python",
"list"
] |
Xlsxwriter Excel Chart Border
| 39,361,072
|
<p>Is there a way to remove an Excel chart border using Xlsxwriter? I need my chart to blend in to an Excel sheet without the grid lines showing and I haven't had any luck so far.</p>
| 1
|
2016-09-07T04:10:06Z
| 39,367,342
|
<p>You can use the <a href="http://xlsxwriter.readthedocs.io/chart.html#chart-set-chartarea" rel="nofollow"><code>set_chartarea()</code></a> method to set the border for the chart object:</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('chart.xlsx')
worksheet = workbook.add_worksheet()
worksheet.write_column('A1', [3, 6, 9, 12, 9])
chart = workbook.add_chart({'type': 'column'})
chart.add_series({'values': '=Sheet1!$A$1:$A$5'})
# Turn off the chart border.
chart.set_chartarea({'border': {'none': True}})
worksheet.insert_chart('C2', chart, {'x_offset': 25, 'y_offset': 10})
workbook.close()
</code></pre>
| 3
|
2016-09-07T10:25:15Z
|
[
"python",
"xlsxwriter"
] |
Remove numbers from a string with a letter in front
| 39,361,073
|
<pre><code>combined['Cabin'] = combined['Cabin'].map(lambda c : c[0])
</code></pre>
<p>I'm following a tutorial, except this line throws the error <code>TypeError: 'float' object has no attribute '__getitem__'</code></p>
<p>Anyway to fix this? </p>
<p>My column's data looks like
<a href="http://i.stack.imgur.com/dYHmw.png" rel="nofollow">Column</a></p>
<p>Thanks!</p>
<pre><code>def process_cabin():
global combined
# replacing missing cabins with U (for Unknown)
combined.Cabin.fillna('U',inplace=True)
# mapping each Cabin value with the cabin letter
combined['Cabin'] = combined['Cabin'].map(lambda c : c[0])
# dummy encoding ...
cabin_dummies = pd.get_dummies(combined['Cabin'],prefix='Cabin')
combined = pd.concat([combined,cabin_dummies],axis=1)
combined.drop('Cabin',axis=1,inplace=True)
status('cabin')
</code></pre>
<p>`</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-152-70714b711c6d> in <module>()
----> 1 process_cabin()
<ipython-input-151-d9bb11cabd2c> in process_cabin()
7
8 # mapping each Cabin value with the cabin letter
----> 9 combined['Cabin'] = combined['Cabin'].map(lambda c : c[0])
10
11
C:\Users\Data.Steve-PC\Python\Anaconda\lib\site-packages\pandas\core\series.pyc in map(self, arg, na_action)
2014 index=self.index).__finalize__(self)
2015 else:
-> 2016 mapped = map_f(values, arg)
2017 return self._constructor(mapped,
2018 index=self.index).__finalize__(self)
pandas\src\inference.pyx in pandas.lib.map_infer (pandas\lib.c:58435)()
<ipython-input-151-d9bb11cabd2c> in <lambda>(c)
7
8 # mapping each Cabin value with the cabin letter
----> 9 combined['Cabin'] = combined['Cabin'].map(lambda c : c[0])
10
11
TypeError: 'float' object has no attribute '__getitem__'
</code></pre>
| 1
|
2016-09-07T04:10:26Z
| 39,361,138
|
<p>I suspect the problem is that your <code>X</code> isn't what you think it is. If it's a float, for example, 123.45:</p>
<pre><code>>>> 123.45['Y']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'float' object has no attribute '__getitem__'
</code></pre>
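In this particular case the floats are almost certainly the <code>NaN</code> placeholders pandas uses for missing values in the <code>Cabin</code> column, so make sure the question's own <code>fillna('U')</code> line runs before the <code>map</code>. A small sketch with made-up cabin values:

```python
import numpy as np
import pandas as pd

# Missing entries in an object column are NaN, i.e. floats, so the
# lambda's c[0] raises TypeError on them. Fill them first, exactly as
# the question's fillna('U') line does:
s = pd.Series(["C85", np.nan, "C123"])
s = s.fillna("U")
print(s.map(lambda c: c[0]).tolist())  # ['C', 'U', 'C']
```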
| 0
|
2016-09-07T04:20:53Z
|
[
"python"
] |
Remove numbers from a string with a letter in front
| 39,361,073
|
<pre><code>combined['Cabin'] = combined['Cabin'].map(lambda c : c[0])
</code></pre>
<p>I'm following a tutorial, except this line throws the error <code>TypeError: 'float' object has no attribute '__getitem__'</code></p>
<p>Anyway to fix this? </p>
<p>My column's data looks like
<a href="http://i.stack.imgur.com/dYHmw.png" rel="nofollow">Column</a></p>
<p>Thanks!</p>
<pre><code>def process_cabin():
global combined
# replacing missing cabins with U (for Unknown)
combined.Cabin.fillna('U',inplace=True)
# mapping each Cabin value with the cabin letter
combined['Cabin'] = combined['Cabin'].map(lambda c : c[0])
# dummy encoding ...
cabin_dummies = pd.get_dummies(combined['Cabin'],prefix='Cabin')
combined = pd.concat([combined,cabin_dummies],axis=1)
combined.drop('Cabin',axis=1,inplace=True)
status('cabin')
</code></pre>
<p>`</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-152-70714b711c6d> in <module>()
----> 1 process_cabin()
<ipython-input-151-d9bb11cabd2c> in process_cabin()
7
8 # mapping each Cabin value with the cabin letter
----> 9 combined['Cabin'] = combined['Cabin'].map(lambda c : c[0])
10
11
C:\Users\Data.Steve-PC\Python\Anaconda\lib\site-packages\pandas\core\series.pyc in map(self, arg, na_action)
2014 index=self.index).__finalize__(self)
2015 else:
-> 2016 mapped = map_f(values, arg)
2017 return self._constructor(mapped,
2018 index=self.index).__finalize__(self)
pandas\src\inference.pyx in pandas.lib.map_infer (pandas\lib.c:58435)()
<ipython-input-151-d9bb11cabd2c> in <lambda>(c)
7
8 # mapping each Cabin value with the cabin letter
----> 9 combined['Cabin'] = combined['Cabin'].map(lambda c : c[0])
10
11
TypeError: 'float' object has no attribute '__getitem__'
</code></pre>
| 1
|
2016-09-07T04:10:26Z
| 39,361,338
|
<p>I am not sure what your concern is. Is this what you are trying to do?</p>
<pre><code>import pandas as pd
X = pd.DataFrame({'Y': ["NAN", "C85", "NAN", "C123", "NAN"]})
print X
Y
0 NAN
1 C85
2 NAN
3 C123
4 NAN
lambda_f = lambda c: c[0]
X['Y'] = map(lambda_f, X['Y'])
print X
Y
0 N
1 C
2 N
3 C
4 N
</code></pre>
| 0
|
2016-09-07T04:41:14Z
|
[
"python"
] |
Python calling Matlab User Function from any directory using matlab module
| 39,361,081
|
<p><strong>Background</strong></p>
<p>I'm working with Python 2.7.6 and Matlab 2016a and have installed the official MathWorks Python to Matlab bridge. It is the matlab and matlab.engine modules. All of the other questions I've seen on SO regarding matlab/python use third-party wrappers that seem out of date. I have no experience programming in matlab itself, but plenty of python experience.</p>
<p>I'm currently porting this wrapper code from matlab_wrapper to the matlab module: <a href="https://github.com/javiergonzalezh/dpp" rel="nofollow">https://github.com/javiergonzalezh/dpp</a>. matlab_wrapper did not work for me (gave an undefined symbol in the openssl library that installed with matlab 2016a), hence the port to something that does work and will be maintained for future versions of matlab.</p>
<p><strong>Question</strong></p>
<p>This documentation shows how to call user defined functions (.m files) that are in the current directory.
<a href="http://www.mathworks.com/help/matlab/matlab_external/call-user-script-and-function-from-python.html" rel="nofollow">http://www.mathworks.com/help/matlab/matlab_external/call-user-script-and-function-from-python.html</a></p>
<p>How can I call matlab user functions from any cwd using the matlab module in python? Is there some kind of OS environment $PATH variable or some matlab equivilent? If it helps, the .m files reside in the same directory as the calling python code.</p>
| 0
|
2016-09-07T04:12:16Z
| 39,373,010
|
<p>Thanks to the comment by @excaza, I solved this by setting the MATLABPATH environment variable to point to the folder containing my *.m files.</p>
<p><a href="http://www.mathworks.com/help/matlab/matlab_env/add-folders-to-search-path-upon-startup-on-unix-or-macintosh.html" rel="nofollow">http://www.mathworks.com/help/matlab/matlab_env/add-folders-to-search-path-upon-startup-on-unix-or-macintosh.html</a></p>
| 0
|
2016-09-07T14:45:11Z
|
[
"python",
"matlab",
"path"
] |
Python detect character surrounded by spaces
| 39,361,214
|
<p>Anyone know how I can find the character in the center that is surrounded by spaces?</p>
<p><code>1 + 1</code></p>
<p>I'd like to be able to separate the <code>+</code> in the middle to use in an if/else statement.</p>
<p>Sorry if I'm not too clear, I'm a Python beginner.</p>
| 0
|
2016-09-07T04:29:01Z
| 39,361,262
|
<p>I think you are looking for something like the <code>split()</code> method which will split on white space by default.</p>
<p>Suppose we have a string <code>s</code></p>
<pre><code>s = "1 + 1"
chunks = s.split()
print(chunks[1]) # Will print '+'
</code></pre>
| 5
|
2016-09-07T04:34:37Z
|
[
"python",
"python-3.x"
] |
Python detect character surrounded by spaces
| 39,361,214
|
<p>Anyone know how I can find the character in the center that is surrounded by spaces?</p>
<p><code>1 + 1</code></p>
<p>I'd like to be able to separate the <code>+</code> in the middle to use in an if/else statement.</p>
<p>Sorry if I'm not too clear, I'm a Python beginner.</p>
| 0
|
2016-09-07T04:29:01Z
| 39,361,275
|
<p>You can use regex:</p>
<pre><code>import re

s = "1 + 1"
a=re.compile(r' (?P<sym>.) ')
a.search(s).group('sym')
</code></pre>
| 0
|
2016-09-07T04:35:36Z
|
[
"python",
"python-3.x"
] |
Python detect character surrounded by spaces
| 39,361,214
|
<p>Anyone know how I can find the character in the center that is surrounded by spaces?</p>
<p><code>1 + 1</code></p>
<p>I'd like to be able to separate the <code>+</code> in the middle to use in an if/else statement.</p>
<p>Sorry if I'm not too clear, I'm a Python beginner.</p>
| 0
|
2016-09-07T04:29:01Z
| 39,361,292
|
<p>This regular expression will detect a single character surrounded by spaces, if the character is a plus, minus, multiplication, or division sign: <code>r' ([-+*/]) '</code>. Note the spaces inside the apostrophes, and that the <code>-</code> comes first inside the brackets so it is treated as a literal character rather than as part of a range. The parentheses "capture" the character in the middle. If you need to recognize a different set of characters, change the set inside the brackets.</p>
<p>If you haven't dealt with regular expressions before, read up on the <code>re</code> module. They are very useful for simple text processing. The two relevant features here are "character classes" (the square brackets in my example) and "capturing parentheses" (the round parens).</p>
| 0
|
2016-09-07T04:37:12Z
|
[
"python",
"python-3.x"
] |
Python detect character surrounded by spaces
| 39,361,214
|
<p>Anyone know how I can find the character in the center that is surrounded by spaces?</p>
<p><code>1 + 1</code></p>
<p>I'd like to be able to separate the <code>+</code> in the middle to use in an if/else statement.</p>
<p>Sorry if I'm not too clear, I'm a Python beginner.</p>
| 0
|
2016-09-07T04:29:01Z
| 39,361,302
|
<p>Not knowing how many spaces separate your central character, then I'd use the following:</p>
<pre><code>s = '1 + 1'
middle = filter(None, s.split(' '))[1]
print middle  # +
</code></pre>
<p>Splitting on <code>' '</code> works like the solution provided by Zac, but if there is more than a single space between tokens, the returned list will contain a bunch of <code>''</code> elements, which we can get rid of with the <code>filter(None, ...)</code> function. (The no-argument <code>split()</code> already collapses runs of whitespace, so the filter only matters when splitting on an explicit separator.)</p>
<p>Then it's just a matter of extracting your second element.</p>
<p>Check it in action at <a href="https://eval.in/636622" rel="nofollow">https://eval.in/636622</a></p>
<p>If we look at it step-by-step, then here is how it all works using a python console:</p>
<pre><code>>>> s = '1 +   1'
>>> s.split(' ')
['1', '+', '', '', '1']
>>> filter(None, s.split(' '))
['1', '+', '1']
>>> filter(None, s.split(' '))[1]
'+'
</code></pre>
| 0
|
2016-09-07T04:38:13Z
|
[
"python",
"python-3.x"
] |
Python detect character surrounded by spaces
| 39,361,214
|
<p>Anyone know how I can find the character in the center that is surrounded by spaces?</p>
<p><code>1 + 1</code></p>
<p>I'd like to be able to separate the <code>+</code> in the middle to use in an if/else statement.</p>
<p>Sorry if I'm not too clear, I'm a Python beginner.</p>
| 0
|
2016-09-07T04:29:01Z
| 39,361,313
|
<pre><code>import re
def find_between(string, start_=' ', end_=' '):
re_str = r'{}([-+*/%^]){}'.format(start_, end_)
try:
return re.search(re_str, string).group(1)
except AttributeError:
return None
print(find_between('9 * 5', ' ', ' '))
</code></pre>
| 0
|
2016-09-07T04:38:55Z
|
[
"python",
"python-3.x"
] |
what's the usage of __traceback_hide__
| 39,361,321
|
<p>I have seen this line of code in some functions</p>
<pre><code>__traceback_hide__ = True
</code></pre>
<p>What does it do? It seems like its trying to suppress the error traceback. In what situations should the traceback be hidden?</p>
| 0
|
2016-09-07T04:39:52Z
| 39,361,498
|
<p>Looks like this is <a href="https://github.com/jazzband/django-debug-toolbar/issues/160" rel="nofollow">mostly a convenience for web frameworks</a> (Sentry, werkzeug, <a href="http://pythonpaste.org/modules/exceptions.html#paste.exceptions.collector.ExceptionCollector" rel="nofollow">Paste</a>, Django) to make it so framework functions aren't included in the high level exception reporting features of the frameworks.</p>
<p>The exact behavior likely differs by framework, for example, for Paste specifically, it's documented as:</p>
<blockquote>
<p>If set and true, this indicates that the frame should be hidden from abbreviated tracebacks. This way you can hide some of the complexity of the larger framework and let the user focus on their own errors.</p>
<p>By setting it to 'before', all frames before this one will be thrown away. By setting it to 'after' then all frames after this will be thrown away until 'reset' is found. In each case the frame where it is set is included, unless you append '_and_this' to the value (e.g., 'before_and_this').</p>
<p>Note that formatters will ignore this entirely if the frame that contains the error wouldn't normally be shown according to these rules.</p>
</blockquote>
<p>It's not a standard variable, and the core Python interpreter provides no support for it.</p>
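<p>To make this concrete, here is a minimal sketch (my own illustration, not code from any of these frameworks) of how a formatter can honor the marker: it is just an ordinary variable that the reporting code looks up in each frame before deciding whether to print it.</p>

```python
import sys
import traceback

def format_filtered_traceback(exc_type, exc_value, tb):
    """Format a traceback, dropping frames that set __traceback_hide__.

    Sketch only: real frameworks (Paste, Sentry) support extra values
    like 'before'/'after'; this honors just the simple truthy case.
    """
    lines = []
    while tb is not None:
        frame = tb.tb_frame
        hidden = frame.f_locals.get('__traceback_hide__',
                                    frame.f_globals.get('__traceback_hide__'))
        if not hidden:
            # Format only this one frame of the traceback.
            lines.append(traceback.format_tb(tb, limit=1)[0])
        tb = tb.tb_next
    lines.extend(traceback.format_exception_only(exc_type, exc_value))
    return ''.join(lines)
```

Any function that assigns <code>__traceback_hide__ = True</code> before the exception propagates through it simply disappears from the formatted output, while the rest of the stack is shown normally.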
| 0
|
2016-09-07T04:58:12Z
|
[
"python"
] |
what's the usage of __traceback_hide__
| 39,361,321
|
<p>I have seen this line of code in some functions</p>
<pre><code>__traceback_hide__ = True
</code></pre>
<p>What does it do? It seems like its trying to suppress the error traceback. In what situations should the traceback be hidden?</p>
| 0
|
2016-09-07T04:39:52Z
| 39,361,531
|
<p>Googling for "python __traceback_hide__", I learn that it's intended to let a complicated framework hide part of its inner workings by suppressing some stack frames from the exception printout, so that the user does not become confused by lots of irrelevant output.</p>
| 0
|
2016-09-07T05:01:44Z
|
[
"python"
] |
what's the usage of __traceback_hide__
| 39,361,321
|
<p>I have seen this line of code in some functions</p>
<pre><code>__traceback_hide__ = True
</code></pre>
<p>What does it do? It seems like its trying to suppress the error traceback. In what situations should the traceback be hidden?</p>
| 0
|
2016-09-07T04:39:52Z
| 39,361,573
|
<p><code>__tracebackhide__</code> can be set to hide a function from the traceback when using PyTest. <code>__traceback_hide__</code> appears to be used in the Python Paste package for the same purpose.</p>
<p>Here's what the <a href="http://pythonpaste.org/modules/exceptions.html#module-paste.exceptions.collector" rel="nofollow">paste.exceptions.collector</a> documentation has to say about it:</p>
<blockquote>
<p>If set and true, this indicates that the frame should be hidden from abbreviated tracebacks. This way you can hide some of the complexity of the larger framework and let the user focus on their own errors.</p>
<p>By setting it to 'before', all frames before this one will be thrown away. By setting it to 'after' then all frames after this will be thrown away until 'reset' is found. In each case the frame where it is set is included, unless you append '_and_this' to the value (e.g., 'before_and_this').</p>
<p>Note that formatters will ignore this entirely if the frame that contains the error wouldn't normally be shown according to these rules.</p>
</blockquote>
<p>And the <a href="http://doc.pytest.org/en/latest/example/simple.html#writing-well-integrated-assertion-helpers" rel="nofollow">PyTest documentation</a> on its similar <code>__tracebackhide__</code>:</p>
<blockquote>
<p>If you have a test helper function called from a test you can use the pytest.fail marker to fail a test with a certain message. The test support function will not show up in the traceback if you set the __tracebackhide__ option somewhere in the helper function.</p>
</blockquote>
<p>So basically, they are to avoid cluttering your tracebacks with test helper functions or other functions that you know aren't part of the problem that you are trying to debug.</p>
| 1
|
2016-09-07T05:05:15Z
|
[
"python"
] |
Broadcast an operation along specific axis in python
| 39,361,341
|
<p>In python, suppose I have a square (numpy) matrix <strong>X</strong>, of size <em>n x n</em> and I have a numpy vector <strong>a</strong> of size <em>n</em>. </p>
<p>Very simply, I want to perform a broadcasting subtraction of <strong>X - a</strong>, but I want to be able to specify along which dimension, so that I can specify for the subtraction to be either along axis 0 or axis 1. </p>
<p>How can I specify the axis?</p>
<p>Thanks. </p>
| 1
|
2016-09-07T04:41:31Z
| 39,361,699
|
<p>Let's generate arrays with random elements.</p>
<p>Inputs :</p>
<pre><code>In [62]: X
Out[62]:
array([[ 0.32322974, 0.50491961, 0.40854442, 0.36908488],
[ 0.58840196, 0.1696713 , 0.75428203, 0.01445901],
[ 0.27728281, 0.33722084, 0.64187916, 0.51361972],
[ 0.39151808, 0.6883594 , 0.93848072, 0.48946276]])
In [63]: a
Out[63]: array([ 0.01278876, 0.01854458, 0.16953393, 0.37159562])
</code></pre>
<p><strong>I. Subtraction along <code>axis=1</code></strong></p>
<p>Let's do the subtraction along <code>axis=1</code>, i.e. we want to subtract <code>a</code> from the first row of <code>X</code>, the second row of <code>X</code> and so on. For ease of inspecting correctness, let's just use the first row of <code>X</code> :</p>
<pre><code>In [64]: X[0] - a
Out[64]: array([ 0.31044099, 0.48637503, 0.23901049, -0.00251074])
</code></pre>
<p>Going deeper, what's happening is: </p>
<pre><code>X[0,0] - a[0], X[0,1] - a[1], X[0,2] - a[2] , X[0,3] - a[3]
</code></pre>
<p>So, we are matching the second axis of <code>X</code> with the first axis of <code>a</code>. Since, <code>X</code> is <code>2D</code> and <code>a</code> is <code>1D</code>, both are already aligned :</p>
<pre><code>X : n x n
a : n
</code></pre>
<p>So, we simply do <code>X-a</code> to get all subtractions :</p>
<pre><code>In [65]: X-a
Out[65]:
array([[ 0.31044099, 0.48637503, 0.23901049, -0.00251074],
[ 0.5756132 , 0.15112672, 0.5847481 , -0.3571366 ],
[ 0.26449405, 0.31867625, 0.47234523, 0.1420241 ],
[ 0.37872932, 0.66981482, 0.76894679, 0.11786714]])
</code></pre>
<p>And, finally, check that the <code>X[0] - a</code> result obtained earlier appears as the first row here.</p>
<p><strong>Important Note:</strong> The thing to note here is that the elements of <code>a</code> lie along one axis, the subtraction is done along that axis, and the broadcasting happens along the other axis. So, in this case, even though the subtraction happens along <code>axis=1</code>, the elements of <code>a</code> are broadcast along <code>axis=0</code>.</p>
<p><strong>II. Subtraction along <code>axis=0</code></strong></p>
<p>Similarly, let's do the subtraction along <code>axis=0</code>, i.e. we want to subtract <code>a</code> from the first col of <code>X</code>, the second col of <code>X</code> and so on. For ease of inspecting correctness, let's just use the first col of <code>X</code> :</p>
<pre><code>In [67]: X[:,0]-a
Out[67]: array([ 0.31044099, 0.56985738, 0.10774888, 0.01992247])
</code></pre>
<p>Going deeper, what's happening is: </p>
<pre><code>X[0,0] - a[0], X[1,0] - a[1], X[2,0] - a[2] , X[3,0] - a[3]
</code></pre>
<p>So, we are matching the first axis of <code>X</code> with the first axis of <code>a</code>. Since, <code>X</code> is <code>2D</code> and <code>a</code> is <code>1D</code>, we need to extend <code>a</code> to <code>2D</code> and keep all elems along its first axis with <code>a[:,None]</code> :</p>
<pre><code>X : n x n
a[:,None] : n x 1
</code></pre>
<p>So, we do <code>X-a[:,None]</code> to get all subtractions :</p>
<pre><code>In [68]: X-a[:,None]
Out[68]:
array([[ 0.31044099, 0.49213085, 0.39575566, 0.35629612],
[ 0.56985738, 0.15112672, 0.73573745, -0.00408557],
[ 0.10774888, 0.16768691, 0.47234523, 0.34408579],
[ 0.01992247, 0.31676379, 0.5668851 , 0.11786714]])
</code></pre>
<p>And, finally, check that the <code>X[:,0] - a</code> result obtained earlier appears as the first column here.</p>
| 1
|
2016-09-07T05:18:55Z
|
[
"python",
"arrays",
"numpy",
"matrix",
"numpy-broadcasting"
] |
Broadcast an operation along specific axis in python
| 39,361,341
|
<p>In python, suppose I have a square (numpy) matrix <strong>X</strong>, of size <em>n x n</em> and I have a numpy vector <strong>a</strong> of size <em>n</em>. </p>
<p>Very simply, I want to perform a broadcasting subtraction of <strong>X - a</strong>, but I want to be able to specify along which dimension, so that I can specify for the subtraction to be either along axis 0 or axis 1. </p>
<p>How can I specify the axis?</p>
<p>Thanks. </p>
| 1
|
2016-09-07T04:41:31Z
| 39,363,367
|
<p>Start with 2 dimensions that are different (in label at least)</p>
<ul>
<li><code>X</code> shape <code>(n,m)</code></li>
<li><code>a</code> shape <code>(n,)</code></li>
<li><code>b</code> shape <code>(m,)</code></li>
</ul>
<p>The ways to combine these are:</p>
<pre><code>(n,m)-(n,) => (n,m)-(n,1) => (n,m)
X - a[:,None]
(n,m)-(m,) => (n,m)-(1,m) => (n,m)
X - b[None,:]
X - b # [None,:] is automatic, if needed.
</code></pre>
<p>The basic point is that when the number of dimensions differs, <code>numpy</code> can add new dimensions at the start, but you have to be explicit about adding new dimensions at the end.</p>
<p>Or to combine 2 1d arrays in a outer product (difference):</p>
<pre><code>(n,) - (m,) => (n,1)-(1,m) => (n,m)
a[:,None] - b[None,:]
a[:,None] - b
</code></pre>
<p>Without these rules, <code>a-b</code> could result in a <code>(n,m)</code> or <code>(m,n)</code> or something else.</p>
<p>And with 2 matching length arrays:</p>
<pre><code>(n,) - (n,) => (n,)
a - a
</code></pre>
<p>or </p>
<pre><code>(n,) - (n,) => (n,1)-(1,n) => (n,n)
a[:,None]-a[None,:]
</code></pre>
<p>=============</p>
<p>To write a function that would take an <code>axis</code> parameter, you could use <code>np.expand_dims</code>:</p>
<pre><code>In [220]: np.expand_dims([1,2,3],0)
Out[220]: array([[1, 2, 3]]) # like [None,:]
In [221]: np.expand_dims([1,2,3],1)
Out[221]: # like [:,None]
array([[1],
[2],
[3]])
def foo(X, a, axis=0):
return X - np.expand_dims(a, axis=axis)
</code></pre>
<p>to be used as:</p>
<pre><code>In [223]: foo(np.eye(3),[1,2,3],axis=0)
Out[223]:
array([[ 0., -2., -3.],
[-1., -1., -3.],
[-1., -2., -2.]])
In [224]: foo(np.eye(3),[1,2,3],axis=1)
Out[224]:
array([[ 0., -1., -1.],
[-2., -1., -2.],
[-3., -3., -2.]])
</code></pre>
| 1
|
2016-09-07T07:09:17Z
|
[
"python",
"arrays",
"numpy",
"matrix",
"numpy-broadcasting"
] |
How to create a list from another list using specific criteria in Python?
| 39,361,381
|
<p>How can I create a list from another list using python?
If I have a list:</p>
<pre><code>input = ['a/b', 'g', 'c/d', 'h', 'e/f']
</code></pre>
<p>How can I create the list of only those letters that follow slash "/" i.e.</p>
<pre><code>desired_output = ['b','d','f']
</code></pre>
<p>A code would be very helpful.</p>
| 5
|
2016-09-07T04:45:45Z
| 39,361,427
|
<p>You probably have this input. You can get the result with a simple list comprehension.</p>
<pre><code>input = ["a/b", "g", "c/d", "h", "e/f"]
print [i.split("/")[1] for i in input if i.find("/")==1 ]
</code></pre>
<p>or </p>
<pre><code>print [i.split("/")[1] for i in input if "/" in i ]
</code></pre>
<blockquote>
<p>Output: ['b', 'd', 'f']</p>
</blockquote>
| 7
|
2016-09-07T04:50:52Z
|
[
"python",
"list"
] |
How to create a list from another list using specific criteria in Python?
| 39,361,381
|
<p>How can I create a list from another list using python?
If I have a list:</p>
<pre><code>input = ['a/b', 'g', 'c/d', 'h', 'e/f']
</code></pre>
<p>How can I create the list of only those letters that follow slash "/" i.e.</p>
<pre><code>desired_output = ['b','d','f']
</code></pre>
<p>A code would be very helpful.</p>
| 5
|
2016-09-07T04:45:45Z
| 39,361,436
|
<p>If you fix your list so that all strings are enclosed in quotes, you can then use this to get what you want.</p>
<pre><code>input = ["a/b", "g", "c/d", "h", "e/f"]
output = []
for i in input:
if "/" in i:
output.append(i.split("/")[1])
print output
['b', 'd', 'f']
</code></pre>
| 0
|
2016-09-07T04:52:00Z
|
[
"python",
"list"
] |
How to create a list from another list using specific criteria in Python?
| 39,361,381
|
<p>How can I create a list from another list using python?
If I have a list:</p>
<pre><code>input = ['a/b', 'g', 'c/d', 'h', 'e/f']
</code></pre>
<p>How can I create the list of only those letters that follow slash "/" i.e.</p>
<pre><code>desired_output = ['b','d','f']
</code></pre>
<p>A code would be very helpful.</p>
| 5
|
2016-09-07T04:45:45Z
| 39,361,452
|
<p>With regex:</p>
<pre><code>>>> from re import match
>>> input = ['a/b', 'g', 'c/d', 'h', 'e/f', '/', 'a/']
>>> [m.groups()[0] for m in (match(".*/([\w+]$)", item) for item in input) if m]
['b', 'd', 'f']
</code></pre>
| 2
|
2016-09-07T04:53:42Z
|
[
"python",
"list"
] |
How to create a list from another list using specific criteria in Python?
| 39,361,381
|
<p>How can I create a list from another list using python?
If I have a list:</p>
<pre><code>input = ['a/b', 'g', 'c/d', 'h', 'e/f']
</code></pre>
<p>How can I create the list of only those letters that follow slash "/" i.e.</p>
<pre><code>desired_output = ['b','d','f']
</code></pre>
<p>A code would be very helpful.</p>
| 5
|
2016-09-07T04:45:45Z
| 39,361,462
|
<p>A simple one-liner could be:</p>
<pre><code>>> input = ["a/b", "g", "c/d", "h", "e/f"]
>> list(map(lambda x: x.split("/")[1], filter(lambda x: x.find("/")==1, input)))
Result: ['b', 'd', 'f']
</code></pre>
| 1
|
2016-09-07T04:54:23Z
|
[
"python",
"list"
] |
How to create a list from another list using specific criteria in Python?
| 39,361,381
|
<p>How can I create a list from another list using python?
If I have a list:</p>
<pre><code>input = ['a/b', 'g', 'c/d', 'h', 'e/f']
</code></pre>
<p>How can I create the list of only those letters that follow slash "/" i.e.</p>
<pre><code>desired_output = ['b','d','f']
</code></pre>
<p>A code would be very helpful.</p>
| 5
|
2016-09-07T04:45:45Z
| 39,361,478
|
<pre><code>>>> input = ["a/b", "g", "c/d", "h", "e/f"]
>>> output=[]
>>> for i in input:
if '/' in i:
s=i.split('/')
output.append(s[1])
>>> output
['b', 'd', 'f']
</code></pre>
| 1
|
2016-09-07T04:56:43Z
|
[
"python",
"list"
] |
Python 3.4 calculating the mode,median reading through a file
| 39,361,434
|
<p>I was wondering if there was another way to code this.
The easiest way to solve this problem is to read a file and save the values in a list, where you'd then have: </p>
<pre><code>a = [1,2,3,4,5,6,1,1,1,1]
import statistics
listMode = statistics.mode(a) # median, average, etc...
</code></pre>
<p>I was wondering, instead of having to save these values in <code>a</code> (to save memory, as it can be quite large), whether I could calculate the statistics on the fly as I read the file, updating a single value every time I read a new line, i.e. incrementally calculate the mode, median, and average. So that at the end I'd have <code>a = [mode, median, average]</code>. </p>
| 1
|
2016-09-07T04:51:43Z
| 39,361,473
|
<p>If the set of input numbers comes from a reasonably small universe of values, as in your example, you could use a <code>Counter</code> to count how many of each value you see as they pass by. From that <code>Counter</code> you can get the mode easily, and the median with a little work. Calculating the average on the fly is easy, doesn't need the <code>Counter</code>: just keep a running total and a running count.</p>
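<p>A one-pass sketch of that idea (the function name and structure here are my own, not from any library) could look like this:</p>

```python
from collections import Counter

def stream_stats(numbers):
    """Mode, median, and average in a single pass over an iterable.

    Memory use grows with the number of *distinct* values seen,
    not with the length of the input.
    """
    counts = Counter()
    total = 0
    n = 0
    for x in numbers:
        counts[x] += 1
        total += x
        n += 1

    def kth(k):
        # Value at index k of the virtual sorted list of all inputs,
        # found by accumulating counts over the sorted distinct values.
        seen = 0
        for value in sorted(counts):
            seen += counts[value]
            if seen > k:
                return value

    mode = counts.most_common(1)[0][0]
    # Average of the lower- and upper-middle elements handles even n.
    median = (kth((n - 1) // 2) + kth(n // 2)) / 2.0
    average = total / float(n)
    return mode, median, average
```

For <code>a = [1, 2, 3, 4, 5, 6, 1, 1, 1, 1]</code> this returns <code>(1, 1.5, 2.5)</code>, matching <code>statistics.mode</code>/<code>median</code>/<code>mean</code>. As noted above, this only pays off when the universe of values is reasonably small; if most values are distinct, the <code>Counter</code> grows as large as the list would.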
| 3
|
2016-09-07T04:56:03Z
|
[
"python",
"python-3.x"
] |
Import celery task without importing dependencies
| 39,361,511
|
<p>I have two modules </p>
<pre><code>alpha.py
beta.py
</code></pre>
<p><code>beta.py</code> can only be run on <code>beta.server</code> because it requires a licensed solver than only exists on <code>beta.server</code>. </p>
<p>Within <code>alpha.py</code>, there's a portion of code that calls: </p>
<pre><code>beta_task.apply_async(kwargs={...})
</code></pre>
<p>As such, it requires</p>
<pre><code>from beta import beta_task
</code></pre>
<p>Which in turn requires the magical proprietary module that is only available on <code>beta.server</code>.</p>
<p>I need to enable <code>alpha_task</code> to run on <code>alpha.server</code>, having the ability to call <code>beta_task</code> without having the <code>beta_task</code> code on the server. </p>
<p>Is this possible?</p>
<h1>UPDATE</h1>
<p>Also, can I prevent <code>beta.task</code> from running on <code>alpha.server</code>?</p>
<p>Since <code>alpha.py</code> import <code>beta.py</code>, the daemon finds <code>beta.task</code> and listens for tasks of this type: </p>
<pre><code>- ** ---------- [config]
- ** ---------- .> app: app_app
- ** ---------- .> transport: asdfasdfasd
- ** ---------- .> results: adfasdfasdf
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. alpha.alpha_task
. beta.beta_task
</code></pre>
| 0
|
2016-09-07T04:59:56Z
| 39,375,215
|
<p>I ran into this before but never got it to work "right". I used a hacky workaround instead.</p>
<p>You can put the <code>import proprietary</code> statement in the <code>beta.beta_task</code> def itself. Your 'alpha' file doesn't actually run the 'beta' def, it just uses celery's task decorator to dispatch a message about it.</p>
<p>While PEP 8 dictates that imports should be at the top of the module in the outermost scope, it's actually common practice for widely used PyPI modules to place an import within a registration or called function so that uninstalled dependencies for unused features won't break the package [for example, a caching library will import the redis/memcached modules within the backend activation, so the 3rd-party modules aren't needed unless that backend is used].</p>
<p>alpha.py</p>
<pre><code>from beta import beta_task
beta_task.apply_async(kwargs={...})
</code></pre>
<p>beta.py</p>
<pre><code>@task
def beta_task(args):
import proprietary
proprietary.foo()
</code></pre>
<p>For the Update about running different tasks on each server: that is all covered in the "routing" chapter of the celery docs: <a href="http://docs.celeryproject.org/en/latest/userguide/routing.html" rel="nofollow">http://docs.celeryproject.org/en/latest/userguide/routing.html</a> </p>
<p>You basically configure different 'queues' (one for alpha, one for beta); start the workers to only handle the queues you specify; and either specify the route in the call to apply_async or configure the celery daemon to match a task to a route (there are several ways to do that, all explained in that chapter with examples.)</p>
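<p>As an illustrative sketch (the module and queue names here are assumptions based on the question, and this uses the old-style setting name), a static route table could look like:</p>

```python
# celeryconfig.py -- route each task to its own queue so that a worker
# consuming only the 'alpha' queue never receives beta_task messages.
# The task paths and queue names below are assumptions for illustration.
CELERY_ROUTES = {
    'alpha.alpha_task': {'queue': 'alpha'},
    'beta.beta_task': {'queue': 'beta'},
}
```

Then start the worker on each machine with only its own queue, e.g. <code>celery worker -Q beta</code> on beta.server and <code>-Q alpha</code> on alpha.server, so beta tasks are only ever consumed where the licensed solver exists.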
| 1
|
2016-09-07T16:34:43Z
|
[
"python",
"celery"
] |
minimum cost path in matrix
| 39,361,555
|
<p>Question -</p>
<p>Given a m x n grid filled with non-negative numbers, find a path from top left to bottom right which minimizes the sum of all numbers along its path.</p>
<p>Note: You can only move either down or right at any point in time</p>
<p>I know this is a common question and most of you would know the question as well as its dynamic programming solution. I am trying the recursive code here but I am not getting the correct output. What is missing in my recursive code? I don't want the iterative or dynamic programming approach. I am trying to build it on my own. </p>
<p>It shows incorrect output.</p>
<p>Example -</p>
<pre><code>1 2
1 1
</code></pre>
<p>It gives the output as 2. where as the answer is 3.</p>
<p>Thanks. </p>
<pre><code>def minPathSum(self, grid):
"""
:type grid: List[List[int]]
:rtype: int
"""
def helper(i,j,grid,dp):
if i >= len(grid) or j >= len(grid[0]):
return 0
print grid[i][j]
return grid[i][j]+min(helper(i+1,j,grid,dp),helper(i,j+1,grid,dp))
dp = [[0] * len(grid[0]) for i in range(len(grid))]
val = helper(0,0,grid,dp)
print dp
return val
</code></pre>
| 2
|
2016-09-07T05:03:59Z
| 39,361,930
|
<p>I don't think it's correct to return 0 when you fall off the edge of the grid. That makes it look like you've succeeded. So I think the 2 that you are erroneously reporting is the 1 in upper left plus the 1 in lower left, followed by a "successful" falling off the bottom of the grid. I recommend you adjust your what-to-return logic so it looks like this:</p>
<pre><code>if at right or bottom edge:
there is only one direction to go, so
return the result of going in that direction
else you do have options, so
return the minimum of the two choices, like you do now
</code></pre>
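<p>Applied to the question's helper, that adjustment might look like the following sketch (simplified: the unused <code>dp</code> table is dropped, though memoization would still be needed for large grids):</p>

```python
def min_path_sum(grid):
    """Recursive minimum path sum: at the bottom or right edge there is
    only one direction to go, and falling off the grid never counts as
    a successful (cost-0) path."""
    rows, cols = len(grid), len(grid[0])

    def helper(i, j):
        if i == rows - 1 and j == cols - 1:  # reached bottom-right corner
            return grid[i][j]
        if i == rows - 1:                    # bottom edge: can only go right
            return grid[i][j] + helper(i, j + 1)
        if j == cols - 1:                    # right edge: can only go down
            return grid[i][j] + helper(i + 1, j)
        return grid[i][j] + min(helper(i + 1, j), helper(i, j + 1))

    return helper(0, 0)
```

With the grid <code>[[1, 2], [1, 1]]</code> from the question, this returns 3 rather than 2.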
| 2
|
2016-09-07T05:39:32Z
|
[
"python",
"algorithm",
"dynamic"
] |
How to create 2 csv files when using recursion
| 39,361,605
|
<p>I want to create 2 csv files.</p>
<p>I have 2 array in one function then i am looping through it and calling another function to write into an csv file so it will create 2 csv files</p>
<pre><code>import time
import datetime
import csv
time_value = time.time()
def csv_write(s):
print s
f3 = open("import_"+str(int(time_value))+".csv", 'wt')
writer = csv.writer(f3,delimiter = ',', lineterminator='\n',quoting=csv.QUOTE_ALL)
writer.writerow(s)
f3.close()
def process_array():
a = [["a","b"],["s","v"]]
for s in a:
csv_write(s)
process_array()
</code></pre>
<p>The whole idea is to create 2 csv files since it has 2 array elements but the above code just overwrites the files and the codes creates only one csv file at end</p>
<p>So if the array has 10 elements then the code should create 10 csv files</p>
<p>How to do it?</p>
| 1
|
2016-09-07T05:09:04Z
| 39,361,625
|
<p>You need to add the argument into the file name:</p>
<pre><code>f3 = open("import_"+"".join(s)+"_"+str(int(time_value))+".csv", 'wt')
</code></pre>
<p>In this case you will have two files (with <code>"ab"</code> and <code>"sv"</code> in the names) if <code>a</code> contains two elements from your example. Here we concatenate all the items of <code>s</code>, so it is not the best solution (since it is possible that result of this concatenation is the same for different elements of the list <code>a</code>).</p>
<p>So, I'd recommend to use this solution:</p>
<p>In the <code>for</code> loop you can count number of elements:</p>
<pre><code>idx = 0
for s in a:
csv_write(s, idx)
idx = idx + 1
</code></pre>
<p>In this case you need to extend your function <code>csv_write</code> (add one more argument):</p>
<pre><code>def csv_write(s, idx):
print s
    f3 = open("import_"+str(idx)+"_"+str(int(time_value))+".csv", 'wt')
...
</code></pre>
| 0
|
2016-09-07T05:12:19Z
|
[
"python",
"arrays",
"csv"
] |
How to create 2 csv files when using recursion
| 39,361,605
|
<p>I want to create 2 csv files.</p>
<p>I have 2 array in one function then i am looping through it and calling another function to write into an csv file so it will create 2 csv files</p>
<pre><code>import time
import datetime
import csv
time_value = time.time()
def csv_write(s):
print s
f3 = open("import_"+str(int(time_value))+".csv", 'wt')
writer = csv.writer(f3,delimiter = ',', lineterminator='\n',quoting=csv.QUOTE_ALL)
writer.writerow(s)
f3.close()
def process_array():
a = [["a","b"],["s","v"]]
for s in a:
csv_write(s)
process_array()
</code></pre>
<p>The whole idea is to create 2 csv files since it has 2 array elements but the above code just overwrites the files and the codes creates only one csv file at end</p>
<p>So if the array has 10 elements then the code should create 10 csv files</p>
<p>How to do it?</p>
| 1
|
2016-09-07T05:09:04Z
| 39,361,668
|
<p>I'm not an experienced Python guy, but it looks like the time value that names the files is defined once, outside of the code that loops over the elements. Every element therefore gets the same <code>time_value</code>-based .csv name, because that value is computed once at the start and never refreshed. Try recomputing the time value inside the loop just before the write command, or include something element-specific in the file name.</p>
| 0
|
2016-09-07T05:16:12Z
|
[
"python",
"arrays",
"csv"
] |
How to create 2 csv files when using recursion
| 39,361,605
|
<p>I want to create 2 csv files.</p>
<p>I have 2 array in one function then i am looping through it and calling another function to write into an csv file so it will create 2 csv files</p>
<pre><code>import time
import datetime
import csv
time_value = time.time()
def csv_write(s):
print s
f3 = open("import_"+str(int(time_value))+".csv", 'wt')
writer = csv.writer(f3,delimiter = ',', lineterminator='\n',quoting=csv.QUOTE_ALL)
writer.writerow(s)
f3.close()
def process_array():
a = [["a","b"],["s","v"]]
for s in a:
csv_write(s)
process_array()
</code></pre>
<p>The whole idea is to create 2 csv files since it has 2 array elements but the above code just overwrites the files and the codes creates only one csv file at end</p>
<p>So if the array has 10 elements then the code should create 10 csv files</p>
<p>How to do it?</p>
| 1
|
2016-09-07T05:09:04Z
| 39,361,762
|
<p>First convert it into a DataFrame, then save each row in CSV format: </p>
<pre><code>import pandas as pd
a = [["a","b"],["s","v"]]
df = pd.DataFrame(a)
for j in range(len(df)):
    # write row j as a single comma-separated line
    df.iloc[[j]].to_csv(str(j)+'.csv', index=False, header=False)
</code></pre>
<p>After executing this code, you will get two csv files (as the length of the array is two). </p>
| 1
|
2016-09-07T05:25:10Z
|
[
"python",
"arrays",
"csv"
] |
Python: extract pattern from stdout and save in csv
| 39,361,621
|
<p>I have log files of around 20000-30000 lines. They contain all sorts of data; each line starts with the current time stamp, followed by a file path / line number, and then the value of objects added, with some additional (unnecessary) info.</p>
<pre><code>2016/08/31 17:27:43/usr/log/data/old/objec: 540: Adjustment Stat
2016/08/31 17:27:43/usr/log/data/old/objec: 570: Position: 1
2016/08/31 17:27:43/usr/log/data/old/object::1150: Adding new object in department xxxx
2016/08/31 17:27:43/usr/log/data/old/file1.java:: 728: object ID: 0
2016/08/31 17:27:43/usr/log/data/old/file2.java:: 729: Start location:1
2016/08/31 17:27:43/usr/log/data/old/file1.java:: 730: End location:55
2016/08/31 17:27:43/usr/log/data/old/: 728: object ID: 1
2016/08/31 17:27:43/usr/log/data/old/: 729: Start location:56
2016/08/31 17:27:43/usr/log/data/old/: 730: End location:67
2016/08/31 17:27:43/usr/log/data/old/: 728: object ID: 2
2016/08/31 17:27:43/usr/log/data/old/: 729: Start location:68
2016/08/31 17:27:43/usr/log/data/old/: 730: End location:110
Timer to Calculate location of object x took 0.004935 seconds
</code></pre>
<p>....
...
...
Same info... for new object
There are 30-40 object groups per file and they vary (between ID 0-3). </p>
<pre><code>I want to extract information (the lines after Adjustment Stat) and save it in a text file like
Position ObjectID StartLocation EndLocation
0 1 55
1 56 67
2 68 110
</code></pre>
<p>...
...
...</p>
<p>(here there isn't any object with Id 0)
1 1 50
2 51 109
...</p>
<pre><code>Or maybe store it in a csv file like
0,1,55
1,56,67
2,68,110
</code></pre>
| 1
|
2016-09-07T05:11:37Z
| 39,361,751
|
<pre><code>import csv
with open('out.csv', 'w') as output_file, open('in.txt') as input_file:
writer = csv.writer(output_file)
for l in input_file:
if 'object ID:' in l:
object_id = l.split(':')[-1].strip()
elif 'Start location:' in l:
start_loc = l.split(':')[-1].strip()
elif 'End location:' in l:
end_loc = l.split(':')[-1].strip()
writer.writerow((object_id, start_loc, end_loc))
</code></pre>
<p><hr>
2.6 version:</p>
<pre><code>import csv
import contextlib
with contextlib.nested(open('out.csv', 'w'), open('in.txt')) as (output_file, input_file):
writer = csv.writer(output_file)
for l in input_file:
if 'object ID:' in l:
object_id = l.split(':')[-1].strip()
elif 'Start location:' in l:
start_loc = l.split(':')[-1].strip()
elif 'End location:' in l:
end_loc = l.split(':')[-1].strip()
writer.writerow((object_id, start_loc, end_loc))
</code></pre>
<p><strong>out.csv</strong> (<code>in.txt</code> as in OP)</p>
<pre><code>0,1,55
1,56,67
2,68,110
</code></pre>
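For quick verification, the same parsing logic can be run against an in-memory snippet of the log (sample lines taken from the question):

```python
import csv
import io

# A few record lines from the question's log, kept in memory.
log = """\
2016/08/31 17:27:43/usr/log/data/old/file1.java:: 728: object ID: 0
2016/08/31 17:27:43/usr/log/data/old/file2.java:: 729: Start location:1
2016/08/31 17:27:43/usr/log/data/old/file1.java:: 730: End location:55
2016/08/31 17:27:43/usr/log/data/old/: 728: object ID: 1
2016/08/31 17:27:43/usr/log/data/old/: 729: Start location:56
2016/08/31 17:27:43/usr/log/data/old/: 730: End location:67
"""

out = io.StringIO()
writer = csv.writer(out)
for l in io.StringIO(log):
    if 'object ID:' in l:
        object_id = l.split(':')[-1].strip()
    elif 'Start location:' in l:
        start_loc = l.split(':')[-1].strip()
    elif 'End location:' in l:
        end_loc = l.split(':')[-1].strip()
        # a record is complete once its End location line arrives
        writer.writerow((object_id, start_loc, end_loc))
print(out.getvalue())
```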
| 3
|
2016-09-07T05:24:32Z
|
[
"python",
"csv",
"pattern-matching",
"export-to-csv"
] |
python logging module AttributeError: 'str' object has no attribute 'write'
| 39,361,630
|
<p>I am using Tornado, and in its app I import logging because I just want to log some info about the server.
I put this:</p>
<pre><code>logging.config.dictConfig(web_LOGGING)
</code></pre>
<p>right before:</p>
<pre><code> tornado.options.parse_command_line()
</code></pre>
<p>but when I run the server,when I click any link,I get error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python2.7/logging/__init__.py", line 874, in emit
stream.write(fs % msg)
AttributeError: 'str' object has no attribute 'write'
Logged from file web.py, line 1946
</code></pre>
<p>It just repeats when I click any link.
What is the real problem?</p>
<p>I've already renamed files and directories to steer clear of namespace conflicts...</p>
| 0
|
2016-09-07T05:12:53Z
| 39,361,698
|
<p>Psychic debugging says <code>web_LOGGING</code> has a key named <code>stream</code> with a <code>str</code> value (probably a file path); <a href="https://docs.python.org/3/library/logging.html#logging.basicConfig" rel="nofollow"><code>stream</code> is only for already opened files, if you want to pass a file path, it's passed as <code>filename</code></a>.</p>
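A minimal sketch of the distinction (this config is hypothetical, not the OP's <code>web_LOGGING</code>; it assumes a <code>web.log</code> path):

```python
import logging
import logging.config

# Hypothetical config illustrating the fix: a file *path* belongs under a
# FileHandler's 'filename' key. The 'stream' key expects an already-open
# file object, so passing a path string there triggers
# "AttributeError: 'str' object has no attribute 'write'" at emit time.
LOGGING = {
    'version': 1,
    'handlers': {
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'web.log',   # correct: a path string
            # 'stream': 'web.log',   # wrong: must be an open file object
        },
    },
    'root': {'handlers': ['file'], 'level': 'INFO'},
}
logging.config.dictConfig(LOGGING)
logging.info('server started')
```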
| 1
|
2016-09-07T05:18:52Z
|
[
"python",
"logging",
"tornado"
] |
meaning of assignment operator in python2.7
| 39,361,706
|
<p>i do the following:</p>
<pre><code>a=12345
</code></pre>
<p>I am trying to understand the meaning of this. Please answer the questions below.</p>
<ol>
<li><p>a points to the memory address of 12345
(True/False)</p></li>
<li><p>If I do b=12345, then b also points to the memory address of 12345
(True/False)</p></li>
<li><p>I have read that the ref count of 12345 should increase by 1 after b points to it.
(True/False)</p></li>
<li><p>How can I retrieve the memory address of 12345? I want to check that a and b both point to the address of 12345. Please clarify</p></li>
</ol>
<p>I tried using the id function (it only shows the same memory location for values in the <=255 range)</p>
| 1
|
2016-09-07T05:20:12Z
| 39,361,837
|
<ol>
<li><p><em>"a points to the memory address of 12345 (True/False)"</em></p>
<p>True.</p></li>
<li><p><em>"If I do b=12345, then b also points to the memory address of 12345 (True/False)"</em></p>
<p>Maybe. If you had assigned <code>b=a</code>, the <code>b</code> would point to the same memory location as <code>a</code>. With <code>b=12345</code>, the answer is unknown: there may be more than one copy of <code>12345</code> in memory.</p></li>
<li><p><em>"I have read that the ref count of 12345 should increase by 1 after b points to it. (True/False)"</em></p>
<p>True if <code>a</code> and <code>b</code> pointed to the same location: see above.</p></li>
<li><p><em>"How can i retrieve the memory address of 12345. I want to check that a and b both point to address of 12345. Please clarify."</em></p>
<p>To check if <code>a</code> and <code>b</code> point to the same memory location, use <code>is</code> as in <code>a is b</code>. For example, in the following, <code>a</code> and <code>b</code> point to different memory locations:</p>
<pre><code>>>> a = 12345
>>> b = 12345
>>> a is b
False
</code></pre>
<p>In the following, by contrast, they point to the same location:</p>
<pre><code>>>> a = 1
>>> b = 1
>>> a is b
True
</code></pre></li>
</ol>
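In CPython, <code>id()</code> returns the object's memory address, so the relation between <code>id</code> and <code>is</code> can be checked directly (the small-integer cache covers -5 through 256 in CPython, which is why small values appear to share a location):

```python
a = 256          # within CPython's small-int cache (-5 .. 256)
b = 256
print(id(a), id(b))   # same address in CPython
print(a is b)         # True: both names reference the cached object

c = 12345
d = c                 # explicit aliasing: d references whatever c does
print(c is d)         # True: same object, so same id
```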
| 1
|
2016-09-07T05:31:06Z
|
[
"python",
"python-2.7"
] |
Unable to run Python script from specific directoy. What could be the reason?
| 39,361,732
|
<p>1) OS = MacOS (El Capitan)</p>
<p>2) Using native Python install</p>
<p>3) Problem: I wrote a script in a directory called "pythonscripts". It will not run from this directory. If I copy the same script to the directory above it, one below it, or to the user's home directory, it runs just fine. I have attempted all kinds of things, including renaming "pythonscripts" to other names, and I have no idea what's wrong. </p>
<p>Hints appreciated. Below the output of my tests to see for yourselves...</p>
<pre><code>MAC:pythonscripts user1$ cp googlemaps1.py ~
MAC:pythonscripts user1$ cp googlemaps1.py ../
MAC:pythonscripts user1$ cp googlemaps1.py Learning-Python/
MAC:pythonscripts user1$
MAC:pythonscripts user1$ cd ../
MAC:coding user1$ python googlemaps1.py
directions_result: <type 'list'> 1
PERFECT EXECUTION!
MAC:coding user1$ cd pythonscripts/Learning-Python/
MAC:Learning-Python user1$ python googlemaps1.py
directions_result: <type 'list'> 1
PERFECT EXECUTION!
MAC:Learning-Python user1$ cd ..
MAC:pythonscripts user1$ python googlemaps1.py
{'and': 3, 'envious': 1, 'already': 1, 'fair': 1, 'is': 3, 'through': 1, 'pale': 1, 'yonder': 1, 'what': 1, 'sun': 2, 'Who': 1, 'But': 1, 'moon': 1, 'window': 1, 'sick': 1, 'east': 1, 'breaks': 1, 'grief': 1, 'with': 1, 'light': 1, 'It': 1, 'Arise': 1, 'kill': 1, 'the': 3, 'soft': 1, 'Juliet': 1}
Traceback (most recent call last):
File "googlemaps1.py", line 8, in <module>
from googlemaps import Client
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/googlemaps/__init__.py", line 20, in <module>
from googlemaps.client import Client
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/googlemaps/client.py", line 30, in <module>
import requests
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/__init__.py", line 64, in <module>
from . import utils
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/utils.py", line 24, in <module>
from .compat import parse_http_list as _parse_list_header
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/compat.py", line 38, in <module>
from urllib2 import parse_http_list
ImportError: cannot import name parse_http_list
MAC:pythonscripts user1$ cd
MAC:~ user1$ python googlemaps1.py
directions_result: <type 'list'> 1
PERFECT EXECUTION!
MAC:~ user1$
</code></pre>
<p>This is driving me crazy! As you can see, I can run the exact same script from 3 other locations, just not from the original directory where I saved the script.
Any ideas? Has anyone experienced anything like this before?
Thanks,</p>
| 1
|
2016-09-07T05:22:56Z
| 39,361,860
|
<p>If you have a file called urllib2.py in that directory, you must rename it.
You should never give a script the same name as a Python package/module, because it will shadow the real one when imported.</p>
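One quick way to confirm shadowing is to print the imported module's <code>__file__</code> from inside the problem directory — if it points at a local <code>.py</code> file instead of the standard library, that file is the culprit. A generic sketch, using <code>json</code> as a stand-in module name:

```python
import json

# If a local ./json.py existed in the current directory, this path would
# point at it instead of the standard library -- the same failure mode
# as a local urllib2.py shadowing the real urllib2 in the question.
print(json.__file__)
```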
| 0
|
2016-09-07T05:33:24Z
|
[
"python",
"osx"
] |
Using Sigmoid instead of Tanh activation function fails - Neural Networks
| 39,361,755
|
<p>I'm looking at the following <a href="http://www.bogotobogo.com/python/files/NeuralNetworks/nn3.py" rel="nofollow">code</a> from <a href="http://www.bogotobogo.com/python/python_Neural_Networks_Backpropagation_for_XOR_using_one_hidden_layer.php" rel="nofollow">this blog</a></p>
<p>It gives the option to use both the <code>sigmoid</code> and the <code>tanh</code> activation function.</p>
<p>The XOR test seems to work fine with the <code>tanh</code> function yielding ~ <code>(0,1,1,0)</code></p>
<p>But upon changing to <code>sigmoid</code> I get the wrong output ~ <code>(0.5,0.5,0.5,0.5)</code></p>
<p>I've tried this with <a href="http://arctrix.com/nas/python/bpnn.py" rel="nofollow">another piece of code</a> I found online and the exact same problem occurs.</p>
<p>It seems the only thing changing is the activation function (and its derivative). Does changing this require other changes, say in backpropogation?</p>
<p>Thanks a lot for any help!</p>
| 2
|
2016-09-07T05:24:49Z
| 39,362,349
|
<p>Looks like the model you use doesn't train biases.
The only difference between <code>tanh</code> and <code>sigmoid</code> is scaling and offset. Learning the new scaling will be done through the weights, but you'll also need to learn to compensate for the new offset, which should be done by learning the biases as well.</p>
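The scaling and offset relationship is <code>tanh(x) = 2*sigmoid(2x) - 1</code>, which can be verified numerically:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# tanh is a rescaled, recentred sigmoid: output range (-1, 1) vs (0, 1).
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert abs(math.tanh(x) - (2.0 * sigmoid(2.0 * x) - 1.0)) < 1e-12
print("identity holds")
```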
| 0
|
2016-09-07T06:11:26Z
|
[
"python",
"neural-network",
"backpropagation",
"sigmoid"
] |
Move non empty cells to left in grouped columns pandas
| 39,361,839
|
<p>I have a dataframe with multiple columns that have similar column names. I want the empty cells to be populated from the columns to their right that have data. </p>
<pre><code>Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
ABC nan def nan 9091-XYz nan nan XYZ-ABZ
</code></pre>
<p>Should be column shifted to something like</p>
<pre><code>Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
ABC def nan nan 9091-XYz XYZ-ABZ nan nan
</code></pre>
<p>There's another <a href="http://stackoverflow.com/questions/32062157/move-non-empty-cells-to-the-left-in-pandas-dataframe">question</a> which solves a similar problem.</p>
<pre><code>pdf = pd.read_csv('Data.txt',sep='\t')
# gets a set of columns removing the numerical part
columns = set(map(lambda x : x.rstrip('0123456789'),pdf.columns))
for col_pattern in columns:
# get columns with similar names
current = [col for col in pdf.columns if col_pattern in col]
coldf= pdf[current]
# shift columns to the left
</code></pre>
<p>The file <code>Data.txt</code> has columns sorted by column names so all the columns with similar names come together.</p>
<p>Any help with this is appreciated</p>
<p>I had tried adding this to the above code from the link, which ran out of memory :</p>
<pre><code> newdf=pd.read_csv(StringIO(u''+re.sub(',+',',',df.to_csv()).decode('utf-8')))
list_.append(newdf)
pd.concat(list_,axis=0).to_csv('test.txt')
</code></pre>
| 3
|
2016-09-07T05:31:23Z
| 39,362,818
|
<p><strong><code>pushna</code></strong><br>
Pushes all null values to the end of the series</p>
<p><strong><code>coltype</code></strong><br>
Uses <code>regex</code> to extract the non-numeric prefix from all column names</p>
<pre><code>def pushna(s):
notnull = s[s.notnull()]
isnull = s[s.isnull()]
values = notnull.append(isnull).values
return pd.Series(values, s.index)
coltype = df.columns.to_series().str.extract(r'(\D*)', expand=False)
df.groupby(coltype, axis=1).apply(lambda df: df.apply(pushna, axis=1))
</code></pre>
<p><a href="http://i.stack.imgur.com/l2Msb.png" rel="nofollow"><img src="http://i.stack.imgur.com/l2Msb.png" alt="enter image description here"></a></p>
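A self-contained check of <code>pushna</code> on a single row (note: this sketch substitutes <code>pd.concat</code> for <code>Series.append</code>, which recent pandas versions removed):

```python
import numpy as np
import pandas as pd

def pushna(s):
    # keep the non-null values first, then the nulls, preserving the index
    notnull = s[s.notnull()]
    isnull = s[s.isnull()]
    return pd.Series(pd.concat([notnull, isnull]).values, s.index)

row = pd.Series(['ABC', np.nan, 'def', np.nan],
                index=['Address1', 'Address2', 'Address3', 'Address4'])
print(pushna(row).tolist())   # non-nulls pushed left, NaNs at the end
```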
| 2
|
2016-09-07T06:41:37Z
|
[
"python",
"pandas",
"multiple-columns",
"shift"
] |
Move non empty cells to left in grouped columns pandas
| 39,361,839
|
<p>I have a dataframe with multiple columns that have similar column names. I want the empty cells to be populated from the columns to their right that have data. </p>
<pre><code>Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
ABC nan def nan 9091-XYz nan nan XYZ-ABZ
</code></pre>
<p>Should be column shifted to something like</p>
<pre><code>Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
ABC def nan nan 9091-XYz XYZ-ABZ nan nan
</code></pre>
<p>There's another <a href="http://stackoverflow.com/questions/32062157/move-non-empty-cells-to-the-left-in-pandas-dataframe">question</a> which solves a similar problem.</p>
<pre><code>pdf = pd.read_csv('Data.txt',sep='\t')
# gets a set of columns removing the numerical part
columns = set(map(lambda x : x.rstrip('0123456789'),pdf.columns))
for col_pattern in columns:
# get columns with similar names
current = [col for col in pdf.columns if col_pattern in col]
coldf= pdf[current]
# shift columns to the left
</code></pre>
<p>The file <code>Data.txt</code> has columns sorted by column names so all the columns with similar names come together.</p>
<p>Any help with this is appreciated</p>
<p>I had tried adding this to the above code from the link, which ran out of memory :</p>
<pre><code> newdf=pd.read_csv(StringIO(u''+re.sub(',+',',',df.to_csv()).decode('utf-8')))
list_.append(newdf)
pd.concat(list_,axis=0).to_csv('test.txt')
</code></pre>
| 3
|
2016-09-07T05:31:23Z
| 39,364,043
|
<p>Solutions with <code>MultiIndex</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow"><code>dropna</code></a>:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'Address1': {0: 'ABC', 1: 'ABC'},
'Address2': {0: np.nan, 1: np.nan},
'Address3': {0: 'def', 1: 'def'},
'Phone4': {0: 'XYZ-ABZ', 1: 'XYZ-ABZ'},
'Address4': {0: np.nan, 1: np.nan},
'Phone1': {0: '9091-XYz', 1: 'Z9091-XYz'},
'Phone3': {0: np.nan, 1: 'aaa'},
'Phone2': {0: np.nan, 1: np.nan}})
print (df)
Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
0 ABC NaN def NaN 9091-XYz NaN NaN XYZ-ABZ
1 ABC NaN def NaN Z9091-XYz NaN aaa XYZ-ABZ
</code></pre>
<pre><code>#multiindex from columns of df
cols = df.columns.str.extract(r'([A-Za-z]+)(\d+)', expand=True).values.tolist()
mux = pd.MultiIndex.from_tuples(cols)
df.columns = mux
print (df)
Address Phone
1 2 3 4 1 2 3 4
0 ABC NaN def NaN 9091-XYz NaN NaN XYZ-ABZ
1 ABC NaN def NaN Z9091-XYz NaN aaa XYZ-ABZ
#unstack, remove NaN rows, convert to df (because cumcount)
df1 = df.unstack().dropna().reset_index(level=1, drop=True).to_frame()
#create new level of index
df1['g'] = (df1.groupby(level=[0,1]).cumcount() + 1).astype(str)
#add column g to multiindex
df1.set_index('g', append=True, inplace=True)
#reshape to original
df1 = df1.unstack(level=[0,2])
#remove first level of multiindex of column (0 from to_frame)
df1.columns = df1.columns.droplevel(0)
#reindex and replace None to NaN
df1 = df1.reindex(columns=mux).replace({None: np.nan})
#'reset' multiindex in columns
df1.columns = [''.join(col) for col in df1.columns]
print (df1)
Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
0 ABC def NaN NaN 9091-XYz XYZ-ABZ NaN NaN
1 ABC def NaN NaN Z9091-XYz aaa XYZ-ABZ NaN
</code></pre>
<p>Old solution:</p>
<p>I found another problem - the solution above doesn't work correctly with more rows in the <code>DataFrame</code>. So you can use a double <code>apply</code>. But the problem with this solution is an incorrect order of values in rows:</p>
<pre><code>df = pd.DataFrame({'Address1': {0: 'ABC', 1: 'ABC'}, 'Address2': {0: np.nan, 1: np.nan}, 'Address3': {0: 'def', 1: 'def'}, 'Phone4': {0: 'XYZ-ABZ', 1: 'XYZ-ABZ'}, 'Address4': {0: np.nan, 1: np.nan}, 'Phone1': {0: '9091-XYz', 1: '9091-XYz'}, 'Phone3': {0: np.nan, 1: 'aaa'}, 'Phone2': {0: np.nan, 1: np.nan}})
print (df)
Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
0 ABC NaN def NaN 9091-XYz NaN NaN XYZ-ABZ
1 ABC NaN def NaN 9091-XYz NaN aaa XYZ-ABZ
cols = df.columns.str.extract(r'([A-Za-z]+)(\d+)', expand=True).values.tolist()
mux = pd.MultiIndex.from_tuples(cols)
df.columns = mux
df = (df.groupby(axis=1, level=0)
        .apply(lambda x: x.apply(lambda y: y.sort_values().values, axis=1)))
df.columns = [''.join(col) for col in df.columns]
print (df)
Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
0 ABC def NaN NaN 9091-XYz XYZ-ABZ NaN NaN
1 ABC def NaN NaN 9091-XYz XYZ-ABZ aaa NaN
</code></pre>
<p>Also I try modify <a href="http://stackoverflow.com/a/39362818/2901002"><code>piRSquared</code></a> solution - then you does not need <code>MultiIndex</code>:</p>
<pre><code>coltype = df.columns.str.extract(r'([A-Za-z]+)', expand=False)
print (coltype)
Index(['Address', 'Address', 'Address', 'Address', 'Phone', 'Phone', 'Phone',
'Phone'],
dtype='object')
df = (df.groupby(coltype, axis=1)
        .apply(lambda x: x.apply(lambda y: y.sort_values().values, axis=1)))
print (df)
Address1 Address2 Address3 Address4 Phone1 Phone2 Phone3 Phone4
0 ABC def NaN NaN 9091-XYz XYZ-ABZ NaN NaN
1 ABC def NaN NaN 9091-XYz XYZ-ABZ aaa NaN
</code></pre>
| 3
|
2016-09-07T07:44:10Z
|
[
"python",
"pandas",
"multiple-columns",
"shift"
] |
NoReverseMatch in Django 1.10
| 39,361,877
|
<p>Here is my <strong>url.py</strong></p>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from app import views, auth
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$', views.index, name = 'index'),
url(r'^login/', auth.login, name = 'login'),
url(r'^logout/', auth.logout, name = 'logout'),
]
</code></pre>
<p>When I'm using in template <code><li><a href="{% url 'admin' %}">Administration</a></li></code> get error</p>
<pre><code>NoReverseMatch at /
Reverse for 'admin' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
</code></pre>
<p>So can any one tell me how to solve this? Thank you very much.</p>
| 2
|
2016-09-07T05:35:40Z
| 39,361,941
|
<p>You should use admin namespace, like written in the <a href="https://docs.djangoproject.com/en/dev/ref/contrib/admin/#reversing-admin-urls" rel="nofollow">docs</a>. You could also look on other admin urls in that namespace. </p>
<pre><code>{% url 'admin:index' %}
</code></pre>
| 1
|
2016-09-07T05:40:40Z
|
[
"python",
"django"
] |
NoReverseMatch in Django 1.10
| 39,361,877
|
<p>Here is my <strong>url.py</strong></p>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from app import views, auth
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$', views.index, name = 'index'),
url(r'^login/', auth.login, name = 'login'),
url(r'^logout/', auth.logout, name = 'logout'),
]
</code></pre>
<p>When I'm using in template <code><li><a href="{% url 'admin' %}">Administration</a></li></code> get error</p>
<pre><code>NoReverseMatch at /
Reverse for 'admin' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
</code></pre>
<p>So can any one tell me how to solve this? Thank you very much.</p>
| 2
|
2016-09-07T05:35:40Z
| 39,361,979
|
<p>Use <code>admin:index</code> if you want a URL to the <code>/admin/</code> site.
If you install <a href="https://pypi.python.org/pypi/django-extensions/0.7.1" rel="nofollow">django-extensions</a>, you can use <code>./manage.py show_urls</code> to get the list of URLs for your app.</p>
| 1
|
2016-09-07T05:43:29Z
|
[
"python",
"django"
] |
NoReverseMatch in Django 1.10
| 39,361,877
|
<p>Here is my <strong>url.py</strong></p>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from app import views, auth
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$', views.index, name = 'index'),
url(r'^login/', auth.login, name = 'login'),
url(r'^logout/', auth.logout, name = 'logout'),
]
</code></pre>
<p>When I'm using in template <code><li><a href="{% url 'admin' %}">Administration</a></li></code> get error</p>
<pre><code>NoReverseMatch at /
Reverse for 'admin' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
</code></pre>
<p>So can any one tell me how to solve this? Thank you very much.</p>
| 2
|
2016-09-07T05:35:40Z
| 39,363,609
|
<p>Set admin url in your project urls.py, same folder as your settings.py</p>
<pre><code>Url(r'^admin/', admin.site.urls),
</code></pre>
<p>Then call it in your template:</p>
<p><code><a href="{% url 'admin:index' %}">link</a></code></p>
| 1
|
2016-09-07T07:21:25Z
|
[
"python",
"django"
] |
NoReverseMatch in Django 1.10
| 39,361,877
|
<p>Here is my <strong>url.py</strong></p>
<pre><code>from django.conf.urls import url
from django.contrib import admin
from app import views, auth
urlpatterns = [
url(r'^admin/', admin.site.urls),
url(r'^$', views.index, name = 'index'),
url(r'^login/', auth.login, name = 'login'),
url(r'^logout/', auth.logout, name = 'logout'),
]
</code></pre>
<p>When I'm using in template <code><li><a href="{% url 'admin' %}">Administration</a></li></code> get error</p>
<pre><code>NoReverseMatch at /
Reverse for 'admin' with arguments '()' and keyword arguments '{}' not found. 0 pattern(s) tried: []
</code></pre>
<p>So can any one tell me how to solve this? Thank you very much.</p>
| 2
|
2016-09-07T05:35:40Z
| 39,475,989
|
<p>1. Make a <code>url.py</code> in the app folder and use</p>
<pre><code>urlpatterns = [
url(r'^admin/', admin.site.urls),
]
</code></pre>
<p>and in template use</p>
<pre><code><a href="{% url 'admin:index' %}">Admin</a>
</code></pre>
<p>or</p>
<ol start="2">
<li>Use <code><a href="/admin">Admin</a></code> and leave url.py as it is (make no changes)</li>
</ol>
| 1
|
2016-09-13T17:40:36Z
|
[
"python",
"django"
] |
Convert date column in dataframe to ticks in python
| 39,361,916
|
<p>Good day</p>
<p>I have a pandas dataframe with 3 columns. Column two is in date format yyyy-mm-dd but I would like to convert these to ticks. I would like to know the most efficient way to do this (i.e. I don't want to loop through each row).</p>
<p>My code is as follows:</p>
<pre><code>#Example of getting ticks
import time; # This is required to include time module.
ticks = time.time()
print("Number of ticks since 12:00am, January 1, 1970:", ticks)
#Example of converting to ticks
import datetime
s = "01/12/2015"
time.mktime(datetime.datetime.strptime(s, "%d/%m/%Y").timetuple())
#My code
import numpy as np
import pandas as pd
path = "C:/Users/geoff/OneDrive - Noble Brothers Consult (Pty) Ltd/Clients/GrAM/Daily Price Data/"
filename = "SST_5_Sept_2016_formatted.csv"
full_path = path + filename
data = pd.read_csv(full_path)
#here i need to convert the column data['Date'] to ticks
</code></pre>
<p>I have uploaded a screenshot my datafile as well:</p>
<p>Screenshot of datafile:</p>
<p><a href="http://i.stack.imgur.com/7FTTc.png" rel="nofollow"><img src="http://i.stack.imgur.com/7FTTc.png" alt="screenshot"></a></p>
<p>Thanks very much for your help.</p>
| 1
|
2016-09-07T05:38:41Z
| 39,362,414
|
<p>First, wrap the converting-to-ticks part in a function, such as</p>
<pre><code>def convert_to_tic(s):
return time.mktime(datetime.datetime.strptime(s, "%d/%m/%Y").timetuple())
</code></pre>
<p>Then, turn your column into a <code>DataFrame</code> and use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow">apply</a>. Something along the lines of what is suggested in the bottom part of <a href="http://stackoverflow.com/questions/12604909/pandas-how-to-change-all-the-values-of-a-column">this answer</a></p>
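Putting the two suggestions together — a hedged sketch (the dd/mm/yyyy format follows the question's own conversion example, and <code>mktime</code> interprets the date in local time):

```python
import datetime
import time

import pandas as pd

def convert_to_tic(s):
    # seconds since the epoch, interpreting s as a local dd/mm/yyyy date
    return time.mktime(datetime.datetime.strptime(s, "%d/%m/%Y").timetuple())

# Hypothetical stand-in for the 'Date' column read from the CSV file.
data = pd.DataFrame({'Date': ['01/12/2015', '02/12/2015']})
data['Ticks'] = data['Date'].apply(convert_to_tic)
print(data)
```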
| 0
|
2016-09-07T06:15:58Z
|
[
"python",
"datetime",
"time"
] |
Pandas: Groupby and cut within a group
| 39,362,151
|
<p>I have a pandas dataframe which looks like this:</p>
<pre><code>userid name date
1 name1 2016-06-04
1 name2 2016-06-05
1 name3 2016-06-04
1 name1 2016-06-06
2 name23 2016-06-01
2 name2 2016-06-01
3 name1 2016-06-03
3 name6 2016-06-03
3 name12 2016-06-03
3 name65 2016-06-04
</code></pre>
<p>So, for each user, I want to retain only the rows with that user's earliest date, and cut the rest.</p>
<p>The final df would be as follows:</p>
<pre><code>userid name date
1 name1 2016-06-04
1 name2 2016-06-04
2 name23 2016-06-01
2 name2 2016-06-01
3 name1 2016-06-03
3 name6 2016-06-03
3 name12 2016-06-03
userid int64
name object
time object
</code></pre>
<p>The <code>type()</code> of data points in the time column is a <code>datetime.date</code></p>
<p>So, the tasks would involve <code>grouping with respect to userid</code>, <code>sorting according to the date</code>, then <code>retaining only the rows with first(/earliest) date</code>.</p>
| 3
|
2016-09-07T05:57:31Z
| 39,362,397
|
<p>You can first sort <code>DataFrame</code> by column <code>date</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow"><code>sort_values</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.apply.html" rel="nofollow"><code>apply</code></a> <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a> - get all rows where is first values:</p>
<pre><code>df = (df.sort_values('date')
        .groupby('userid')
        .apply(lambda x: x[x.date == x.date.iloc[0]])
        .reset_index(drop=True))
print (df)
userid name date
0 1 name1 2016-06-04
1 1 name3 2016-06-04
2 2 name23 2016-06-01
3 2 name2 2016-06-01
4 3 name1 2016-06-03
5 3 name6 2016-06-03
6 3 name12 2016-06-03
</code></pre>
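An equivalent, <code>apply</code>-free formulation keeps the rows whose date equals the per-user minimum via <code>transform</code> — a sketch built on the question's sample data:

```python
import pandas as pd

df = pd.DataFrame({
    'userid': [1, 1, 1, 1, 2, 2, 3, 3, 3, 3],
    'name': ['name1', 'name2', 'name3', 'name1', 'name23',
             'name2', 'name1', 'name6', 'name12', 'name65'],
    'date': pd.to_datetime(
        ['2016-06-04', '2016-06-05', '2016-06-04', '2016-06-06',
         '2016-06-01', '2016-06-01', '2016-06-03', '2016-06-03',
         '2016-06-03', '2016-06-04'])})

# Boolean mask: a row survives if its date is that user's earliest date.
out = df[df['date'] == df.groupby('userid')['date'].transform('min')]
print(out)
```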
| 3
|
2016-09-07T06:14:50Z
|
[
"python",
"python-2.7",
"pandas"
] |
Python - does range() mutate variable values while in recursion?
| 39,362,283
|
<p>While working on understanding string permutations and its implementation in python (regarding to <a href="http://stackoverflow.com/a/20955291/1055601">this post</a>) I stumbled upon something in a <code>for</code> loop using <code>range()</code> I just don't understand.</p>
<p>Take the following code:</p>
<pre><code>def recursion(step=0):
print "Step I: {}".format(step)
for i in range(step, 2):
print "Step II: {}".format(step)
print "Value i: {}".format(i)
print "Call recursion"
print "\n-----------------\n"
recursion(step + 1)
recursion()
</code></pre>
<p>That gives the following output:</p>
<pre><code>root@host:~# python range_test.py
Step I: 0
Step II: 0
Value i: 0
Call recursion
-----------------
Step I: 1
Step II: 1
Value i: 1
Call recursion
-----------------
Step I: 2
Step II: 0 <---- WHAT THE HECK?
Value i: 1
Call recursion
-----------------
Step I: 1
Step II: 1
Value i: 1
Call recursion
-----------------
Step I: 2
root@host:~#
</code></pre>
<p>As you can see the variable <code>step</code> gets a new value after a certain <code>for</code> loop run using <code>range()</code> - see the <code>WHAT THE HECK</code> mark.</p>
<p>Any ideas to lift the mist?</p>
| 1
|
2016-09-07T06:06:11Z
| 39,362,697
|
<p>Your conclusion is <strong>incorrect</strong>: the value of <code>step</code> does not change by using <code>range</code>.
This can be verified as follows:</p>
<pre><code>def no_recursion(step=0):
print "Step I: {}".format(step)
for i in range(step, 2):
print "Step II: {}".format(step)
print "Value i: {}".format(i)
no_recursion(step=2)
</code></pre>
<p>which produces the output:</p>
<pre><code> Step I: 2
</code></pre>
<p>which is expected since <code>range(2,2)</code> returns <code>[]</code>.</p>
<p>The illusion that <code>step</code> changes its value to 0 comes since the function <code>recursion</code> (called with <code>step=2</code>) returns after it prints <code>Step I: 2</code>, then control is returned to function <code>recursion</code> (called with <code>step=1</code>) which immediately returns since its <code>for loop</code> has terminated, and then control is returned to <code>recursion</code> (called with <code>step=0</code>) which continues since it has 1 iteration left, and that prints <code>Step II: 0</code> to console which is no surprise. This can be simpler to observe if we modify the code a little bit (by adding function entry and exit logging) and observe the output:</p>
<pre><code>def recursion(step=0):
print "recursion called with [step = {}]".format(step) # add entry logging
print "Step I: {}".format(step)
for i in range(step, 2):
print "Step II: {}".format(step)
print "Value i: {}".format(i)
print "Call recursion"
print "\n-----------------\n"
recursion(step + 1)
print "--> returned recursion [step = {}]".format(step) # add exit logging
recursion()
</code></pre>
<p>the output produced by this code is :</p>
<pre><code>recursion called with [step = 0]
Step I: 0
Step II: 0
Value i: 0
Call recursion
-----------------
recursion called with [step = 1]
Step I: 1
Step II: 1
Value i: 1
Call recursion
-----------------
recursion called with [step = 2]
Step I: 2
--> returned recursion [step = 2]
--> returned recursion [step = 1]
Step II: 0
Value i: 1
Call recursion
-----------------
recursion called with [step = 1]
Step I: 1
Step II: 1
Value i: 1
Call recursion
-----------------
recursion called with [step = 2]
Step I: 2
--> returned recursion [step = 2]
--> returned recursion [step = 1]
--> returned recursion [step = 0]
</code></pre>
<p>where we can clearly see the order in which the recursion unfolds and observe that the value of <code>step</code> is consistent at each step.</p>
| 1
|
2016-09-07T06:35:13Z
|
[
"python",
"recursion"
] |
Append Values from input to a sublist in Python
| 39,362,297
|
<p>I am trying to append values from an input to sublists in a list.
Each student number and name should be in its own sublist.
ex: </p>
<pre><code>[[123,John],[124,Andrew]]
</code></pre>
<p>Here the outer list holds all the students, and each sublist holds one student's info.</p>
<p>Here is what my code looks like:</p>
<pre><code>listStudents = [[] for _ in range(3)]
infoStudent = [[]]
while True:
choice = int(input("1- Register Student 0- Exit"))
cont = 0
if choice == 1:
snumber = str(input("Student number: "))
infoStudent[cont].append(str(snumber))
name = str(input("Name : "))
infoStudent[cont].append(str(name))
cont+=1
listStudents.append(infoStudent)
if choice == 0:
print("END")
break
print(listStudents)
print(infoStudent)
</code></pre>
<p>If I put on the first loop, <code>snumber = 123</code> , <code>name = john</code> , and <code>snumber = 124</code>, <code>name = andrew</code> on the second time it will show me : <code>[[123,john,124,andrew]]</code> instead of <code>[[123,john], [124,andrew]]</code>.</p>
| 0
|
2016-09-07T06:07:29Z
| 39,362,376
|
<p>Your code can be greatly simplified:</p>
<ol>
<li>You don't need to pre-allocate the lists and sublists. Just have one list, and append the sublists as you receive inputs.</li>
<li>You don't need to cast user input from <code>input</code> to strings, as they are strings already.</li>
</ol>
<p>Here's the modified code:</p>
<pre><code>listStudents = []
while True:
choice = int(input('1- Register Student 0- Exit'))
if choice == 1:
snumber = input('Student number: ')
name = input('Name : ')
listStudents.append([snumber, name])
if choice == 0:
print('END')
break
print(listStudents)
</code></pre>
| 2
|
2016-09-07T06:13:34Z
|
[
"python",
"list",
"append",
"sublist"
] |
Append Values from input to a sublist in Python
| 39,362,297
|
<p>I am trying to append values from an input to sublists in a list.
Each student number and name should be in its own sublist.
ex: </p>
<pre><code>[[123,John],[124,Andrew]]
</code></pre>
<p>Here the outer list holds all the students, and each sublist holds one student's info.</p>
<p>Here is what my code looks like:</p>
<pre><code>listStudents = [[] for _ in range(3)]
infoStudent = [[]]
while True:
choice = int(input("1- Register Student 0- Exit"))
cont = 0
if choice == 1:
snumber = str(input("Student number: "))
infoStudent[cont].append(str(snumber))
name = str(input("Name : "))
infoStudent[cont].append(str(name))
cont+=1
listStudents.append(infoStudent)
if choice == 0:
print("END")
break
print(listStudents)
print(infoStudent)
</code></pre>
<p>If I put on the first loop, <code>snumber = 123</code> , <code>name = john</code> , and <code>snumber = 124</code>, <code>name = andrew</code> on the second time it will show me : <code>[[123,john,124,andrew]]</code> instead of <code>[[123,john], [124,andrew]]</code>.</p>
| 0
|
2016-09-07T06:07:29Z
| 39,362,439
|
<p>Your code can be more pythonic and can make use of some basic error handling as well. Create the inner list inside the while loop and simply append to the outer student list. This should work.</p>
<pre><code>students = []
while True:
try:
choice = int(input("1- Register Student 0- Exit"))
except ValueError:
print("Invalid Option Entered")
continue
    if choice not in (1, 0):
print("Invalid Option Entered")
continue
if choice == 1:
snumber = str(input("Student number: "))
name = str(input("Name : "))
students.append([snumber, name])
elif choice == 0:
print("END")
break
print(students)
</code></pre>
| 0
|
2016-09-07T06:17:57Z
|
[
"python",
"list",
"append",
"sublist"
] |
Python - multithreading with sockets gives random results
| 39,362,359
|
<p>I am really confused about my problem right now. I want to discover an open port over a list of hosts (or subnet).</p>
<p>So first let's show what I've done so far.. </p>
<pre><code>from multiprocessing.dummy import Pool as ThreadPool
from netaddr import IPNetwork as getAddrList
import socket, sys
this = sys.modules[__name__]
def threading(ip):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.settimeout(this.timeout)
failCode = 0
try:
if sock.connect_ex((str(ip), this.port)) == 0:
#port open
this.count += 1
else:
#port closed/filtered
failCode = 1
pass
except Exception:
#host unreachable
failCode = 2
pass
finally:
sock.close()
#set thread num
threads = 64
#set socket timeout
this.timeout = 1
#set ip list
ipList = getAddrList('8.8.8.0/24')
#set port
this.port = 53
#set count
this.count = 0
#threading
Pool = ThreadPool(threads)
Pool.map_async(threading, ipList).get(9999999)
Pool.close()
Pool.join()
#result
print str(this.count)
</code></pre>
<p>The Script works fine without any error. But I'm struggling about what it prints out..</p>
<p>So if I want to scan for example the subnet <code>8.8.8.0/24</code> and discover port <code>53</code>. I know the only server that has an open dns port is <code>8.8.8.8</code> (google-dns).</p>
<p>But when I run my script serveral times the <code>print str(this.count)</code> will randomly (as it seems to me..) return <code>0</code> or <code>1</code>.</p>
<p>What I also know:</p>
<ul>
<li>Scan only <code>8.8.8.8</code> prints always <code>1</code></li>
<li>Use only <em>1</em> thread for <code>/24</code> prints always <code>1</code></li>
<li>Use <em>256</em> threads for <code>/24</code> prints randomly <code>0</code> and <code>1</code></li>
<li>changing the timeout doesn't help</li>
</ul>
<p>So it seems like it has to do with the threading option that causes lags on my computer. But note that my CPU usage is <10% and the network usage is <50%! </p>
<p>But there is also another thing I don't understand..
If <code>print str(this.count)</code> returns <code>0</code> I would normally think it is because the threads are in conflict with each other and the socket doesn't get a connection.. but that isn't true because if <code>this.count</code> equals <code>0</code>, the <code>failCode</code> is set to <code>1</code> (on <code>8.8.8.8</code>)! Which means the port is closed.. but that must be also a bug of my script. I cannot think that this is caused by a lag of the server.. it's google there are no lags..</p>
<p>So additionally we know:</p>
<ul>
<li>output of <code>0</code> is because the server respond that the port is closed</li>
<li>and we are sure the port is definitely open </li>
</ul>
<p>So I also think about that if I have many threads that run <code>if sock.connect_ex((str(ip), this.port)) == 0:</code> at the same time maybe the host <code>8.8.8.8</code> looks in the wrong answer value. Maybe it struggles and look in the response of <code>8.8.8.7</code>. ..hope you know what I mean. </p>
<p>**max_socket_connections is set to 512 on my linux distribution</p>
<p>Anybody experienced with socket threading and can explain me the situation or give me an answer how to prevent this? </p>
<p><strong>Solved:</strong></p>
<p>.. look at the answer ..</p>
| 1
|
2016-09-07T06:12:17Z
| 39,402,783
|
<p><strong>The first mistake was:</strong></p>
<blockquote>
<p>But note that my CPU usage is <10% and the
network usage is <50%!</p>
</blockquote>
<p>Actually it was 400 % network usage; I had mistaken bits for bytes.<br>
Very embarrassing..</p>
<p><strong>And the second mistake was:</strong></p>
<blockquote>
<p>and we are sure the port is definitely open</p>
</blockquote>
<p>Google does actually block the port after ~5 attempts in a short period of time. They release the restriction after a few seconds or minutes. </p>
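<p>If you still need to scan hosts that rate-limit like this, one hedged workaround (my addition, not part of the original diagnosis) is to space the connection attempts out, e.g. with a small throttling decorator around the probe call:</p>

```python
import time

def throttled(calls_per_sec):
    """Space calls out to at most `calls_per_sec` calls per second."""
    min_interval = 1.0 / calls_per_sec

    def wrap(fn):
        last = [0.0]  # time of the previous call

        def inner(*args, **kwargs):
            wait = last[0] + min_interval - time.time()
            if wait > 0:
                time.sleep(wait)
            last[0] = time.time()
            return fn(*args, **kwargs)
        return inner
    return wrap

@throttled(4)          # at most 4 probes per second
def probe(ip):
    # stand-in for the real sock.connect_ex((ip, port)) call
    return ip

start = time.time()
for n in range(5):
    probe('8.8.8.%d' % n)
elapsed = time.time() - start
print(elapsed)
```

The first call goes through immediately; each later call sleeps just long enough to respect the interval, so five probes take at least one second here.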
| 0
|
2016-09-09T02:10:03Z
|
[
"python",
"multithreading",
"sockets",
"connection",
"multiprocessing"
] |
Find Image components using python/PIL
| 39,362,381
|
<p>Is there a function in PIL/Pillow that for a grayscale image, will separate the image into sub images containing the components that make up the original image? For example, a png grayscale image with a set of blocks in them. Here, the images types always have high contrast to the background.</p>
<p>I don't want to use openCV, I just need some general blob detection, and was hoping Pillow/PIL might have something that does that already.</p>
| 1
|
2016-09-07T06:13:56Z
| 39,362,636
|
<p>Yes, it is possible. You can use edge detection filters in PIL.
Sample code:</p>
<pre><code>from PIL import Image, ImageFilter
image = Image.open('/tmp/sample.png').convert('RGB')
image = image.filter(ImageFilter.FIND_EDGES)
image.save('/tmp/output.png')
</code></pre>
<p>sample.png :</p>
<p><a href="http://i.stack.imgur.com/tL5pc.png" rel="nofollow"><img src="http://i.stack.imgur.com/tL5pc.png" alt="enter image description here"></a></p>
<p>output.png:</p>
<p><a href="http://i.stack.imgur.com/JIHem.png" rel="nofollow"><img src="http://i.stack.imgur.com/JIHem.png" alt="enter image description here"></a></p>
| 0
|
2016-09-07T06:31:00Z
|
[
"python",
"pillow"
] |
Find Image components using python/PIL
| 39,362,381
|
<p>Is there a function in PIL/Pillow that for a grayscale image, will separate the image into sub images containing the components that make up the original image? For example, a png grayscale image with a set of blocks in them. Here, the images types always have high contrast to the background.</p>
<p>I don't want to use openCV, I just need some general blob detection, and was hoping Pillow/PIL might have something that does that already.</p>
| 1
|
2016-09-07T06:13:56Z
| 39,363,644
|
<p>Not using PIL, but worth a look I think:
I start with a list of image files that I've imported as a list of <code>numpy</code> arrays, and I create a list of boolean versions where the <code>threshold</code> is <code>> 0</code></p>
<pre><code>from skimage.measure import label, regionprops
import numpy as np
bool_array_list= []
for image in image_files:
bool_array = np.copy(image)
bool_array[np.where(bool_array > 0)] = 1
bool_array_list.append(bool_array)
img_region_list = []
</code></pre>
<p>Then I use label to identify the different areas, using 8-directional connectivity, and <code>regionprops</code> gives me a bunch of metrics, such as size and location.</p>
<pre><code>for item in bool_array_list:
tmp_region_list = regionprops(label(item,
connectivity=2
)
)
img_region_list.append(tmp_region_list)
</code></pre>
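<p>As a small self-contained illustration of those metrics (the toy array below is made up, not real image data): each region's <code>bbox</code> is enough to slice one sub-image per blob out of the original, which is what the question asked for.</p>

```python
import numpy as np
from skimage.measure import label, regionprops

# A toy "grayscale image": two bright blocks on a black background.
image = np.zeros((10, 10), dtype=np.uint8)
image[1:4, 1:4] = 255
image[6:9, 5:9] = 200

# Label the thresholded image with 8-directional connectivity,
# then crop one sub-image per labeled region via its bounding box.
regions = regionprops(label(image > 0, connectivity=2))
# bbox is (min_row, min_col, max_row, max_col)
sub_images = [image[r.bbox[0]:r.bbox[2], r.bbox[1]:r.bbox[3]] for r in regions]
print([s.shape for s in sub_images])
```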
| 0
|
2016-09-07T07:22:38Z
|
[
"python",
"pillow"
] |
ttk.Separator appearing as dot "." when using .pack() layout manager
| 39,362,394
|
<p>My question is similar to <a href="http://stackoverflow.com/questions/37924785/ttk-separator-set-the-length-width">this one</a>, but I'm using the layout manager <code>pack</code> rather than <code>grid</code> so the answer in the alternate thread doesn't work for me.</p>
<p>Code:</p>
<pre><code> iconLabelImage = ttk.Label(labelFrame)
self.iconImage = PhotoImage(file='images\icon.png')
iconLabelImage['image'] = self.iconImage
iconLabelImage.pack(anchor='w')
sep = ttk.Separator(parameterFrame, orient=VERTICAL)
sep.pack(side="right", fill="y")
</code></pre>
<p>The <code>LabelFrame</code> is a child of the <code>parameterFrame</code>.</p>
<p>It doesn't matter what parameters I change I can't seem to get the separator to extend more than a pixel even though it exists in a larger frame.</p>
<p>Any ideas?</p>
| 0
|
2016-09-07T06:14:39Z
| 39,363,498
|
<p>Actually the idea is the same as in the <a href="http://stackoverflow.com/questions/37924785/ttk-separator-set-the-length-width">question</a> you linked above. That means:</p>
<blockquote>
<p>The expand option tells the manager to assign additional space to the widget box. If the parent widget is made larger than necessary to hold all packed widgets, any exceeding space will be distributed among all widgets that have the expand option set to a non-zero value.<br>
-<a href="http://effbot.org/tkinterbook/pack.htm" rel="nofollow">effbot</a></p>
</blockquote>
<p>The point you should focus on here is: <b>non-zero value/weight</b>.<br>
So to solve this problem with the <code>pack</code> method, add the <code>expand=True</code> option.</p>
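<p>A hedged sketch of the fix in context (the widget names follow the question; nothing here is run against a real display):</p>

```python
import tkinter as tk
from tkinter import ttk

def add_separator(parameterFrame):
    # expand=True assigns the parent's leftover vertical space to the
    # separator's parcel, so fill="y" can actually stretch it.
    sep = ttk.Separator(parameterFrame, orient=tk.VERTICAL)
    sep.pack(side="right", fill="y", expand=True)
    return sep
```

You would call <code>add_separator(parameterFrame)</code> where the question created the separator.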
| 1
|
2016-09-07T07:15:45Z
|
[
"python",
"python-3.x",
"tkinter",
"separator",
"ttk"
] |
how to access previous line while we are accessing lines one by one in a loop using python?
| 39,362,464
|
<p>This is my code to print 20 lines at a time, but after printing the first 20 lines I want the next run to resume printing from the 20th line. How can I do that?</p>
<pre><code>f1=open("sample.txt","r")
last_pos=0
line=0
while True:
for i,l in enumerate(f1):
#l=f1.readline()
if l=="":
break
line+=1
print l
if line == 20:
last_pos=f1.tell()
print(last_pos)
break
f1.close()
</code></pre>
| 0
|
2016-09-07T06:19:09Z
| 39,362,514
|
<p>To jump back to a given position in the file, you can use the <code>seek()</code> function together with the offset you previously saved with <code>tell()</code>.</p>
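<p>A minimal runnable sketch (the sample file is generated on the fly). Note it uses <code>readline()</code> instead of iterating the file, because iterating a file buffers ahead, which makes <code>tell()</code> unreliable (in Python 3 it can even raise an error):</p>

```python
import os
import tempfile

# Build a small sample file to work on.
path = os.path.join(tempfile.mkdtemp(), 'sample.txt')
with open(path, 'w') as f:
    for i in range(50):
        f.write('line %d\n' % i)

# First pass: print 20 lines, then remember the byte offset with tell().
with open(path, 'r') as f:
    for _ in range(20):
        line = f.readline()
        if not line:
            break
        print(line, end='')
    last_pos = f.tell()

# Second pass: seek() back to the saved offset and keep going from line 20.
with open(path, 'r') as f:
    f.seek(last_pos)
    resumed = f.readline()
print(resumed, end='')
```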
| 0
|
2016-09-07T06:22:34Z
|
[
"python",
"file"
] |
How can I set the default position of a trackbar in OpenCV?
| 39,362,497
|
<p>In OpenCV, using the createTrackbar function, how can someone set the default slider position to a maximum?</p>
<p>I have several sliders, some representing minimum values, and some representing maximum values. It would be nice, if the sliders for a max value, started out at the maximum (255), rather than the minimum (0). </p>
<p>I have looked around on the <a href="http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html" rel="nofollow">OpenCV documentation pages</a>, but I have not located a solution.</p>
<pre><code>import cv2
import numpy as np
def nothing(x):
pass
# Create a black image, a window
#img = np.zeros((300,512,3), np.uint8)
cv2.namedWindow('image')
cv2.namedWindow('hsv')
cv2.namedWindow('masq')
cap = cv2.VideoCapture(0)
# create trackbars for color change
cv2.createTrackbar('R-low','image',0,255,nothing)
cv2.createTrackbar('R-high','image',0,255,nothing)
cv2.createTrackbar('G-low','image',0,255,nothing)
cv2.createTrackbar('G-high','image',0,255,nothing)
cv2.createTrackbar('B-low','image',0,255,nothing)
cv2.createTrackbar('B-high','image',0,255,nothing)
while(1):
ret, img = cap.read()
# Convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
cv2.imshow('image',img)
k = cv2.waitKey(1) & 0xFF
if k == 27:
break
# get current positions of four trackbars
rl = cv2.getTrackbarPos('R-low','image')
rh = cv2.getTrackbarPos('R-high','image')
gl = cv2.getTrackbarPos('G-low','image')
gh = cv2.getTrackbarPos('G-high','image')
bl = cv2.getTrackbarPos('B-low','image')
bh = cv2.getTrackbarPos('B-high','image')
lower = np.array([rl,gl,bl])
upper = np.array([rh,gh,bh])
print(rl)
img[:] = [bl,gl,rl]
# Threshold the HSV image to get only certain colors
mask = cv2.inRange(hsv, lower, upper)
res = cv2.bitwise_and(img,img, mask= mask)
cv2.imshow('image',img)
cv2.imshow('masq',mask)
cv2.imshow('hsv',hsv)
cv2.destroyAllWindows()
</code></pre>
<p>On load, it ends up looking like this:</p>
<p><a href="http://i.stack.imgur.com/6psbg.png" rel="nofollow"><img src="http://i.stack.imgur.com/6psbg.png" alt="enter image description here"></a></p>
| 1
|
2016-09-07T06:21:04Z
| 39,362,553
|
<p>Just use the value field:</p>
<blockquote>
<p>Python: cv.CreateTrackbar(trackbarName, windowName, value, count,
onChange) → None </p>
<p>Parameters:<br>
trackbarname – Name of the created trackbar. </p>
<p>winname – Name of the window that will be used as a parent
of the created trackbar. </p>
<p>value – Optional pointer to an integer
variable whose value reflects the position of the slider. Upon
creation, the slider position is defined by this variable.</p>
<p>count –
Maximal position of the slider. The minimal position is always 0.</p>
<p>onChange – Pointer to the function to be called every time the slider
changes position. </p>
<p>This function should be prototyped as void
Foo(int,void*), where the first parameter is the trackbar position
and the second parameter is the user data (see the next parameter). If
the callback is the NULL pointer, no callbacks are called, but only
value is updated.</p>
<p>userdata – User data that is passed as is to the
callback. It can be used to handle trackbar events without using
global variables.</p>
</blockquote>
<p><a href="http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html" rel="nofollow">Source</a></p>
| 2
|
2016-09-07T06:25:26Z
|
[
"python",
"opencv"
] |
How can I set the default position of a trackbar in OpenCV?
| 39,362,497
|
<p>In OpenCV, using the createTrackbar function, how can someone set the default slider position to a maximum?</p>
<p>I have several sliders, some representing minimum values, and some representing maximum values. It would be nice, if the sliders for a max value, started out at the maximum (255), rather than the minimum (0). </p>
<p>I have looked around on the <a href="http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html" rel="nofollow">OpenCV documentation pages</a>, but I have not located a solution.</p>
<pre><code>import cv2
import numpy as np
def nothing(x):
pass
# Create a black image, a window
#img = np.zeros((300,512,3), np.uint8)
cv2.namedWindow('image')
cv2.namedWindow('hsv')
cv2.namedWindow('masq')
cap = cv2.VideoCapture(0)
# create trackbars for color change
cv2.createTrackbar('R-low','image',0,255,nothing)
cv2.createTrackbar('R-high','image',0,255,nothing)
cv2.createTrackbar('G-low','image',0,255,nothing)
cv2.createTrackbar('G-high','image',0,255,nothing)
cv2.createTrackbar('B-low','image',0,255,nothing)
cv2.createTrackbar('B-high','image',0,255,nothing)
while(1):
ret, img = cap.read()
# Convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
cv2.imshow('image',img)
k = cv2.waitKey(1) & 0xFF
if k == 27:
break
# get current positions of four trackbars
rl = cv2.getTrackbarPos('R-low','image')
rh = cv2.getTrackbarPos('R-high','image')
gl = cv2.getTrackbarPos('G-low','image')
gh = cv2.getTrackbarPos('G-high','image')
bl = cv2.getTrackbarPos('B-low','image')
bh = cv2.getTrackbarPos('B-high','image')
lower = np.array([rl,gl,bl])
upper = np.array([rh,gh,bh])
print(rl)
img[:] = [bl,gl,rl]
# Threshold the HSV image to get only certain colors
mask = cv2.inRange(hsv, lower, upper)
res = cv2.bitwise_and(img,img, mask= mask)
cv2.imshow('image',img)
cv2.imshow('masq',mask)
cv2.imshow('hsv',hsv)
cv2.destroyAllWindows()
</code></pre>
<p>On load, it ends up looking like this:</p>
<p><a href="http://i.stack.imgur.com/6psbg.png" rel="nofollow"><img src="http://i.stack.imgur.com/6psbg.png" alt="enter image description here"></a></p>
| 1
|
2016-09-07T06:21:04Z
| 39,362,565
|
<p>I think you didn't pay much attention when reading the documentation; there you can find:<br>
<strong>value</strong> – Optional pointer to an integer variable whose value reflects the position of the slider. Upon creation, the slider position is defined by this variable.<br>
<strong>count</strong> – Maximal position of the slider. The minimal position is always 0.<br></p>
<p>As I understand this, you just need to set <code>value</code> to the same value as <code>count</code>.</p>
| 1
|
2016-09-07T06:26:01Z
|
[
"python",
"opencv"
] |
Class and external method call
| 39,362,580
|
<p>I am going through a data structures course and I am not understanding how a Class can call a method that's in another Class.<br>
The code below has 2 classes: <code>Printer</code> and <code>Task</code>.<br>
Notice that class <code>Printer</code> has a method called <code>startNext</code>, and this has a variable <code>self.timeRemaining</code> that gets assigned the result of <code>newTask.getPages() * 60/self.pagerate</code>.</p>
<p>How can <code>newTaks</code> reference the <code>getPages()</code> method from the Task class?<br>
The code that passes this object to the Printer class never references the Task class.<br>
The code works, since this is what the course gives out but, I just cannot understand how that method is accessed.<br>
Code:</p>
<pre><code>from pythonds.basic.queue import Queue
import random
class Printer:
def __init__(self, ppm):
self.pagerate = ppm
self.currentTask = None
self.timeRemaining = 0
def tick(self):
if self.currentTask != None:
self.timeRemaining = self.timeRemaining - 1
if self.timeRemaining <= 0:
self.currentTask = None
def busy(self):
if self.currentTask != None:
return True
else:
return False
def startNext(self, newTask):
self.currentTask = newTask
self.timeRemaining = newTask.getPages() * 60/self.pagerate
class Task:
def __init__(self, time):
self.timeStamp = time
self.pages = random.randrange(1, 21)
def getStamp(self):
return self.timeStamp
def getPages(self):
return self.pages
def waitTime(self, currentTime):
return currentTime - self.timeStamp
def simulation(numSeconds, pagesPerMinute):
labPrinter = Printer(pagesPerMinute)
printQueue = Queue()
waitingTimes = []
for currentSecond in range(numSeconds):
if newPrintTask():
task = Task(currentSecond)
printQueue.enqueue(task)
if (not labPrinter.busy()) and (not printQueue.isEmpty()):
nextTask = printQueue.dequeue()
waitingTimes.append(nextTask.waitTime(currentSecond))
labPrinter.startNext(nextTask)
labPrinter.tick()
averageWait = sum(waitingTimes)/len(waitingTimes)
print "Average Wait %6.2f secs %3d tasks remaining." % (averageWait, printQueue.size())
def newPrintTask():
num = random.randrange(1, 181)
if num == 180:
return True
else:
return False
for i in range(10):
simulation(3600, 5)
</code></pre>
| 2
|
2016-09-07T06:27:15Z
| 39,363,055
|
<p>If I understand your question correctly, it is because you are adding the Task object itself to the Queue's list. When you get that list item back, you get the very same Task object again:</p>
<pre><code>#creating a Task object and adding it to the Queue's list
task = Task(currentSecond)
printQueue.enqueue(task)
class Queue:
def __init__(self):
#list of task objects
self.items = []
def enqueue(self, item):
#you are inserting Task object item to list
self.items.insert(0,item)
def dequeue(self):
#returns task object
return self.items.pop()
</code></pre>
<p>So you can then call the startNext() method of the Printer class, because the dequeue() method returns a Task object.</p>
<p>And because the object passed to startNext() is of type Task, you can call the getPages() method on it.</p>
<p>Is this a sufficient answer?</p>
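<p>As a stripped-down illustration of the same mechanism (not the course code): the list stores a reference to the object itself, so whatever you put in is exactly what you get back, methods and all.</p>

```python
class Task(object):
    def getPages(self):
        return 7

queue = []
queue.append(Task())     # the list stores a reference to the object itself...
item = queue.pop()       # ...so popping returns the very same Task object
pages = item.getPages()  # and its methods are still available
print(pages)
```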
| 1
|
2016-09-07T06:53:42Z
|
[
"python"
] |
python download images are not saved to the correct directory
| 39,362,591
|
<p>When i use python 2.7 to download images from a website, the code as follows:</p>
<pre><code>pic = requests.get(src[0])
f = open("pic\\"+str(i) + '.jpg', "wb")
f.write(pic.content)
f.close()
i += 1
</code></pre>
<p>I want to save the picture into pic directory, but I find that images is saved in the same directory with the name like <code>pic\1.jpg</code>. Is this a bug?</p>
<p>In Windows, it's right, but on Ubuntu, it's an error!</p>
| 0
|
2016-09-07T06:28:00Z
| 39,362,645
|
<p><a href="http://www.howtogeek.com/181774/why-windows-uses-backslashes-and-everything-else-uses-forward-slashes/" rel="nofollow">Windows uses backslashes for file paths</a>, but Ubuntu uses forward slashes. This is why your save path with a backslash doesn't work on Ubuntu.</p>
<p>You probably want to use <a href="https://docs.python.org/2/library/os.path.html#os.path.join" rel="nofollow"><code>os.path.join</code></a> to make your path OS agnostic:</p>
<pre><code>import os
path = os.path.join('pic', '{}.jpg'.format(i))
f = open(path, 'wb')
...
</code></pre>
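<p>A fuller hedged sketch (the directory-creation step is my addition, not part of the original answer: a missing <code>pic</code> folder would also make the save fail once the separator is fixed):</p>

```python
import os
import tempfile

os.chdir(tempfile.mkdtemp())      # keep the demo self-contained

i = 1
# open() will not create the directory for you, so make sure it exists.
if not os.path.isdir('pic'):
    os.makedirs('pic')

path = os.path.join('pic', '{}.jpg'.format(i))
with open(path, 'wb') as f:
    f.write(b'\xff\xd8')          # stand-in for pic.content
print(path)
```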
| 2
|
2016-09-07T06:31:46Z
|
[
"python",
"file"
] |
python download images are not saved to the correct directory
| 39,362,591
|
<p>When i use python 2.7 to download images from a website, the code as follows:</p>
<pre><code>pic = requests.get(src[0])
f = open("pic\\"+str(i) + '.jpg', "wb")
f.write(pic.content)
f.close()
i += 1
</code></pre>
<p>I want to save the picture into pic directory, but I find that images is saved in the same directory with the name like <code>pic\1.jpg</code>. Is this a bug?</p>
<p>In Windows, it's right, but on Ubuntu, it's an error!</p>
| 0
|
2016-09-07T06:28:00Z
| 39,362,875
|
<pre><code>import os
f = open(os.sep.join(['pic', str(i) + '.jpg']), 'wb')
</code></pre>
<p>Now the line should be os agnostic</p>
| 1
|
2016-09-07T06:44:53Z
|
[
"python",
"file"
] |
what does "()" syntax mean in python
| 39,362,622
|
<p>I am learning Exception Handling in python and came across following code snippet : an exception class:</p>
<pre><code>from flask import jsonify
class InvalidUsage(Exception):
status_code = 400
def __init__(self, message, status_code=None, payload=None):
Exception.__init__(self)
self.message = message
if status_code is not None:
self.status_code = status_code
self.payload = payload
def to_dict(self):
rv = dict(self.payload or ())
rv['message'] = self.message
return rv
</code></pre>
<p>In this code, what does this line actually mean</p>
<pre><code>rv = dict(self.payload or ())
</code></pre>
<p>inside <code>to_dict</code> method? What does <code>()</code> stand for?</p>
| 2
|
2016-09-07T06:29:58Z
| 39,362,658
|
<p>From python shell:</p>
<pre><code>>>> type(())
<type 'tuple'>
</code></pre>
<p>So it's a <a href="https://docs.python.org/2/library/functions.html?highlight=tuple#tuple" rel="nofollow">tuple</a>.</p>
| 3
|
2016-09-07T06:32:31Z
|
[
"python"
] |
what does "()" syntax mean in python
| 39,362,622
|
<p>I am learning Exception Handling in python and came across following code snippet : an exception class:</p>
<pre><code>from flask import jsonify
class InvalidUsage(Exception):
status_code = 400
def __init__(self, message, status_code=None, payload=None):
Exception.__init__(self)
self.message = message
if status_code is not None:
self.status_code = status_code
self.payload = payload
def to_dict(self):
rv = dict(self.payload or ())
rv['message'] = self.message
return rv
</code></pre>
<p>In this code, what does this line actually mean</p>
<pre><code>rv = dict(self.payload or ())
</code></pre>
<p>inside <code>to_dict</code> method? What does <code>()</code> stand for?</p>
| 2
|
2016-09-07T06:29:58Z
| 39,362,703
|
<p><code>()</code> stands for an empty tuple. On the other hand, <code>or</code> here acts like <a href="https://en.wikipedia.org/wiki/Null_coalescing_operator" rel="nofollow">null coalescing operator</a> in <code>self.payload or ()</code> where the entire expression returns an empty tuple if <code>self.payload</code> evaluates to false.</p>
<pre><code>>>> False or 5
5
>>> [] or (1, 2, 3)
(1, 2, 3)
</code></pre>
| 5
|
2016-09-07T06:35:40Z
|
[
"python"
] |
what does "()" syntax mean in python
| 39,362,622
|
<p>I am learning Exception Handling in python and came across following code snippet : an exception class:</p>
<pre><code>from flask import jsonify
class InvalidUsage(Exception):
status_code = 400
def __init__(self, message, status_code=None, payload=None):
Exception.__init__(self)
self.message = message
if status_code is not None:
self.status_code = status_code
self.payload = payload
def to_dict(self):
rv = dict(self.payload or ())
rv['message'] = self.message
return rv
</code></pre>
<p>In this code, what does this line actually mean</p>
<pre><code>rv = dict(self.payload or ())
</code></pre>
<p>inside <code>to_dict</code> method? What does <code>()</code> stand for?</p>
| 2
|
2016-09-07T06:29:58Z
| 39,362,751
|
<p>Basically what is happening is that as @turkus answered: </p>
<blockquote>
<p>From python shell:</p>
<pre><code>type(())
<type 'tuple'>
</code></pre>
<p>So it's a tuple.</p>
</blockquote>
<p>What it does is check whether <code>self.payload</code> is <code>None</code> (or otherwise falsy).
If it is, the variable <code>rv</code> becomes an empty <code>dict</code>; if not, <code>rv</code> is a <code>dict</code> built from <code>self.payload</code>.</p>
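<p>A quick self-contained check of that line's behaviour (the payload values here are made up):</p>

```python
def to_dict(payload):
    rv = dict(payload or ())   # dict(()) == {} -- an empty tuple builds an empty dict
    rv['message'] = 'boom'
    return rv

print(to_dict(None))             # payload missing -> just the message
print(to_dict({'code': 400}))    # payload present -> copied, then extended
```

Note that <code>dict(payload)</code> copies the payload, so the caller's dict is never mutated.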
| 1
|
2016-09-07T06:38:33Z
|
[
"python"
] |
Showing columns in pandas
| 39,362,624
|
<p>I have a term x document matrix in pandas (made from a CSV) of the form:</p>
<pre><code>cheese, milk, bread, butter
0,2,1,0
1,1,0,0
1,1,1,1
0,1,0,1
</code></pre>
<p>So if I say '<em>give me the columns at index 1 and 2 where the values of a given row are both > 0</em>'. </p>
<p>I want to end up with this:</p>
<pre><code>cheese, milk,
[omitted]
1,1
1,1
[omitted]
</code></pre>
<p>This way, I can sum the <code>number of rows</code> / <code>number of documents</code> and arrive at a frequent itemset i.e. <code>(cheese, milk) --[2/4 support]</code></p>
<p>I've tried this approach as indicated on a seperate stackoverflow thread:</p>
<pre><code>fil_df.select([fil_df.columns[1] > 0 and fil_df.columns[2] > 0], [fil_df.columns[1], fil_df.columns[2]])
</code></pre>
<p>But it is not working for me sadly. I'm getting the error:</p>
<blockquote>
<p>TypeError: unorderable types: str() > int()</p>
</blockquote>
<p>Which I don't know how to fix as I can't make my row's cells be <code>integers</code> when I make the dataframe from a csv.</p>
| 2
|
2016-09-07T06:30:22Z
| 39,362,715
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow"><code>iloc</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>#get 1. and 2. columns
subset = df.iloc[:, [0,1]]
print (subset)
cheese milk
0 0 2
1 1 1
2 1 1
3 0 1
#mask
print ((subset > 0))
cheese milk
0 False True
1 True True
2 True True
3 False True
#get all values where True by rows
print ((subset > 0).all(1))
0 False
1 True
2 True
3 False
dtype: bool
#get first and second columns names
print (df.columns[[0,1]])
Index(['cheese', 'milk'], dtype='object')
print (df.ix[(subset > 0).all(1), df.columns[[0,1]]])
cheese milk
1 1 1
2 1 1
</code></pre>
| 1
|
2016-09-07T06:36:30Z
|
[
"python",
"pandas",
"indexing",
"condition",
"multiple-columns"
] |
Showing columns in pandas
| 39,362,624
|
<p>I have a term x document matrix in pandas (made from a CSV) of the form:</p>
<pre><code>cheese, milk, bread, butter
0,2,1,0
1,1,0,0
1,1,1,1
0,1,0,1
</code></pre>
<p>So if I say '<em>give me the columns at index 1 and 2 where the values of a given row are both > 0</em>'. </p>
<p>I want to end up with this:</p>
<pre><code>cheese, milk,
[omitted]
1,1
1,1
[omitted]
</code></pre>
<p>This way, I can sum the <code>number of rows</code> / <code>number of documents</code> and arrive at a frequent itemset i.e. <code>(cheese, milk) --[2/4 support]</code></p>
<p>I've tried this approach as indicated on a seperate stackoverflow thread:</p>
<pre><code>fil_df.select([fil_df.columns[1] > 0 and fil_df.columns[2] > 0], [fil_df.columns[1], fil_df.columns[2]])
</code></pre>
<p>But it is not working for me sadly. I'm getting the error:</p>
<blockquote>
<p>TypeError: unorderable types: str() > int()</p>
</blockquote>
<p>Which I don't know how to fix as I can't make my row's cells be <code>integers</code> when I make the dataframe from a csv.</p>
| 2
|
2016-09-07T06:30:22Z
| 39,363,070
|
<pre><code>df.loc[[1, 2], df.loc[[1, 2]].gt(0).all()]
</code></pre>
<p><a href="http://i.stack.imgur.com/qHCu1.png" rel="nofollow"><img src="http://i.stack.imgur.com/qHCu1.png" alt="enter image description here"></a></p>
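<p>Rebuilding the question's frame shows what the one-liner selects (a hedged, runnable restatement of the expression above):</p>

```python
import pandas as pd

df = pd.DataFrame({'cheese': [0, 1, 1, 0],
                   'milk':   [2, 1, 1, 1],
                   'bread':  [1, 0, 1, 0],
                   'butter': [0, 0, 1, 1]},
                  columns=['cheese', 'milk', 'bread', 'butter'])

# df.loc[[1, 2]] picks rows 1 and 2; .gt(0).all() keeps only the columns
# where both of those rows are > 0 -- here 'cheese' and 'milk'.
result = df.loc[[1, 2], df.loc[[1, 2]].gt(0).all()]
print(result)
```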
| 1
|
2016-09-07T06:54:25Z
|
[
"python",
"pandas",
"indexing",
"condition",
"multiple-columns"
] |
List out of index after deleting elements- python
| 39,362,632
|
<p>I'm trying to implement a simple SSTF program and trying to iterate over the queue of processes that have come in and subsequently delete anything that I have taken into account but it says "Out of index error".</p>
<pre><code>for i in xrange(0,len(queue)):
for j in xrange(0,(len(queue))):
a = abs(queue[j]-initial_position)
if(min>a):
min = a
pos = j
initial_position=queue[pos]
final_queue.append(initial_position)
del(queue[pos])
</code></pre>
<p>The complete error message: </p>
<pre><code>initial_position=queue[pos]
IndexError: list index out of range
</code></pre>
<p>I'm really confused.</p>
| -1
|
2016-09-07T06:30:52Z
| 39,363,017
|
<p>My guess is that you didn't initialize <code>min</code> and <code>pos</code> properly (e.g. <code>min</code> does not start large enough, or is not reset on each outer pass), so the condition never fires and <code>pos</code> keeps a stale value. Then <code>queue[pos]</code> points past the end of the shrinking list.</p>
| 1
|
2016-09-07T06:51:29Z
|
[
"python",
"algorithm",
"python-2.7"
] |
What is the most elegant way to create a dictionary with the value as lists?
| 39,362,707
|
<p>So I have something like</p>
<pre><code>retval = {}
# ...
# some code here to fetch data
# ...
for row in cursor.fetchall():
    if row.someid not in retval:
        retval[row.someid] = [dict(zip(columns,rows))]
    else:
        retval[row.someid].append(dict(zip(columns,rows)))
</code></pre>
<p>which yields:</p>
<pre><code>retval = {
1: [{'someid': 1, 'samplefield': 'valueX', ... },
{'someid': 1, 'samplefield': 'valueY', ... }],
2: [{'someid': 2, 'samplefield': 'valueX', ... }]
}
</code></pre>
<p>I feel like there is a much more pythonic way to attain the result that I need.</p>
<p>To be precise, is there a way to reduce these lines of code:</p>
<pre><code>if row.someid not in retval:
retval[row.someid] = [dict(zip(columns,rows))]
else:
    retval[row.someid].append(dict(zip(columns,rows)))
</code></pre>
<p>to be in a single line?</p>
<p><strong>Answer:</strong>
It was in the <a href="https://docs.python.org/2/library/collections.html#defaultdict-examples" rel="nofollow">docs</a> all along! Thank you to <a href="http://stackoverflow.com/users/2797476/christian-ternus">Christian Ternus</a> & <a href="http://stackoverflow.com/users/1648033/chthonicdaemon">chthonicdaemon</a> for pointing me in the right direction. I updated this cause I found that there could be multiple ways of doing it based on the docs.</p>
<pre><code>from collections import defaultdict
retval = defaultdict(list) ## Or retval = defaultdict(lambda: []) based on my accepted answer.
# ...
# some code here to fetch data
# ...
for row in cursor.fetchall():
    retval[row.someid].append(dict(zip(columns,rows)))
</code></pre>
<p><strong><em>OR</em></strong> </p>
<pre><code>retval = {}
# ...
# some code here to fetch data
# ...
for row in cursor.fetchall():
    retval.setdefault(row.someid, []).append(dict(zip(columns,rows)))
</code></pre>
<p>Hopes this helps you as much as it helped me!</p>
| 3
|
2016-09-07T06:35:53Z
| 39,362,753
|
<p>Try using <code>defaultdict</code> from the builtin <code>collections</code> module:</p>
<pre class="lang-python prettyprint-override"><code>from collections import defaultdict
retval = defaultdict(lambda: [])
# ...
# some code here to fetch data
# ...
for row in cursor.fetchall():
    retval[row.someid].append(dict(zip(columns,rows)))
</code></pre>
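<p>A runnable sketch with stand-in rows (the <code>namedtuple</code> here fakes the DB cursor rows that the elided fetch code would produce):</p>

```python
from collections import defaultdict, namedtuple

# Fake cursor rows -- the real ones come from the elided fetch code.
Row = namedtuple('Row', ['someid', 'samplefield'])
columns = ('someid', 'samplefield')
fetched = [Row(1, 'valueX'), Row(1, 'valueY'), Row(2, 'valueX')]

retval = defaultdict(list)
for row in fetched:
    # Missing keys get a fresh list automatically, so one line replaces
    # the whole if/else dance from the question.
    retval[row.someid].append(dict(zip(columns, row)))

print(dict(retval))
```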
| 3
|
2016-09-07T06:38:46Z
|
[
"python"
] |
indexing issues when extracting specified latitude and longitude
| 39,362,967
|
<p>I want to extract a specified latitude and longitude from a <code>netCDF</code> file. In the past, I have never had issues with extracting the data. I am assuming that the reason it is not working this time is because I read in my data differently (see below)</p>
<pre><code>data = netCDF4.Dataset('/home/eburrows/metr173/regional_cm/Lab1/air.mon.mean.nc', mode = 'r')
lat = data.variables['lat'][:] #90 through -90
lon = data.variables['lon'][:] #0 through 360
air_temp = data.variables['air'][:] #degrees C
air_temp[air_temp>10000] = n.NaN
</code></pre>
<p>Previously I have been able to do the following:</p>
<pre><code>us_lat = n.ravel(n.where((lat>=___)&(lat<=___)))
us_lon = n.ravel(n.where((lon>=___)&(lon<=___)))
us_annual_temp = n.nanmean(air_temp[:,us_lat, us_lon],0)
</code></pre>
<p>This time however, it is returning a <code>Type Error</code> stating that <code>list indices must be integers, not tuple</code>.</p>
<p>I then forced the <code>tuple</code> into a <code>list</code> by changing <code>us_lat</code> and <code>us_lon</code> to have <code>list(n.ravel(n.where(...))</code>, but it still returns the same error. In the past I have been able to index this way and am not entirely sure why it is not working this time around.</p>
| 0
|
2016-09-07T06:48:32Z
| 39,373,646
|
<p>The result stored in <code>lat_us</code> by the <code>where</code> command is a tuple of index arrays, not the actual indices needed for slicing <code>air_temp</code>. To fix this, you need to take the first element of <code>lat_us</code> to access the array of latitude indices.</p>
<p>For instance,</p>
<pre><code>>>> import numpy as np
>>> lat = np.arange(-90,91,10)
>>> lat
array([-90, -80, -70, -60, -50, -40, -30, -20, -10, 0, 10, 20, 30,
40, 50, 60, 70, 80, 90])
>>> lat_us = np.where((lat >= -30) & (lat <= 30))
>>> lat_us
(array([ 6, 7, 8, 9, 10, 11, 12]),)
>>> lat_us[0]
array([ 6, 7, 8, 9, 10, 11, 12])
</code></pre>
<p>So the line </p>
<pre><code>us_lat = n.ravel(n.where((lat>=___)&(lat<=___)))
</code></pre>
<p>Should be modified to (note: I don't think you need to ravel this either):</p>
<pre><code>us_lat = n.where((lat>=___) & (lat<=___))[0]
</code></pre>
<p>Also, you are currently reading in only one dimension for the variable <code>air_temp</code>, but it appears to be 3D (time x lat x lon). So you need to modify the read-in of this variable to include all three dimensions:</p>
<pre><code>air_temp = data.variables['air'][:,:,:]
</code></pre>
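<p>A self-contained sketch of the fix on synthetic data (the array sizes and the lat/lon bounds here are made up). One extra caveat: passing two index arrays at once pairs them element-wise, so to keep the full lat × lon box you can broadcast the index arrays (equivalently, use <code>np.ix_</code>):</p>

```python
import numpy as np

# Synthetic (time x lat x lon) data standing in for the netCDF variable
lat = np.arange(-90, 91, 10)
lon = np.arange(0, 360, 10)
air_temp = np.random.rand(12, lat.size, lon.size)

# Take the first element of the np.where result to get index arrays
us_lat = np.where((lat >= 25) & (lat <= 50))[0]
us_lon = np.where((lon >= 235) & (lon <= 295))[0]

# Broadcast the index arrays so the whole lat x lon box is selected
region = air_temp[:, us_lat[:, None], us_lon[None, :]]
annual_mean = np.nanmean(region, axis=0)
```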
| 0
|
2016-09-07T15:14:58Z
|
[
"python",
"list",
"indexing",
"tuples",
"typeerror"
] |
How to use `scipy.optimize.leastsq` to optimize in the joint least squares direction?
| 39,363,046
|
<p>I want to be able to move along a gradient in the <em>joint least squares direction</em>.</p>
<p>I thought I could do this using <code>scipy.optimize.leastsq</code> (<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html</a>). (Perhaps I'm wrong, maybe there's an easier way to do this?).</p>
<p>I'm having difficulty understanding what to use, and how to move in the joint least squares direction, while still increasing the parameters. </p>
<p>What I need to do is input something like this:</p>
<pre><code>[1,0]
</code></pre>
<p>And, have it move along the least squares direction, which would mean, increasing either or both values <code>1</code> and <code>0</code>, but doing so such that the sum of the squared values is as small as possible. </p>
<p>This would mean <code>[1,0]</code> would increase to <code>[1, <something barely greater than 0>]</code>, and eventually would reach <code>[1,1]</code>. At which point, both <code>1</code>'s would increase at the same rate.</p>
<p>How would I program this? It seems to me like <code>scipy.optimize.leastsq</code> would be of use here, but I cannot figure out how to use it? </p>
<p>Thank you.</p>
| 0
|
2016-09-07T06:53:05Z
| 39,366,922
|
<p>I don't think you need <code>scipy.optimize.leastsq</code> because your problem can be solved analytically. At any moment, the gradient of the function <code>np.sum(x**2)</code>, where <code>x</code> is an array, is <code>2*x</code>. So, if you want the smallest possible increase in the sum of squares, you have to increase the component where the gradient is smallest, which you can find with <code>np.argmin</code>. Here is a simple solution:</p>
<pre><code>def g(x):
    return np.array(2*x)

x = np.array([1.,0.])
for _ in range(200):
    eps = np.zeros_like(x)
    index = np.argmin(g(x))
    eps[index] = 0.01 #or whatever
    x += eps
    print(x)
</code></pre>
<p>When multiple indices have the same value, <code>np.argmin</code> returns the first occurrence, so you will encounter certain oscillations that you can reduce by shrinking <code>eps</code>.</p>
| 1
|
2016-09-07T10:05:22Z
|
[
"python",
"numpy",
"optimization",
"scipy",
"least-squares"
] |
Brightness/Histogram normalization for time lapse images
| 39,363,183
|
<p>I'm working in a laboratory and we often make time lapse series (image every hour) of stem cells. The current idea is to put all frames together and make a video showing these growing cells (similar to this <a href="https://www.youtube.com/watch?v=zSuLDqRfhXo" rel="nofollow">youtube video</a>), which can be done simply with OpenCV + Python.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import os
import cv2
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi',fourcc, 20.0, (640,480))
timelapse_folder = '../myTimeLapse/'
for file in os.listdir(timelapse_folder):
    frame = cv2.imread(timelapse_folder+file, 0)
    out.write(frame)
out.release()
</code></pre>
<p>But we have the problem that all images vary a little bit in brightness, so we get some flickering in our output video.</p>
<p>I'm not allowed to upload the videos but here are some simple examples generated with gimp to visualize the problem:</p>
<p>That's the video I get from the frames</p>
<p><a href="http://i.stack.imgur.com/ZLYUq.gif" rel="nofollow"><img src="http://i.stack.imgur.com/ZLYUq.gif" alt="enter image description here"></a></p>
<p>and that's my desired video (it would be also great to minimize the flickering instead of removing it completely)</p>
<p><a href="http://i.stack.imgur.com/Gtu7C.gif" rel="nofollow"><img src="http://i.stack.imgur.com/Gtu7C.gif" alt="enter image description here"></a></p>
<p>Is there a way to adjust the histogram or brightness over all images (or maybe between 2 images) to remove those flickering using OpenCV?</p>
<p>Thanks for every idea or hint! </p>
<p><em>Edit: The gif sequence produced by Andrew's idea (Answer below)</em></p>
<p><a href="http://i.stack.imgur.com/8oGio.gif" rel="nofollow"><img src="http://i.stack.imgur.com/8oGio.gif" alt="enter image description here"></a></p>
| 0
|
2016-09-07T06:59:40Z
| 39,363,906
|
<p>Are those images RGB or gray values? I would just perform a normalization in your loop after reading each frame:</p>
<pre><code>frame = frame/np.max(frame)
</code></pre>
<p>For gray values, each image should then have values between 0 and 1, but depending on what your images look like, you could also try other normalizations, e.g. using <code>np.median</code> or <code>np.mean</code> instead of <code>np.max</code>.</p>
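<p>A hedged sketch of that per-frame step on a tiny synthetic frame (the pixel values are made up). Since <code>cv2.VideoWriter</code> expects <code>uint8</code> data, it is useful to scale back to the 0–255 range after normalizing:</p>

```python
import numpy as np

# Tiny synthetic grayscale frame standing in for a real capture
frame = np.array([[10, 20], [30, 40]], dtype=np.float64)

# Scale so the brightest pixel maps to 255, then convert back to the
# uint8 range that cv2.VideoWriter expects
normalized = (frame / np.max(frame) * 255).astype(np.uint8)
```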
| 0
|
2016-09-07T07:37:05Z
|
[
"python",
"opencv",
"timelapse"
] |
Brightness/Histogram normalization for time lapse images
| 39,363,183
|
<p>I'm working in a laboratory and we often make time lapse series (image every hour) of stem cells. The current idea is to put all frames together and make a video showing these growing cells (similar to this <a href="https://www.youtube.com/watch?v=zSuLDqRfhXo" rel="nofollow">youtube video</a>), which can be done simply with OpenCV + Python.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import os
import cv2
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi',fourcc, 20.0, (640,480))
timelapse_folder = '../myTimeLapse/'
for file in os.listdir(timelapse_folder):
    frame = cv2.imread(timelapse_folder+file, 0)
    out.write(frame)
out.release()
</code></pre>
<p>But we have the problem that all images vary a little bit in brightness, so we get some flickering in our output video.</p>
<p>I'm not allowed to upload the videos but here are some simple examples generated with gimp to visualize the problem:</p>
<p>That's the video I get from the frames</p>
<p><a href="http://i.stack.imgur.com/ZLYUq.gif" rel="nofollow"><img src="http://i.stack.imgur.com/ZLYUq.gif" alt="enter image description here"></a></p>
<p>and that's my desired video (it would be also great to minimize the flickering instead of removing it completely)</p>
<p><a href="http://i.stack.imgur.com/Gtu7C.gif" rel="nofollow"><img src="http://i.stack.imgur.com/Gtu7C.gif" alt="enter image description here"></a></p>
<p>Is there a way to adjust the histogram or brightness over all images (or maybe between 2 images) to remove those flickering using OpenCV?</p>
<p>Thanks for every idea or hint! </p>
<p><em>Edit: The gif sequence produced by Andrew's idea (Answer below)</em></p>
<p><a href="http://i.stack.imgur.com/8oGio.gif" rel="nofollow"><img src="http://i.stack.imgur.com/8oGio.gif" alt="enter image description here"></a></p>
| 0
|
2016-09-07T06:59:40Z
| 39,370,737
|
<p>If your data is in a 3D array, you shouldn't need to loop over it to do this. With 5 images of, say 256 x 256, you should be able to construct an array that is <code>arr.shape == (256, 256, 5)</code>. My initial comment was a little off I think, but the below example should do it.</p>
<pre><code>target_array = None
for file in os.listdir(timelapse_folder):
    frame = cv2.imread(timelapse_folder+file, 0)
    if target_array is None:  # truth-testing a multi-element NumPy array raises ValueError,
        target_array = np.asarray(frame)  # so compare against None instead
    else:
        target_array = np.dstack((target_array, frame))

target_array = target_array / np.max(target_array)
#target_array *= 255 #If you want an intensity value with a more common range here
for idx in xrange(target_array.shape[2]):
    out.write(target_array[:, :, idx])
</code></pre>
<p>EDIT:
I used <a href="http://stackoverflow.com/questions/18357065/get-mean-of-2d-slice-of-a-3d-array-in-numpy">this page</a> to iron out some kinks with addressing the 3D array</p>
| 1
|
2016-09-07T13:05:15Z
|
[
"python",
"opencv",
"timelapse"
] |
Why python request working but C# request not working?
| 39,363,348
|
<p>First, I tried posting the request in the Postman tool:</p>
<pre><code>{"AO":"ECHO"}
</code></pre>
<p>It works fine. Then I wrote the same request in C#, but it does not work. I also wrote the request again in Python, and it works well. But my project is in C#, and I don't want to run a Python script from C# at all.</p>
<p>==== Python =========</p>
<pre><code>import httplib
import json
import sys
data = '{"AO":"ECHO"}'
headers = {"Content-Type": "application/json", "Connection": "Keep-Alive" }
conn = httplib.HTTPConnection("http://10.10.10.1",1040)
conn.request("POST", "/guardian", data, headers)
response = conn.getresponse()
print response.status, response.reason
print response.msg
</code></pre>
<p>==== C# ============</p>
<pre><code>var httpWebRequest = (HttpWebRequest)WebRequest.Create("http://10.10.10.1:1040/guardian");
httpWebRequest.Method = "POST";
httpWebRequest.ContentType = "application/json";

using (var streamWriter = new StreamWriter(httpWebRequest.GetRequestStream()))
{
    string json = "{\"AO\":\"ECHO\"}";
    streamWriter.Write(json);
    streamWriter.Flush();
    streamWriter.Close();

    var httpResponse = (HttpWebResponse)httpWebRequest.GetResponse();
    using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
    {
        var result = streamReader.ReadToEnd();
        Console.WriteLine(result);
    }
}
</code></pre>
<p>I tried setting "ContentLength", but I still get a timeout exception. I also tried RestSharp; it does not time out, but it returns null. Can anyone please help?</p>
<pre><code> var client = new RestClient("http://10.10.10.1:1040/guardian");
var request = new RestRequest();
request.Method = Method.POST;
request.AddHeader("Content-Type", "application/json");
request.Parameters.Clear();
request.RequestFormat = DataFormat.Json;
request.AddBody(new { AO = "ECHO" });
var response = client.Execute(request);
var content = response.Content;
</code></pre>
<p>Please help me. I don't understand why it works fine in Python but not in C#. I have tried many ways of making the request in C#, but each one fails with a timeout exception.</p>
| 2
|
2016-09-07T07:08:05Z
| 39,403,931
|
<p>Python will automatically add the Content-Length HTTP header.
<a href="https://docs.python.org/2/library/httplib.html#httpconnection-objects" rel="nofollow">https://docs.python.org/2/library/httplib.html#httpconnection-objects</a></p>
<p>I think you might have to set this header manually in C#. </p>
<pre><code>httpWebRequest.ContentLength = json.Length;
</code></pre>
<p>Depending on the server, you may have to set UserAgent as well.</p>
<pre><code>httpWebRequest.UserAgent=".NET Framework Test Client";
</code></pre>
| 0
|
2016-09-09T04:35:34Z
|
[
"c#",
"python",
"postman"
] |
Best algorithm to compare list or dict
| 39,363,462
|
<p>I'm dealing with a somewhat complicated data set in Python. I'm a novice Python coder. The data set is a collection of Date, Title, Contents and URL.</p>
<p>Conceptually, it will be like this.</p>
<pre><code>1st scraping runs, then I get,
[9/6 9:00, title1, content1]
[9/6 9:00, title2, content2]
[9/6 8:22, title3, content3]
[9/6 11:01, title4, content4]
...
2nd scraping runs, then I get,
[9/6 13:05, title5, content5]
[9/6 12:13, title6, content6]
[9/6 9:00, title1, content1]
[9/6 14:21, title4, content4'] ---> This is updated of content4
...
</code></pre>
<p>I can run the scraping code. What I want to do is compare the output of the 1st and 2nd scraping runs and show only the diff:</p>
<pre><code>[9/6 13:05, title5, content5]
[9/6 12:13, title6, content6]
[9/6 10:21, title4', content4']
</code></pre>
<p>I don't believe I have to compare "content".
I can get the diff by "date" and "title" only.</p>
<p>I spent hours but cannot think of an elegant approach to make this work. What would be the best approach here? Basically, I'm thinking of storing the output as a pickle and then comparing the 2nd scrape run's output on the fly. However, I'm not sure how to take two elements from one list and compare them with two elements from the second list simultaneously. It does not seem to be a simple for loop...</p>
<p>Or can this be done with a dict? I don't think so... but any suggestion is welcome.</p>
<p>It will be much appreciated if experienced folks could comment.</p>
| 1
|
2016-09-07T07:14:07Z
| 39,364,341
|
<p>Did you try something like this?</p>
<pre><code>>>> common_elements = []
>>> a = [['date', 'title1', 'content1'], ['date2', 'title2', 'content2']]
>>> b = [['date3', 'title3', 'content3'], ['date2', 'title2', 'content2']]
>>> for element in a:
...     if element in b:
...         common_elements.append(element)
...
>>> common_elements
[['date2', 'title2', 'content2']]
</code></pre>
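<p>Note that <code>element in b</code> scans the whole list each time, which gets slow for big scrapes. Since the question says date and title are enough to identify an entry, a set of <code>(date, title)</code> keys gives constant-time lookups. A hedged sketch with made-up rows:</p>

```python
# Hypothetical scrape results in [date, title, content] form
first = [["9/6 9:00", "title1", "content1"],
         ["9/6 11:01", "title4", "content4"]]
second = [["9/6 13:05", "title5", "content5"],
          ["9/6 9:00", "title1", "content1"],
          ["9/6 14:21", "title4", "content4'"]]

# Key on (date, title) only, as the question suggests
seen = {(date, title) for date, title, _ in first}
diff = [row for row in second if (row[0], row[1]) not in seen]
```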
| 0
|
2016-09-07T08:00:09Z
|
[
"python",
"python-3.x",
"compare"
] |
Best algorithm to compare list or dict
| 39,363,462
|
<p>I'm dealing with a somewhat complicated data set in Python. I'm a novice Python coder. The data set is a collection of Date, Title, Contents and URL.</p>
<p>Conceptually, it will be like this.</p>
<pre><code>1st scraping runs, then I get,
[9/6 9:00, title1, content1]
[9/6 9:00, title2, content2]
[9/6 8:22, title3, content3]
[9/6 11:01, title4, content4]
...
2nd scraping runs, then I get,
[9/6 13:05, title5, content5]
[9/6 12:13, title6, content6]
[9/6 9:00, title1, content1]
[9/6 14:21, title4, content4'] ---> This is updated of content4
...
</code></pre>
<p>I can run the scraping code. What I want to do is compare the output of the 1st and 2nd scraping runs and show only the diff:</p>
<pre><code>[9/6 13:05, title5, content5]
[9/6 12:13, title6, content6]
[9/6 10:21, title4', content4']
</code></pre>
<p>I don't believe I have to compare "content".
I can get the diff by "date" and "title" only.</p>
<p>I spent hours but cannot think of an elegant approach to make this work. What would be the best approach here? Basically, I'm thinking of storing the output as a pickle and then comparing the 2nd scrape run's output on the fly. However, I'm not sure how to take two elements from one list and compare them with two elements from the second list simultaneously. It does not seem to be a simple for loop...</p>
<p>Or can this be done with a dict? I don't think so... but any suggestion is welcome.</p>
<p>It will be much appreciated if experienced folks could comment.</p>
| 1
|
2016-09-07T07:14:07Z
| 39,365,130
|
<p>Try this for comparing <code>list</code>s in Python 3:</p>
<pre><code>a= [['9/6 9:00', 'title1', 'content1'],
['9/6 9:00', 'title2', 'content2'],
['9/6 8:22', 'title3', 'content3'],
['9/6 11:01','title4', 'content4']]
b=[['9/6 13:05', 'title5', 'content5'],
['9/6 12:13', 'title6', 'content6'],
['9/6 9:00', 'title1', 'content1'],
['9/6 14:21', 'title4', 'content4']]
for i in b:
    if i not in a:
        print(i)
</code></pre>
<p>Output:</p>
<pre><code>['9/6 13:05', 'title5', 'content5']
['9/6 12:13', 'title6', 'content6']
['9/6 14:21', 'title4', 'content4']
</code></pre>
<p>Here it directly compares one whole list to another, e.g. <code>['9/6 11:01','title4', 'content4']</code> to <code>['9/6 14:21', 'title4', 'content4']</code>, so if any single element differs, that <code>list</code> is shown. If you instead want to compare individual elements of one <code>list</code> to elements of another <code>list</code>, you have to apply a different method.</p>
<p><strong>Alternate Method</strong> (Which does the same but using <em>list comprehension</em>) :</p>
<pre><code>print(*[i for i in b if i not in a],sep='\n')
</code></pre>
<p>It also gives the same output:</p>
<pre><code>['9/6 13:05', 'title5', 'content5']
['9/6 12:13', 'title6', 'content6']
['9/6 14:21', 'title4', 'content4']
</code></pre>
<blockquote>
<p>Here the <em>list comprehension</em> part is only <code>[i for i in b if i not in a]</code>; the <code>sep='\n'</code> is for displaying every element on its own line. To understand <em>list comprehensions</em>, see this document: <a href="http://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/" rel="nofollow">Python List Comprehensions: Explained Visually</a></p>
</blockquote>
<p>If you can clarify which differences should be printed, I can help further; it is not clear from the question how the output <strong>9/6 10:21</strong> in the line <code>[9/6 10:21, title4', content4']</code> is obtained.</p>
| 1
|
2016-09-07T08:41:24Z
|
[
"python",
"python-3.x",
"compare"
] |
Best algorithm to compare list or dict
| 39,363,462
|
<p>I'm dealing with a somewhat complicated data set in Python. I'm a novice Python coder. The data set is a collection of Date, Title, Contents and URL.</p>
<p>Conceptually, it will be like this.</p>
<pre><code>1st scraping runs, then I get,
[9/6 9:00, title1, content1]
[9/6 9:00, title2, content2]
[9/6 8:22, title3, content3]
[9/6 11:01, title4, content4]
...
2nd scraping runs, then I get,
[9/6 13:05, title5, content5]
[9/6 12:13, title6, content6]
[9/6 9:00, title1, content1]
[9/6 14:21, title4, content4'] ---> This is updated of content4
...
</code></pre>
<p>I can run the scraping code. What I want to do is compare the output of the 1st and 2nd scraping runs and show only the diff:</p>
<pre><code>[9/6 13:05, title5, content5]
[9/6 12:13, title6, content6]
[9/6 10:21, title4', content4']
</code></pre>
<p>I don't believe I have to compare "content".
I can get the diff by "date" and "title" only.</p>
<p>I spent hours but cannot think of an elegant approach to make this work. What would be the best approach here? Basically, I'm thinking of storing the output as a pickle and then comparing the 2nd scrape run's output on the fly. However, I'm not sure how to take two elements from one list and compare them with two elements from the second list simultaneously. It does not seem to be a simple for loop...</p>
<p>Or can this be done with a dict? I don't think so... but any suggestion is welcome.</p>
<p>It will be much appreciated if experienced folks could comment.</p>
| 1
|
2016-09-07T07:14:07Z
| 39,365,712
|
<pre><code>a = [['9/6 9:00', 'title1', 'content1'],
['9/6 9:00', 'title2', 'content2'],
['9/6 8:22', 'title3', 'content3'],
['9/6 11:01','title4', 'content4']]
b = [['9/6 13:05', 'title5', 'content5'],
['9/6 12:13', 'title6', 'content6'],
['9/6 9:00', 'title1', 'content1'],
['9/6 14:21', 'title4', 'content4']]
[i for i in b if i not in a]
</code></pre>
<p>You can use a generator expression as well.</p>
| 0
|
2016-09-07T09:08:41Z
|
[
"python",
"python-3.x",
"compare"
] |
Filter a smaller file using another huge file
| 39,363,480
|
<p>I have a huge csv file with about 10^9 lines where each line has a pair of ids such as:</p>
<pre><code>IDa,IDb
IDb,IDa
IDc,IDd
</code></pre>
<p>Call this file1. I have another much smaller csv file with about 10^6 lines in the same format. Call this file2. </p>
<p>I want to simply find the lines in file2 which contain at least one ID that exists somewhere in file1.</p>
<p>Is there a fast way to do this? I don't mind if it is in awk, python or perl.</p>
| 1
|
2016-09-07T07:14:58Z
| 39,363,699
|
<p>I would actually use <code>sqlite</code> for something like that. You could create a new database in the same directory as the two files with <code>sqlite3 test.sqlite</code> and then do something like this:</p>
<pre><code>create table file1(id1, id2);
create table file2(id1, id2);
.separator ","
.import file1.csv file1
.import file2.csv file2
WITH all_ids AS (
SELECT id1 FROM file1 UNION SELECT id2 FROM file1
)
SELECT * FROM file2 WHERE id1 IN all_ids OR id2 IN all_ids;
</code></pre>
<p>The advantage of using <code>sqlite</code> is that you can manage the memory more intelligently than a simple script that you could write in some scripting language.</p>
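<p>The same approach can be driven from Python's built-in <code>sqlite3</code> module; here is a hedged sketch with an in-memory database and toy ids in place of the real CSV imports:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE file1(id1, id2);
    CREATE TABLE file2(id1, id2);
""")
conn.executemany("INSERT INTO file1 VALUES (?, ?)",
                 [("IDa", "IDb"), ("IDc", "IDd")])
conn.executemany("INSERT INTO file2 VALUES (?, ?)",
                 [("IDd", "IDw"), ("IDx", "IDy")])

# Collect every id from file1, then keep file2 rows sharing any of them
rows = conn.execute("""
    WITH all_ids AS (SELECT id1 AS id FROM file1
                     UNION SELECT id2 FROM file1)
    SELECT id1, id2 FROM file2
    WHERE id1 IN (SELECT id FROM all_ids)
       OR id2 IN (SELECT id FROM all_ids)
""").fetchall()
```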
| 4
|
2016-09-07T07:25:34Z
|
[
"python",
"perl",
"awk"
] |
Filter a smaller file using another huge file
| 39,363,480
|
<p>I have a huge csv file with about 10^9 lines where each line has a pair of ids such as:</p>
<pre><code>IDa,IDb
IDb,IDa
IDc,IDd
</code></pre>
<p>Call this file1. I have another much smaller csv file with about 10^6 lines in the same format. Call this file2. </p>
<p>I want to simply find the lines in file2 which contain at least one ID that exists somewhere in file1.</p>
<p>Is there a fast way to do this? I don't mind if it is in awk, python or perl.</p>
| 1
|
2016-09-07T07:14:58Z
| 39,364,010
|
<p>In perl,</p>
<pre><code>use strict;
use warnings;
use autodie;
# read file2
open my $file2, '<', 'file2';
chomp( my @file2 = <$file2> );
close $file2;
# record file2 line numbers each id is found on
my %id;
for my $line_number (0..$#file2) {
for my $id ( split /,/, $file2[$line_number] ) {
push @{ $id{$id} }, $line_number;
}
}
# look for those ids in file1
my @use_line;
open my $file1, '<', 'file1';
while ( my $line = <$file1> ) {
chomp $line;
for my $id ( split /,/, $line ) {
if ( exists $id{$id} ) {
@use_line[ @{ $id{$id} } ] = @{ $id{$id} };
}
}
}
close $file1;
# print lines whose ids were found
print "$_\n" for @file2[ grep defined, @use_line ];
</code></pre>
| 1
|
2016-09-07T07:42:34Z
|
[
"python",
"perl",
"awk"
] |
Filter a smaller file using another huge file
| 39,363,480
|
<p>I have a huge csv file with about 10^9 lines where each line has a pair of ids such as:</p>
<pre><code>IDa,IDb
IDb,IDa
IDc,IDd
</code></pre>
<p>Call this file1. I have another much smaller csv file with about 10^6 lines in the same format. Call this file2. </p>
<p>I want to simply find the lines in file2 which contain at least one ID that exists somewhere in file1.</p>
<p>Is there a fast way to do this? I don't mind if it is in awk, python or perl.</p>
| 1
|
2016-09-07T07:14:58Z
| 39,364,025
|
<pre><code>$ cat > file2 # make test file2
IDb,IDa
$ awk -F, 'NR==FNR{a[$1];a[$2];next} ($1 in a&&++a[$1]==1){print $1} ($2 in a&&++a[$2]==1){print $2}' file2 file1 > file3
$ cat file3 # file2 ids in file1 put to file3
IDa
IDb
$ awk -F, 'NR==FNR{a[$1];next} ($1 in a)||($2 in a){print $0}' file3 file2
IDb,IDa
</code></pre>
| 5
|
2016-09-07T07:43:09Z
|
[
"python",
"perl",
"awk"
] |
Filter a smaller file using another huge file
| 39,363,480
|
<p>I have a huge csv file with about 10^9 lines where each line has a pair of ids such as:</p>
<pre><code>IDa,IDb
IDb,IDa
IDc,IDd
</code></pre>
<p>Call this file1. I have another much smaller csv file with about 10^6 lines in the same format. Call this file2. </p>
<p>I want to simply find the lines in file2 which contain at least one ID that exists somewhere in file1.</p>
<p>Is there a fast way to do this? I don't mind if it is in awk, python or perl.</p>
| 1
|
2016-09-07T07:14:58Z
| 39,364,064
|
<p>Sample files:</p>
<pre><code>cat f1
IDa,IDb
IDb,IDa
IDc,IDd
cat f2
IDt,IDy
IDb,IDj
</code></pre>
<p>Awk solution: </p>
<pre><code>awk -F, 'NR==FNR {a[$1]=$1;b[$2]=$2;next} ($1 in a)||($2 in b)' f1 f2
IDb,IDj
</code></pre>
<p>This stores the first and second columns of file1 in arrays a and b, then prints a line of the second file if its first column was seen in a or its second column was seen in b.</p>
| 0
|
2016-09-07T07:45:14Z
|
[
"python",
"perl",
"awk"
] |
Filter a smaller file using another huge file
| 39,363,480
|
<p>I have a huge csv file with about 10^9 lines where each line has a pair of ids such as:</p>
<pre><code>IDa,IDb
IDb,IDa
IDc,IDd
</code></pre>
<p>Call this file1. I have another much smaller csv file with about 10^6 lines in the same format. Call this file2. </p>
<p>I want to simply find the lines in file2 which contain at least one ID that exists somewhere in file1.</p>
<p>Is there a fast way to do this? I don't mind if it is in awk, python or perl.</p>
| 1
|
2016-09-07T07:14:58Z
| 39,372,191
|
<p>Using these input files for testing:</p>
<pre><code>$ cat file1
IDa,IDb
IDb,IDa
IDc,IDd
$ cat file2
IDd,IDw
IDx,IDc
IDy,IDz
</code></pre>
<p>If file1 can fit in memory:</p>
<pre><code>$ awk -F, 'NR==FNR{a[$1];a[$2];next} ($1 in a) || ($2 in a)' file1 file2
IDd,IDw
IDx,IDc
</code></pre>
<p>If not but file2 can fit in memory:</p>
<pre><code>$ awk -F, '
ARGIND==2 {
    if ($1 in inBothFiles) {
        inBothFiles[$1] = 1
    }
    if ($2 in inBothFiles) {
        inBothFiles[$2] = 1
    }
    next
}
ARGIND==1 {
    inBothFiles[$1] = 0
    inBothFiles[$2] = 0
    next
}
ARGIND==3 {
    if (inBothFiles[$1] || inBothFiles[$2]) {
        print
    }
}
' file2 file1 file2
IDd,IDw
IDx,IDc
</code></pre>
<p>The above uses GNU awk for ARGIND - with other awks just add a <code>FNR==1{ARGIND++}</code> block at the start.</p>
<p>I have the <code>ARGIND==2</code> block (i.e. the part that processes the 2nd argument which in this case is the 10^9 <code>file1</code>) listed first for efficiency so we don't unnecessarily test <code>ARGIND==1</code> for every line in the much larger file.</p>
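<p>The same fits-in-memory idea in Python is a set of ids built from file1 and a single pass over file2; a hedged sketch using in-memory strings instead of the real files:</p>

```python
import io

# In-memory stand-ins for the real CSV files
file1 = io.StringIO("IDa,IDb\nIDb,IDa\nIDc,IDd\n")
file2 = io.StringIO("IDd,IDw\nIDx,IDc\nIDy,IDz\n")

# Collect every id appearing anywhere in file1
ids = set()
for line in file1:
    ids.update(line.strip().split(","))

# Keep file2 lines that share at least one id with file1
matches = [line.strip() for line in file2
           if not ids.isdisjoint(line.strip().split(","))]
```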
| 2
|
2016-09-07T14:09:27Z
|
[
"python",
"perl",
"awk"
] |
client=MongoClient() syntax error using pyspark
| 39,363,491
|
<p>I'm trying to run analytics through Spark using Pyspark. Each event, once analyzed, should be sent to a Mongo database.
The Python code is in the file myFile.py.
However, when running this command:</p>
<pre><code>spark-2.0.0/bin/spark-submit myFile.py
</code></pre>
<p>I have the following error:</p>
<pre><code>client=MongoClient('localhost' , 27017)
^
SyntaxError: invalid syntax
</code></pre>
<p>That said, I have tried adding spaces around the "=", but it did not change anything. I checked for unwanted spaces before the line and for wrong indentation, but everything looks fine. Moreover, pymongo is imported at the beginning of the file.</p>
<p>I found this configuration online, looking on tutorials and checking in the pymongo documentation.</p>
<p>Can anyone help on this matter?
I apologize for any error in my English, it is not my native language.</p>
<p><strong>Edit:</strong>
Here is the surrounding code:</p>
<pre><code>if "User dn cn" in part2:
usrDnCnMsg = part2.split(",")
usrDnCnElements = list()
for element in usrDnCnMsg:
usrDnCnElements.append(element.split("=")[1].lstrop().rstrip()
# sending event to DB
client=MongoClient('localhost' , 27017)
db = client.TEST
found = db.user.findOne({"usrName":usrDnCnElements[0]})
if found == None:
result = db.user.insertOne({"usrName":usrDnCnElements[0],"usrGrp":"null"})
result = db.usrDnCn.insertOne({"timestamp":timestamp,"usrID":db.user.findOne({"usrName":usrDnCnElements[0]})["_id"],"country":usrDnCnElements[1],"ou":usrDnCnElements[2],"dn":usrDnCnElements[3],"dn1":usrDnCnElements[4]})
# close connection
client.close()
</code></pre>
<p>This code is itself inside another if.</p>
| 0
|
2016-09-07T07:15:20Z
| 39,367,071
|
<p>So I'm answering my own question.</p>
<p>There were some typos in my code that produced error messages which did not point directly at them. Also, my pymongo installation had not been done correctly.
I corrected the typos, reinstalled pymongo, and it seems to work now.</p>
| 0
|
2016-09-07T10:12:53Z
|
[
"python",
"mongodb",
"apache-spark",
"pymongo"
] |
Difference between adding path to PYTHONPATH and installing your own module
| 39,363,544
|
<p>I'm working on a python project that contains a number of routines I use repeatedly. Instead of rewriting code all the time, I just want to update my package and import it; however, it's nowhere near done and is constantly changing. I host the package on a repo so that colleagues on various machines (UNIX + Windows) can pull it into their local repos and use it.</p>
<p>It sounds like I have two options, either I can keeping installing the package after every change or I can just add the folder directory to my system's path. If I change the package, does it need to be reinstalled? <a href="http://www.kennethreitz.org/essays/repository-structure-and-python" rel="nofollow">I'm using this blog post as inspiration</a>, but the author there doesn't stress the issue of a continuously changing package structure, so I'm not sure how to deal with this.</p>
<p>Also, if I wanted to split the project into multiple files and bundle it as a package, at what level in the directory structure does the PYTHONPATH need to point? To the main project directory, or to the sample/ directory?</p>
<pre><code>README.rst
LICENSE
setup.py
requirements.txt
sample/__init__.py
sample/core.py
sample/helpers.py
docs/conf.py
docs/index.rst
tests/test_basic.py
tests/test_advanced.py
</code></pre>
<p>In this example, I want to be able to just import the package itself and call the modules within it like this:</p>
<pre><code>import sample
arg = sample.helpers.foo()
out = sample.core.bar(arg)
return out
</code></pre>
<p>Where helpers contains a function called foo and core contains a function called bar.</p>
| 1
|
2016-09-07T07:18:15Z
| 39,363,685
|
<p>PYTHONPATH is a valid way of doing this, but in my (personal) opinion it's more useful if you have a dedicated place where you keep your python packages. Like <code>/opt/pythonpkgs</code> or so.</p>
<p>For projects where I want it to be installed and also I have to keep developing, I use <code>develop</code> instead of <code>install</code> in setup.py:</p>
<p>When installing the package, don't do:</p>
<pre><code>python setup.py install
</code></pre>
<p>Rather, do:</p>
<pre><code>python setup.py develop
</code></pre>
<p>What this does is create a symlink/shortcut (I believe it's called an egg-link in python) in the python libs (where the packages are installed) pointing to your module's directory. Hence, as it's a shortcut/symlink/egg-link, whenever you change a python file, the change will be reflected the next time you import that file.</p>
<p>Note: Using this, if you delete the repository/directory you ran this command from, the package will cease to exist (as its only a shortcut)</p>
<p>The equivalent in <code>pip</code> is -e (for editable):</p>
<pre><code>pip install -e .
</code></pre>
<p>Instead of:</p>
<pre><code>pip install .
</code></pre>
| 0
|
2016-09-07T07:25:09Z
|
[
"python",
"module",
"packaging"
] |