title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Python calling C DLL function output issues and restype | 39,173,454 | <p>I was having this issue with my C++ DLL function. My C++ DLL function description goes like this: </p>
<pre><code>// Combine the lower 32 bits and upper 8 bits of a 40 bit
// integer into a 64 bit double precision floating point number
//
// Inputs:
// - Lower: lower 32 bits of the 40 bit timestamp
// - Upper: upper 8 bits of the 40 bit timestamp
//
// Outputs:
// - function return value is the 64 bit double precision floating point number
//
extern "C" double WINAPI Make64BitDoubleIntegerFromUpperLower(uint32_t Lower, uint8_t Upper);
</code></pre>
<p>When I make a simple C++ client test code, the function works and I was getting the results I expect. I'm trying to make a python client that will use this function (and others). I've set up everything I need to run the function in python (libraries, import locations etc.), and I put in the correct parameters, however when I ran a test code script for my python client I always got a value of 0 after the function. I found that the python function was returning an Int and not a c_double. I looked online and I found there is a way to fix this using the 'restype' function. I used it like so: </p>
<pre><code>def pc_make_64_bit_double_integer_from_upper_lower(self, Lower, Upper):
self.opxclient_dll.OPX_Make64BitDoubleIntegerFromUpperLower.restype = c_double
return self.opxclient_dll.OPX_Make64BitDoubleIntegerFromUpperLower(Lower, Upper)
</code></pre>
<p>Now everything works and I'm getting the right output. This is great but I'm also confused as to how this is working. My concern is I have many dll functions I want to implement in my python client. Most of them that just fill in arrays that I pass in by reference, with c_long and c_int values which I will convert to python types. These functions then return a 0 (got data) or -1 (no data) which python easily can read as an int. When should I use the restype function? Is it necessary for all dll functions in order to get the right output values?</p>
<p>Your help is greatly appreciated. Thank you!</p>
| 0 | 2016-08-26T19:26:15Z | 39,173,780 | <p>Just like with any C function, Python (through ctypes) needs to know what the return type is in order to convert it to its own type.
By default, ctypes assumes the return type is an int - something that isn't always the case.</p>
<p>Whenever the return type is not int, you need to state it explicitly via <code>restype</code> so Python can translate and convert the value correctly.</p>
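For illustration, here is a small self-contained sketch of the same mechanism using a C function that CPython itself exports (Py_GetVersion returns a char *, not an int); functions from your own DLL behave the same way once restype is set:

```python
import ctypes
import sys

# Py_GetVersion() returns a C string (char *). Without setting restype,
# ctypes would assume the return value is a C int and hand back a
# (possibly truncated) pointer value instead of the string.
get_version = ctypes.pythonapi.Py_GetVersion
get_version.restype = ctypes.c_char_p

version = get_version()
print(version.decode())  # same string as sys.version
```

This is only a sketch of the mechanism; for the OPX functions in the question, you would set <code>argtypes</code> as well so the 32-bit and 8-bit arguments are converted correctly.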
| 1 | 2016-08-26T19:52:35Z | [
"python",
"c++",
"c",
"dll"
] |
Remove third-party installed Python on Mac? | 39,173,459 | <p>So I installed Python 2.7.11 a few months ago; now the class I'm about to take uses 3. So I installed 3 and it works fine. I also uninstalled 2.7.11 by going to Applications and removing it, but going to the terminal and typing <code>which python</code>, the directory is <code>/Library/Frameworks/Python.framework/Versions/2.7/bin/python</code>, which means it's still not removed.</p>
<p>What should I do...leave it alone? I only need Python 3, but this is bothering me a bit.</p>
<p>Thanks.</p>
| 0 | 2016-08-26T19:26:24Z | 39,173,554 | <p>This doesn't answer the question in the post's title, but leave Python 2 as the default <code>python</code>. If you want to run Python 3, you run <code>python3</code> or maybe <code>python3.4</code> or <code>python3.5</code>, depending on your installation. The system and other third-party software depend on <code>python</code> being Python 2. If you change it, you may encounter puzzles down the road. </p>
<p>I'm not sure if having a third-party Python 2 is good (OS X ships with Python 2 already), but it should be fine.</p>
<p>Edit: Sorry, didn't see there was already an answer. It was posted as I was typing.</p>
| 1 | 2016-08-26T19:33:22Z | [
"python",
"osx",
"python-2.7"
] |
How to create new calculated columns in pandas and remove original? | 39,173,540 | <p>I have a pandas dataframe 'df' that has:</p>
<pre><code>name a b
greg 1 1
george 2 2
giles 3 3
giovanni 4 5
</code></pre>
<p>I want to run this dataframe through a calculate function to create new columns c and d, so that I get the following resulting dataframe:</p>
<pre><code>name c d
greg 11 21
george 12 22
giles 13 23
giovanni 14 25
</code></pre>
<p>Currently, my code is as follows:</p>
<p>My calculate function:</p>
<pre><code>def calculate(row):
return row['a']+10, row['b']+20
</code></pre>
<p>My function to modify the dataframe:</p>
<pre><code>df['c'] = df.apply(calculate, axis=1)
</code></pre>
<p>The resulting dataframe I am getting is this:</p>
<pre><code>name a b c
greg 1 1 (11, 21)
george 2 2 (12, 22)
giles 3 3 (13, 23)
giovanni 4 5 (14, 25)
</code></pre>
<p>How do I get my dataframe to look like: </p>
<pre><code>name c d
greg 11 21
george 12 22
giles 13 23
giovanni 14 25
</code></pre>
| 1 | 2016-08-26T19:32:23Z | 39,173,583 | <p>Row iteration is very slow. You are much better off doing something like the following:</p>
<pre><code>df['c'] = df.a + 10
df['d'] = df.b + 20
df.drop(['a', 'b'], axis='columns', inplace=True)
</code></pre>
<p>To implement your method, however, you would need to do this:</p>
<pre><code>df['c'], df['d'] = zip(*df.apply(calculate, axis=1))
>>> df
name a b c d
0 greg 1 1 11 21
1 george 2 2 12 22
2 giles 3 3 13 23
3 giovanni 4 5 14 25
</code></pre>
| 0 | 2016-08-26T19:35:05Z | [
"python",
"pandas"
] |
How to combine an index "prefix" to the data in a Pandas series? Does one create a numpy array? | 39,173,560 | <p>I have a pandas series:</p>
<pre><code>import pandas
ser = pd.Series(...)
ser
idx1 23421535123415135
idx2 98762981356281343
idx3 394123942916498173
idx4 41234189756983411
...
idx50 123412938479283419
</code></pre>
<p>I would like to combine the index, appending it to the front of each data row. The output I'm looking for is:</p>
<pre><code>idx1 23421535123415135
idx2 98762981356281343
idx3 394123942916498173
idx4 41234189756983411
...
idx50 123412938479283419
</code></pre>
<p>This could either be a pandas Series (where the array is naturally indexed) or a numpy array. </p>
<p>For a dataframe, in order to combine two columns, you use:</p>
<pre><code>df["newcolumn"] = df[['columnA','columnB']].astype(str).sum(axis=1)
</code></pre>
<p>but I'm confused about how to accomplish this with a pandas Series.</p>
| 1 | 2016-08-26T19:33:41Z | 39,173,803 | <p>Say you start with your Series:</p>
<pre><code> In [34]: s = pd.Series(data=[1, 2], index=['idx0', 'idx1'])
</code></pre>
<p>Then you can do </p>
<pre><code>In [35]: t = s.reset_index()
In [36]: t['index'].astype(str) + ' ' + t[0].astype(str)
Out[36]:
0 idx0 1
1 idx1 2
dtype: object
</code></pre>
<p>Note that if you don't need to introduce the space in between, it's shorter:</p>
<pre><code>In [37]: s.reset_index().astype(str).sum(axis=1)
Out[37]:
0 idx01
1 idx12
dtype: object
</code></pre>
| 1 | 2016-08-26T19:53:40Z | [
"python",
"arrays",
"pandas",
"numpy"
] |
How to combine an index "prefix" to the data in a Pandas series? Does one create a numpy array? | 39,173,560 | <p>I have a pandas series:</p>
<pre><code>import pandas
ser = pd.Series(...)
ser
idx1 23421535123415135
idx2 98762981356281343
idx3 394123942916498173
idx4 41234189756983411
...
idx50 123412938479283419
</code></pre>
<p>I would like to combine the index, appending it to the front of each data row. The output I'm looking for is:</p>
<pre><code>idx1 23421535123415135
idx2 98762981356281343
idx3 394123942916498173
idx4 41234189756983411
...
idx50 123412938479283419
</code></pre>
<p>This could either be a pandas Series (where the array is naturally indexed) or a numpy array. </p>
<p>For a dataframe, in order to combine two columns, you use:</p>
<pre><code>df["newcolumn"] = df[['columnA','columnB']].astype(str).sum(axis=1)
</code></pre>
<p>but I'm confused about how to accomplish this with a pandas Series.</p>
| 1 | 2016-08-26T19:33:41Z | 39,174,103 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.cat.html" rel="nofollow">pandas.Series.str.cat</a>:</p>
<pre><code>In[8]:ser
Out[8]:
idx0 23421535123415135
idx1 98762981356281343
idx2 394123942916498173
idx3 41234189756983411
dtype: int64
In[9]:ser=pd.Series(ser.index.astype(str).str.cat(ser.astype(str),' '))
In[10]:ser
Out[10]:
0 idx0 23421535123415135
1 idx1 98762981356281343
2 idx2 394123942916498173
3 idx3 41234189756983411
dtype: object
</code></pre>
| 1 | 2016-08-26T20:18:45Z | [
"python",
"arrays",
"pandas",
"numpy"
] |
number instead of date string when writing xlsx file | 39,173,565 | <p>In a LibreOffice xlsx cell, the value is like this: 01/13/2016.
When I create a new xlsx file using Python 2, that 01/13/2016 gets converted to 42461.</p>
<p>python code :</p>
<pre><code>sheet1.write(row,col,sheet.cell(row,col).value)
tab_matching = 0
for sheet_name in book.sheet_names():
temp_sheet_name = sheet_name.lower()
if temp_sheet_name == tab_name:
tab_matching = 1
sheet = book.sheet_by_name(sheet_name)
temp_sheet_name = file_prefix+part_file_name+"_"+file_type+".xlsx"
if os.path.exists(detail_path):
xlsx_file_name = detail_path+"/"+temp_sheet_name
else:
xlsx_file_name = dirname+"/"+temp_sheet_name
new_book = xlsxwriter.Workbook(xlsx_file_name)
sheet1 = new_book.add_worksheet()
for row in range(sheet.nrows):
for col in range(sheet.ncols):
sheet1.write(row,col,sheet.cell(row,col).value)
new_book.close()
</code></pre>
<p>Could you tell me why this is happening?</p>
| 1 | 2016-08-26T19:34:11Z | 39,173,643 | <p>You could do this.</p>
<pre><code>>>> import datetime
>>> today=datetime.datetime.now()
>>> today
datetime.datetime(2016, 8, 27, 1, 7, 1, 909049)
>>> value=today.strftime("%d/%m/%Y")
>>> value
'27/08/2016'
>>> sheet1.write(row,col,value)
</code></pre>
| 0 | 2016-08-26T19:40:36Z | [
"python",
"libreoffice",
"xlsxwriter"
] |
number instead of date string when writing xlsx file | 39,173,565 | <p>In a LibreOffice xlsx cell, the value is like this: 01/13/2016.
When I create a new xlsx file using Python 2, that 01/13/2016 gets converted to 42461.</p>
<p>python code :</p>
<pre><code>sheet1.write(row,col,sheet.cell(row,col).value)
tab_matching = 0
for sheet_name in book.sheet_names():
temp_sheet_name = sheet_name.lower()
if temp_sheet_name == tab_name:
tab_matching = 1
sheet = book.sheet_by_name(sheet_name)
temp_sheet_name = file_prefix+part_file_name+"_"+file_type+".xlsx"
if os.path.exists(detail_path):
xlsx_file_name = detail_path+"/"+temp_sheet_name
else:
xlsx_file_name = dirname+"/"+temp_sheet_name
new_book = xlsxwriter.Workbook(xlsx_file_name)
sheet1 = new_book.add_worksheet()
for row in range(sheet.nrows):
for col in range(sheet.ncols):
sheet1.write(row,col,sheet.cell(row,col).value)
new_book.close()
</code></pre>
<p>Could you tell me why this is happening?</p>
| 1 | 2016-08-26T19:34:11Z | 39,176,297 | <p><code>42461</code> is the underlying date value for 04/01/2016. To show the date instead of the number, specify a date format:</p>
<pre><code>format1 = new_book.add_format({'num_format': 'mm/dd/yyyy'})
sheet1.write('B1', 42461, format1) # 04/01/2016
sheet1.write('B2', 42382, format1) # 01/13/2016
</code></pre>
<p>Documentation is at <a href="http://xlsxwriter.readthedocs.io/working_with_dates_and_time.html" rel="nofollow">http://xlsxwriter.readthedocs.io/working_with_dates_and_time.html</a>.</p>
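As a side note on where numbers like 42461 come from, here is a quick standard-library sketch converting spreadsheet serial numbers to dates (assuming the default 1900 date system, whose effective epoch is 1899-12-30):

```python
from datetime import date, timedelta

# Spreadsheet date serials count days from 1899-12-30 in the default
# 1900 date system used by Excel, LibreOffice and xlsxwriter.
EXCEL_EPOCH = date(1899, 12, 30)

def serial_to_date(serial):
    return EXCEL_EPOCH + timedelta(days=serial)

print(serial_to_date(42461))  # 2016-04-01
print(serial_to_date(42382))  # 2016-01-13
```

This also shows why the question's 01/13/2016 cell cannot be 42461: that serial corresponds to April 1st, 2016.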
| 2 | 2016-08-27T00:25:05Z | [
"python",
"libreoffice",
"xlsxwriter"
] |
How to match nested italic font tags with Xpath? | 39,173,604 | <p>Consider an <code>xml</code> structure like the following</p>
<pre><code><p class="long">
<i>Malicious</i>
" is the adjective based on the noun "
<i>malice</i>
", which means the desire to harm others. Both words come from the latin word "
</p>
</code></pre>
<p>I want to select all text inside the <code><p></code> tag.
I tried with </p>
<pre><code>examples = tree.xpath('//p[@class="long"]/text()')
</code></pre>
<p>With this, however, all text between <code><i></code> tags is ignored for some reason.</p>
<p>What is the correct way to extract all the text inside the <code><p></code> tags, regardless of it being also contained in other nested tags?</p>
| -2 | 2016-08-26T19:36:51Z | 39,173,868 | <p>Try with</p>
<pre><code>examples=tree.xpath('//p[@class="long"]//text()')
</code></pre>
<p>(with the double slash before <code>text()</code>, which also matches nodes that aren't direct children)</p>
| 0 | 2016-08-26T19:58:09Z | [
"python",
"xpath",
"lxml"
] |
How to match nested italic font tags with Xpath? | 39,173,604 | <p>Consider an <code>xml</code> structure like the following</p>
<pre><code><p class="long">
<i>Malicious</i>
" is the adjective based on the noun "
<i>malice</i>
", which means the desire to harm others. Both words come from the latin word "
</p>
</code></pre>
<p>I want to select all text inside the <code><p></code> tag.
I tried with </p>
<pre><code>examples = tree.xpath('//p[@class="long"]/text()')
</code></pre>
<p>With this, however, all text between <code><i></code> tags is ignored for some reason.</p>
<p>What is the correct way to extract all the text inside the <code><p></code> tags, regardless of it being also contained in other nested tags?</p>
| -2 | 2016-08-26T19:36:51Z | 39,174,710 | <p>Avoid the use of text() unless you have very special requirements - for exactly this reason. You're probably interested in the string value of the <code>p</code> element, not in its child text and element nodes. Exactly how to select this depends on the environment (does your XPath API allow returning a string rather than a node-set? Does it support XPath 2.0? Does your path expression select more than one "p" element? Can you just return the <code>p</code> element, and then get its string value in the host application?)</p>
| 1 | 2016-08-26T21:14:33Z | [
"python",
"xpath",
"lxml"
] |
Mac OS X error: directory that is not on PYTHONPATH and which Python does not read ".pth" files from | 39,173,637 | <p>I'm getting an error while trying to install FEniCS on Mac OS X 10.11.6. I've read the responses to similar questions on this website, and have tried the suggested solutions, but I must be doing something wrong. </p>
<p>On running the command:</p>
<pre><code>curl -s https://fenicsproject.org/fenics-install.sh | bash
</code></pre>
<p>I get an error while the cython package is being installed:</p>
<pre><code>[cython] Building cython/e2t4ieqlgjl3, follow log with:
[cython] tail -f /Users/sophiaw/.hashdist/tmp/cython-e2t4ieqlgjl3-1/_hashdist/build.log
[cython|ERROR] Command '[u'/bin/bash', '_hashdist/build.sh']' returned non-zero exit status 1
[cython|ERROR] command failed (code=1); raising.
</code></pre>
<p>The message from build.log is:</p>
<blockquote>
<p>Checking .pth file support in
/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/
/Users/sophiaw/.hashdist/bld/python/pf77qttkbtzn/bin/python -E -c pass</p>
<p>TEST FAILED:
/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/
does NOT support .pth files error: bad install directory or
PYTHONPATH</p>
<p>You are attempting to install a package to a directory that is not on
PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:</p>
<p>/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/</p>
<p>and your PYTHONPATH environment variable currently contains:
'/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/Python.framework/Versions/2.7/lib/python2.7/site-packages:'</p>
<p>Here are some of your options for correcting the problem:</p>
<ul>
<li><p>You can choose a different installation directory, i.e., one that is on PYTHONPATH or supports .pth files</p></li>
<li><p>You can add the installation directory to the PYTHONPATH environment variable. (It must then also be on PYTHONPATH whenever you run Python
and want to use the package(s) you are installing.)</p></li>
<li><p>You can set up the installation directory to support ".pth" files by using one of the approaches described here:</p></li>
</ul>
<p><a href="https://pythonhosted.org/setuptools/easy_install.html#custom-installation-locations" rel="nofollow">https://pythonhosted.org/setuptools/easy_install.html#custom-installation-locations</a></p>
<p>Please make the appropriate changes for your system and try again.</p>
</blockquote>
<p>I've tried adding this to the bash_profile, but get the same error:</p>
<pre><code>export PYTHONPATH=/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages:$PYTHONPATH.
</code></pre>
<p>How can I fix this error?</p>
| 0 | 2016-08-26T19:39:49Z | 39,273,105 | <p>This was resolved by the FEniCS support group: to install FEniCS on OS X, Docker is a more convenient option.</p>
| 0 | 2016-09-01T13:38:39Z | [
"python",
"osx",
"python-2.7",
"pythonpath",
"pth"
] |
Keep tracking time in a while loop while interacting with other commands in python3 | 39,173,700 | <p>So I created a while loop where the user inputs a value and an output is returned:</p>
<pre><code>choice = input()
while True:
if choice == "Mark":
print("Zuckerberg")
elif choice == "Sundar":
print("Pichai")
</code></pre>
<p>and I want to keep time, so when I type Facebook it starts keeping time for FB, and when I type Google it keeps time for Google, like this:</p>
<pre><code>import time
choice = input()
while True:
if choice == "Facebook":
endb = time.time()
starta = time.time()
if choice == "google":
enda = time.time()
startb = time.time()
if choice == "Mark":
print("Zuckerberg")
elif choice == "Sundar":
print("Pichai")
</code></pre>
<p>If I do it like the above, when I print the elapsed time it prints
the same number but negative instead of positive, and vice versa:</p>
<pre><code>elapseda = enda - starta
elapsedb = endb - startb
print(elapseda)
print(elapsedb)
</code></pre>
<p>How do i keep track of the time but be able to interact with my other input/outputs?</p>
<p>Thanks</p>
<p>##############################################################################</p>
<p>Edit: Sorry for not making it clear. What I meant by tracking time is that typing a keyword should track time instead of printing an output. This will be used to track the possession time of a sports match while also counting other stats like penalty kicks. I can't post my full code due to the character limit, but here is the idea:</p>
<pre><code>while True:
choice = input()
if choice == "pk":
print("pk")
elif choice == "fk":
print("fk")
elif choice == "q":
break
</code></pre>
<p>and in there I should track possession time while still being able to interact with the other inputs/outputs.</p>
| -1 | 2016-08-26T19:45:29Z | 39,173,757 | <p>In the while loop you could count seconds like so.</p>
<pre><code>import time
a = 0
while True:
a = a + 1
time.sleep(1)
</code></pre>
<p>That would mean that <code>a</code> is roughly how many seconds the while loop has been running.</p>
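The question's real goal, accumulating possession time per keyword, can be sketched without any sleeping or printing. This is illustrative code, not tied to any library; the <code>now</code> parameter exists only to make the example deterministic (in real use you would omit it and let <code>time.monotonic()</code> supply the timestamp):

```python
import time

class PossessionClock:
    """Accumulate elapsed time per key (e.g. per team or keyword)."""

    def __init__(self):
        self.totals = {}
        self._current = None
        self._start = None

    def switch(self, key, now=None):
        # Close out the interval for the previous key, then start timing `key`.
        now = time.monotonic() if now is None else now
        if self._current is not None:
            elapsed = now - self._start
            self.totals[self._current] = self.totals.get(self._current, 0.0) + elapsed
        self._current = key
        self._start = now

clock = PossessionClock()
clock.switch("Facebook", now=0.0)
clock.switch("google", now=5.0)    # Facebook held possession for 5 seconds
clock.switch("Facebook", now=8.0)  # google held possession for 3 seconds
print(clock.totals)
```

Inside the question's input loop, each keyword would simply call <code>clock.switch(...)</code>, so other commands like "pk" and "fk" can still be handled in the same loop.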
| 0 | 2016-08-26T19:50:34Z | [
"python",
"python-3.x",
"input",
"time",
"while-loop"
] |
TabularInline in Django 1.7 does not let me save | 39,173,708 | <p><img src="http://i.stack.imgur.com/S11GJ.png" alt=""></p>
<p><img src="http://i.stack.imgur.com/uUalb.png" alt=""></p>
<p>I have a problem: I am using TabularInline in Django 1.7 and it does not let me save. Everything displays perfectly, but when I try to save it throws an error. I am using a database other than the default, and the exception says that the table does not exist in the database.</p>
| -2 | 2016-08-26T19:45:46Z | 39,174,771 | <p>Firstly, the images aren't very clear, but what I understand from your description of the problem is that the models do not have corresponding tables in the database. Please check/share your configuration in settings.py and run</p>
<pre><code>python manage.py makemigrations
</code></pre>
<p>and</p>
<pre><code>python manage.py migrate
</code></pre>
<p>Since you are using a database other than the default, also note that <code>migrate</code> only targets the default database unless you pass <code>--database=&lt;alias&gt;</code>. If there are still issues, please share more information.</p>
| 0 | 2016-08-26T21:19:49Z | [
"python",
"django"
] |
Server-side implementation of instagram-like tags | 39,173,717 | <p>I'm looking for some pythonic way to implement instagram-like tags functionality.</p>
<p>Let's suppose my Tag Model is:</p>
<pre><code>class Tag(models.Model):
slug = models.SlugField(max_length=120, unique=True)
products = models.ManyToManyField(Product)
def get_absolute_url(self):
return reverse("tags:detail", kwargs={"slug": self.slug})
</code></pre>
<p>And Comment Model is:</p>
<pre><code>class Comment(models.Model):
text = models.TextField(max_length=350)
user = models.ForeignKey(settings.AUTH_USER_MODEL)
product = models.ForeignKey(Product)
timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
</code></pre>
<p>What's the best way to implement the addition of links inside the text field, so that a comment like "Some sort of #cool #stackoverflow comment"
would render like this:</p>
<pre><code><p>
Some sort of <a href="{% url "tags:detail" kwargs={"slug": "cool"}
%}">#cool</a> <a href="{% url "tags:detail" kwargs={"slug": "stackoverflow"} %}">#stackoverflow</a> comment
</p>
</code></pre>
| 0 | 2016-08-26T19:46:47Z | 39,174,717 | <p>I would suggest use of another model to link the Tags to the Comments. This model will have the tag and comment both as Many to Many relations. Now this model can be used to establish relations and perform operations. Rest depends on your logic, how well you use this model.</p>
| 0 | 2016-08-26T21:15:30Z | [
"python",
"django"
] |
Reshaping pyspark dataframe to 4-dimensional numpy array for Keras/Theano | 39,173,560 | <p>I'm trying to get a Spark dataframe, <code>traindf</code>, into a 4-d numpy array. I've tried this:</p>
<pre><code>traindf = sqlContext.createDataFrame([
(1, 1, 2, 3),
(1, 2, 2, 3),
(1, 3, 2, 3),
(1, 4, 2, 3),
(2, 4, 5, 6),
(2, 4, 5, 6),
(3, 7, 8, 9),
(2, 4, 5, 6),
(3, 7, 8, 9),
(3, 7, 8, 9)
], ("id", "image", "s", "t"))
values = (traindf.rdd.map(lambda l: [map(lambda r: float(r), l)]).collect())
x = np.array(values)
x = np.array_split(x, x.shape[0]/2)
x = np.asarray(x)
x.shape
</code></pre>
<p>This yields (5, 2, 1, 4), but it appears keras needs (5, 1, 2, 4). I've tried a couple ways, but am not seeing a good way to get the correct format.</p>
<p>Any suggestions?</p>
| 0 | 2016-08-26T19:50:59Z | 39,174,017 | <p>Just figured it out, tack this onto the end</p>
<pre><code>x = np.reshape(x, (5, 1, 2, 4))
</code></pre>
| 0 | 2016-08-26T20:12:06Z | [
"python",
"numpy",
"apache-spark",
"pyspark",
"keras"
] |
Making dynamic lists to print out variables | 39,173,784 | <p>How can I make a dynamic list and then print it out? This example is just an idea of what I want to do; the lists could number in the hundreds.</p>
<p>But I want anything that has "cat" to print "cat" with every value associated with it listed below it.</p>
<p>data = [["cat","one"], ["dog", "one"], ["cat", "ten"], ["frog", "one"], ["dog", "ten", "green"]]</p>
<pre><code>#Would like this to print out like:
cat
one
ten
dog
one
ten
green
frog
one
</code></pre>
| -1 | 2016-08-26T19:52:48Z | 39,173,809 | <p>You can use a defaultdict.</p>
<pre><code>from collections import defaultdict
groups = defaultdict(list)
for animal, *values in data:   # starred unpacking handles rows with extra values, e.g. ["dog", "ten", "green"]
    groups[animal].extend(values)
for animal, values in groups.items():
    print(animal)
    for value in values:
        print(value)
</code></pre>
| 6 | 2016-08-26T19:54:23Z | [
"python"
] |
Making dynamic lists to print out variables | 39,173,784 | <p>How can I make a dynamic list and then print it out? This example is just an idea of what I want to do; the lists could number in the hundreds.</p>
<p>But I want anything that has "cat" to print "cat" with every value associated with it listed below it.</p>
<p>data = [["cat","one"], ["dog", "one"], ["cat", "ten"], ["frog", "one"], ["dog", "ten", "green"]]</p>
<pre><code>#Would like this to print out like:
cat
one
ten
dog
one
ten
green
frog
one
</code></pre>
| -1 | 2016-08-26T19:52:48Z | 39,173,900 | <p>You can also use <code>setdefault</code>.</p>
<pre><code>result = {}
for animal, *values in data:   # starred unpacking handles rows with extra values
    result.setdefault(animal, []).extend(values)
print(result)
</code></pre>
| 0 | 2016-08-26T20:01:32Z | [
"python"
] |
Pandas: convert dtype 'object' to int | 39,173,813 | <p>I've read an SQL query into Pandas and the values are coming in as dtype 'object', although they are strings, dates and integers. I am able to convert the date 'object' to a Pandas datetime dtype, but I'm getting an error when trying to convert the string and integers.</p>
<p>Here is an example:</p>
<pre><code>>>> import pandas as pd
>>> df = pd.read_sql_query('select * from my_table', conn)
>>> df
id date purchase
1 abc1 2016-05-22 1
2 abc2 2016-05-29 0
3 abc3 2016-05-22 2
4 abc4 2016-05-22 0
>>> df.dtypes
id object
date object
purchase object
dtype: object
</code></pre>
<p>Converting the df['date'] to a datetime works...</p>
<pre><code>>>> pd.to_datetime(df['date'])
1 2016-05-22
2 2016-05-29
3 2016-05-22
4 2016-05-22
Name: date, dtype: datetime64[ns]
</code></pre>
<p>But I get an error when trying to convert the df['purchase'] to an integer. </p>
<pre><code>>>> df['purchase'].astype(int)
....
pandas/lib.pyx in pandas.lib.astype_intsafe (pandas/lib.c:16667)()
pandas/src/util.pxd in util.set_value_at (pandas/lib.c:67540)()
TypeError: long() argument must be a string or a number, not 'java.lang.Long'
</code></pre>
<p>NOTE: I get a similar error when I tried .astype('float')</p>
<p>And when trying to convert to a string, nothing seems to happen...</p>
<pre><code>>>> df['id'].apply(str)
1 abc1
2 abc2
3 abc3
4 abc4
Name: id, dtype: object
</code></pre>
| 0 | 2016-08-26T19:54:30Z | 39,216,001 | <p>Documenting the answer that worked for me based on the comment by @piRSquared. </p>
<p>I needed to convert to a string first, then an integer.</p>
<pre><code>>>> df['purchase'].astype(str).astype(int)
</code></pre>
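A reproducible sketch of the same pattern (the series below is a stand-in for df['purchase']); the string detour also works for ordinary object columns, and pd.to_numeric is an alternative when some values may be unparseable:

```python
import pandas as pd

s = pd.Series(["1", "0", "2", "0"], dtype=object)  # stand-in for df['purchase']

# Route through str so each element supports int() conversion.
as_int = s.astype(str).astype(int)
print(as_int.tolist())

# Alternative: coerce anything unparseable to NaN instead of raising.
as_num = pd.to_numeric(s, errors="coerce")
```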
| 0 | 2016-08-29T22:14:43Z | [
"python",
"pandas",
"numpy"
] |
Appending to a dataframe's multi-index and restacking in pandas | 39,173,829 | <p>How do I append to a dataframe's multi-index and restack by the new index column's sort order?</p>
<p>I have a dataframe with a multi-index <code>['section_id','last_checkout']</code> that represents books in a library as follows:</p>
<pre><code> book_id author_id
section_id last_checkout
4 2016-04-04 07:01:59.223 1 10
2016-04-04 07:01:59.223 2 11
2016-04-04 07:01:59.223 3 12
2016-04-04 07:01:59.233 4 13
2016-04-04 07:01:59.247 5 13
2016-04-04 07:01:59.253 6 14
5 2016-04-04 07:01:59.253 10 15
2016-04-04 07:01:59.268 11 10
</code></pre>
<p>so books <code>1</code> to <code>6</code> are in section <code>4</code>. I plan to add another column, <code>pd.Series({'floor': [1,1,2,1,2,3,4,1]})</code> to the index:</p>
<pre><code> book_id author_id
section_id floor last_checkout
4 1 2016-04-04 07:01:59.223 1 10
1 2016-04-04 07:01:59.223 2 11
2 2016-04-04 07:01:59.223 3 12
1 2016-04-04 07:01:59.233 4 13
2 2016-04-04 07:01:59.247 5 13
3 2016-04-04 07:01:59.253 6 14
5 4 2016-04-04 07:01:59.253 10 15
1 2016-04-04 07:01:59.268 11 10
</code></pre>
<p>After this, I want to stack by rows by floor while maintaining the ordering that already exists:</p>
<pre><code> book_id author_id
section_id floor last_checkout
4 1 2016-04-04 07:01:59.223 1 10
1 2016-04-04 07:01:59.223 2 11
1 2016-04-04 07:01:59.233 4 13
5 1 2016-04-04 07:01:59.268 11 10
4 2 2016-04-04 07:01:59.223 3 12
2 2016-04-04 07:01:59.247 5 13
3 2016-04-04 07:01:59.253 6 14
5 4 2016-04-04 07:01:59.253 10 15
</code></pre>
<p>I thought it should be pretty simple but the API seems unintuitive after I tried various permutations of these unsuccessfully:</p>
<pre><code># Cannot append equal length series to multi-index
#1: df.index = df.index.append(series)
# Underlying mergesort does not 'stack' the groups in original ordering
#2: df['floor'] = series
#3: df.sort_values('floor', ascending=True)
#4: df.sort_values(['floor', 'last_checkout'], ascending=[True,True])
</code></pre>
| 0 | 2016-08-26T19:55:15Z | 39,187,545 | <p>Here is a solution for you.</p>
<p>First, the way you are defining your series is quite unorthodox; it's best to define it as:</p>
<pre><code>floor_series = pd.Series([1,1,2,1,2,3,4,1], name='floor')
</code></pre>
<p>then take your multi-index dataframe and reset the index. To "append"/vertically stack a column, use <code>join</code> instead of <code>append</code>. The code should look like this:</p>
<hr>
<pre><code>df = df.reset_index()
df = df.join(floor_series)
df = df.sort_values('floor', kind='mergesort')  # stable sort keeps the original order within each floor
df = df.set_index(['section_id', 'floor', 'last_checkout'])
</code></pre>
| 0 | 2016-08-28T02:36:41Z | [
"python",
"pandas"
] |
compute the difference of all possible rows | 39,173,897 | <p>Based on a selection <code>ds</code> of a dataframe <code>d</code> with:</p>
<p><code>{'x': d.x, 'y': d.y, 'a': d.a, 'b': d.b, 'c': d.c, 'n': d.n}</code></p>
<p>Having <code>n</code> rows, <code>x</code> ranges from <code>0</code> to <code>n-1</code>. The column <code>n</code> is needed since it's a selection and indices need to be kept for a later query.</p>
<p>How do you efficiently compute the difference between each pair of rows (e.g. <code>a_0, a_1</code>, etc.) for each column (<code>a, b, c</code>) without losing the row information (e.g. a new column with the indices of the rows that were used)?</p>
<p><strong>MWE</strong> </p>
<p>Sample selection <code>ds</code>:</p>
<pre><code> x y a b c n
554.607085 400.971878 9789 4151 6837 146
512.231450 405.469524 8796 3811 6596 225
570.427284 694.369140 1608 2019 2097 291
</code></pre>
<p>Desired output:</p>
<p><code>dist</code> euclidean distance <code>math.hypot(x2 - x1, y2 - y1)</code></p>
<p><code>da, db, dc</code> for <code>da: np.abs(a1-a2)</code> </p>
<p><code>ns</code> a string with both <code>n</code>s of the employed rows</p>
<p>the result would look like:</p>
<pre><code> dist da db dc ns
42.61365102824963 993 340 241 146-225
293.82347069813255 8181 2132 4740 146-291
.. .. .. .. 225-291
</code></pre>
| 0 | 2016-08-26T20:01:14Z | 39,176,214 | <p>You can use <code>itertools.combinations()</code> to generate the pairs:</p>
<p>Read data first:</p>
<pre><code>import pandas as pd
from io import StringIO
import numpy as np
text = """ x y a b c n
554.607085 400.971878 9789 4151 6837 146
512.231450 405.469524 8796 3811 6596 225
570.427284 694.369140 1608 2019 2097 291"""
df = pd.read_csv(StringIO(text), delim_whitespace=True)
</code></pre>
<p>Create the index and calculate the results:</p>
<pre><code>from itertools import combinations
index = np.array(list(combinations(range(df.shape[0]), 2)))
df1, df2 = [df.iloc[idx].reset_index(drop=True) for idx in index.T]
res = pd.concat([
np.hypot(df1.x - df2.x, df1.y - df2.y),
df1[["a", "b", "c"]] - df2[["a", "b", "c"]],
df1.n.astype(str) + "-" + df2.n.astype(str)
], axis=1)
res.columns = ["dist", "da", "db", "dc", "ns"]
res
</code></pre>
<p>the output:</p>
<pre><code> dist da db dc ns
0 42.613651 993 340 241 146-225
1 293.823471 8181 2132 4740 146-291
2 294.702805 7188 1792 4499 225-291
</code></pre>
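A design note on the pair generation above: the same index pairs can be produced inside NumPy with triu_indices, which avoids materialising the itertools list for larger frames (this is a sketch of the alternative, not the answer's code):

```python
import numpy as np

n = 3                           # number of rows in the frame
i, j = np.triu_indices(n, k=1)  # upper triangle, excluding the diagonal
print(list(zip(i.tolist(), j.tolist())))
```

The arrays `i` and `j` can then index the dataframe directly, exactly like `index.T` in the answer.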
| 1 | 2016-08-27T00:08:43Z | [
"python",
"pandas"
] |
compute the difference of all possible rows | 39,173,897 | <p>Based on a selection <code>ds</code> of a dataframe <code>d</code> with:</p>
<p><code>{ 'x': d.x, 'y': d.y, 'a':d.a, 'b':d.b, 'c':d.c 'row:d.n'})</code> </p>
<p>Having <code>n</code> rows, <code>x</code> ranges from <code>0</code> to <code>n-1</code>. The column <code>n</code> is needed since it's a selection and indices need to be kept for a later query.</p>
<p>How do you efficiently compute the difference between each row (e.g.<code>a_0, a_1, etc</code>) of each column (<code>a, b, c</code>) without losing the rows information (e.g. new column with the indices of the rows that were used) ?</p>
<p><strong>MWE</strong> </p>
<p>Sample selection <code>ds</code>:</p>
<pre><code> x y a b c n
554.607085 400.971878 9789 4151 6837 146
512.231450 405.469524 8796 3811 6596 225
570.427284 694.369140 1608 2019 2097 291
</code></pre>
<p>Desired output:</p>
<p><code>dist</code> euclidean distance <code>math.hypot(x2 - x1, y2 - y1)</code></p>
<p><code>da, db, dc</code> for <code>da: np.abs(a1-a2)</code> </p>
<p><code>ns</code> a string with both <code>n</code>s of the employed rows</p>
<p>the result would look like:</p>
<pre><code> dist da db dc ns
42.61365102824963 993 340 241 146-225
293.82347069813255 8181 2132 4740 146-291
.. .. .. .. 225-291
</code></pre>
| 0 | 2016-08-26T20:01:14Z | 39,177,020 | <p>This approach makes good use of Pandas and the underlying numpy capabilities, but the matrix manipulations are a little hard to keep track of:</p>
<pre><code>import pandas as pd, numpy as np
ds = pd.DataFrame(
[
[554.607085, 400.971878, 9789, 4151, 6837, 146],
[512.231450, 405.469524, 8796, 3811, 6596, 225],
[570.427284, 694.369140, 1608, 2019, 2097, 291]
],
columns = ['x', 'y', 'a', 'b', 'c', 'n']
)
def concat_str(*arrays):
result = arrays[0]
for arr in arrays[1:]:
result = np.core.defchararray.add(result, arr)
return result
# Make a panel with one item for each column, with a square data frame for
# each item, showing the differences between all row pairs.
# This creates perpendicular matrices of values based on the underlying numpy arrays;
# then numpy broadcasts them along the missing axis when calculating the differences
p = pd.Panel(
(ds.values[np.newaxis,:,:] - ds.values[:,np.newaxis,:]).transpose(),
items=['d'+c for c in ds.columns], major_axis=ds.index, minor_axis=ds.index
)
# calculate euclidian distance
p['dist'] = np.hypot(p['dx'], p['dy'])
# create strings showing row relationships
p['ns'] = concat_str(ds['n'].values.astype(str)[:,np.newaxis], '-', ds['n'].values.astype(str)[np.newaxis,:])
# remove unneeded items
del p['dx'], p['dy'], p['dn']
# convert to frame
diffs = p.to_frame().reindex_axis(['dist', 'da', 'db', 'dc', 'ns'], axis=1)
diffs
</code></pre>
<p>This gives:</p>
<pre><code> dist da db dc ns
major minor
0 0 0.000000 0 0 0 146-146
1 42.613651 993 340 241 146-225
2 293.823471 8181 2132 4740 146-291
1 0 42.613651 -993 -340 -241 225-146
1 0.000000 0 0 0 225-225
2 294.702805 7188 1792 4499 225-291
2 0 293.823471 -8181 -2132 -4740 291-146
1 294.702805 -7188 -1792 -4499 291-225
2 0.000000 0 0 0 291-291
</code></pre>
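<p>Note: <code>pd.Panel</code> was later deprecated and removed from pandas. The same pairwise table can be assembled with plain NumPy broadcasting and a <code>MultiIndex</code>; the following is a sketch of that alternative (assumptions: the <code>ds</code> frame from the question, same sign convention as the Panel output above), not the original answer's code:</p>

```python
import numpy as np
import pandas as pd

# Same sample selection as in the question
ds = pd.DataFrame(
    [[554.607085, 400.971878, 9789, 4151, 6837, 146],
     [512.231450, 405.469524, 8796, 3811, 6596, 225],
     [570.427284, 694.369140, 1608, 2019, 2097, 291]],
    columns=['x', 'y', 'a', 'b', 'c', 'n'])

num = ds[['x', 'y', 'a', 'b', 'c']].values.astype(float)
# diff[i, j] = row i minus row j, shape (3, 3, 5)
diff = num[:, np.newaxis, :] - num[np.newaxis, :, :]

idx = pd.MultiIndex.from_product([ds.index, ds.index], names=['major', 'minor'])
out = pd.DataFrame(diff.reshape(-1, 5), index=idx,
                   columns=['dx', 'dy', 'da', 'db', 'dc'])
out['dist'] = np.hypot(out.pop('dx'), out.pop('dy'))
ns = ds['n'].astype(str).values
out['ns'] = [a + '-' + b for a in ns for b in ns]
out = out[['dist', 'da', 'db', 'dc', 'ns']]
```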
| 1 | 2016-08-27T02:55:50Z | [
"python",
"pandas"
] |
Floodfill segmented image in numpy/python | 39,173,947 | <p>I have a numpy array which represents a segmented 2-dimensional matrix from an image. Basically, it's a sparse matrix with a bunch of closed shapes that are the outlines of the segments of the image. What I need to do is colorize the empty pixels within each closed shape with a different color/label in numpy. </p>
<p>I know I could do this with floodfill in PIL but I'm trying not to have to convert the matrix back and forth from numpy to PIL. It would be nice if there was a function in someting like skimage or sklearn that could "auto-label" all the different closed regions of my matrix with a different label for me (it could be a monotonically incrementing integer or a color. I don't care so long as it represents the correct grouping of adjacent pixels within its region). </p>
<p>I've already spent a lot of time trying to implement my own floodfill and at this point I'd just like somehting that could label the image out of the box for me.</p>
| 0 | 2016-08-26T20:05:30Z | 39,181,829 | <p>I'm assuming that your matrix is binary where non-zero values represent the extracted segments and zero values are values you don't care about. The <code>scikit-image</code> <code>label</code> function from the <code>measure</code> module may be of interest: <a href="http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label" rel="nofollow">http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label</a></p>
<p>It essentially performs a connected components analysis and labels all separately closed components with an integer number. You need to be careful though with regard to how you specify connectivity. There are two common choices, 4-connectedness and 8-connectedness, where the former finds connected regions using only the North, South, East and West directions whereas 8-connectedness uses all 8 directions (North, South, East, West, Northeast, Southeast, Northwest, Southwest). You would use the <code>connectivity</code> option and specify <code>1</code> for 4-connectedness and <code>2</code> for 8-connectedness. </p>
<p>However, the default connectivity would be a full connectivity, so for the case of 2D it would be the <code>2</code> option. I suspect for you it would be this way. Any blobs in your matrix that are zero would be labelled as zero. Without further ado, here's a very simple reproducible example:</p>
<pre><code>In [1]: from skimage.measure import label
In [2]: import numpy as np
In [3]: x = np.zeros((8,8))
In [4]: x[0:4,0:4] = 1
In [5]: x[6:8,6:8] = 1
In [6]: x
Out[6]:
array([[ 1., 1., 1., 1., 0., 0., 0., 0.],
[ 1., 1., 1., 1., 0., 0., 0., 0.],
[ 1., 1., 1., 1., 0., 0., 0., 0.],
[ 1., 1., 1., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 1., 1.],
[ 0., 0., 0., 0., 0., 0., 1., 1.]])
In [7]: label(x)
Out[7]:
array([[1, 1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 2, 2],
[0, 0, 0, 0, 0, 0, 2, 2]], dtype=int64)
</code></pre>
<p>We can see that there are two separate islands that I created on the top left and bottom right corner. Once you run the <code>label</code> function, it returns a label matrix identifying regions of pixels that belong to each other. Pixels that have the same ID belong to the same region.</p>
<p>To show you how the connectivity comes into play, here's another simple example:</p>
<pre><code>In [1]: import numpy as np
In [2]: from skimage.measure import label
In [3]: y = np.array([[0,1,0,0],[1,1,1,0],[0,1,0,1]])
In [4]: y
Out[4]:
array([[0, 1, 0, 0],
[1, 1, 1, 0],
[0, 1, 0, 1]])
In [5]: label(y, connectivity=1)
Out[5]:
array([[0, 1, 0, 0],
[1, 1, 1, 0],
[0, 1, 0, 2]], dtype=int64)
In [6]: label(y)
Out[6]:
array([[0, 1, 0, 0],
[1, 1, 1, 0],
[0, 1, 0, 1]], dtype=int64)
</code></pre>
<p>The input has a cross pattern on the top left corner and a separate non-zero value at the bottom right corner. If we use 4-connectivity, the bottom right corner would be classified as a different label but if we use the default connectivity (full), every pixel would be classified as the same label.</p>
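<p>For intuition, the 4-connected labelling that <code>label(y, connectivity=1)</code> performs can be sketched in pure Python as a breadth-first flood fill. This is a minimal illustration of the idea, not skimage's actual implementation:</p>

```python
from collections import deque

def label4(grid):
    # Minimal 4-connected component labelling: BFS flood fill from each
    # unlabelled non-zero cell, assigning a fresh integer label per component.
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                current += 1
                labels[r][c] = current
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels

grid = [[0, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 1, 0, 1]]
print(label4(grid))  # cross gets label 1, the isolated pixel label 2
```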
| 0 | 2016-08-27T13:44:53Z | [
"python",
"image",
"numpy",
"image-processing",
"image-segmentation"
] |
Drop all data in a pandas dataframe | 39,173,992 | <p>I would like to drop all data in a pandas dataframe, but am getting <code>TypeError: drop() takes at least 2 arguments (3 given)</code>. I essentially want a blank dataframe with just my columns headers.</p>
<pre><code>import pandas as pd
web_stats = {'Day': [1, 2, 3, 4, 2, 6],
'Visitors': [43, 43, 34, 23, 43, 23],
'Bounce_Rate': [3, 2, 4, 3, 5, 5]}
df = pd.DataFrame(web_stats)
df.drop(axis=0, inplace=True)
print df
</code></pre>
| 1 | 2016-08-26T20:09:10Z | 39,174,024 | <p>You need to pass the labels to be dropped.</p>
<pre><code>df.drop(df.index, inplace=True)
</code></pre>
<p>By default, it operates on <code>axis=0</code>.</p>
<p>You can achieve the same with <code>df.iloc[0:0]</code>.</p>
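<p>Both variants keep the column headers while removing every row; a quick sanity check (a sketch using data of the same shape as the question's):</p>

```python
import pandas as pd

df = pd.DataFrame({'Day': [1, 2, 3], 'Visitors': [43, 43, 34]})

emptied = df.drop(df.index)   # drop every row label
sliced = df.iloc[0:0]         # empty positional slice, same effect

print(len(emptied), list(emptied.columns))
```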
| 4 | 2016-08-26T20:12:46Z | [
"python",
"python-2.7",
"pandas"
] |
Faster AUC in sklearn or python | 39,174,005 | <p>I have over half a million pairs of true labels and predicted scores (the length of each 1d array varies and can be between 10,000-30,000 in length) that I need to calculate the AUC for. Right now, I have a for-loop that calls:</p>
<pre><code># Simple Example with two pairs of true/predicted values instead of 500,000
from sklearn import metrics
import numpy as np
pred = [None] * 2
pred[0] = np.array([3,2,1])
pred[1] = np.array([15,12,14,11,13])
true = [None] * 2
true[0] = np.array([1,0,0])
true[1] = np.array([1,1,1,0,0])
for i in range(2):
fpr, tpr, thresholds = metrics.roc_curve(true[i], pred[i])
print metrics.auc(fpr, tpr)
</code></pre>
<p>However, it takes about 1-1.5 hours to process the entire dataset and calculate the AUC for each true/prediction pair. Is there a faster/better way to do this?</p>
<p><strong>Update</strong></p>
<p>Each of the 500k entries can have shape (1, 10k+). I understand that I could parallelize it but I'm stuck on a machine with only two processors and so my time can really only be effectively cut down to, say, 30-45 minutes, which is still too long. I've identified that the AUC calculation itself is slow and was hoping to find a faster AUC algorithm than what is available in sklearn. Or, at least, find a better way to vectorize the AUC calculation so that it can be broadcast across multiple rows.</p>
| 0 | 2016-08-26T20:10:37Z | 39,174,643 | <blockquote>
<p>Is there a faster/better way to do this?</p>
</blockquote>
<p>Since the calculation of each true/pred pair is independent (if I understood your setup), you should be able to reduce total processing time by using <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow"><code>multiprocessing</code></a>, effectively parallelizing the calculations:</p>
<pre><code>import multiprocessing as mp
def roc(v):
""" calculate one pair, return (index, auc) """
i, true, pred = v
fpr, tpr, thresholds = metrics.roc_curve(true, pred, drop_intermediate=True)
auc = metrics.auc(fpr, tpr)
return i, auc
pool = mp.Pool(3)
result = pool.map_async(roc, ((i, true[i], pred[i]) for i in range(2)))
pool.close()
pool.join()
print result.get()
=>
[(0, 1.0), (1, 0.83333333333333326)]
</code></pre>
<p>Here <code>Pool(3)</code> creates a pool of 3 processes, <code>.map_async</code> maps all true/pred pairs and calls the <code>roc</code> function, passing one pair at a time. The index is sent along to map back results. </p>
<p>If the true/pred pairs are too large to serialize and send to the processes, you might need to write the data into some external data structure before calling <code>roc</code>, passing it just the reference <code>i</code> and read the data for each pair <code>true[i]/pred[i]</code> from within <code>roc</code> before processing.</p>
<p>A <a href="https://docs.python.org/2/library/multiprocessing.html#module-multiprocessing.pool" rel="nofollow">Pool</a> automatically manages the scheduling of processes. To reduce the risk of a memory hog, you might need to pass the <code>Pool(...., maxtasksperchild=1)</code> parameter which would start a new process for each true/pred pair (choose any other number as you see fit).</p>
<p><strong>Update</strong></p>
<blockquote>
<p>I'm stuck on a machine with only two processors</p>
</blockquote>
<p>naturally this is a limiting factor. However considering the availability of cloud computing resources at very reasonable cost that you only pay for the time you actually need it, you might want to consider alternatives in hardware before you spend eons of hours optimizing a calculation that can be so effectively parallelized. That's a luxury in its own right, really.</p>
| 1 | 2016-08-26T21:09:28Z | [
"python",
"scikit-learn",
"data-science",
"auc"
] |
Faster AUC in sklearn or python | 39,174,005 | <p>I have over half a million pairs of true labels and predicted scores (the length of each 1d array varies and can be between 10,000-30,000 in length) that I need to calculate the AUC for. Right now, I have a for-loop that calls:</p>
<pre><code># Simple Example with two pairs of true/predicted values instead of 500,000
from sklearn import metrics
import numpy as np
pred = [None] * 2
pred[0] = np.array([3,2,1])
pred[1] = np.array([15,12,14,11,13])
true = [None] * 2
true[0] = np.array([1,0,0])
true[1] = np.array([1,1,1,0,0])
for i in range(2):
fpr, tpr, thresholds = metrics.roc_curve(true[i], pred[i])
print metrics.auc(fpr, tpr)
</code></pre>
<p>However, it takes about 1-1.5 hours to process the entire dataset and calculate the AUC for each true/prediction pair. Is there a faster/better way to do this?</p>
<p><strong>Update</strong></p>
<p>Each of the 500k entries can have shape (1, 10k+). I understand that I could parallelize it but I'm stuck on a machine with only two processors and so my time can really only be effectively cut down to, say, 30-45 minutes, which is still too long. I've identified that the AUC calculation itself is slow and was hoping to find a faster AUC algorithm than what is available in sklearn. Or, at least, find a better way to vectorize the AUC calculation so that it can be broadcast across multiple rows.</p>
| 0 | 2016-08-26T20:10:37Z | 39,178,912 | <blockquote>
<p>find a better way to vectorize the AUC calculation so that it can be broadcasted across multiple rows</p>
</blockquote>
<p>Probably not - sklearn already uses efficient numpy operations for its calculation of relevant parts:</p>
<pre><code># -- calculate tps, fps, thresholds
# sklearn.metrics.ranking:_binary_clf_curve()
(...)
distinct_value_indices = np.where(np.logical_not(isclose(
np.diff(y_score), 0)))[0]
threshold_idxs = np.r_[distinct_value_indices, y_true.size - 1]
# accumulate the true positives with decreasing threshold
tps = (y_true * weight).cumsum()[threshold_idxs]
if sample_weight is not None:
fps = weight.cumsum()[threshold_idxs] - tps
else:
fps = 1 + threshold_idxs - tps
return fps, tps, y_score[threshold_idxs]
# -- calculate auc
# sklearn.metrics.ranking:auc()
...
area = direction * np.trapz(y, x)
...
</code></pre>
<p>You might be able to optimize this by profiling these functions and removing operations that you can apply more efficiently beforehand. A quick profiling of your example input scaled to 5M rows reveals a few potential bottlenecks (marked <code>>>></code>):</p>
<pre><code># your for ... loop wrapped in function roc()
%prun -s cumulative roc
722 function calls (718 primitive calls) in 5.005 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 5.005 5.005 <string>:1(<module>)
1 0.000 0.000 5.005 5.005 <ipython-input-51-27e30c04d997>:1(roc)
2 0.050 0.025 5.004 2.502 ranking.py:417(roc_curve)
2 0.694 0.347 4.954 2.477 ranking.py:256(_binary_clf_curve)
>>>2 0.000 0.000 2.356 1.178 fromnumeric.py:823(argsort)
>>>2 2.356 1.178 2.356 1.178 {method 'argsort' of 'numpy.ndarray' objects}
6 0.062 0.010 0.961 0.160 arraysetops.py:96(unique)
>>>6 0.750 0.125 0.750 0.125 {method 'sort' of 'numpy.ndarray' objects}
>>>2 0.181 0.090 0.570 0.285 numeric.py:2281(isclose)
2 0.244 0.122 0.386 0.193 numeric.py:2340(within_tol)
2 0.214 0.107 0.214 0.107 {method 'cumsum' of 'numpy.ndarray' objects}
</code></pre>
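<p>If the goal is a faster AUC itself rather than faster plumbing: the ROC AUC equals the normalised Mann-Whitney U statistic, so it can be computed directly from score ranks without building the curve at all. A pure-Python sketch of that identity (ties receive average ranks; this is not sklearn's implementation, and it assumes both classes are present):</p>

```python
def fast_auc(y_true, y_score):
    # AUC via the Mann-Whitney U identity:
    # AUC = (rank_sum_pos - n_pos*(n_pos+1)/2) / (n_pos * n_neg)
    n = len(y_score)
    order = sorted(range(n), key=lambda i: y_score[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        # Find the group of tied scores starting at position i.
        j = i
        while j + 1 < n and y_score[order[j + 1]] == y_score[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    n_pos = sum(1 for t in y_true if t == 1)
    n_neg = n - n_pos
    rank_sum = sum(r for r, t in zip(ranks, y_true) if t == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2.0) / float(n_pos * n_neg)

print(fast_auc([1, 0, 0], [3, 2, 1]))                    # 1.0
print(fast_auc([1, 1, 1, 0, 0], [15, 12, 14, 11, 13]))   # 5/6 = 0.8333...
```

<p>This matches the values in the other answer's output for the question's two sample pairs, and a single sort dominates the cost, so it scales as O(n log n) per pair.</p>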
| 0 | 2016-08-27T08:07:11Z | [
"python",
"scikit-learn",
"data-science",
"auc"
] |
Why is there no lookup method for Python sets? | 39,174,020 | <p>In Python, objects can be equal despite not being the same object (<code>==</code> vs. <code>is</code>). Consider the following sequence for arbitrary objects <code>obj1</code> and <code>obj2</code>.</p>
<pre><code>assert obj1 == obj2
assert obj1 is not obj2
s = set((obj1,))
del obj1
</code></pre>
<p>Is there a general and efficient method to obtain <code>obj1</code> from <code>s</code> and <code>obj2</code> (for arbitrarily large sets <code>s</code> happening to contain an object that is equal to the one being looked up)? (It seems that constructs relying on <code>set.intersect</code>ing singleton <code>set</code>s are not reliable.)</p>
<p>If not, why?</p>
<p>The obvious alternative is to use a <code>dict</code> where each key is stored as its own value. It's not clear how much memory that approach wastes compared to the prospective <code>set</code>-based approach.</p>
| 0 | 2016-08-26T20:12:18Z | 39,174,042 | <p>No, sets are not mappings, If <code>obj1 == obj2</code>, it <em>should not matter</em>. Sets are there to test membership and to hold unique <em>values</em> (as defined by equality), not to map back to specific objects. Use a dictionary for that instead.</p>
<p>Otherwise, you'd have to iterate and pick the one object that is equal:</p>
<pre><code>obj1 = next(ob for ob in s if ob == obj2)
</code></pre>
<p>Sets in Python were borne out of the <code>dict</code> type, in that before sets were added (in 2.3 in the form of the <a href="https://docs.python.org/2/whatsnew/2.3.html#pep-218-a-standard-set-datatype" rel="nofollow"><code>sets</code> module</a>, then as a <a href="https://docs.python.org/2/whatsnew/2.4.html#pep-218-built-in-set-objects" rel="nofollow">built-in type</a>), you used dictionaries that mapped all keys to <code>None</code> to track unique values. Adding 'lookup' functionality would undo this specialisation; there is no need to add functionality to sets that dictionaries already offer, especially since dictionaries have since grown many other set features such as set algebra operations.</p>
| 7 | 2016-08-26T20:14:21Z | [
"python",
"set"
] |
Webcam is still held even after Python program ends | 39,174,086 | <p>I'm trying to use my webcam <a href="http://www.trust.com/en/product/16428-spotlight-webcam-pro" rel="nofollow">Trust Spotlight Webcam PRO</a> with Python and OpenCV and I have a problem with holding the webcam after the program ends.</p>
<p>Simple script:</p>
<pre><code>import cv2
vc = cv2.VideoCapture(1)
while True:
_, frame = vc.read()
cv2.imshow('Web cam', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
vc.release()
cv2.destroyAllWindows()
</code></pre>
<p>Everything works well when I connect the webcam to my laptop and I run the script - I see the camera image. When I stop the capturing loop by pressing 'q' key I would suppose that the <code>vc.release()</code> command releases the camera from the use. But after this first run I cannot run the script again, because this time I get the error message:</p>
<pre><code>OpenCV Error: Assertion failed (size.width>0 && size.height>0) in cv::imshow, file ..\..\..\..\opencv\modules\highgui\src\window.cpp, line 261
Traceback (most recent call last):
File ".../sample.py", line 8, in <module>
cv2.imshow('Web cam', frame)
cv2.error: ..\..\..\..\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0 in function cv::imshow
</code></pre>
<p>I am pretty sure that some process is still holding my webcam. I also cannot connect to this webcam in this time from any other program (I tried Skype). And I also get the same error when I connect the webcam to the laptop, connect to the webcam via Skype and run the script above.</p>
<p>How can I release my webcam for future use?</p>
| 0 | 2016-08-26T20:17:23Z | 39,179,842 | <p>It seems there really probably was some (for me hidden) process, what still held the camera. When I shut down my laptop and turn it back on everything works fine.</p>
| 0 | 2016-08-27T10:03:56Z | [
"python",
"opencv",
"webcam"
] |
Create a dynamic nested dictionary in python using for loop | 39,174,170 | <p>I am new to Python, and trying to learn it "on the job". And I am required to do this.</p>
<p>Is it possible to create a 'dictionary1' dynamically which takes another 'dictionary2' as value, where 'dictionary2' is also getting updated in every for loop. Basically in terms of code, I tried:</p>
<pre><code>fetch_value = range(5) #random list of values (not in sequence)
result = {} #dictionary2
ret = {} #dictionary1
list1 = [0, 1] #this list is actually variable length, ranging from 1 to 10 (assumed len(list1) = 2 for example purpose only)
for idx in list1:
result[str(idx)] = float(fetch_value[1])
ret['key1'] = (result if len(list1) > 1 else float(fetch_value[1])) # key names like 'key1' are just for representations, actual names vary
result[str(idx)] = float(fetch_value[2])
ret['key2'] = (result if len(list1) > 1 else float(fetch_value[2]))
result[str(idx)] = float(fetch_value[3])
ret['key3'] = (result if len(list1) > 1 else float(fetch_value[3]))
result[str(idx)] = float(fetch_value[4])
ret['key4'] = (result if len(list1) > 1 else float(fetch_value[4]))
print ret
</code></pre>
<p>This outputs to:</p>
<pre><code>{'key1': {'0': 4, '1': 4}, 'key2': {'0': 4, '1': 4}, 'key3': {'0': 4, '1': 4}, 'key4': {'0': 4, '1': 4}}
</code></pre>
<p>What I need:</p>
<pre><code>{'key1': {'0': 1, '1': 1}, 'key2': {'0': 2, '1': 2}, 'key3': {'0': 3, '1': 3}, 'key4': {'0': 4, '1': 4}}
</code></pre>
<p>Anything obvious I am doing wrong here?</p>
| 3 | 2016-08-26T20:25:24Z | 39,174,455 | <p>There are two problems:</p>
<ol>
<li>You needed to create a copy of the result dictionary when you set a key in <code>ret</code> to it. Otherwise, it will always hold a reference back to the same dictionary.</li>
<li>With that change, you would be keeping the last <code>result</code> dictionary (containing <code>{'0': 4}</code>) at the beginning of your second loop, and that would get copied to all of the keys.</li>
</ol>
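<p>Point 1 is easy to see in isolation: assigning a dictionary stores a reference, not a snapshot, so later mutations show through everywhere unless you copy. A sketch:</p>

```python
result = {}
ret = {}
result['0'] = 1.0
ret['key1'] = result           # stores a reference to result ...
result['0'] = 4.0              # ... so ret['key1'] sees this change too

result2 = {}
ret2 = {}
result2['0'] = 1.0
ret2['key1'] = dict(result2)   # shallow copy: snapshot of current contents
result2['0'] = 4.0             # ret2['key1'] is unaffected

print(ret['key1']['0'], ret2['key1']['0'])  # -> 4.0 1.0
```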
<p>A more concise way to do this would be a dictionary comprehension:</p>
<pre><code>fetch_value = range(5)
list1 = [0, 1]
print {
'key{}'.format(i): {
str(list_item): float(fetch_value[i]) for list_item in list1
} if len(list1) > 1 else float(fetch_value[i])
for i in xrange(1, 5)
}
</code></pre>
<p>Output:</p>
<pre><code>{
'key3': {'1': 3.0, '0': 3.0},
'key2': {'1': 2.0, '0': 2.0},
'key1': {'1': 1.0, '0': 1.0},
'key4': {'1': 4.0, '0': 4.0}
}
</code></pre>
<p>And with <code>list1 = [0]</code>, where it seems you want a float value instead of a dictionary, the output would be:</p>
<pre><code>{'key3': 3.0, 'key2': 2.0, 'key1': 1.0, 'key4': 4.0}
</code></pre>
| 2 | 2016-08-26T20:50:49Z | [
"python",
"for-loop",
"dictionary",
"nested",
"dictionary-comprehension"
] |
Map pandas dataframe on multiple keys as columns or multiIndex | 39,174,255 | <p>Setup: two pandas dataframes; data from df2 needs to be added to df1, as explained below:</p>
<ul>
<li>df1 and df2 are multiIndexed with the same four levels</li>
<li>df1 contains more rows than df2</li>
<li>df1 has three copies (in rows) of a value per unique combination of three out of the four levels of the index; that is, each row differs only with respect to the 4th level</li>
<li>df2 only partially aligns with df1 on the other 3 levels (df2 contains extraneous rows)</li>
<li>df2 contains only one column</li>
</ul>
<p>I want to add values from the one column of df2 to all three copies of the rows in df1 where the three corresponding levels match.</p>
<p>Having learned that 'merging with more than one level overlap on a multiIndex is not implemented' in pandas, I propose to map the values, but have not found a way to map on (multiple) index levels, or multiple columns, if reset index levels to columns:</p>
<pre><code>df1 = pd.DataFrame(np.array([['Dec', 'NY', 'Ren', 'Q1', 10],
['Dec', 'NY', 'Ren', 'Q2', 12],
['Dec', 'NY', 'Ren', 'Q3', 14],
['Dec', 'FL', 'Mia', 'Q1', 6],
['Dec', 'FL', 'Mia', 'Q2', 8],
['Dec', 'FL', 'Mia', 'Q3', 17],
['Apr', 'CA', 'SC', 'Q1', 1],
['Apr', 'CA', 'SC', 'Q2', 2],
['Apr', 'CA', 'SC', 'Q3', 3]]), columns=['Date', 'State', 'County', 'Quarter', 'x'])
df1.set_index(['Date', 'State', 'County', 'Quarter'], inplace=True)
df2 = pd.DataFrame(np.array([['Dec', 'NY', 'Ren', 0.4],
['Dec', 'FL', 'Mia', 0.3]]), columns=['Date', 'State', 'County', 'y'])
df2.set_index(['Date', 'State', 'County'], inplace=True)
df_combined = df1['Date', 'State', 'County'].map(df2)
</code></pre>
| 0 | 2016-08-26T20:32:11Z | 39,182,675 | <p>You can temporarily reset part of <code>df1</code>'s index to do the join:</p>
<pre><code>df_combined = df1.reset_index(3).join(df2,how='left')
>>> df_combined
level_3 x y
Apr CA SC Q1 1 NaN
SC Q2 2 NaN
SC Q3 3 NaN
Dec FL Mia Q1 6 0.3
Mia Q2 8 0.3
Mia Q3 17 0.3
NY Ren Q1 10 0.4
Ren Q2 12 0.4
Ren Q3 14 0.4
df_combined.set_index('level_3',append=True, inplace=True)
df_combined.index.rename(None,3,inplace=True)
>>> df_combined
x y
Apr CA SC Q1 1 NaN
Q2 2 NaN
Q3 3 NaN
Dec FL Mia Q1 6 0.3
Q2 8 0.3
Q3 17 0.3
NY Ren Q1 10 0.4
Q2 12 0.4
Q3 14 0.4
</code></pre>
<p>The reset_index method is used to temporarily turn the index that isn't in <code>df2</code> into a column so that you can do a normal join. Then turn the column back into an index when you're done.</p>
| 1 | 2016-08-27T15:18:29Z | [
"python",
"pandas"
] |
Python: OLS Regression does not generate intercept | 39,174,317 | <p>Can someone please tell me what I'm missing, as the summary output does not include the constant at all even though I have explicitly added it? My df is 6212 rows × 64 columns. Thanks much.</p>
<pre><code>import statsmodels.api as sm
from statsmodels.api import add_constant
y1 = df.ix[:,-1:]
x1 = df.ix[:,16:-1]
x1 = add_constant(x1)
model1 = sm.OLS(y1 , x1 ).fit()
model1.summary()
</code></pre>
| 1 | 2016-08-26T20:38:06Z | 39,174,452 | <p>Check your data to see if it already has a column with variance zero. <code>add_constant()</code> will not, by default, add a constant column to your dataset if it already has a zero-variance column; you should explicitly tell it to add the constant even if a zero-variance column exists:</p>
<pre><code>x1 = add_constant(x1, has_constant = 'add')
</code></pre>
<p>You can read more about different options for the <code>has_constant</code> argument here: <a href="http://statsmodels.sourceforge.net/stable/generated/statsmodels.tsa.tsatools.add_constant.html" rel="nofollow">http://statsmodels.sourceforge.net/stable/generated/statsmodels.tsa.tsatools.add_constant.html</a></p>
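<p>The skip-versus-add behaviour can be sketched without statsmodels itself. The following is a simplified assumption of the logic, not the library's actual code: a column with zero variance already counts as a constant, and the ones column is only prepended when none is found or when <code>has_constant='add'</code> forces it.</p>

```python
def add_constant_sketch(columns, has_constant='skip'):
    # columns: list of equal-length value lists (one list per column).
    # Simplified model of statsmodels' add_constant behaviour.
    already_has = any(len(set(col)) == 1 for col in columns)
    if already_has and has_constant == 'skip':
        return columns                       # default: leave data unchanged
    return [[1.0] * len(columns[0])] + columns

x = [[5.0, 5.0, 5.0],   # zero-variance column: acts as an existing constant
     [1.0, 2.0, 3.0]]
print(len(add_constant_sketch(x)))           # skip: still 2 columns
print(len(add_constant_sketch(x, 'add')))    # forced: 3 columns
```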
| 1 | 2016-08-26T20:50:27Z | [
"python",
"regression"
] |
can't invoke "canvas" command: application has been destroyed | 39,174,438 | <p>When I typed the following code as per the <a href="http://greenteapress.com/wp/think-python/" rel="nofollow">Think Python</a> text book, I'm getting the error message below.</p>
<p>The window does actually get displayed, but it doesn't contain the desired content.</p>
<pre><code>from swampy.World import World
world=World()
world.mainloop()
canvas = world.ca(width=500, height=500, background='white')
bbox = [[-150,-100], [150, 100]]
canvas.rectangle(bbox, outline='black', width=2, fill='green4')
</code></pre>
<p>The error message was like this:</p>
<pre><code>Traceback (most recent call last):
File "15.4.py", line 4, in <module>
canvas = world.ca(width=500, height=500, background='white')
File "/usr/local/lib/python2.7/dist-packages/swampy/Gui.py", line 244, in ca
return self.widget(GuiCanvas, width=width, height=height, **options)
File "/usr/local/lib/python2.7/dist-packages/swampy/Gui.py", line 359, in widget
widget = constructor(self.frame, **widopt)
File "/usr/local/lib/python2.7/dist-packages/swampy/Gui.py", line 612, in __init__
Tkinter.Canvas.__init__(self, w, **options)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 2234, in __init__
Widget.__init__(self, master, 'canvas', cnf, kw)
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 2094, in __init__
(widgetName, self._w) + extra + self._options(cnf))
_tkinter.TclError: can't invoke "canvas" command: application has been destroyed
</code></pre>
| 1 | 2016-08-26T20:49:33Z | 39,174,522 | <p>The main application loop needs to be pretty much the last thing you run in your application. So move <code>world.mainloop()</code> to the end of your code like this:</p>
<pre><code>from swampy.World import World
world = World()
canvas = world.ca(width=500, height=500, background='white')
bbox = [[-150, -100], [150, 100]]
canvas.rectangle(bbox, outline='black', width=2, fill='green4')
world.mainloop()
</code></pre>
<p>What happens in your code is that when the line with <code>world.mainloop()</code> is hit, it builds the user interface elements and goes in to the main loop, which continuously provides your application with user input.</p>
<p>During it's lifetime, that main loop is where your application will spend 99% of its time.</p>
<p>But once you quit your application, the main loop terminates and tears down all those user interface elements and the world. Then the remaining lines after the main loop will be executed. In those, you're trying to build and attach a canvas to a world that has already been destroyed, hence the error message.</p>
| 3 | 2016-08-26T20:56:47Z | [
"python",
"python-2.7",
"python-3.x"
] |
How to make multiple windows through a loop? | 39,174,446 | <p>I'm a beginner python coder and I'm trying to make a GUI where you can enter information for multiple semesters. Once the user inputs the number of semesters, I want to ask about each semester individually but when I use a loop to do this, all of the windows open at once. Is there a way to put them into a sequence?</p>
<p>this is what I have so far</p>
<pre><code>def createSemesterWin(self, numSemesters):
for x in range(numSemesters):
semesterWin = Toplevel()
semesterg = SemesterGUI(semesterWin, x+1)
semesterWin.mainloop
</code></pre>
| 1 | 2016-08-26T20:49:54Z | 39,205,399 | <p>Something like this should get you on your way:</p>
<pre><code>from Tkinter import *
## Define root and geometry
root = Tk()
root.geometry('200x200')
# Define Frames
win1, win2, win3 = Frame(root, bg='red'), Frame(root, bg='green'), Frame(root, bg='blue')
# Configure Rows
root.grid_rowconfigure(0, weight = 1)
root.grid_columnconfigure(0, weight = 1)
# Place Frames
for window in [win1, win2, win3]:
window.grid(row=0, column = 0, sticky = 'news')
# Raises first window 'To the top'
win1.tkraise()
# Function to raise 'window' to the top
def raise_frame(window):
window.tkraise()
# Page1 label / button
l1 = Label(win1, text = 'This is Page1').pack(side=TOP)
P1 = Button(win1, text = 'Next Page', command = lambda:raise_frame(win2)).pack(side=BOTTOM)
# Page2 label / button
l2 = Label(win2, text = 'This is Page2').pack(side=TOP)
p2 = Button(win2, text = 'Next Page', command = lambda:raise_frame(win3)).pack(side=BOTTOM)
# Page3 label / button
l3 = Label(win3, text = 'This is Page3').pack(side=TOP)
p3 = Button(win3, text = 'Next Page', command = lambda:raise_frame(win1)).pack(side=BOTTOM)
root.mainloop()
</code></pre>
<p>Let me know what you think!</p>
| 0 | 2016-08-29T11:38:18Z | [
"python",
"tkinter"
] |
Merge Dictionaries Preserving Old Keys and New Values | 39,174,616 | <p>I'm writing a Python script that parses RSS feeds. I want to maintain a dictionary of entries from the feed that gets updated periodically. Entries that no longer exist in the feed should be removed, new entries should get a default value, and the values for previously seen entries should remain unchanged.</p>
<p>This is best explained by example, I think:</p>
<pre><code>>>> old = {
... 'a': 1,
... 'b': 2,
... 'c': 3
... }
>>> new = {
... 'c': 'x',
... 'd': 'y',
... 'e': 'z'
... }
>>> out = some_function(old, new)
>>> out
{'c': 3, 'd': 'y', 'e': 'z'}
</code></pre>
<p>Here's my current attempt at this:</p>
<pre><code>def merge_preserving_old_values_and_new_keys(old, new):
out = {}
for k, v in new.items():
out[k] = v
for k, v in old.items():
if k in out:
out[k] = v
return out
</code></pre>
<p>This works, but it seems to me there might be a better or more clever way.</p>
<p>EDIT: If you feel like testing your function:</p>
<pre><code>def my_merge(old, new):
pass
old = {'a': 1, 'b': 2, 'c': 3}
new = {'c': 'x', 'd': 'y', 'e': 'z'}
out = my_merge(old, new)
assert out == {'c': 3, 'd': 'y', 'e': 'z'}
</code></pre>
<p>EDIT 2:
Defining Martijn Pieters' answer as <code>set_merge</code>, bravosierra99's as <code>loop_merge</code>, and my first attempt as <code>orig_merge</code>, I get the following timing results:</p>
<pre><code>>>> setup="""
... old = {'a': 1, 'b': 2, 'c': 3}
... new = {'c': 'x', 'd': 'y', 'e': 'z'}
... from __main__ import set_merge, loop_merge, orig_merge
... """
>>> timeit.timeit('set_merge(old, new)', setup=setup)
3.4415210600000137
>>> timeit.timeit('loop_merge(old, new)', setup=setup)
1.161155690000669
>>> timeit.timeit('orig_merge(old, new)', setup=setup)
1.1776735319999716
</code></pre>
<p>I find this surprising, since I didn't expect the dictionary view approach to be that much slower.</p>
| 1 | 2016-08-26T21:06:17Z | 39,174,642 | <p>This should be more efficient, since you are no longer iterating through the entire old.items(). Additionally, it's clearer what you are trying to do this way, since you aren't overwriting some values.</p>
<pre><code>def merge(old, new):
    out = {}
    for k, v in new.items():
        if k in old:
            out[k] = old[k]
        else:
            out[k] = v
    return out
</code></pre>
| 2 | 2016-08-26T21:09:23Z | [
"python",
"dictionary"
] |
Merge Dictionaries Preserving Old Keys and New Values | 39,174,616 | <p>I'm writing a Python script that parses RSS feeds. I want to maintain a dictionary of entries from the feed that gets updated periodically. Entries that no longer exist in the feed should be removed, new entries should get a default value, and the values for previously seen entries should remain unchanged.</p>
<p>This is best explained by example, I think:</p>
<pre><code>>>> old = {
... 'a': 1,
... 'b': 2,
... 'c': 3
... }
>>> new = {
... 'c': 'x',
... 'd': 'y',
... 'e': 'z'
... }
>>> out = some_function(old, new)
>>> out
{'c': 3, 'd': 'y', 'e': 'z'}
</code></pre>
<p>Here's my current attempt at this:</p>
<pre><code>def merge_preserving_old_values_and_new_keys(old, new):
out = {}
for k, v in new.items():
out[k] = v
for k, v in old.items():
if k in out:
out[k] = v
return out
</code></pre>
<p>This works, but it seems to me there might be a better or more clever way.</p>
<p>EDIT: If you feel like testing your function:</p>
<pre><code>def my_merge(old, new):
pass
old = {'a': 1, 'b': 2, 'c': 3}
new = {'c': 'x', 'd': 'y', 'e': 'z'}
out = my_merge(old, new)
assert out == {'c': 3, 'd': 'y', 'e': 'z'}
</code></pre>
<p>EDIT 2:
Defining Martijn Pieters' answer as <code>set_merge</code>, bravosierra99's as <code>loop_merge</code>, and my first attempt as <code>orig_merge</code>, I get the following timing results:</p>
<pre><code>>>> setup="""
... old = {'a': 1, 'b': 2, 'c': 3}
... new = {'c': 'x', 'd': 'y', 'e': 'z'}
... from __main__ import set_merge, loop_merge, orig_merge
... """
>>> timeit.timeit('set_merge(old, new)', setup=setup)
3.4415210600000137
>>> timeit.timeit('loop_merge(old, new)', setup=setup)
1.161155690000669
>>> timeit.timeit('orig_merge(old, new)', setup=setup)
1.1776735319999716
</code></pre>
<p>I find this surprising, since I didn't expect the dictionary view approach to be that much slower.</p>
| 1 | 2016-08-26T21:06:17Z | 39,174,645 | <p>Dictionaries have <a href="https://docs.python.org/2/library/stdtypes.html#dictionary-view-objects" rel="nofollow"><em>dictionary view objects</em></a> that act as sets. Use these to get the intersection between old and new:</p>
<pre><code>def merge_preserving_old_values_and_new_keys(old, new):
result = new.copy()
result.update((k, old[k]) for k in old.viewkeys() & new.viewkeys())
return result
</code></pre>
<p>The above uses the Python 2 syntax; use <code>old.keys() & new.keys()</code> if you are using Python 3, for the same results:</p>
<pre><code>def merge_preserving_old_values_and_new_keys(old, new):
# Python 3 version
result = new.copy()
result.update((k, old[k]) for k in old.keys() & new.keys())
return result
</code></pre>
<p>The above takes all key-value pairs from <code>new</code> as a starting point, then adds the values for <code>old</code> for any key that appears in both.</p>
<p>Demo:</p>
<pre><code>>>> merge_preserving_old_values_and_new_keys(old, new)
{'c': 3, 'e': 'z', 'd': 'y'}
</code></pre>
<p>Note that the function, like your version, produces a new dictionary (albeit that the key and value objects are shared; it is a shallow copy).</p>
<p>You could also just update the new dictionary in-place if you don't need that new dictionary for anything else:</p>
<pre><code>def merge_preserving_old_values_and_new_keys(old, new):
new.update((k, old[k]) for k in old.viewkeys() & new.viewkeys())
return new
</code></pre>
<p>You could also use a one-liner dict comprehension to build a new dictionary:</p>
<pre><code>def merge_preserving_old_values_and_new_keys(old, new):
return {k: old[k] if k in old else v for k, v in new.items()}
</code></pre>
| 4 | 2016-08-26T21:09:33Z | [
"python",
"dictionary"
] |
Merge Dictionaries Preserving Old Keys and New Values | 39,174,616 | <p>I'm writing a Python script that parses RSS feeds. I want to maintain a dictionary of entries from the feed that gets updated periodically. Entries that no longer exist in the feed should be removed, new entries should get a default value, and the values for previously seen entries should remain unchanged.</p>
<p>This is best explained by example, I think:</p>
<pre><code>>>> old = {
... 'a': 1,
... 'b': 2,
... 'c': 3
... }
>>> new = {
... 'c': 'x',
... 'd': 'y',
... 'e': 'z'
... }
>>> out = some_function(old, new)
>>> out
{'c': 3, 'd': 'y', 'e': 'z'}
</code></pre>
<p>Here's my current attempt at this:</p>
<pre><code>def merge_preserving_old_values_and_new_keys(old, new):
out = {}
for k, v in new.items():
out[k] = v
for k, v in old.items():
if k in out:
out[k] = v
return out
</code></pre>
<p>This works, but it seems to me there might be a better or more clever way.</p>
<p>EDIT: If you feel like testing your function:</p>
<pre><code>def my_merge(old, new):
pass
old = {'a': 1, 'b': 2, 'c': 3}
new = {'c': 'x', 'd': 'y', 'e': 'z'}
out = my_merge(old, new)
assert out == {'c': 3, 'd': 'y', 'e': 'z'}
</code></pre>
<p>EDIT 2:
Defining Martijn Pieters' answer as <code>set_merge</code>, bravosierra99's as <code>loop_merge</code>, and my first attempt as <code>orig_merge</code>, I get the following timing results:</p>
<pre><code>>>> setup="""
... old = {'a': 1, 'b': 2, 'c': 3}
... new = {'c': 'x', 'd': 'y', 'e': 'z'}
... from __main__ import set_merge, loop_merge, orig_merge
... """
>>> timeit.timeit('set_merge(old, new)', setup=setup)
3.4415210600000137
>>> timeit.timeit('loop_merge(old, new)', setup=setup)
1.161155690000669
>>> timeit.timeit('orig_merge(old, new)', setup=setup)
1.1776735319999716
</code></pre>
<p>I find this surprising, since I didn't expect the dictionary view approach to be that much slower.</p>
| 1 | 2016-08-26T21:06:17Z | 39,174,719 | <pre><code>old = {
'a': 1,
'b': 2,
'c': 3
}
new = {
'c': 'x',
'd': 'y',
'e': 'z'
}
def merge_preserving_old_values_and_new_keys(o, n):
out = {}
for k in n:
if k in o:
out[k] = o[k]
else:
out[k] = n[k]
return out
print merge_preserving_old_values_and_new_keys(old, new)
</code></pre>
| 0 | 2016-08-26T21:15:37Z | [
"python",
"dictionary"
] |
Merge Dictionaries Preserving Old Keys and New Values | 39,174,616 | <p>I'm writing a Python script that parses RSS feeds. I want to maintain a dictionary of entries from the feed that gets updated periodically. Entries that no longer exist in the feed should be removed, new entries should get a default value, and the values for previously seen entries should remain unchanged.</p>
<p>This is best explained by example, I think:</p>
<pre><code>>>> old = {
... 'a': 1,
... 'b': 2,
... 'c': 3
... }
>>> new = {
... 'c': 'x',
... 'd': 'y',
... 'e': 'z'
... }
>>> out = some_function(old, new)
>>> out
{'c': 3, 'd': 'y', 'e': 'z'}
</code></pre>
<p>Here's my current attempt at this:</p>
<pre><code>def merge_preserving_old_values_and_new_keys(old, new):
out = {}
for k, v in new.items():
out[k] = v
for k, v in old.items():
if k in out:
out[k] = v
return out
</code></pre>
<p>This works, but it seems to me there might be a better or more clever way.</p>
<p>EDIT: If you feel like testing your function:</p>
<pre><code>def my_merge(old, new):
pass
old = {'a': 1, 'b': 2, 'c': 3}
new = {'c': 'x', 'd': 'y', 'e': 'z'}
out = my_merge(old, new)
assert out == {'c': 3, 'd': 'y', 'e': 'z'}
</code></pre>
<p>EDIT 2:
Defining Martijn Pieters' answer as <code>set_merge</code>, bravosierra99's as <code>loop_merge</code>, and my first attempt as <code>orig_merge</code>, I get the following timing results:</p>
<pre><code>>>> setup="""
... old = {'a': 1, 'b': 2, 'c': 3}
... new = {'c': 'x', 'd': 'y', 'e': 'z'}
... from __main__ import set_merge, loop_merge, orig_merge
... """
>>> timeit.timeit('set_merge(old, new)', setup=setup)
3.4415210600000137
>>> timeit.timeit('loop_merge(old, new)', setup=setup)
1.161155690000669
>>> timeit.timeit('orig_merge(old, new)', setup=setup)
1.1776735319999716
</code></pre>
<p>I find this surprising, since I didn't expect the dictionary view approach to be that much slower.</p>
| 1 | 2016-08-26T21:06:17Z | 39,175,047 | <p>I'm not 100% sure this is the best way to add this information to the discussion; feel free to edit/redistribute it if necessary.</p>
<p>Here are timing results for all of the methods discussed here.</p>
<pre><code>from timeit import timeit
def loop_merge(old, new):
out = {}
for k, v in new.items():
if k in old:
out[k] = old[k]
else:
out[k] = v
return out
def set_merge(old, new):
out = new.copy()
out.update((k, old[k]) for k in old.keys() & new.keys())
return out
def comp_merge(old, new):
return {k: old[k] if k in old else v for k, v in new.items()}
def orig_merge(old, new):
out = {}
for k, v in new.items():
out[k] = v
for k, v in old.items():
if k in out:
out[k] = v
return out
old = {'a': 1, 'b': 2, 'c': 3}
new = {'c': 'x', 'd': 'y', 'e': 'z'}
out = {'c': 3, 'd': 'y', 'e': 'z'}
assert loop_merge(old, new) == out
assert set_merge(old, new) == out
assert comp_merge(old, new) == out
assert orig_merge(old, new) == out
setup = """
from __main__ import old, new, loop_merge, set_merge, comp_merge, orig_merge
"""
for a in ['loop', 'set', 'comp', 'orig']:
time = timeit('{}_merge(old, new)'.format(a), setup=setup)
print('{}: {}'.format(a, time))
size = 10**4
large_old = {i: 'old' for i in range(size)}
large_new = {i: 'new' for i in range(size//2, size)}
setup = """
from __main__ import large_old, large_new, loop_merge, set_merge, comp_merge, orig_merge
"""
for a in ['loop', 'set', 'comp', 'orig']:
time = timeit('{}_merge(large_old, large_new)'.format(a), setup=setup)
print('{}: {}'.format(a, time))
</code></pre>
<p>The winner is the improved looping method!</p>
<pre><code>$ python3 merge.py
loop: 0.7791572390015062 # small dictionaries
set: 3.1920828100010112
comp: 1.1180207730030816
orig: 1.1681104259987478
loop: 927.2149353210007 # large dictionaries
set: 1696.8342713210004
comp: 902.039078668
orig: 1373.0389542560006
</code></pre>
<p>I'm disappointed, because the dictionary view/set operation method is much cooler.</p>
<p>With larger dictionaries (10^4 items), the dictionary comprehension method pulls ahead of the improved looping method and far ahead of the original method. The set operation method still performs the slowest.</p>
| 0 | 2016-08-26T21:47:17Z | [
"python",
"dictionary"
] |
python printing with os.startfile on windows | 39,174,681 | <p>I tried printing an image with </p>
<pre><code>os.startfile("pathroimage", "print")
</code></pre>
<p>It opens the Windows printing dialog. How can I print images in the background?
Is there another way to do printing with Python?</p>
<p>Regards.</p>
| 1 | 2016-08-26T21:12:09Z | 39,283,123 | <p>On Windows I use win32.</p>
<p>Check this: <a href="http://timgolden.me.uk/python/win32_how_do_i/print.html" rel="nofollow">http://timgolden.me.uk/python/win32_how_do_i/print.html</a></p>
<p>or this: <a href="http://nullege.com/codes/show/src@c@o@colony_plugins-HEAD@printing@src@printing@win32@system.py" rel="nofollow">http://nullege.com/codes/show/src@c@o@colony_plugins-HEAD@printing@src@printing@win32@system.py</a></p>
<p>A few days ago I was printing images this way; the exact size and correct position were important for me.</p>
| 0 | 2016-09-02T01:35:55Z | [
"python",
"printing"
] |
label a point in graph using matplotlib for timeseries | 39,174,694 | <p>I have a pandas dataframe with 3 columns.
I plot col1 on the Y axis and a time_stamps series on the X axis.
For this series, whenever col2 is -1 I want to highlight that point on the graph as an anomaly. I tried to get the coordinate and highlight it using ax.text, but I cannot get the correct coordinate since the X axis is a time series. In the example below I am trying to label the third row's coordinates, since col2[2]==-1. </p>
<pre><code>import pandas
import matplotlib.pyplot as plt
df=df[["time_stamps","col1"]]
df.set_index("time_stamps",inplace=True)
ax=df.plot()
ticklabels = [l.get_text() for l in ax.xaxis.get_ticklabels()]
new_labels=[tick[-6:] for tick in ticklabels]
ax.xaxis.set_ticklabels(new_labels)
x1="16965 days 17:52:03"
y1=0.7
ax.text(x1, y1, "anomaly", fontsize=15)
plt.show()
</code></pre>
<p>Sample data looks like </p>
<pre><code>time_stamp=[16965 days 17:52:00,16965 days 17:52:02
16965 days 17:52:03,16965 days 17:52:05
16965 days 17:52:06,16965 days 17:52:08
16965 days 17:52:09,16965 days 17:52:11
16965 days 17:52:12,16965 days 17:52:14]
col1=[0.02,0.01,0.7,0.019,0.019,0.017,0.023,0.04,0.072,0.05]
col2=[1,1,-1,1,1,1,1,1,1,1]
</code></pre>
| 0 | 2016-08-26T21:13:19Z | 39,175,835 | <p>I figured out that I can convert it to a number of days and then label the points as anomalies. This is what I did.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def changetotimedelta(row):
    return pd.to_timedelta(row["time_stamps"]) / np.timedelta64(1, 'D')

def main():
    df = pd.read_csv(inputFile)
    df["time"] = df.apply(changetotimedelta, axis=1)
    new_df = df[["time", "col1"]]
    new_df.set_index("time", inplace=True)
    ax = new_df.plot()
    x1 = pd.to_timedelta("16965 days 17:52:03") / np.timedelta64(1, 'D')
    y1 = 0.7
    ax.annotate('anomaly', xy=(x1, y1), xytext=(x1, 1),
                arrowprops=dict(facecolor='red', shrink=0.01))
    plt.show()
</code></pre>
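<p>The key trick above is the division by <code>np.timedelta64(1, 'D')</code>, which turns a <code>Timedelta</code> into a plain float number of days, so the data and the annotation coordinate share one numeric axis:</p>

```python
import numpy as np
import pandas as pd

# One and a half days expressed as a Timedelta becomes the float 1.5.
days = pd.to_timedelta("1 days 12:00:00") / np.timedelta64(1, 'D')
print(days)  # 1.5
```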
| 1 | 2016-08-26T23:16:45Z | [
"python",
"pandas",
"matplotlib",
"dataframe",
"time-series"
] |
Reading file as array | 39,174,768 | <p>I have a <em>csv</em> file which has three columns of numbers up to three digits each:</p>
<pre><code>1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
10 0 0
11 0 0
</code></pre>
<p>I want to read the columns separately and be able to use them as arrays with:</p>
<pre><code>data = np.loadtxt('file.csv')
x = data[:, 0]
y = data[:, 1]
</code></pre>
<p>But I'm getting:</p>
<pre><code>X = np.array(X, dtype)
ValueError: setting an array element with a sequence.
</code></pre>
<p>If instead I use the line <code>x,y = np.loadtxt('Beamprofile.txt', usecols=(0,1), unpack=True)</code> the error disappears but <em>x</em> and <em>y</em> don't seem to be read correctly in further operations.</p>
| 0 | 2016-08-26T21:19:29Z | 39,175,474 | <p>The simplest solution could be to use the pandas module; it lets you use column values as a numpy array, which you can easily convert to a list.</p>
<pre><code>import pandas as pd

df = pd.read_csv("name_of_the_csv.csv")
# For one column, let's say the 1st one
column_1_list = df[df.columns[0]].values
</code></pre>
<p>If you want to access all the columns, you can use a for loop as shown below:</p>
<pre><code>df = pd.read_csv("name_of_the_csv.csv")
for col in df.columns:
    column_list = df[col].values
</code></pre>
| 0 | 2016-08-26T22:31:05Z | [
"python",
"arrays",
"numpy"
] |
Reading file as array | 39,174,768 | <p>I have a <em>csv</em> file which has three columns of numbers up to three digits each:</p>
<pre><code>1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
10 0 0
11 0 0
</code></pre>
<p>I want to read the columns separately and be able to use them as arrays with:</p>
<pre><code>data = np.loadtxt('file.csv')
x = data[:, 0]
y = data[:, 1]
</code></pre>
<p>But I'm getting:</p>
<pre><code>X = np.array(X, dtype)
ValueError: setting an array element with a sequence.
</code></pre>
<p>If instead I use the line <code>x,y = np.loadtxt('Beamprofile.txt', usecols=(0,1), unpack=True)</code> the error disappears but <em>x</em> and <em>y</em> don't seem to be read correctly in further operations.</p>
| 0 | 2016-08-26T21:19:29Z | 39,175,524 | <p>With your sample:</p>
<pre><code>In [1]: data=np.loadtxt('stack39174768.txt')
In [2]: data
Out[2]:
array([[ 1., 0., 0.],
[ 2., 0., 0.],
[ 3., 0., 0.],
[ 4., 0., 0.],
[ 5., 0., 0.],
[ 6., 0., 0.],
[ 7., 0., 0.],
[ 8., 0., 0.],
[ 9., 0., 0.],
[ 10., 0., 0.],
[ 11., 0., 0.]])
In [3]: x=data[:,0]
In [4]: y=data[:,1]
In [5]: x
Out[5]: array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.])
In [6]: y
Out[6]: array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
</code></pre>
<p>Where's the problem? Is there something about the file that we're not seeing in the clip?</p>
<pre><code>In [8]: x,y = np.loadtxt('stack39174768.txt', usecols=(0,1), unpack=True)
</code></pre>
<p>produces the same <code>x</code> and <code>y</code>.</p>
<p>I asked about shape and dtype</p>
<pre><code>In [11]: data.shape
Out[11]: (11, 3)
In [12]: data.dtype
Out[12]: dtype('float64')
</code></pre>
<p>I like <code>np.genfromtxt</code> a little better, but in this case it produces the same <code>data</code>.</p>
| 0 | 2016-08-26T22:37:57Z | [
"python",
"arrays",
"numpy"
] |
Read data from large tar.gz file from the website | 39,174,778 | <p>1) How should I read the data from all the csv files in the tar.gz file on the website and write them to CSVs in a folder in the most memory- and space-efficient way?
2) How can I loop it to go over all the CSVs in the tar.gz file?
3) Since the CSV files are huge, how can I loop it to read and write, let's say, 1 million rows at a time?</p>
<p>I have gone only so far using the codes on other stackoverflow answers!</p>
<pre><code>import pandas as pd
import urllib2
import tarfile
url='https://ghtstorage.blob.core.windows.net/downloads/mysql-2016-08-01.tar.gz'
r=urllib2.Request(url)
o=urllib2.urlopen(r)
thetarfile = tarfile.open(fileobj=o, mode='r|gz')  # streaming mode; urlopen objects are not seekable
thetarfile.close()
</code></pre>
| 0 | 2016-08-26T21:20:32Z | 39,176,276 | <ol>
<li>Download an archive to your local storage.</li>
<li>Display the list of files in the archive. Run <strong>man tar</strong> to see options for command line.</li>
<li>Extract files one by one from the archive.</li>
<li>Use SAX xml parser <a href="https://docs.python.org/2/library/xml.sax.reader.html" rel="nofollow">https://docs.python.org/2/library/xml.sax.reader.html</a>.</li>
<li>Remove file after parsing.</li>
<li>Remove the archive.</li>
</ol>
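<p>For the CSV members, steps 3 and 5 can be combined by streaming each member straight out of the archive with the standard <code>tarfile</code> module, so no member is ever fully loaded into memory. This is only a sketch under assumed names; the chunk size mirrors the "1 million rows at a time" from the question:</p>

```python
import tarfile

def iter_csv_chunks(archive_path, chunk_rows=1000000):
    # Stream every .csv member of a local tar.gz without extracting it to disk.
    with tarfile.open(archive_path, mode="r:gz") as tar:
        for member in tar:
            if not (member.isfile() and member.name.endswith(".csv")):
                continue
            stream = tar.extractfile(member)
            chunk = []
            for raw_line in stream:
                chunk.append(raw_line.decode("utf-8"))
                if len(chunk) >= chunk_rows:
                    yield member.name, chunk
                    chunk = []
            if chunk:  # flush whatever is left at end of file
                yield member.name, chunk
```

<p>Each yielded chunk can then be appended to a CSV in a folder (or handed to pandas) before the next chunk is read.</p>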
| -1 | 2016-08-27T00:22:08Z | [
"python",
"python-2.7",
"csv",
"urllib2",
"tar"
] |
how np.argsort works in pandas data frame | 39,174,810 | <p>I have a pandas dataframe named "index" like</p>
<pre><code>tz
521.0
Africa/Cairo 3.0
Africa/Casablanca 1.0
Africa/Ceuta 2.0
Africa/Johannesburg 1.0
dtype: float64
</code></pre>
<p>When I applied <code>index.argsort()</code> I got something like this</p>
<pre><code>tz
2
Africa/Cairo 4
Africa/Casablanca 3
Africa/Ceuta 1
Africa/Johannesburg 0
dtype: int64
</code></pre>
<p>can someone explain how the number "2,4,3,1,0" came? i know there are index range from 0 to 4 but i can't think of a logic in there order.</p>
| 0 | 2016-08-26T21:23:46Z | 39,175,107 | <p><code>argsort</code> returns the index positions of the values being sorted if they were to be sorted. Keep in mind that this is a numpy function and its assignment to series or dataframe indices is erroneous.</p>
<ul>
<li><code>2</code> refers to the item in the <code>2</code> position (3rd) was the minimum
<ul>
<li>this was <code>1.0</code> </li>
</ul></li>
<li><code>4</code> refers to the item in the <code>4</code> position (5th) was next
<ul>
<li>also <code>1.0</code></li>
</ul></li>
<li><code>3</code> (4th position) was a <code>2.0</code></li>
<li><code>1</code> (2nd position) was a <code>3.0</code></li>
<li><code>0</code> (1st position) was a <code>521.0</code> and the maximum</li>
</ul>
<hr>
<p>It's more appropriate to assign to an array and use as a slice</p>
<pre><code>a = s.values.argsort()
s.iloc[a]
tz
Africa/Casablanca 1.0
Africa/Johannesburg 1.0
Africa/Ceuta 2.0
Africa/Cairo 3.0
521.0
Name: value, dtype: float64
</code></pre>
| 0 | 2016-08-26T21:53:03Z | [
"python",
"python-3.x",
"pandas",
"numpy"
] |
Archive Dynamodb based on date/days | 39,174,834 | <p>I want to archive dynamodb table, keeping data only for 90 days. I have a field called recorded_on in the table which I can use to track 90days. Looked at Datapipeline and it seems little overkill with EMR since we don't need it. Any better ways to do this? </p>
<pre><code>1. Cronjob that will continue to run every day, match recorded_on + 90days < today's date, and put those rows in s3 and delete those rows.
2. Separate cronjob to put data from s3 to redshift everyday.
</code></pre>
| 0 | 2016-08-26T21:26:36Z | 39,177,213 | <p>Why do you think using AWS Data Pipeline is overkill? You can use a custom job, but it will require additional work that the pipeline does for you automatically. </p>
<p>The fact that it uses an EMR cluster behind the scenes shouldn't be a problem, as its details are abstracted away from you anyway. Setting up a pipeline to archive DynamoDB to S3 is very easy. For deleting data older than 90 days you can write a custom script & use the Data Pipeline ShellCommandActivity (<a href="http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html" rel="nofollow">http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-shellcommandactivity.html</a>) to execute it.</p>
<p>Here are some benefits of Data Pipeline over CRON:</p>
<ol>
<li>Retries in case of failures. </li>
<li>Monitoring/Alarms.</li>
<li>No need to provision EC2, AWS takes care of everything behind the scenes. </li>
<li>Control how much dynamoDb capacity the export can use, this is very important in preventing the export job from impacting other systems.</li>
</ol>
<p>Its also very cheap, <a href="https://aws.amazon.com/datapipeline/pricing/" rel="nofollow">https://aws.amazon.com/datapipeline/pricing/</a>. </p>
<p>Regards
Dinesh Solanki </p>
| 0 | 2016-08-27T03:35:02Z | [
"python",
"amazon-s3",
"amazon-dynamodb",
"boto3",
"amazon-data-pipeline"
] |
Archive Dynamodb based on date/days | 39,174,834 | <p>I want to archive dynamodb table, keeping data only for 90 days. I have a field called recorded_on in the table which I can use to track 90days. Looked at Datapipeline and it seems little overkill with EMR since we don't need it. Any better ways to do this? </p>
<pre><code>1. Cronjob that will continue to run everyday and match recorded_on + 90days > today's date and put those rows in s3 and delete those rows.
2. Separate cronjob to put data from s3 to redshift everyday.
</code></pre>
| 0 | 2016-08-26T21:26:36Z | 39,180,460 | <p>You could create a scheduled Lambda function that runs daily (or at whatever interval you want) that performs the query and archives the items. </p>
<p>Or, if you want that to scale and perform better, you could have the Lambda function perform the query and then write a message to an SNS topic for each item that needs to be archived and have another Lambda function trigger on that SNS topic and perform the archive operation.</p>
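<p>Whichever trigger you use, the core check is plain date arithmetic. Below is a hedged sketch of that check only — the actual DynamoDB scan/delete and S3 upload calls are left as comments, and <code>recorded_on</code> is assumed to be an ISO date string:</p>

```python
import datetime

def is_expired(recorded_on, today=None, keep_days=90):
    # recorded_on is assumed to be an ISO date string, e.g. "2016-05-01".
    today = today or datetime.date.today()
    recorded = datetime.datetime.strptime(recorded_on, "%Y-%m-%d").date()
    return (today - recorded).days > keep_days

# Inside the Lambda handler you would scan the table and, for each item where
# is_expired(item["recorded_on"]) is true, copy it to S3 and then delete it.
print(is_expired("2016-05-01", datetime.date(2016, 8, 26)))  # True  (117 days old)
print(is_expired("2016-08-01", datetime.date(2016, 8, 26)))  # False (25 days old)
```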
| 0 | 2016-08-27T11:10:09Z | [
"python",
"amazon-s3",
"amazon-dynamodb",
"boto3",
"amazon-data-pipeline"
] |
number of weeks in a given month python | 39,174,857 | <p>I'm trying to get the number of weeks in any month. For example:</p>
<p><code>for this month, August 2016, the days of the month stretch over 5 weeks, while for October 2016 the number of weeks is 6</code>. </p>
<p>Is there any elegant way to find out this number? I tried using calendar and datetime but I couldn't find anything that can help me solve my problem.</p>
<blockquote>
<p>P.S I'm using python 2.6</p>
</blockquote>
<p>Thanks in advance!</p>
| 0 | 2016-08-26T21:28:14Z | 39,174,902 | <p>Here's a quicky:</p>
<pre><code>>>> import calendar
>>> print calendar.month(2016,10)
October 2016
Mo Tu We Th Fr Sa Su
1 2
3 4 5 6 7 8 9
10 11 12 13 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
31
</code></pre>
<p>Count the lines :)</p>
<pre><code>len(calendar.month(2016,10).split('\n')) - 3
</code></pre>
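<p>If you'd rather not parse the printed string, <code>calendar.monthcalendar</code> returns one sub-list per week, so its length gives the same count directly:</p>

```python
import calendar

# monthcalendar returns a list of week lists (days outside the month are 0).
print(len(calendar.monthcalendar(2016, 8)))   # 5
print(len(calendar.monthcalendar(2016, 10)))  # 6
```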
| 3 | 2016-08-26T21:32:23Z | [
"python",
"calendar"
] |
number of weeks in a given month python | 39,174,857 | <p>I'm trying to get the number of weeks in any month. For example:</p>
<p><code>for this month, August 2016, the days of the month stretch over 5 weeks, while for October 2016 the number of weeks is 6</code>. </p>
<p>Is there any elegant way to find out this number? I tried using calendar and datetime but I couldn't find anything that can help me solve my problem.</p>
<blockquote>
<p>P.S I'm using python 2.6</p>
</blockquote>
<p>Thanks in advance!</p>
| 0 | 2016-08-26T21:28:14Z | 39,174,914 | <p>First thing that comes to mind: get the week number for the first day of the month, and the week number of the last day of the month. The number of weeks stretched by the month is then the difference of the week numbers, plus one.</p>
<p>To get the week number for a date: <a href="http://stackoverflow.com/questions/2600775/how-to-get-week-number-in-python">How to get week number in Python?</a></p>
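<p>A minimal sketch of that idea with the standard library (the function name is mine; it assumes ISO week numbers and a month that doesn't cross an ISO year boundary, which late December can):</p>

```python
import calendar
import datetime

def weeks_spanned(year, month):
    first = datetime.date(year, month, 1)
    # monthrange gives (weekday of day 1, number of days in the month).
    last = datetime.date(year, month, calendar.monthrange(year, month)[1])
    return last.isocalendar()[1] - first.isocalendar()[1] + 1

print(weeks_spanned(2016, 8))   # 5
print(weeks_spanned(2016, 10))  # 6
```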
| 1 | 2016-08-26T21:33:42Z | [
"python",
"calendar"
] |
Django CustomField inheritance from models.CharField -- unexpected keyword argument | 39,174,935 | <p>My application needs a few attributes that are required for the fields, so I went and followed the documentation to create <a href="https://docs.djangoproject.com/en/1.10/howto/custom-model-fields/" rel="nofollow">custom fields</a>. </p>
<p>This is my CustomCharacterField:</p>
<pre><code>class CustomCharField(models.CharField):
def __int__(self, success_order=None, *args, **kwargs):
self.success_order = success_order
super(CustomCharField, self).__int__( *args, **kwargs)
def get_success_order(self):
return int(self.success_order)
def deconstruct(self):
name, path, args, kwargs = super(CustomCharField, self).deconstruct()
del kwargs["success_order"]
return name, path, args, kwargs
</code></pre>
<p>Here is my models.py </p>
<pre><code>class NameModel(models.Model):
name = fields.CustomCharField(max_length=250, unique=True, success_order=1)
</code></pre>
<p>Here is the traceback:</p>
<pre><code> File "/home/kt/Documents/phc/phc/Forms/models.py", line 204, in <module>
class SchemeModel(models.Model):
File "/home/kt/Documents/phc/phc/Forms/models.py", line 220, in SchemeModel
scheme_name = fields.CustomCharField(verbose_name="Scheme", max_length=250, unique=True, success_order=1)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/__init__.py", line 1072, in __init__
super(CharField, self).__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'success_order'
</code></pre>
| 0 | 2016-08-26T21:36:06Z | 39,175,041 | <p>It is because of the order in which you pass the arguments. The traceback shows that success_order is being passed to the constructor of CharField, which it should not be. This is because it is being passed in kwargs. Changing the order should do the trick. unique=True will be accepted by the CharField constructor.</p>
| -1 | 2016-08-26T21:46:40Z | [
"python",
"django",
"postgresql",
"django-models"
] |
Django CustomField inheritance from models.CharField -- unexpected keyword argument | 39,174,935 | <p>My application needs a few attributes that are required for the fields, so I went and followed the documentation to create <a href="https://docs.djangoproject.com/en/1.10/howto/custom-model-fields/" rel="nofollow">custom fields</a>. </p>
<p>This is my CustomCharacterField:</p>
<pre><code>class CustomCharField(models.CharField):
def __int__(self, success_order=None, *args, **kwargs):
self.success_order = success_order
super(CustomCharField, self).__int__( *args, **kwargs)
def get_success_order(self):
return int(self.success_order)
def deconstruct(self):
name, path, args, kwargs = super(CustomCharField, self).deconstruct()
del kwargs["success_order"]
return name, path, args, kwargs
</code></pre>
<p>Here is my models.py </p>
<pre><code>class NameModel(models.Model):
name = fields.CustomCharField(max_length=250, unique=True, success_order=1)
</code></pre>
<p>Here is the traceback:</p>
<pre><code> File "/home/kt/Documents/phc/phc/Forms/models.py", line 204, in <module>
class SchemeModel(models.Model):
File "/home/kt/Documents/phc/phc/Forms/models.py", line 220, in SchemeModel
scheme_name = fields.CustomCharField(verbose_name="Scheme", max_length=250, unique=True, success_order=1)
File "/usr/local/lib/python3.4/dist-packages/django/db/models/fields/__init__.py", line 1072, in __init__
super(CharField, self).__init__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'success_order'
</code></pre>
| 0 | 2016-08-26T21:36:06Z | 39,175,045 | <p>I think you just have a typo here - <code>def __int__</code> should be <code>def __init__</code>, and the <code>super(...).__int__(..)</code> call should be <code>super(...).__init__(..)</code>.</p>
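<p>A pure-Python illustration of why the misspelling produces exactly that traceback — <code>__int__</code> is never invoked during construction, so the inherited <code>__init__</code> receives <code>success_order</code> and rejects it (the class names here are stand-ins, not Django's):</p>

```python
class Parent(object):
    def __init__(self, max_length=None):
        self.max_length = max_length

class Child(Parent):
    def __int__(self, success_order=None, **kwargs):  # typo: should be __init__
        self.success_order = success_order
        super(Child, self).__int__(**kwargs)

try:
    # Python finds no __init__ on Child, so Parent.__init__ gets both kwargs.
    Child(max_length=250, success_order=1)
except TypeError as e:
    print(e)  # the message names 'success_order', just like the question's traceback
```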
| 3 | 2016-08-26T21:47:10Z | [
"python",
"django",
"postgresql",
"django-models"
] |
python code with lambda and filter | 39,175,195 | <p>Can anyone help me understand the following piece of python code:</p>
<pre><code>for i, char in filter(lambda x: x[1] in str1, enumerate(str2)):
# do something here ...
</code></pre>
<p>str1 and str2 are strings. I sort of understand that "lambda x: x[1] in str1" is the filtering condition, but why x[1]?</p>
<p>How can I convert this for loop into a lower level (but easier to understand) python code ?</p>
<p>Thanks</p>
| 0 | 2016-08-26T22:00:55Z | 39,175,228 | <p>Because of <code>enumerate</code>.</p>
<p><code>enumerate</code> returns tuples of <code>(index, value)</code> for an iterable of values.</p>
<p><code>x</code> is a tuple of index, char.</p>
<p>I would write <code>lambda (index, char): char in str1</code> for clarity (note: tuple parameter unpacking in a lambda works in Python 2 only; it was removed in Python 3)</p>
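<p>A quick way to see why it is <code>x[1]</code> (toy strings, just for illustration):</p>

```python
# enumerate pairs each character with its index, so x[0] is the index
# and x[1] is the character.
pairs = list(enumerate("abc"))
print(pairs)  # [(0, 'a'), (1, 'b'), (2, 'c')]

str1 = "bc"
kept = list(filter(lambda x: x[1] in str1, enumerate("abc")))
print(kept)   # [(1, 'b'), (2, 'c')]
```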
| 0 | 2016-08-26T22:03:55Z | [
"python",
"lambda",
"filter"
] |
python code with lambda and filter | 39,175,195 | <p>Can anyone help me understand the following piece of python code:</p>
<pre><code>for i, char in filter(lambda x: x[1] in str1, enumerate(str2)):
# do something here ...
</code></pre>
<p>str1 and str2 are strings. I sort of understand that "lambda x: x[1] in str1" is the filtering condition, but why x[1]?</p>
<p>How can I convert this for loop into a lower level (but easier to understand) python code ?</p>
<p>Thanks</p>
| 0 | 2016-08-26T22:00:55Z | 39,175,251 | <p>This appears functionally equivalent to:</p>
<pre><code>for i, char in enumerate(str2):
if char in str1:
# do something here
</code></pre>
<p><code>filter</code> takes the tuples of index and element produced by <code>enumerate(str2)</code>, filters out those whose element does not appear in <code>str1</code>, then returns an iterable of the remaining indices and elements from <code>str2</code>.</p>
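<p>A small check of that equivalence on toy strings:</p>

```python
str1 = "lo"
str2 = "hello"

# The filter/lambda form and the plain loop form keep the same (index, char) pairs.
filtered = list(filter(lambda x: x[1] in str1, enumerate(str2)))
looped = [(i, c) for i, c in enumerate(str2) if c in str1]
print(filtered == looped)  # True
print(filtered)            # [(2, 'l'), (3, 'l'), (4, 'o')]
```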
| 3 | 2016-08-26T22:06:25Z | [
"python",
"lambda",
"filter"
] |
Simple python iteration exercise..stuck with try and except | 39,175,218 | <p>Write a program which repeatedly reads numbers until the user enters "done". Once "done" is entered, print out the total, count, and average of the numbers. If the user enters anything other than a number, detect their mistake using try and except and print an error message and skip to the next number.</p>
<p>This is what I have.</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
if number == "done":
break
try:
total += numbers
count += 1
average = total / len(number)
except:
print ("Invalid input")
continue
print (total, count, average)
</code></pre>
<p>When I run this, I always get invalid input for some reason. My except part must be wrong.</p>
<p>EDIT:
This is what I have now and it works. I do need, however, try and except, for non numbers.</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
if number == "done":
break
total += float(number)
count += 1
average = total / count
print (total, count, average)
</code></pre>
<p>I think I got it?!?!</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
try:
if number == "done":
break
total += float(number)
count += 1
average = total / count
except:
print ("Invalid input")
print ("total:", total, "count:", count, "average:", average)
</code></pre>
<p>Should I panic if this took me like an hour?
This isn't my first programming language but it's been a while.</p>
| 0 | 2016-08-26T22:03:01Z | 39,175,299 | <p>The problem is when you try to use your input:</p>
<pre><code>try:
total += numbers
</code></pre>
<p>First, there is no value <strong>numbers</strong>; your variable is singular, not plural. Second, you have to convert the text input to a number. Try this:</p>
<pre><code>try:
total += int(number)
</code></pre>
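<p>A minimal sketch of that conversion pattern (the helper name here is mine, not from the question):</p>

```python
def add_input(total, text):
    # int() raises ValueError for non-numeric text, which except catches
    try:
        total += int(text)
    except ValueError:
        print("Invalid input")
    return total

print(add_input(0, "5"))    # adds the number
print(add_input(5, "abc"))  # prints the error, total unchanged
```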
| 0 | 2016-08-26T22:11:46Z | [
"python"
] |
Simple python iteration exercise..stuck with try and except | 39,175,218 | <p>Write a program which repeatedly reads numbers until the user enters "done". Once "done" is entered, print out the total, count, and average of the numbers. If the user enters anything other than a number, detect their mistake using try and except and print an error message and skip to the next number.</p>
<p>This is what I have.</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
if number == "done":
break
try:
total += numbers
count += 1
average = total / len(number)
except:
print ("Invalid input")
continue
print (total, count, average)
</code></pre>
<p>When I run this, I always get invalid input for some reason. My except part must be wrong.</p>
<p>EDIT:
This is what I have now and it works. I do need, however, try and except, for non numbers.</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
if number == "done":
break
total += float(number)
count += 1
average = total / count
print (total, count, average)
</code></pre>
<p>I think I got it?!?!</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
try:
if number == "done":
break
total += float(number)
count += 1
average = total / count
except:
print ("Invalid input")
print ("total:", total, "count:", count, "average:", average)
</code></pre>
<p>Should I panic if this took me like an hour?
This isn't my first programming language but it's been a while.</p>
| 0 | 2016-08-26T22:03:01Z | 39,175,303 | <p>It's because there is no len(number) when number is an int. len is for finding the length of lists/arrays. You can test this for yourself by commenting out the try/except/continue. I think the code below is more like what you are after:</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
if number == "done":
break
try:
total += number
count += 1
average = total / count
except:
print ("Invalid input")
continue
print (total, count, average)
</code></pre>
<p>Note there are still some issues. For example, you literally have to type "done" in the input box in order to not get an error, but this fixes your initial problem because you had len(number) instead of count in your average. Also note that you had total += numbers when your variable is number, not numbers. Be careful with your variable names/usage.</p>
| 0 | 2016-08-26T22:12:06Z | [
"python"
] |
Simple python iteration exercise..stuck with try and except | 39,175,218 | <p>Write a program which repeatedly reads numbers until the user enters "done". Once "done" is entered, print out the total, count, and average of the numbers. If the user enters anything other than a number, detect their mistake using try and except and print an error message and skip to the next number.</p>
<p>This is what I have.</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
if number == "done":
break
try:
total += numbers
count += 1
average = total / len(number)
except:
print ("Invalid input")
continue
print (total, count, average)
</code></pre>
<p>When I run this, I always get invalid input for some reason. My except part must be wrong.</p>
<p>EDIT:
This is what I have now and it works. I do need, however, try and except, for non numbers.</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
if number == "done":
break
total += float(number)
count += 1
average = total / count
print (total, count, average)
</code></pre>
<p>I think I got it?!?!</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
try:
if number == "done":
break
total += float(number)
count += 1
average = total / count
except:
print ("Invalid input")
print ("total:", total, "count:", count, "average:", average)
</code></pre>
<p>Should I panic if this took me like an hour?
This isn't my first programming language but it's been a while.</p>
| 0 | 2016-08-26T22:03:01Z | 39,175,354 | <p>A solution...</p>
<pre><code>total = 0
count = 0
average = 0
while True:
number = input("Enter a number:")
if number == "done":
break
else:
try:
total += int(number)
count += 1
average = total / count
except ValueError as ex:
print ("Invalid input")
print('"%s" cannot be converted to an int: %s' % (number, ex))
print (total, count, average)
</code></pre>
<p>Problems with your code:</p>
<ul>
<li>total+=numbers # <em>numbers</em> doesn't exist; the variable is <em>number</em></li>
<li>len(number) # <em>number</em> is a string; for the average you need <em>count</em></li>
<li>If the input is not <strong><em>done</em></strong>, process it in the <code>else</code> branch</li>
<li>Use try ... except ValueError to catch the problem when converting <em>number</em> to an int.</li>
<li>Also, you can use try ... except ValueError as ex to get a more comprehensible error message.</li>
</ul>
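<p>A minimal sketch of the <code>except ValueError as ex</code> idiom from the last bullet, showing the kind of message it captures:</p>

```python
number = "abc"
msg = ""
try:
    value = int(number)
except ValueError as ex:
    # str(ex) is a readable description of what went wrong
    msg = '"%s" cannot be converted to an int: %s' % (number, ex)
    print(msg)
```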
| 0 | 2016-08-26T22:17:22Z | [
"python"
] |
Change the axis length of a plot to make the diagrams look better in Python | 39,175,245 | <p>I used the following code in Python to plot four vectors. However, as you can see the plot does not look nice as different curves stick to each other in some points and they are very close to each other. How can I change the plot so that the curves get separated better and the plot look better?</p>
<pre><code>plt.gca().set_color_cycle(['red', 'green','blue','purple'])
plt.plot(UB_user_util_list)
plt.plot(UB_Greedy_user_util_list)
plt.plot(IB_user_util_list,)
plt.plot(IB_Greedy_user_util_list)
plt.legend(['UB', 'UB_Optimized','IB','IB_opimized'], loc='upper left')
plt.title("User Utility values over time/split data based on time")
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/il4xA.png" rel="nofollow"><img src="http://i.stack.imgur.com/il4xA.png" alt="enter image description here"></a></p>
| 1 | 2016-08-26T22:05:41Z | 39,177,407 | <p>How about separating them into four subplots?</p>
<p><a href="http://i.stack.imgur.com/46bl5.png" rel="nofollow"><img src="http://i.stack.imgur.com/46bl5.png" alt="enter image description here"></a></p>
<p>You could mimic a seaborn/ggplot/pandas style just using matplotlib like:</p>
<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
mpl.rcParams['axes.linewidth'] = 0.0
vex = (np.random.rand(100),)*4
v_attr = [('r','v1'), ('orange', 'v2'),
('g', 'v3'), ('b', 'v4')]
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col',
sharey='row')
for s,v,(c,l) in zip([ax1, ax2, ax3, ax4], vex, v_attr):
s.set_axis_bgcolor('#dddddd')
s.grid(b=True, which='major', c='white', ls='-', zorder=-1,
lw=0.75, alpha=0.64)
s.set_ylim(0,max(v)*1.35)
s.tick_params('both', pad=4, labelsize=8, which='major',
direction='out', top='off', right='off')
s.plot(v, label=l, c=c, zorder=3)
s.legend(frameon=False, )
</code></pre>
<hr>
<p>Or, so you can still do same-plot comparisons, maybe you could use alpha to individually highlight each line with the others in the background:</p>
<p><a href="http://i.stack.imgur.com/NkwXz.png" rel="nofollow"><img src="http://i.stack.imgur.com/NkwXz.png" alt="enter image description here"></a></p>
<pre><code>import numpy.random as rand
def rand_line():
return rand.normal(rand.randint(5,12),
rand.ranf()*3, 100)
lines = [rand_line() for _ in range(4)]
labels = [('r','v1'), ('purple','v2'),
('g','v3'), ('b','v4')]
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col',
sharey='row')
subplots = [ax1, ax2, ax3, ax4]
for i,s in enumerate(subplots):
for j,(v,l) in enumerate(zip(lines,labels)):
a = (.9 if j==i else 0.25)
s.plot(v, zorder=3, alpha=a, c=l[0], label=l[1])
s.legend(loc='lower center', ncol=4, fontsize=10)
fig.tight_layout()
plt.savefig('line_subplots-highlight.png')
</code></pre>
| 0 | 2016-08-27T04:17:16Z | [
"python",
"matplotlib",
"plot"
] |
How to get the parent xml element after finding a child xml element using lxml and python | 39,175,263 | <p>I have a bit of a tough nut to crack. It has been so long since I've used LXML that I need some help to get started. I have an XML file that has a list of categories and proxy elements. Please see the snippet below:</p>
<pre><code><categories>
<category name="Light">
<proxy>fan</proxy>
</category>
<category name="UI">
<proxy>doorbell</proxy>
</category>
</categories>
</code></pre>
<p>What I would like to do is search through all the proxy elements to find "doorbell". If found, I would like to know the name of the parent element it came from. So in the above example, doorbell would be found under the parent category element named "UI". In the end, I just need the value of the "name" attribute of the parent element under which the proxy falls.</p>
<p>Any gurus out there want to help me tackle this?</p>
| 0 | 2016-08-26T22:07:36Z | 39,175,280 | <p>If all you need is the name, might as well do it all in one search:</p>
<pre><code>import lxml.etree as ET
root = ET.XML('''
<categories>
<category name="Light">
<proxy>fan</proxy>
</category>
<category name="UI">
<proxy>doorbell</proxy>
</category>
</categories>
''')
category_names = root.xpath(
'.//proxy[. = $proxy_type]/parent::category/@name',
proxy_type='doorbell')
print category_names
</code></pre>
<p>...emits, as one would expect:</p>
<pre><code>['UI']
</code></pre>
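<p>If lxml is not available, a similar lookup can be sketched with the standard library's <code>ElementTree</code> — it has no parent pointers or <code>parent::</code> axis, so one common workaround is to build a child-to-parent map first (this is an alternative approach, not the answer's lxml code):</p>

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    '<categories>'
    '<category name="Light"><proxy>fan</proxy></category>'
    '<category name="UI"><proxy>doorbell</proxy></category>'
    '</categories>')

# stdlib ElementTree cannot navigate upward, so build the map by hand
parent_map = {child: parent for parent in root.iter() for child in parent}

names = [parent_map[p].get('name')
         for p in root.iter('proxy') if p.text == 'doorbell']
print(names)  # the category name(s) owning a 'doorbell' proxy
```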
| 2 | 2016-08-26T22:09:23Z | [
"python",
"xml",
"lxml"
] |
Passing all elements of tuple in function 1 (from *args) into function 2 (as *args) in python | 39,175,288 | <p>I am writing a function to take *args inputs, assess the data, then pass all inputs to the next appropriate function (align), also taking *args</p>
<p>*args seems to be a tuple. I have tried various ways of passing each element of the tuple into the next function, the latest two being:</p>
<pre><code> for x in args:
align(*x)
</code></pre>
<p>and</p>
<pre><code> for x in args:
align(args[0:len(args)])
</code></pre>
| 1 | 2016-08-26T22:10:30Z | 39,175,384 | <p>You "unpack them" with <code>*args</code>. Then the receiving function can mop them up into a tuple again (or not!). </p>
<p>These examples should enlighten things: </p>
<pre><code>>>> def foo(*f_args):
... print('foo', type(f_args), len(f_args), f_args)
... bar(*f_args)
...
>>> def bar(*b_args):
... print('bar', type(b_args), len(b_args), b_args)
...
>>> foo('a', 'b', 'c')
('foo', <type 'tuple'>, 3, ('a', 'b', 'c'))
('bar', <type 'tuple'>, 3, ('a', 'b', 'c'))
</code></pre>
<p>Now, let's redefine <code>bar</code> and break the argspec:</p>
<pre><code>>>> def bar(arg1, arg2, arg3):
... print('bar redefined', arg1, arg2, arg3)
...
>>> foo('a', 'b', 'c')
('foo', <type 'tuple'>, 3, ('a', 'b', 'c'))
('bar redefined', 'a', 'b', 'c')
>>> foo('a', 'b')
('foo', <type 'tuple'>, 2, ('a', 'b'))
---> TypeError: bar() takes exactly 3 arguments (2 given)
</code></pre>
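<p>Applied to the question's setup, the pass-through looks like this (a sketch; <code>align</code> and <code>assess_and_dispatch</code> here are stand-ins for the real functions):</p>

```python
def align(*args):
    return args

def assess_and_dispatch(*args):
    # args is a tuple inside the function; *args re-splats it
    # back into individual positional arguments for the next call
    return align(*args)

print(assess_and_dispatch('a', 'b', 'c'))  # all arguments forwarded unchanged
```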
| 1 | 2016-08-26T22:21:07Z | [
"python",
"function",
"tuples",
"args"
] |
Python script for searching variable strings between two constant strings | 39,175,350 | <pre><code>import re
infile = open('document.txt','r')
outfile= open('output.txt','w')
copy = False
for line in infile:
if line.strip() == "--operation():":
bucket = []
copy = True
elif line.strip() == "StartOperation":
for strings in bucket:
outfile.write( strings + ',')
for strings in bucket:
outfile.write('\n')
copy = False
elif copy:
        bucket.append(line.strip())
</code></pre>
<p>CSV format is like this:</p>
<pre><code>id, name, poid, error
5896, AutoAuthOSUserSubmit, 900105270, 0x4002
</code></pre>
<p>My log file has several sections starting with <code>==== START ====</code> and ending with <code>==== END ====</code>. I want to extract the string between <code>--operation():</code> and <code>StartOperation</code>. For example, <code>AutoAuthOSUserSubmit.</code> I also want to extract the <code>poid</code> value from line <code>poid: 900105270, poidLen: 9</code>. Finally, I want to extract the return value, e.g <code>0x4002</code> if <code>Roll back all updates</code> is found after it. </p>
<p>I am not even able to extract the original text if <code>Start</code> and <code>End</code> are not on the same line. How do I go about doing that?</p>
<p>This is a sample LOG extract with two paragraphs:</p>
<pre><code>-- 08/24 02:07:56 [mds.ecas(5896) ECAS_CP1] **==== START ====**
open file /ecas/public/onsite-be/config/timer.conf failed
INFO 08/24/16 02:07:56 salt1be-d1-ap(**5896**/0) main.c(780*****):--operation(): AutoAuthOSUserSubmit. StartOperation*****
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) main.c(784):--Client Information: Request from host 'malt-d1-wb' process id 12382.
DEBUG 08/24/16 02:07:56 salt1be-d1-ap(5896/0) TOci.cc(571):FetchServiceObjects: ServiceCert.sql
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsserviceagent.cpp(517):Generate Certificate 2: c1cd00d5c3de082360a08730fef9cd1d
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) junk.c(1373):GenerateWebPin : poid: **900105270**, poidLen: 9
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) junk.c(1408):GenerateWebPin : pinStr
DEBUG 08/24/16 02:07:56 salt1be-d1-ap(5896/0) uaadapter_vasco_totp.c(275):UAVascoTOTPImpl.close() -- Releasing Adapter Context
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsenterprise.cpp(288):VSEnterprise::Engage returns 0x4002 - Unknown error code **(0x4002)**
ERROR 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsautoauth.cpp(696):OSAAEndUserEnroll: error occurred. **Roll back** all updates!
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) uaotptokenstoreqmimpl.cpp(199):Close token store
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) main.c(990):-- EndOperation
-- 08/24 02:07:56 [mds.ecas(5896) ECAS_CP1] **==== END ====**
OPERATION = AutoAuthOSUserSubmit, rc = 0x0 (0)
SYSINFO Elapse = 0.687, Heap = 1334K, Stack = 64K
</code></pre>
| 1 | 2016-08-26T22:16:56Z | 39,175,788 | <p>It looks like you are simply trying to find strings within the LOG document and trying to parse the lines of characters using keywords. You can go line by line which is what you are doing currently or you could go through the document once (assuming the LOG document never gets huge) and add each subsequent line to an existing string.</p>
<p>Check this out for finding substrings
<a href="http://www.tutorialspoint.com/python/string_index.htm" rel="nofollow">http://www.tutorialspoint.com/python/string_index.htm</a> <--- for finding the location of where a string is within another string, this will help you determine a start index and an end index. Once you have those you can extract your desired information.</p>
<p>Check this out for your CSV problem
<a href="http://www.tutorialspoint.com/python/string_split.htm" rel="nofollow">http://www.tutorialspoint.com/python/string_split.htm</a> <--- for splitting a string around a specific character i.e. "," for your CSV files.</p>
<p><a href="http://stackoverflow.com/questions/3437059/does-python-have-a-string-contains-substring-method">Does Python have a string contains substring method?</a> will be more useful than your current method of using the strip() method</p>
<p>Hopefully this will point you in the right direction!</p>
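<p>A minimal sketch of the index-based extraction on one of the log lines from the question (abridged, with the ** markers removed):</p>

```python
line = ("INFO  08/24/16 02:07:56 salt1be-d1-ap(5896/0) main.c(780):"
        "--operation(): AutoAuthOSUserSubmit. StartOperation")

start_marker = '--operation(): '
end_marker = '. StartOperation'

# find() gives the start index of each marker; slice between them
start = line.find(start_marker) + len(start_marker)
end = line.find(end_marker)
operation = line[start:end]
print(operation)
```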
| 1 | 2016-08-26T23:11:05Z | [
"python"
] |
Python script for searching variable strings between two constant strings | 39,175,350 | <pre><code>import re
infile = open('document.txt','r')
outfile= open('output.txt','w')
copy = False
for line in infile:
if line.strip() == "--operation():":
bucket = []
copy = True
elif line.strip() == "StartOperation":
for strings in bucket:
outfile.write( strings + ',')
for strings in bucket:
outfile.write('\n')
copy = False
elif copy:
        bucket.append(line.strip())
</code></pre>
<p>CSV format is like this:</p>
<pre><code>id, name, poid, error
5896, AutoAuthOSUserSubmit, 900105270, 0x4002
</code></pre>
<p>My log file has several sections starting with <code>==== START ====</code> and ending with <code>==== END ====</code>. I want to extract the string between <code>--operation():</code> and <code>StartOperation</code>. For example, <code>AutoAuthOSUserSubmit.</code> I also want to extract the <code>poid</code> value from line <code>poid: 900105270, poidLen: 9</code>. Finally, I want to extract the return value, e.g <code>0x4002</code> if <code>Roll back all updates</code> is found after it. </p>
<p>I am not even able to extract the original text if <code>Start</code> and <code>End</code> are not on the same line. How do I go about doing that?</p>
<p>This is a sample LOG extract with two paragraphs:</p>
<pre><code>-- 08/24 02:07:56 [mds.ecas(5896) ECAS_CP1] **==== START ====**
open file /ecas/public/onsite-be/config/timer.conf failed
INFO 08/24/16 02:07:56 salt1be-d1-ap(**5896**/0) main.c(780*****):--operation(): AutoAuthOSUserSubmit. StartOperation*****
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) main.c(784):--Client Information: Request from host 'malt-d1-wb' process id 12382.
DEBUG 08/24/16 02:07:56 salt1be-d1-ap(5896/0) TOci.cc(571):FetchServiceObjects: ServiceCert.sql
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsserviceagent.cpp(517):Generate Certificate 2: c1cd00d5c3de082360a08730fef9cd1d
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) junk.c(1373):GenerateWebPin : poid: **900105270**, poidLen: 9
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) junk.c(1408):GenerateWebPin : pinStr
DEBUG 08/24/16 02:07:56 salt1be-d1-ap(5896/0) uaadapter_vasco_totp.c(275):UAVascoTOTPImpl.close() -- Releasing Adapter Context
DEBUG 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsenterprise.cpp(288):VSEnterprise::Engage returns 0x4002 - Unknown error code **(0x4002)**
ERROR 08/22/16 23:15:53 pepper1be-d1-ap(2680/0) vsautoauth.cpp(696):OSAAEndUserEnroll: error occurred. **Roll back** all updates!
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) uaotptokenstoreqmimpl.cpp(199):Close token store
INFO 08/24/16 02:07:56 salt1be-d1-ap(5896/0) main.c(990):-- EndOperation
-- 08/24 02:07:56 [mds.ecas(5896) ECAS_CP1] **==== END ====**
OPERATION = AutoAuthOSUserSubmit, rc = 0x0 (0)
SYSINFO Elapse = 0.687, Heap = 1334K, Stack = 64K
</code></pre>
| 1 | 2016-08-26T22:16:56Z | 39,176,691 | <p>This looks like a job for Regular Expressions! Several in fact. Thankfully, they are not very complicated to use in this case.</p>
<p>There are 2 main observations that would make me choose regexes over something else:</p>
<ol>
<li>Need to extract one bit of variable text from between two known constant values</li>
<li>Need to follow this same pattern several times for different strings</li>
</ol>
<p>You can try something like this:</p>
<pre><code>import re
def capture(text, pattern_string, flags=0):
pattern = re.compile(pattern_string, flags)
match = pattern.search(text)
if match:
output = match.group(1)
print '{}\n'.format(output)
return output
return ''
if __name__ == '__main__':
file = read_my_file()
log_pattern = "\*\*==== START ====\*\*(.+)\*\*==== END ====\*\*"
log_text = capture(file, log_pattern, flags=re.MULTILINE|re.DOTALL)
op_pattern = "--operation\(\): (.+). StartOperation\*\*\*\*\*"
op_name = capture(log_text, op_pattern)
poid_pattern = "poid: \*\*([\d]+)\*\*, poidLen: "
    poid = capture(log_text, poid_pattern)
retcode_pattern = "Unknown error code \*\*\((.+)\)\*\*.+\*\*Roll back\*\* all updates!"
retcode = capture(log_text, retcode_pattern, flags=re.MULTILINE|re.DOTALL)
</code></pre>
<p>This approach essentially divides up the problem into several largely independent steps. I'm using capturing groups in each regex - the parens like <code>(.+)</code> and <code>([\d]+)</code> - in between long strings of constant characters. The multiline and dotall flags allow you to easily deal with line breaks in the text and treat them just like any other part of the string.</p>
<p><strong>I'm also making a big assumption here and that is your logs are not huge files, maybe a few hundred megabytes tops.</strong> Note the call to <code>read_my_file()</code> - rather than try to solve this problem a line at a time, I chose to read the entire file and work in memory. If the files get really big though, or you're building an app that will get a lot of traffic, this may be a bad idea. </p>
<p>Hope this helps!</p>
| 1 | 2016-08-27T01:44:20Z | [
"python"
] |
Print a list of every combination of dictionary keys with the associated sum of the key values to the right | 39,175,538 | <p>I have several dictionaries and I want to print a table where each row is a unique combination of the keys in all dictionaries. For each row, I also want to print the sum of the values for the keys in that particular combination.</p>
<p>So, if I have these dictionaries:</p>
<pre><code>dict1 = {"Main": 8, "Optional": 6, "Obscure": 4}
dict2 = {"Global": 8, "Regional": 4, "Local": 2}
...
</code></pre>
<p>The output would look like this (sorted by sum highest to lowest):</p>
<pre><code>Main, Global, 16
Optional, Global, 14
Main, Regional, 12
Obscure, Global, 12
Main, Local, 10
Optional, Regional, 10
Optional, Local, 8
Obscure, Regional, 8
Obscure, Local, 6
</code></pre>
<p>From what I've read, itertools.product will be what I'm looking for, but none of the existing questions are quite my use case and I'm struggling to even get started.</p>
<p>Any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-26T22:39:30Z | 39,175,662 | <p>I think this would be something like:</p>
<pre><code>import itertools
dict1 = {"Main": 8, "Optional": 6, "Obscure": 4}
dict2 = {"Global": 8, "Regional": 4, "Local": 2}
merged = {'{}, {}'.format(prod[0], prod[1]): dict1[prod[0]] + dict2[prod[1]]
for prod in itertools.product(dict1, dict2)}
for k, v in merged.items():
print('{}: {}'.format(k, v))
</code></pre>
<p>Output:</p>
<pre><code>Optional, Regional: 10
Main, Regional: 12
Optional, Local: 8
Main, Global: 16
Optional, Global: 14
Main, Local: 10
Obscure, Regional: 8
Obscure, Global: 12
Obscure, Local: 6
</code></pre>
| 1 | 2016-08-26T22:53:46Z | [
"python",
"dictionary",
"product",
"itertools",
"cartesian-product"
] |
Print a list of every combination of dictionary keys with the associated sum of the key values to the right | 39,175,538 | <p>I have several dictionaries and I want to print a table where each row is a unique combination of the keys in all dictionaries. For each row, I also want to print the sum of the values for the keys in that particular combination.</p>
<p>So, if I have these dictionaries:</p>
<pre><code>dict1 = {"Main": 8, "Optional": 6, "Obscure": 4}
dict2 = {"Global": 8, "Regional": 4, "Local": 2}
...
</code></pre>
<p>The output would look like this (sorted by sum highest to lowest):</p>
<pre><code>Main, Global, 16
Optional, Global, 14
Main, Regional, 12
Obscure, Global, 12
Main, Local, 10
Optional, Regional, 10
Optional, Local, 8
Obscure, Regional, 8
Obscure, Local, 6
</code></pre>
<p>From what I've read, itertools.product will be what I'm looking for, but none of the existing questions are quite my use case and I'm struggling to even get started.</p>
<p>Any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-26T22:39:30Z | 39,175,677 | <p>Use <code>product</code> from <code>itertools</code> on the dictionary items() where you can get the both key and value at the same time, and with the combination of key-value pairs you can construct the final result pretty straightforwardly:</p>
<pre><code>from itertools import product
sorted([(k1, k2, v1+v2) for (k1, v1), (k2, v2) in product(dict1.items(), dict2.items())], \
key = lambda x: x[2], reverse=True)
# [('Main', 'Global', 16),
# ('Optional', 'Global', 14),
# ('Obscure', 'Global', 12),
# ('Main', 'Regional', 12),
# ('Main', 'Local', 10),
# ('Optional', 'Regional', 10),
# ('Obscure', 'Regional', 8),
# ('Optional', 'Local', 8),
# ('Obscure', 'Local', 6)]
</code></pre>
| 1 | 2016-08-26T22:55:12Z | [
"python",
"dictionary",
"product",
"itertools",
"cartesian-product"
] |
Print a list of every combination of dictionary keys with the associated sum of the key values to the right | 39,175,538 | <p>I have several dictionaries and I want to print a table where each row is a unique combination of the keys in all dictionaries. For each row, I also want to print the sum of the values for the keys in that particular combination.</p>
<p>So, if I have these dictionaries:</p>
<pre><code>dict1 = {"Main": 8, "Optional": 6, "Obscure": 4}
dict2 = {"Global": 8, "Regional": 4, "Local": 2}
...
</code></pre>
<p>The output would look like this (sorted by sum highest to lowest):</p>
<pre><code>Main, Global, 16
Optional, Global, 14
Main, Regional, 12
Obscure, Global, 12
Main, Local, 10
Optional, Regional, 10
Optional, Local, 8
Obscure, Regional, 8
Obscure, Local, 6
</code></pre>
<p>From what I've read, itertools.product will be what I'm looking for, but none of the existing questions are quite my use case and I'm struggling to even get started.</p>
<p>Any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-26T22:39:30Z | 39,175,681 | <p>You've read right. Just add <code>sorted()</code>:</p>
<pre><code>from itertools import product
from operator import itemgetter
results = [(k1, k2, dict1[k1] + dict2[k2])
for k1, k2 in product(dict1.keys(), dict2.keys())]
for k1, k2, sum_ in sorted(results, key=itemgetter(2), reverse=True):
print(k1, k2, sum_, sep=', ')
</code></pre>
| 1 | 2016-08-26T22:56:01Z | [
"python",
"dictionary",
"product",
"itertools",
"cartesian-product"
] |
Print a list of every combination of dictionary keys with the associated sum of the key values to the right | 39,175,538 | <p>I have several dictionaries and I want to print a table where each row is a unique combination of the keys in all dictionaries. For each row, I also want to print the sum of the values for the keys in that particular combination.</p>
<p>So, if I have these dictionaries:</p>
<pre><code>dict1 = {"Main": 8, "Optional": 6, "Obscure": 4}
dict2 = {"Global": 8, "Regional": 4, "Local": 2}
...
</code></pre>
<p>The output would look like this (sorted by sum highest to lowest):</p>
<pre><code>Main, Global, 16
Optional, Global, 14
Main, Regional, 12
Obscure, Global, 12
Main, Local, 10
Optional, Regional, 10
Optional, Local, 8
Obscure, Regional, 8
Obscure, Local, 6
</code></pre>
<p>From what I've read, itertools.product will be what I'm looking for, but none of the existing questions are quite my use case and I'm struggling to even get started.</p>
<p>Any help would be appreciated.</p>
<p>Thanks</p>
| 1 | 2016-08-26T22:39:30Z | 39,175,866 | <p>This method is built to support a variable amount of dictionaries. You pass your dictionaries to the <code>get_product_sums()</code> method, which then creates a cartesian product from the tuple of dictionaries. </p>
<p>We then iterate through our new <code>subitem</code> to calculate sums by doing a look up in our <code>flattened</code> which is just a 1D dictionary now. We then sort by the sum, and return a sorted list of tuples for our final <code>result</code>.</p>
<pre><code>from itertools import product
<pre><code>from itertools import product
def get_product_sums(*args):
result = []
flattened = {k:v for d in args for k, v in d.items()}
    for subitem in product(*args, repeat=1):
data = subitem + (sum(flattened[key] for key in subitem),)
result.append(data)
return sorted(result, key=lambda x: x[-1], reverse=True)
</code></pre>
<p><strong>Sample Output:</strong> </p>
<pre><code>>>> dict1 = {"Global": 8, "Regional": 4, "Local": 2}
>>> dict2 = {"Main": 8, "Optional": 6, "Obscure": 4}
>>> for item in get_product_sums(dict1, dict2):
... print ', '.join(str(element) for element in item)
Global, Main, 16
Global, Optional, 14
Global, Obscure, 12
Regional, Main, 12
Local, Main, 10
Regional, Optional, 10
Local, Optional, 8
Regional, Obscure, 8
Local, Obscure, 6
</code></pre>
| 2 | 2016-08-26T23:21:59Z | [
"python",
"dictionary",
"product",
"itertools",
"cartesian-product"
] |
mongoDB: store audio files (Best way) using python | 39,175,540 | <p>I have a bunch of audio files (.wav) and I'd like to know from you guys: what's the best way to store them in MongoDB?
What I'm doing today is just storing the path of the file (as you can see below).
But I think it's not good, because I'm creating a "fake reference" to the file, and I wonder: if by chance I delete the file, how could I keep things consistent?</p>
<pre><code>{
"_id" : ObjectId("57c0a06cd92f49222ce2f42d"),
"eps" : "GPSP",
"terminal" : 989638523,
"main_path" : "W:\\Python\\Speech\\audio\\teste\\teste_9",
"motivo" : "Classic",
"audio" : [
{
"path" : "W:\\Python\\Speech\\audio\\teste\\teste_9\\01_audio.wav",
"confidence" : 0.8332507,
"transcript" : "Alô bom dia com quem eu falo",
"sequence" : 1
},
{
"path" : "W:\\Python\\Speech\\audio\\teste\\teste_9\\02_audio.wav",
"confidence" : 0.90813386,
"transcript" : "Um novo benefÃcio pra minha da senhora, sem impostos e nada mais do que isso",
"sequence" : 2
        }
    ]
}
</code></pre>
<p>Thank you,</p>
| 0 | 2016-08-26T22:39:42Z | 39,181,683 | <p>Take a look at MongoDB <a href="https://docs.mongodb.com/manual/core/gridfs/" rel="nofollow"><code>gridfs</code></a>:</p>
<blockquote>
<p>GridFS is a specification for storing and retrieving files that exceed the
BSON-document size limit of 16 MB</p>
</blockquote>
<p>Using pymongo you can put files inside like this:</p>
<pre><code>from pymongo import MongoClient
import gridfs

client = MongoClient()        # connect to your MongoDB instance
db = client['mydatabase']     # pick the database to store files in
fs = gridfs.GridFS(db)
file_id = fs.put(open(r'audio.wav', 'rb'))
</code></pre>
| 0 | 2016-08-27T13:30:17Z | [
"python",
"mongodb",
"audio"
] |
Why I use list as a parameter of a function and the function can't change the value of the actual parameter but only the formal parameter? | 39,175,549 | <p>It's similar to this question I asked (Visit <a href="http://stackoverflow.com/questions/39160682/how-can-python-function-actually-change-the-parameter-rather-than-the-formal-par?noredirect=1#comment65665408_39160682">How can python function actually change the parameter rather than the formal parameter?</a> ), only this time I am using a list as the formal parameter. Since a list is a reference type, it should work as before.
So here is my code:</p>
<pre><code>def twomerger(lx, ly):
if lx[0] == ly[0]:
lx[0] = ly[0]
ly[0] = 0
if lx[1] == ly[1]:
lx[1] = ly[1]
ly[1] = 0
if lx[2] == ly[2]:
lx[2] = ly[2]
ly[2] = 0
if lx[3] == ly[3]:
lx[3] = ly[3]
ly[3] = 0
row1 = [0, 2, 4, 4]
row2 = [2, 2, 4, 4]
twomerger(row1, row2)
print (row1)
print (row2)
print ("it's a new situation:")
print (row1)
print (row2)
</code></pre>
<p>Here I try to turn [0, 2, 4, 4] and [2, 2, 4, 4] into [0, 4, 8, 8] and [2, 0, 0, 0] (making the matching vertical numbers add up). I passed two lists -- row1 and row2 -- as the parameters of my function <code>twomerger</code>. I thought that because row1 and row2 are lists, they are references, so this function should be able to change row1 and row2. However it gives me the result <code>[0, 2, 4, 4]</code> and <code>[2, 2, 4, 4]</code>, which means the function doesn't work. So I am confused again. </p>
| 0 | 2016-08-26T22:40:57Z | 39,175,590 | <p>I think your issue is that you should be adding to the values in the list, but instead you're reassigning. Change <code>=</code> to <code>+=</code>:</p>
<pre><code>def twomerger(lx, ly):
if lx[0] == ly[0]:
lx[0] += ly[0]
ly[0] = 0
if lx[1] == ly[1]:
lx[1] += ly[1]
ly[1] = 0
if lx[2] == ly[2]:
lx[2] += ly[2]
ly[2] = 0
if lx[3] == ly[3]:
lx[3] += ly[3]
ly[3] = 0
row1 = [0, 2, 4, 4]
row2 = [2, 2, 4, 4]
twomerger(row1, row2)
print (row1)
print (row2)
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>[0, 4, 8, 8]
[2, 0, 0, 0]
</code></pre>
<p>For what it's worth, your method can be written much more concisely:</p>
<pre><code>def twomerger(lx, ly):
for i, num in enumerate(lx):
if lx[i] == ly[i]:
lx[i] += ly[i]
ly[i] = 0
</code></pre>
<p>Or if you're comfortable with <a href="https://docs.python.org/2/library/functions.html#zip" rel="nofollow"><code>zip</code></a>:</p>
<pre><code>def twomerger(lx, ly):
    for i, (a, b) in enumerate(zip(lx, ly)):
        if a == b:
            lx[i] += b
            ly[i] = 0
</code></pre>
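<p>For the underlying confusion about parameter passing: assigning to an index mutates the list object the caller passed in, while rebinding the parameter name only changes the local variable. A minimal illustration (the function names here are made up for the example):</p>

```python
def mutate(lst):
    lst[0] = 99      # index assignment changes the shared list object

def rebind(lst):
    lst = [99]       # rebinds the local name only; the caller's list is untouched

a = [1, 2]
mutate(a)
print(a)   # [99, 2]

b = [1, 2]
rebind(b)
print(b)   # [1, 2]
```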
| 3 | 2016-08-26T22:46:44Z | [
"python",
"function",
"python-3.x"
] |
Pig: is it possible to use pytz or dateutils for Python udfs? | 39,175,648 | <p>I am using <code>datetime</code> in some Python udfs that I use in my <code>pig</code> script. So far so good. I use pig 12.0 on Cloudera 5.5</p>
<p>However, I also need to use the <code>pytz</code> or <code>dateutil</code> packages as well, and they don't seem to be part of a vanilla Python install. </p>
<p>Can I use them in my <code>Pig</code> udfs in some ways? If so, how? I think <code>dateutil</code> is installed on my nodes (I am not admin, so how can I actually check that is the case?), but when I type:</p>
<pre><code>import sys
#I append the path to dateutil on my local windows machine. Is that correct?
sys.path.append('C:/Users/me/AppData/Local/Continuum/Anaconda2/lib/site-packages')
from dateutil import tz
</code></pre>
<p>in my <code>udfs.py</code> script, I get:</p>
<pre><code>2016-08-30 09:56:06,572 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1121: Python Error. Traceback (most recent call last):
File "udfs.py", line 23, in <module>
from dateutil import tz
ImportError: No module named dateutil
</code></pre>
<p>when I run my pig script.</p>
<p>All my other python udfs (using <code>datetime</code> for instance) work just fine. Any idea how to fix that?</p>
<p>Many thanks!</p>
<p><strong>UPDATE</strong></p>
<p>After playing a bit with the Python path, I am now able to </p>
<pre><code>import dateutil
</code></pre>
<p>(at least Pig does not crash). But if I try:</p>
<pre><code>from dateutil import tz
</code></pre>
<p>I get an error. </p>
<pre><code> from dateutil import tz
File "/opt/python/lib/python2.7/site-packages/dateutil/tz.py", line 16, in <module>
from six import string_types, PY3
File "/opt/python/lib/python2.7/site-packages/six.py", line 604, in <module>
viewkeys = operator.methodcaller("viewkeys")
AttributeError: type object 'org.python.modules.operator' has no attribute 'methodcaller'
</code></pre>
<p>How to overcome that? I use tz in the following manner</p>
<pre><code>to_zone = dateutil.tz.gettz('US/Eastern')
from_zone = dateutil.tz.gettz('UTC')
</code></pre>
<p>and then I change the timezone of my timestamps. Can I just import dateutil to do that? what is the proper syntax?</p>
<p><strong>UPDATE 2</strong></p>
<p>Following yakuza's suggestion, I am able to </p>
<pre><code>import sys
sys.path.append('/opt/python/lib/python2.7/site-packages')
sys.path.append('/opt/python/lib/python2.7/site-packages/pytz/zoneinfo')
import pytz
</code></pre>
<p>but now I get an error again </p>
<pre><code>Caused by: Traceback (most recent call last): File "udfs.py", line 158, in to_date_local File "__pyclasspath__/pytz/__init__.py", line 180, in timezone pytz.exceptions.UnknownTimeZoneError: 'America/New_York'
</code></pre>
<p>when I define</p>
<pre><code>to_zone = pytz.timezone('America/New_York')
from_zone = pytz.timezone('UTC')
</code></pre>
<p>Found some hints here <a href="http://stackoverflow.com/questions/9158846/unknowntimezoneerror-exception-raised-with-python-application-compiled-with-py2e">UnknownTimezoneError Exception Raised with Python Application Compiled with Py2Exe</a></p>
<p>What to do? Awww, I just want to convert timezones in Pig :(</p>
| 6 | 2016-08-26T22:52:34Z | 39,206,029 | <p>From the answer to <a href="http://stackoverflow.com/questions/7831649/how-do-i-make-hadoop-find-imported-python-modules-when-using-python-udfs-in-pig">a different but related question</a>, it seems that you should be able to use resources as long as they are available on each of the nodes.</p>
<p>I think you can then add the path as described in <a href="http://stackoverflow.com/a/23807481/983722">this answer regarding jython</a>, and load the modules as usual.</p>
<blockquote>
<p>Append the location to the sys.path in the Python script: </p>
<pre><code>import sys
sys.path.append('/usr/local/lib/python2.7/dist-packages')
import happybase
</code></pre>
</blockquote>
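<p>On the side question of checking whether a package is actually installed on the nodes without admin access: one option is to probe the import from a small script run on each node. A sketch (the module names here are just examples):</p>

```python
import importlib

def has_module(name):
    """Return True if `name` can be imported in this interpreter."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

print(has_module('datetime'))            # stdlib, should be True
print(has_module('no_such_module_xyz'))  # should be False
```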
| 1 | 2016-08-29T12:11:07Z | [
"python",
"apache-pig",
"jython",
"cloudera",
"pytz"
] |
Pig: is it possible to use pytz or dateutils for Python udfs? | 39,175,648 | <p>I am using <code>datetime</code> in some Python udfs that I use in my <code>pig</code> script. So far so good. I use pig 12.0 on Cloudera 5.5</p>
<p>However, I also need to use the <code>pytz</code> or <code>dateutil</code> packages as well, and they don't seem to be part of a vanilla Python install. </p>
<p>Can I use them in my <code>Pig</code> udfs in some ways? If so, how? I think <code>dateutil</code> is installed on my nodes (I am not admin, so how can I actually check that is the case?), but when I type:</p>
<pre><code>import sys
#I append the path to dateutil on my local windows machine. Is that correct?
sys.path.append('C:/Users/me/AppData/Local/Continuum/Anaconda2/lib/site-packages')
from dateutil import tz
</code></pre>
<p>in my <code>udfs.py</code> script, I get:</p>
<pre><code>2016-08-30 09:56:06,572 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1121: Python Error. Traceback (most recent call last):
File "udfs.py", line 23, in <module>
from dateutil import tz
ImportError: No module named dateutil
</code></pre>
<p>when I run my pig script.</p>
<p>All my other python udfs (using <code>datetime</code> for instance) work just fine. Any idea how to fix that?</p>
<p>Many thanks!</p>
<p><strong>UPDATE</strong></p>
<p>After playing a bit with the Python path, I am now able to </p>
<pre><code>import dateutil
</code></pre>
<p>(at least Pig does not crash). But if I try:</p>
<pre><code>from dateutil import tz
</code></pre>
<p>I get an error. </p>
<pre><code> from dateutil import tz
File "/opt/python/lib/python2.7/site-packages/dateutil/tz.py", line 16, in <module>
from six import string_types, PY3
File "/opt/python/lib/python2.7/site-packages/six.py", line 604, in <module>
viewkeys = operator.methodcaller("viewkeys")
AttributeError: type object 'org.python.modules.operator' has no attribute 'methodcaller'
</code></pre>
<p>How to overcome that? I use tz in the following manner</p>
<pre><code>to_zone = dateutil.tz.gettz('US/Eastern')
from_zone = dateutil.tz.gettz('UTC')
</code></pre>
<p>and then I change the timezone of my timestamps. Can I just import dateutil to do that? what is the proper syntax?</p>
<p><strong>UPDATE 2</strong></p>
<p>Following yakuza's suggestion, I am able to </p>
<pre><code>import sys
sys.path.append('/opt/python/lib/python2.7/site-packages')
sys.path.append('/opt/python/lib/python2.7/site-packages/pytz/zoneinfo')
import pytz
</code></pre>
<p>but now I get an error again </p>
<pre><code>Caused by: Traceback (most recent call last): File "udfs.py", line 158, in to_date_local File "__pyclasspath__/pytz/__init__.py", line 180, in timezone pytz.exceptions.UnknownTimeZoneError: 'America/New_York'
</code></pre>
<p>when I define</p>
<pre><code>to_zone = pytz.timezone('America/New_York')
from_zone = pytz.timezone('UTC')
</code></pre>
<p>Found some hints here <a href="http://stackoverflow.com/questions/9158846/unknowntimezoneerror-exception-raised-with-python-application-compiled-with-py2e">UnknownTimezoneError Exception Raised with Python Application Compiled with Py2Exe</a></p>
<p>What to do? Awww, I just want to convert timezones in Pig :(</p>
 | 6 | 2016-08-26T22:52:34Z | 39,259,780 | <p>Well, as you probably know, Python UDF functions are not executed by the Python interpreter but by the Jython that is distributed with Pig. By default in 0.12.0 it should be <a href="http://stackoverflow.com/questions/17711451/python-udf-version-with-jython-pig">Jython 2.5.3</a>. Unfortunately the <code>six</code> package supports Python starting from <a href="https://pypi.python.org/pypi/six" rel="nofollow">Python 2.6</a>, and it is a package required by <a href="https://github.com/dateutil/dateutil/blob/master/setup.py" rel="nofollow"><code>dateutil</code></a>. However <code>pytz</code> does not seem to have such a dependency, and should support Python versions starting from <a href="https://pypi.python.org/pypi/pytz" rel="nofollow">Python 2.4</a>.</p>
<p>So to achieve your goal you should distribute the <code>pytz</code> package, built for Python 2.5, to all your nodes, and in your Pig UDF add its path to <code>sys.path</code>. If you complete the same steps you did for <code>dateutil</code>, everything should work as you expect. We are using the very same approach with <code>pygeoip</code> and it works like a charm.</p>
<h1>How does it work</h1>
<p>When you run a Pig script that references some Python UDF (more precisely a Jython UDF), your script gets compiled to a map/reduce job, all <code>REGISTER</code>ed files are included in the JAR file, and they are distributed to the nodes where the code is actually executed. When your code runs, a Jython interpreter is started from Java code. So when the Python code is executed on each node taking part in the computation, all Python imports are resolved locally on that node. Imports from the standard library are taken from the Jython implementation, but all external "packages" have to be installed some other way, as there is no <code>pip</code> for Jython. So to make external packages available to a Python UDF you have to install the required packages manually on each node, using <code>pip</code> or installing from source, but remember to use a package <strong>compatible with Python 2.5</strong>! Then in every single UDF file you have to append the <code>site-packages</code> directory where you installed the packages (it's important to use the same directory on each node). For example:</p>
<pre><code>import sys
sys.path.append('/path/to/site-packages')
# Imports of non-stdlib packages
</code></pre>
<h1>Proof of concept</h1>
<p>Let's assume some we have following files:</p>
<p><code>/opt/pytz_test/test_pytz.pig</code>:</p>
<pre><code>REGISTER '/opt/pytz_test/test_pytz_udf.py' using jython as test;
A = LOAD '/opt/pytz_test/test_pytz_data.csv' AS (timestamp:int);
B = FOREACH A GENERATE
test.to_date_local(timestamp);
STORE B INTO '/tmp/test_pytz_output.csv' using PigStorage(',');
</code></pre>
<p><code>/opt/pytz_test/test_pytz_udf.py</code>:</p>
<pre><code>from datetime import datetime
import sys
sys.path.append('/usr/lib/python2.6/site-packages/')
import pytz

@outputSchema('date:chararray')
def to_date_local(unix_timestamp):
    """
    converts unix timestamp to a rounded date
    """
    to_zone = pytz.timezone('America/New_York')
    from_zone = pytz.timezone('UTC')
    try:
        as_datetime = (datetime.utcfromtimestamp(unix_timestamp)
                       .replace(tzinfo=from_zone).astimezone(to_zone)
                       .date().strftime('%Y-%m-%d'))
    except:
        as_datetime = unix_timestamp
    return as_datetime
</code></pre>
<p><code>/opt/pytz_test/test_pytz_data.csv</code>:</p>
<pre><code>1294778181
1294778182
1294778183
1294778184
</code></pre>
<p>Now let's install <code>pytz</code> on our node (it has to be installed using Python version on which <code>pytz</code> is compatible with Python 2.5 (2.5-2.7), in my case I'll use Python 2.6):</p>
<p><code>sudo pip2.6 install pytz</code></p>
<p><strong>Please make sure, that file</strong> <code>/opt/pytz_test/test_pytz_udf.py</code> <strong>adds to <code>sys.path</code> reference to <code>site-packages</code> where <code>pytz</code> is installed.</strong></p>
<p>Now once we run Pig with our test script:</p>
<p><code>pig -x local /opt/pytz_test/test_pytz.pig</code></p>
<p>We should be able to read output from our job, which should list:</p>
<pre><code>2011-01-11
2011-01-11
2011-01-11
2011-01-11
</code></pre>
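<p>As a quick sanity check of the conversion logic itself (on a modern CPython, not the Jython on the cluster, which still needs <code>pytz</code> as above), the same chain can be written with only the standard library, using a fixed UTC-5 offset as a stand-in for <code>America/New_York</code>; note the parentheses that allow the multi-line method chain:</p>

```python
from datetime import datetime, timedelta, timezone

# Fixed offset stands in for America/New_York; pytz handles DST properly.
to_zone = timezone(timedelta(hours=-5))
d = (datetime.fromtimestamp(1294778181, tz=timezone.utc)
     .astimezone(to_zone)
     .date().strftime('%Y-%m-%d'))
print(d)  # 2011-01-11
```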
| 3 | 2016-08-31T22:10:42Z | [
"python",
"apache-pig",
"jython",
"cloudera",
"pytz"
] |
ImportError with module in same directory | 39,175,653 | <p>I am trying to test the mechanism that one python program calling python functions defined in other files. For instance, the main program is <code>run.py</code>, </p>
<pre><code>import os
import shutil
import ae.autoencoder
if __name__ == '__main__':
    main()
</code></pre>
<p>which calls the <code>autoencoder.py</code> located under the subdirectory of <code>ae</code>. </p>
<p>autoencoder.py is: </p>
<pre><code>import data.py
# import data
</code></pre>
<p>However, either <code>import data.py</code> or <code>import data</code> will always give the following error message</p>
<pre><code>python run.py
Traceback (most recent call last):
File "run.py", line 3, in <module>
import ae.autoencoder
File "/home/autoencoder/ae/autoencoder.py", line 1, in <module>
import data.py
ImportError: No module named 'data'
</code></pre>
<p>The file structure is as follows (/home/autoencoder is the working directory, where run.py is located):</p>
<p><a href="http://i.stack.imgur.com/XlBIw.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/XlBIw.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-26T22:52:50Z | 39,176,000 | <p>Probably even if you import the module <code>data</code> from the module <code>autoencoder</code> that is in the same folder, it doesn't work because the path where modules are searched is always the path containing the first file that gets executed, so the path of <code>run.py</code> in your case. Both <code>autoencoder</code> and <code>data</code> are in the <code>ae</code> subdirectory.</p>
<p>You imported <code>autoencoder</code> from <code>run.py</code> doing: <code>import ae.autoencoder</code></p>
<p>You can import <code>data</code> in the same way: <code>import ae.data</code>, because the import is still resolved relative to the path of <code>run.py</code>, so you have to go through the <code>ae</code> directory.</p>
<p>If you want to avoid mentioning the folder all the time you can add the <code>ae</code> directory to the system path.</p>
<p>Open <code>run.py</code>; you will need the <code>os</code> module that you already imported, and also the <code>sys</code> module,</p>
<p>then:</p>
<pre><code>sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), "ae"))
</code></pre>
<hr>
<p>So you <strong>run.py</strong> file should look like this:</p>
<pre><code>import os
import shutil
import sys

sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), "ae"))

import autoencoder

def main():
    pass

if __name__ == "__main__":
    main()
</code></pre>
<p>And your <strong>autoencoder.py</strong> should look like this:</p>
<pre><code>import data
</code></pre>
<p>If you use an IDE and you try the <code>sys.path.append</code> method, the IDE may show the modules <code>autoencoder</code> and <code>data</code> as not found, but they are actually found at runtime after their path is included, so it should run.</p>
<hr>
<p>Another way of loading a module located in the same directory is:</p>
<pre><code>from . import module_name
</code></pre>
<hr>
<p>When you need to import many modules or import modules at runtime it is recommended to use the <strong>imp</strong> (now deprecated) or <strong>importlib</strong> module.</p>
<p>For Python 3.5+:</p>
<pre><code>import importlib.util
spec = importlib.util.spec_from_file_location("module_name", "/path/to/file")
foo = importlib.util.module_from_spec(spec)
spec.loader.exec_module(foo)
foo.MyClass()
</code></pre>
<p>For Python 3.3 and 3.4:</p>
<pre><code>from importlib.machinery import SourceFileLoader
foo = SourceFileLoader("module_name", "/path/to/file").load_module()
foo.MyClass()
</code></pre>
<p>For Python 2:</p>
<pre><code>import imp
foo = imp.load_source('module_name', '/path/to/file')
foo.MyClass()
</code></pre>
| 0 | 2016-08-26T23:40:24Z | [
"python"
] |
ImportError with module in same directory | 39,175,653 | <p>I am trying to test the mechanism that one python program calling python functions defined in other files. For instance, the main program is <code>run.py</code>, </p>
<pre><code>import os
import shutil
import ae.autoencoder
if __name__ == '__main__':
main()
</code></pre>
<p>which calls the <code>autoencoder.py</code> located under the subdirectory of <code>ae</code>. </p>
<p>autoencoer.py is </p>
<pre><code>import data.py
# import data
</code></pre>
<p>However, either <code>import data.py</code> or <code>import data</code> will always give the following error message</p>
<pre><code>python run.py
Traceback (most recent call last):
File "run.py", line 3, in <module>
import ae.autoencoder
File "/home/autoencoder/ae/autoencoder.py", line 1, in <module>
import data.py
ImportError: No module named 'data'
</code></pre>
<p>The file structure is as follows:(/home/autoencoder is the working directory, where run.py locates)</p>
<p><a href="http://i.stack.imgur.com/XlBIw.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/XlBIw.jpg" alt="enter image description here"></a></p>
| 0 | 2016-08-26T22:52:50Z | 39,176,103 | <p>First things first, no need to add the <code>.py</code> extension when importing :) That's implied. </p>
<p>The syntax for relative imports (importing modules in the same directory) got a little stricter between Python versions 2 and 3. The way you have it set up will work with Python 2.x, but I'm guessing you're running Python 3.x (you can check by running <code>python --version</code>). With Python 3, you need to explicitly use a dot to do relative imports now.</p>
<p>Python 2 relative import:</p>
<pre><code>import data
</code></pre>
<p>Better Python 2.5+ relative import:</p>
<pre><code>from . import data
</code></pre>
<p>You should use the second version in any code you write, unless you're specifically limited to using ancient python versions.</p>
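<p>A self-contained sketch that builds the package layout from the question in a temporary directory and shows the explicit relative import working under Python 3 (all paths here are throwaway):</p>

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'ae'))
open(os.path.join(root, 'ae', '__init__.py'), 'w').close()
with open(os.path.join(root, 'ae', 'data.py'), 'w') as f:
    f.write('VALUE = 42\n')
with open(os.path.join(root, 'ae', 'autoencoder.py'), 'w') as f:
    f.write('from . import data\n')  # explicit relative import

sys.path.insert(0, root)
import ae.autoencoder
print(ae.autoencoder.data.VALUE)  # 42
```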
| 0 | 2016-08-26T23:51:39Z | [
"python"
] |
Frames showing on top of other frames in tkinter | 39,175,675 | <p>I'm trying to make a simple GUI with multiple frames in python with tkinter, but what is happening is one frame is showing on top of the other frame. Can anybody help me with why this is and how it can be fixed? Here's my code:</p>
<pre><code>'''
SCRATCH GUI
By Sigton

This is a GUI built on Dylan5797's Scratch API
'''
import tkinter as tk
from tkinter import ttk

import scratchapi

LARGE_FONT = ("Verdana", 12)

class ScratchGUIApp(tk.Tk):
    '''
    Main backend class, this is what makes stuff work.
    '''
    def __init__(self, *args, **kwargs):
        ''' Constructor '''
        # Call the parents constructor
        tk.Tk.__init__(self, *args, **kwargs)

        # Set the window title
        tk.Tk.wm_title(self, "Scratch GUI")

        # Create the container
        self.container = tk.Frame(self)
        self.container.pack(side="top", fill="both", expand=True)

        # And configure the grid
        self.container.grid_rowconfigure(0, weight=1)
        self.container.grid_columnconfigure(0, weight=1)

        # Create a dictionary of frames and append all pages to it
        self.frames = {}
        for f in (LoginPage, MainPage):
            frame = f(self.container, self)
            self.frames[f] = frame
            frame.grid(row=0, column=0, sticky="nsew")

        # Set the starting page
        self.show_frame(LoginPage)

    def show_frame(self, cont):
        # A simple function to switch pages
        frame = self.frames[cont]
        frame.tkraise()

class LoginPage(tk.Frame):
    '''
    This is all content on the login page
    '''
    def __init__(self, parent, controller):
        ''' Constructor '''
        # Call the parents constructor
        tk.Frame.__init__(self, parent)
        self.parent = parent
        self.controller = controller

        # Add the title
        self.title = ttk.Label(self, text="Log in to your Scratch account", font=LARGE_FONT)
        self.title.grid(row=0, column=0, columnspan=2, pady=10)

        # Add the login form
        self.usernameTag = ttk.Label(self, text="Username:")
        self.usernameTag.grid(row=1, column=0, sticky="e", pady=2)
        self.usernameEntry = ttk.Entry(self)
        self.usernameEntry.grid(row=1, column=1, pady=2)

        self.passwordTag = ttk.Label(self, text="Password:")
        self.passwordTag.grid(row=2, column=0, sticky="e", pady=2)
        self.passwordEntry = ttk.Entry(self)
        self.passwordEntry.grid(row=2, column=1, pady=2)

        # Just in case theres anything to report
        self.errorMessage = ttk.Label(self, text="", foreground="red")
        self.errorMessage.grid(row=3, column=0, columnspan=2, pady=5)

        # Add the disclaimer
        self.subtitle = ttk.Label(self, text="Account information is not collected in any way.")
        self.subtitle.grid(row=4, column=0, columnspan=2)

        # And finally add the login button
        self.button = ttk.Button(self, text="Login",
                                 command=lambda: self.login())
        self.button.grid(row=5, column=0, columnspan=2, pady=10)

    def login(self):
        # Attempts to log the user in to the scratchapi
        usernameData = self.usernameEntry.get()
        passwordData = self.passwordEntry.get()
        if usernameData == "" or passwordData == "":
            # Stop the function if the fields are empty.
            self.errorMessage.config(text="These fields are required.")
            return
        # Attempt to login to the scratchapi with the given username and password
        try:
            scratch = scratchapi.ScratchUserSession(usernameData, passwordData)
        except:
            # Stop the function if there was an error
            self.errorMessage.config(text="Login failed.")
            return
        self.controller.show_frame(MainPage)

class MainPage(tk.Frame):
    '''
    This is all content on the main page.
    '''
    def __init__(self, parent, controller):
        ''' Constructor '''
        # Call the parents constructor
        tk.Frame.__init__(self, parent)
        self.parent = parent
        self.controller = controller

        self.label=ttk.Label(text="hi")
        self.label.pack(in_=self)

app = ScratchGUIApp()
app.mainloop()
</code></pre>
 | -2 | 2016-08-26T22:54:49Z | 39,176,068 | <p>The frames are appearing exactly how they should. The problem is that you are putting the "hi" label in the root window, which is also where you put the container for the frames. Even though you use the <code>in_</code> parameter, you need to make that label have a parent of <code>self</code> due to the way this specific code works (by raising and lowering frames).</p>
| 0 | 2016-08-26T23:47:50Z | [
"python",
"tkinter"
] |
Python Unit Testing Mock incorrectly | 39,175,756 | <p>I am new to unit testing in Python. I am trying to write unit tests for my class. However I am running into issues:</p>
<pre><code>from <path1> import InnerObj
from <path2> import new_obj
from <path3> import XYZ
class ClassToBeTested(object):
    def __init__(self):
        obj = new_obj(param1 = "XYZ", time = 1, innerObj = InnerObj())
        self.attr1 = XYZ(obj)

    def method(self, random, paramStr):
        # Remainder of class
</code></pre>
<p>Test class:</p>
<pre><code>import pytest
from mock import patch, PropertyMock, MagicMock
from <path1> import InnerObj
from <path2> import new_obj
from <path3> import XYZ

@pytest.fixture()
@patch('<path1>.InnerObj', new=MagicMock())
@patch('<path2>.new_obj', new=MagicMock())
@patch('<path3>.XYZ', new=MagicMock())
def mock_test():
    return ClassToBeTested()

def test_method_true(mock_test):
    random = Random_Object()
    booleanResult = mock_test.method(random, paramStr)
    assert booleanResult == True
</code></pre>
<p>The error I get is <code>ERROR at setup of test_method_true ______</code></p>
<p>The error stack mentions:</p>
<pre><code>innerObj/__init__.py:26: in __init__
    qwerty_main = qwerty_assistant.get_root()
</code></pre>
<p>I believe the mocking is not done correctly for <code>InnerObj</code>, as the code in the <code>__init__</code> method of mocked objects should not be invoked. </p>
<p>Am I doing something wrong here? Can someone please point me in the right direction?</p>
<p>Thanks</p>
| 1 | 2016-08-26T23:06:48Z | 39,192,559 | <p><code>patch</code> should target the import that is being used and not the path to the import.</p>
<p>For example </p>
<p><code>@patch('<path1>.InnerObj', new=MagicMock())</code></p>
<p><code><path1></code> is the location to the definition of <code>InnerObj</code> and not the file using it. To address this, the <code>InnerObj</code> import should be patched in the module that imports <code>InnerObj</code></p>
<p>Pretend that the path to <code>ClassToBeTested</code> is <code>path.to.class.to.be.tested</code></p>
<p>The patch would be:</p>
<p><code>@patch('path.to.class.to.be.tested.InnerObj', new=MagicMock())</code></p>
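<p>A runnable sketch of the difference, simulating the two modules in memory (using the stdlib <code>unittest.mock</code>; the external <code>mock</code> package behaves the same). Patching the definition site leaves the consumer's copy of the name untouched, while patching the import site replaces what the consumer actually calls:</p>

```python
import sys
import types
from unittest.mock import patch

# `helpers` defines InnerObj; `consumer` did `from helpers import InnerObj`,
# which leaves a *copy* of the name in the consumer module.
helpers = types.ModuleType('helpers')
class InnerObj(object):
    def get_root(self):
        return 'real'
helpers.InnerObj = InnerObj
sys.modules['helpers'] = helpers

consumer = types.ModuleType('consumer')
consumer.InnerObj = helpers.InnerObj
sys.modules['consumer'] = consumer

def make():
    return consumer.InnerObj().get_root()

with patch('helpers.InnerObj'):          # definition site: no effect here
    before = make()
print(before)  # real

with patch('consumer.InnerObj') as m:    # import site: this is what works
    m.return_value.get_root.return_value = 'mocked'
    after = make()
print(after)   # mocked
```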
| 1 | 2016-08-28T14:56:59Z | [
"python",
"unit-testing",
"mocking"
] |
Difference between coding of dictionaries in python versions | 39,175,767 | <p>I have the following code:</p>
<pre><code>#coding: utf
import json
import base64
from lxml import html, etree
import urllib2
somedictionary={}
url1="someurl1"
base64string = base64.b64encode('%s:%s' % ('user', 'pass'))
xml1request = urllib2.Request(url1)
xml1request.add_header("Authorization", "Basic %s" % base64string)
xml1=etree.parse(urllib2.urlopen(xml1request))
somelist=xml1.xpath("//list1//a/text()")
for element in somelist:
url2="part of url2"+element+"part of url2"
xml2request=urllib2.Request(url2)
xml2request.add_header("Authorization", "Basic %s" % base64string)
xml2=etree.parse(urllib2.urlopen(xml2request))
b=xml2.xpath("//list2//b/text()")
c=xml2.xpath("//list2//c/text()")
d=xml2.xpath("//list2//d/text()")
e=xml2.xpath("//list2//e/text()")
somedictionary[key.index(element)]={key.index(element):{"a": element, "b": b, "c": c, "d": d, "e": e}}
#json.dump(bamboo, open("12345.txt","w"))
</code></pre>
<p>In Python 3.4.0 it works,
but in Python 2.7.10 it gives me an error:</p>
<pre><code> Traceback (most recent call last):
File "C:\Users\user\11.py", line 25, in <module>
somedictionary[key.index(element)]={key.index(element):{"a": a, "b": b, "c": c, "d": d, "e": e}}
NameError: name 'key' is not defined
>>>
</code></pre>
<p>Variables <code>b</code>, <code>c</code>, <code>d</code>, <code>e</code> are defined inside the loop;
<code>somedictionary</code> is defined before the loop.
I couldn't find any information about this in the Python docs.
How can I fix it, given that it works in Python 3.4.0?</p>
| 0 | 2016-08-26T23:07:55Z | 39,175,905 | <p>The only way it could work in python3 and not python2 is if <em>somelist</em> is empty in python3 so you never reach the code inside the loop:</p>
<pre><code>In [20]: l = []
In [21]: for ele in l:
print(not_defined) # never reach here
....:
In [22]: l = [1]
In [23]: for ele in l:
print(not_defined) # loop once so we get here and error
....:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-23-6e24b65bf7e0> in <module>()
1 for ele in l:
----> 2 print(not_defined)
3
NameError: name 'not_defined' is not defined
</code></pre>
<p>You never defined the name <em>key</em> anywhere so bar <em>somelist</em> is empty you would get a <em>NameError</em> as above in both <em>python2</em> and <em>python3</em>. </p>
<p>So you have two problems, in <em>python3</em> your code is not finding anything, if it did you still have the <em>key</em> issue as you don't define it anywhere so you need to figure out what <em>key</em> should be and debug your <em>python3</em> logic. </p>
| 1 | 2016-08-26T23:26:26Z | [
"python"
] |
'datetime.datetime' object is not subscriptable | 39,175,822 | <p>I've checked every other post about this, but none can fix my issue.</p>
<p>I made a list that holds tuples with an id and a datetime object. Every time I try to clean up the list with:
<code>last_encounters = [item for item in last_encounters if item[1] < datetime.utcnow]</code>
I get the error that <code>'datetime.datetime' object is not subscriptable</code>. It's getting pretty annoying, I tried dicts.. didn't work.</p>
<p>Also tested the item[1], according to my print it is a datetime.</p>
<p>Even tried changing it to <code>(x,y) for x,y in last_encounters if y < ...</code> also did NOT work.</p>
<p>Some useful code:</p>
<pre><code>list = []
d_t = datetime.utcfromtimestamp(9000000)
list += [('lel', d_t)]
list = [item for item in list if item[1] < datetime.utcnow]
</code></pre>
<p>I hope someone can tell me what I am doing wrong here.</p>
<p>Thanks in advance,</p>
<p>Kevin</p>
| 0 | 2016-08-26T23:15:00Z | 39,175,890 | <p>Try calling <code>utcnow</code> as a method <code>utcnow()</code>:</p>
<pre><code>last_encounters = [item for item in last_encounters if item[1] < datetime.utcnow()]
</code></pre>
<p>I couldn't reproduce your error with a version of your code, but a version with items in the list led to this fix.</p>
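<p>Without the parentheses, <code>datetime.utcnow</code> is the method object itself rather than the current time, so you end up comparing a datetime against a function (Python 2 allows such cross-type comparisons silently, which is one reason you may see wrong results instead of a crash). With the call added, the filter behaves:</p>

```python
from datetime import datetime

d_t = datetime.utcfromtimestamp(9000000)  # 1970-04-15, well in the past
now = datetime.utcnow()                   # note the parentheses
kept = [item for item in [('lel', d_t)] if item[1] < now]
print(kept)  # [('lel', datetime.datetime(1970, 4, 15, 4, 0))]
```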
| 0 | 2016-08-26T23:25:23Z | [
"python",
"python-2.7",
"datetime",
"tuples"
] |
'datetime.datetime' object is not subscriptable | 39,175,822 | <p>I've checked every other post about this, but none can fix my issue.</p>
<p>I made a list that holds tuples with an id and a datetime object. Every time I try to clean up the list with:
<code>last_encounters = [item for item in last_encounters if item[1] < datetime.utcnow]</code>
I get the error that <code>'datetime.datetime' object is not subscriptable</code>. It's getting pretty annoying, I tried dicts.. didn't work.</p>
<p>Also tested the item[1], according to my print it is a datetime.</p>
<p>Even tried changing it to <code>(x,y) for x,y in last_encounters if y < ...</code> also did NOT work.</p>
<p>Some useful code:</p>
<pre><code>list = []
d_t = datetime.utcfromtimestamp(9000000)
list += [('lel', d_t)]
list = [item for item in list if item[1] < datetime.utcnow]
</code></pre>
<p>I hope someone can tell me what I am doing wrong here.</p>
<p>Thanks in advance,</p>
<p>Kevin</p>
| 0 | 2016-08-26T23:15:00Z | 39,175,939 | <p>When you do <code>last_encounters += (a, b)</code>, you are adding two sequences together, <code>last_encounters</code> and <code>(a,b)</code>. This means you end up with <code>a</code> and <code>b</code> stuck on the end of the list, rather than just adding the tuple to the list.</p>
<p>There are two options to fix your problem:</p>
<ol>
<li><p>Add a sequence containing your tuple:</p>
<pre><code> last_encounters += [(d["id"], d["d_t"])]
</code></pre></li>
<li><p>Or preferably, use the <code>append</code> method:</p>
<pre><code> last_encounters.append((d["id"], d["d_t"]))
</code></pre></li>
</ol>
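<p>A short demonstration of the two options next to the behaviour that caused the confusion:</p>

```python
last_encounters = []
last_encounters += [('id1', 'dt1')]      # option 1: wrap the tuple in a list
last_encounters.append(('id2', 'dt2'))   # option 2: append the tuple itself
print(last_encounters)  # [('id1', 'dt1'), ('id2', 'dt2')]

flattened = []
flattened += ('id1', 'dt1')              # += with a bare tuple extends element-wise
print(flattened)        # ['id1', 'dt1']
```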
| 1 | 2016-08-26T23:31:17Z | [
"python",
"python-2.7",
"datetime",
"tuples"
] |
'datetime.datetime' object is not subscriptable | 39,175,822 | <p>I've checked every other post about this, but none can fix my issue.</p>
<p>I made a list that holds tuples with an id and a datetime object. Every time I try to clean up the list with:
<code>last_encounters = [item for item in last_encounters if item[1] < datetime.utcnow]</code>
I get the error that <code>'datetime.datetime' object is not subscriptable</code>. It's getting pretty annoying, I tried dicts.. didn't work.</p>
<p>Also tested the item[1], according to my print it is a datetime.</p>
<p>Even tried changing it to <code>(x,y) for x,y in last_encounters if y < ...</code> also did NOT work.</p>
<p>Some useful code:</p>
<pre><code>list = []
d_t = datetime.utcfromtimestamp(9000000)
list += [('lel', d_t)]
list = [item for item in list if item[1] < datetime.utcnow]
</code></pre>
<p>I hope someone can tell me what I am doing wrong here.</p>
<p>Thanks in advance,</p>
<p>Kevin</p>
 | 0 | 2016-08-26T23:15:00Z | 39,175,950 | <p>It looks like your problem is the way you add the tuple to the list. Here is an example to show the problem:</p>
<pre><code>l = []
l += ("a", "b")
print l
l = []
l.append( ("a", "b"))
print l
</code></pre>
<p>Which gives :</p>
<pre><code>>>> ['a', 'b']
>>> [('a', 'b')]
</code></pre>
<p>So <code>list+=tuple</code> is equivalent to calling <code>list.extend(tuple)</code> and not <code>list.append(tuple)</code> which is what you want.</p>
<p>A side note on the meaning of the exception that was raised:
<code>X is not subscriptable</code> means that you are trying to use the syntax <code>X[some int]</code> while the object doesn't support it.</p>
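<p>A small reproduction (the exact wording of the message differs between Python 2 and 3, but both raise <code>TypeError</code>):</p>

```python
from datetime import datetime

d = datetime.utcnow()
try:
    d[1]
    caught = None
except TypeError as e:
    caught = e
print(type(caught).__name__)  # TypeError
```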
| 0 | 2016-08-26T23:32:06Z | [
"python",
"python-2.7",
"datetime",
"tuples"
] |
python3 upload files to ondrive or sharepoint? | 39,175,891 | <p>Anyone know if this is possible?
I just want to automate dropping some documents into my onedrive for business account.</p>
<p>I tried </p>
<pre><code> import onedrivesdk
from onedrivesdk.helpers import GetAuthCodeServer
from onedrivesdk.helpers.resource_discovery import ResourceDiscoveryRequest
redirect_uri = 'http://localhost:8080'
client_id = 'appid'
client_secret = 'mysecret'
discovery_uri = 'https://api.office.com/discovery/'
auth_server_url='https://login.live.com/oauth20_authorize.srf?scope=wl.skydrive_update'
#auth_server_url='https://login.microsoftonline.com/common/oauth2/authorize',
auth_token_url='https://login.microsoftonline.com/common/oauth2/token'
http = onedrivesdk.HttpProvider()
auth = onedrivesdk.AuthProvider(http,
client_id,
auth_server_url=auth_server_url,
auth_token_url=auth_token_url)
auth_url = auth.get_auth_url(redirect_uri)
code = GetAuthCodeServer.get_auth_code(auth_url, redirect_uri)
auth.authenticate(code, redirect_uri, client_secret, resource=resource)
# If you have access to more than one service, you'll need to decide
# which ServiceInfo to use instead of just using the first one, as below.
service_info = ResourceDiscoveryRequest().get_service_info(auth.access_token)[0]
auth.redeem_refresh_token(service_info.service_resource_id)
client = onedrivesdk.OneDriveClient(service_info.service_resource_id + '/_api/v2.0/', auth, http)
</code></pre>
<p>I registered an APP and got a secret and id. But when I ran this I got scope is invalid errors. Plus it tries to launch a webpage which isn't great for a command line kinda environment. I think this SDK might be outdated as well because originally this script had login.microsoftonline, but that wasn't reachable so I changed it to login.live.com.</p>
| 0 | 2016-08-26T23:25:24Z | 39,211,002 | <p>I wrote this sample code you posted. You replaced the <code>auth_server_URL</code>with the authentication URL for Microsoft Account authentication, which can only be used to access OneDrive (the consumer product). You need to continue using the <code>login.microsoftonline.com</code> URL to log into your OneDrive for Business account.</p>
<p>You are correct that this pops up a dialog. However, you can write a little supporting code so that only happens the first time you log into a particular app. Follow these steps (assuming you are using the default implementation of <code>AuthProvider</code>:</p>
<ol>
<li>Use the sample code above up through the line <code>auth.redeem_refresh_token()</code></li>
<li>The <code>AuthProvider</code> will now have a <code>Session</code> object, which caches the credentials of the current user and session. Use <code>AuthProvider.save_session()</code> to save the credentials for later.</li>
<li>Next time you start your app, use <code>AuthProvider.load_session()</code> and <code>AuthProvider.refresh_token()</code> to retrieve the previous session and refresh the auth token. This will all be headless.</li>
</ol>
<p>Take note that the default implementation of <code>SessionBase</code> (<a href="https://github.com/OneDrive/onedrive-sdk-python/blob/master/src/onedrivesdk/session.py" rel="nofollow">found here</a>) uses <code>Pickle</code> and is not safe for production use. Make sure to create a new implementation of <code>Session</code> if you intend to deploy this app to other users.</p>
| 0 | 2016-08-29T16:33:41Z | [
"python",
"python-3.x",
"sharepoint",
"upload",
"onedrive"
] |
python3 upload files to onedrive or sharepoint? | 39,175,891 | <p>Anyone know if this is possible?
I just want to automate dropping some documents into my onedrive for business account.</p>
<p>I tried </p>
<pre><code> import onedrivesdk
from onedrivesdk.helpers import GetAuthCodeServer
from onedrivesdk.helpers.resource_discovery import ResourceDiscoveryRequest
redirect_uri = 'http://localhost:8080'
client_id = 'appid'
client_secret = 'mysecret'
discovery_uri = 'https://api.office.com/discovery/'
auth_server_url='https://login.live.com/oauth20_authorize.srf?scope=wl.skydrive_update'
#auth_server_url='https://login.microsoftonline.com/common/oauth2/authorize',
auth_token_url='https://login.microsoftonline.com/common/oauth2/token'
http = onedrivesdk.HttpProvider()
auth = onedrivesdk.AuthProvider(http,
client_id,
auth_server_url=auth_server_url,
auth_token_url=auth_token_url)
auth_url = auth.get_auth_url(redirect_uri)
code = GetAuthCodeServer.get_auth_code(auth_url, redirect_uri)
auth.authenticate(code, redirect_uri, client_secret, resource=resource)
# If you have access to more than one service, you'll need to decide
# which ServiceInfo to use instead of just using the first one, as below.
service_info = ResourceDiscoveryRequest().get_service_info(auth.access_token)[0]
auth.redeem_refresh_token(service_info.service_resource_id)
client = onedrivesdk.OneDriveClient(service_info.service_resource_id + '/_api/v2.0/', auth, http)
</code></pre>
<p>I registered an APP and got a secret and id. But when I ran this I got scope is invalid errors. Plus it tries to launch a webpage which isn't great for a command line kinda environment. I think this SDK might be outdated as well because originally this script had login.microsoftonline, but that wasn't reachable so I changed it to login.live.com.</p>
| 0 | 2016-08-26T23:25:24Z | 39,521,342 | <p>OneDrive's website shows "Not Yet" for "OneDrive for Business" under "OneDrive SDK for Python":
<a href="https://dev.onedrive.com/SDKs.htm" rel="nofollow">https://dev.onedrive.com/SDKs.htm</a></p>
<p>The GitHub sample code did not work for me either; it tried to pop up an authentication window, but IE could not find the address:</p>
<p>http://('https//login.microsoftonline.com/common/oauth2/authorize',)?redirect_uri=http%3A%2F%2Flocalhost%3A8080&client_id=034xxxx9-9xx8-4xxf-bexx-1bc5xxxxbd0c&response_type=code</p>
<p>or removed all the "-" in client id</p>
<p>http://('https//login.microsoftonline.com/common/oauth2/authorize',)?redirect_uri=http%3A%2F%2Flocalhost%3A8080&client_id=034xxxx99xx84xxfbexx1bc5xxxxbd0c&response_type=code</p>
<p>Either way, I got the same result: instead of the popup, IE showed a line "This page can't be displayed".</p>
| 0 | 2016-09-15T22:36:49Z | [
"python",
"python-3.x",
"sharepoint",
"upload",
"onedrive"
] |
Pythai on Mac OS | 39,175,912 | <p>Does anyone have experience installing pythai on Mac OS?
I get the following error when I try and install it with "pip install pythai"</p>
<pre><code>gcc -fno-strict-aliasing -I/Users/roopal/anaconda/include -arch x86_64 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/roopal/anaconda/include/python2.7 -c pythai/libthai.c -o build/temp.macosx-10.6-x86_64-2.7/pythai/libthai.o
pythai/libthai.c:3:10: fatal error: 'thai/thbrk.h' file not found
#include <thai/thbrk.h>
^
1 error generated.
error: command 'gcc' failed with exit status 1
</code></pre>
<p>Not sure where to get and put the thai/thbrk.h file</p>
| -1 | 2016-08-26T23:27:14Z | 39,176,153 | <p>If you Google <code>pythai</code>, the first result that comes up is the project's <a href="https://github.com/hermanschaaf/pythai" rel="nofollow">Github page</a>. If you click on it, then scroll down to the readme, it says in the Installation section</p>
<blockquote>
<p>PyThai requires <code>libthai-dev</code> to work.</p>
</blockquote>
<p>and gives installation instructions for Debian/Ubuntu. Since you're on OS X, you'll need to build the source from scratch. Going back to Google, a search for <code>libthai</code> leads you to the <a href="https://linux.thai.net/projects/libthai/" rel="nofollow">LibThai website</a>. Reading that page, you'll find that the code is hosted on <a href="https://github.com/tlwg/libthai" rel="nofollow">Github</a>. Click on Releases and download the <code>tar.gz</code> of the latest version, 0.1.25. Unpack the archive, enter its base folder, and run <code>./autogen.sh</code>. If you get an error about an undefined macro, do what it says and rerun <code>autogen.sh</code> with the <code>m4_pattern_allow</code> argument. Once that completes, do the usual for building a package from source - run <code>./configure --help</code> to see if there are any particular options you want to set, then run <code>./configure</code> plus any desired flags, then run <code>make</code>, then run <code>sudo make install</code> and you should be all set.</p>
<p>Obviously, this requires you to have XCode and the XCode command-line tools installed and activated.</p>
| 2 | 2016-08-26T23:59:09Z | [
"python",
"osx",
"python-2.7",
"pip"
] |
Python COM server with VBA late binding + skip win register (no admin rights) | 39,175,926 | <p>I'm trying to <code>import</code> Python code in VBA.</p>
<p>The code below works but <strong>requires admin rights</strong>. Is there a way to go around the win register need (assume I just don't have admin rights) but keep the 'late binding' behavior (don't want to Tools>>Reference every time I compile something new)?</p>
<pre><code>class ProofOfConcept(object):
def __init__(self):
self.output = []
def GetData(self):
with open('C:\Users\MyPath\Documents\COMs\SourceData.txt') as FileObj:
for line in FileObj:
self.output.append(line)
return self.output
class COMProofOfConcept(object):
_reg_clsid_ = "{D25A5B2A-9544-4C07-8077-DB3611BE63E7}"
_reg_progid_= 'RiskTools.ProofOfConcept'
_public_methods_ = ['GetData']
def __init__(self):
self.__ProofOfConcept = ProofOfConcept()
def GetData(self):
return self.__ProofOfConcept.GetData()
if __name__=='__main__':
print "Registering COM server..."
import win32com.server.register
win32com.server.register.UseCommandLine(COMProofOfConcept)
</code></pre>
<p>VBA Code that calls it:</p>
<pre><code>Sub TestProofOfConcept()
Set PoF = CreateObject("RiskTools.ProofOfConcept")
x = PoF.GetData()
MsgBox x(0)
End Sub
</code></pre>
| 0 | 2016-08-26T23:28:55Z | 39,176,974 | <p>In short, no. The VBA runtime basically uses the <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms684007(v=vs.85).aspx" rel="nofollow">CoGetClassObject</a> COM API under the hood - the <code>CreateObject()</code> function is essentially just a thin wrapper around it (it calls <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms680589(v=vs.85).aspx" rel="nofollow">CLSIDFromString</a> to locate the CLSID from the parameter first). Both of these functions require that the class be registered.</p>
| 3 | 2016-08-27T02:48:23Z | [
"python",
"vba",
"com",
"winreg"
] |
Apply rotation matrix to vector + plot it | 39,175,928 | <p>I have created a vector (v) and would like to perform the rotMatrix function on it. I cannot figure out how to call the function rotMatrix with a degree of 30 on the vector (v). I am also plotting the vectors. </p>
<p>Here is my code: </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
import math
def rotMatrix(angle):
return np.array([[np.cos(np.degrees(angle)), np.arcsin(np.degrees(angle))], [np.sin(np.degrees(angle)), np.cos(np.degrees(angle))]])
v = np.array([3,7])
v30 = rotMatrix(np.degrees(30)).dot(v)
plt.arrow(0,0,v[0],v[1], head_width=0.8, head_length=0.8)
plt.arrow(0,0,v30[0],v30[1],head_width=0.8, head_length=0.8)
plt.axis([-5,5,0,10])
plt.show()
</code></pre>
| 0 | 2016-08-26T23:29:15Z | 39,176,029 | <p>In your rotMatrix function you have used the arcsin() function. You want to use -sin(). You should also convert your degrees value to radians.</p>
<pre><code>return np.array([[np.cos(np.radians(angle)),
-np.sin(np.radians(angle))],
[np.sin(np.radians(angle)),
np.cos(np.radians(angle))]])
</code></pre>
<p>Or slightly improve efficiency and readability by</p>
<pre><code>c = np.cos(np.radians(angle))
s = np.sin(np.radians(angle))
return np.array([[c, -s], [s, c]])
</code></pre>
<p>and the call with</p>
<pre><code>rotMatrix(30).dot(v)
</code></pre>
<p>-sin and arcsin are very different.</p>
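<p>A dependency-free sketch of the corrected function (standard-library <code>math</code> only, plain lists instead of numpy arrays) that makes both fixes explicit:</p>

```python
import math

def rot_matrix(angle_deg):
    # convert degrees to radians first; the off-diagonal entry is -sin, not arcsin
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [[c, -s], [s, c]]

def rotate(m, v):
    # 2x2 matrix times 2-vector
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

v30 = rotate(rot_matrix(30), [3.0, 7.0])
```

<p>Rotation preserves vector length, which gives a quick sanity check: <code>math.hypot(*v30)</code> should equal <code>math.hypot(3, 7)</code>.</p>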
| 0 | 2016-08-26T23:43:35Z | [
"python",
"numpy",
"matplotlib",
"vector",
"rotational-matrices"
] |
Apply rotation matrix to vector + plot it | 39,175,928 | <p>I have created a vector (v) and would like to perform the rotMatrix function on it. I cannot figure out how to call the function rotMatrix with a degree of 30 on the vector (v). I am also plotting the vectors. </p>
<p>Here is my code: </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
import math
def rotMatrix(angle):
return np.array([[np.cos(np.degrees(angle)), np.arcsin(np.degrees(angle))], [np.sin(np.degrees(angle)), np.cos(np.degrees(angle))]])
v = np.array([3,7])
v30 = rotMatrix(np.degrees(30)).dot(v)
plt.arrow(0,0,v[0],v[1], head_width=0.8, head_length=0.8)
plt.arrow(0,0,v30[0],v30[1],head_width=0.8, head_length=0.8)
plt.axis([-5,5,0,10])
plt.show()
</code></pre>
| 0 | 2016-08-26T23:29:15Z | 39,176,073 | <p>When in doubt, play around with the calculations in an interactive Python/numpy session.</p>
<pre><code>In [23]: 30/180*np.pi # do it yourself convsion - easy
Out[23]: 0.5235987755982988
In [24]: np.radians(30) # degrees to radians - samething
Out[24]: 0.52359877559829882
In [25]: np.sin(np.radians(30)) # sin(30deg)
Out[25]: 0.49999999999999994
</code></pre>
| 0 | 2016-08-26T23:48:26Z | [
"python",
"numpy",
"matplotlib",
"vector",
"rotational-matrices"
] |
Scala MurmurHash3 library not matching Python mmh3 library | 39,176,052 | <p>I have a need to MurmurHash strings in both Python and Scala. However they are giving very different results. Scala's builtin <code>MurmurHash3</code> library does not seem to give the same results as any of the other libraries I have tried including online ones. The odd thing is it seems to match on a single character but not multiple characters. Here are some examples:</p>
<p>Python:</p>
<pre><code>mmh3.hash('string', 0)
res: -1390314837
</code></pre>
<p>Scala:</p>
<pre><code>MurmurHash3.stringHash("string", 0)
res: 379569354
</code></pre>
<p>I have tried playing with signed and unsigned ints as I know Java has signed and the C implementation python is wrapping is using unsigned. But even using NumPy to convert to a signed int gives us no help. This website seems to agree with the python implementation: </p>
<p><a href="http://murmurhash.shorelabs.com/" rel="nofollow">http://murmurhash.shorelabs.com/</a></p>
<p>Any ideas on what could be going on here?</p>
| 4 | 2016-08-26T23:46:37Z | 39,177,996 | <p>Scala uses Java strings which are encoded as UTF-16. These are packed two at a time into an <code>Int</code>; Python uses a <code>char*</code> (8 bits), so packs in four characters at a time instead of two.</p>
<p>Edit: Scala also packs the chars in MSB order, i.e. <code>(s.charAt(i) << 16) | (s.charAt(i+1))</code>. You might need to switch to an array of shorts and then swap every pair of them if it's really important to get exactly the same answer. (Or port the Scala code to Python or vice versa.) It also finalizes with the string length; I'm not sure how Python incorporates length data, if it does at all. (This is important so you can distinguish the strings <code>"\u0000"</code> and <code>"\u0000\u0000"</code>.)</p>
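<p>The byte streams the two hashers consume are therefore different before any mixing even starts. A small standard-library illustration (this only shows the differing input representations, not a MurmurHash implementation; as noted above, Scala additionally packs two chars per Int and mixes in the length):</p>

```python
s = "string"
utf8 = s.encode("utf-8")        # 8-bit units: what the C-backed mmh3 wrapper consumes
utf16 = s.encode("utf-16-be")   # 16-bit units: the JVM's in-memory character width
print(len(utf8), len(utf16))    # 6 vs 12 bytes for the same text
```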
| 3 | 2016-08-27T05:59:49Z | [
"python",
"scala",
"encryption",
"hash",
"murmurhash"
] |
How can I run a bash executable from a python script with the same i/o? | 39,176,066 | <p>So, what I'm trying to do is execute a program from a python script, which I execute in the shell. I've read other questions so I can do it with:</p>
<pre><code>def bash(cmd):
subprocess.Popen(cmd, shell=True, executable='/bin/bash')
bash('./my/file')
</code></pre>
<p>Now the first problem with this is I often want to terminate the program from shell. Ordinarily, if I did "./my/file" from terminal I could stop it with ctrl+c. But using subprocess runs it in the background somehow and I can only kill it through the command "top", and killing it by pid in bash. But the program spawns a large number of processes so fast I literally just can't kill it this way in a reasonable way.</p>
<p>Also, the second thing I want to do is to wait for the program to finish running before executing more of my python script. I tried setting... </p>
<pre><code>process=subprocess.Popen(cmd, shell=True, executable='/bin/bash')
process.wait()
</code></pre>
<p>But that actually stops the program almost immediately after it starts.</p>
<p>I could just execute the scripts separately, but I don't see a way to use the results from the program without creating a third script that I have to run myself after the original python script and the program have both completed.
Any suggestions?</p>
| 0 | 2016-08-26T23:47:38Z | 39,176,766 | <p>For your first question, try the <code>preexec_fn</code> argument:</p>
<pre><code>import signal
import ctypes
libc = ctypes.CDLL("libc.so.6")
def set_pdeathsig(sig = signal.SIGTERM):
def callable():
return libc.prctl(1, sig)
return callable
p = subprocess.Popen(args, preexec_fn = set_pdeathsig(signal.SIGTERM))
</code></pre>
<p>This will ensure all the child processes spawned by <code>subprocess.Popen</code> will automatically exit right after the parent process exits. So all you need to kill is just the python process.</p>
<p>For your second question, try the <code>communicate</code> method:</p>
<pre><code>process=subprocess.Popen(cmd, shell=True, executable='/bin/bash', stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
</code></pre>
<p>It will build a pipe between your python process and its child process, through which input data can be passed to the child process and the result can be returned back. If you don't call the <code>communicate</code> method, the process will generally run in the background, and any result will be discarded.</p>
<p>By calling the <code>communicate</code> method, the parent process blocks, waiting for the child process to finish. <code>out</code> and <code>err</code> hold the standard output and the error output of the child process, which you can use to get the output of the bash script.</p>
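<p>A minimal sketch of that blocking behavior (assumes a POSIX system where <code>/bin/bash</code> and <code>echo</code> are available; the <code>echo</code> command is a stand-in for <code>./my/file</code>, and on Python 3 <code>out</code> is <code>bytes</code>):</p>

```python
import subprocess

# hypothetical command standing in for './my/file'
proc = subprocess.Popen("echo hello", shell=True, executable="/bin/bash",
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # blocks until the child exits
print(out.strip())
```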
| 0 | 2016-08-27T02:01:55Z | [
"python",
"bash",
"shell",
"executable"
] |
operations on lists while unpacking | 39,176,128 | <p>I have a list of strings:</p>
<pre><code>my_list = ['text_1','ibno0d',' text_2 ' ]
</code></pre>
<p>And I want to save the first and third value in new variables:</p>
<p>a,_,b = my_list</p>
<p>Since text_1 and text_2 have some spaces I use strip method to clean those values.</p>
<pre><code>a = a.strip()
b = b.strip()
</code></pre>
<p>I want to do this a the moment of the assignment, what I thought to do is:</p>
<pre><code>my_list = [' text_1','ibno0d',' text_2 ' ]
a,_,b = [text.strip() for text in my_list]
</code></pre>
<p>is there a better option to accomplish this?</p>
| -1 | 2016-08-26T23:55:19Z | 39,176,250 | <pre><code>a = my_list[0].strip()
b = my_list[2].strip()
</code></pre>
| -1 | 2016-08-27T00:14:26Z | [
"python",
"list"
] |
operations on lists while unpacking | 39,176,128 | <p>I have a list of strings:</p>
<pre><code>my_list = ['text_1','ibno0d',' text_2 ' ]
</code></pre>
<p>And I want to save the first and third value in new variables:</p>
<p>a,_,b = my_list</p>
<p>Since text_1 and text_2 have some spaces I use strip method to clean those values.</p>
<pre><code>a = a.strip()
b = b.strip()
</code></pre>
<p>I want to do this a the moment of the assignment, what I thought to do is:</p>
<pre><code>my_list = [' text_1','ibno0d',' text_2 ' ]
a,_,b = [text.strip() for text in my_list]
</code></pre>
<p>is there a better option to accomplish this?</p>
| -1 | 2016-08-26T23:55:19Z | 39,176,264 | <p>I prefer using <code>map</code> for this:</p>
<pre><code>a, _, b = map(str.strip, my_list)
</code></pre>
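<p>A quick check of the behavior (the unpacking works whether <code>map</code> returns a list, as in Python 2, or an iterator, as in Python 3):</p>

```python
my_list = [' text_1', 'ibno0d', ' text_2 ']
a, _, b = map(str.strip, my_list)
print(a, b)
```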
| 1 | 2016-08-27T00:18:21Z | [
"python",
"list"
] |
operations on lists while unpacking | 39,176,128 | <p>I have a list of strings:</p>
<pre><code>my_list = ['text_1','ibno0d',' text_2 ' ]
</code></pre>
<p>And I want to save the first and third value in new variables:</p>
<p>a,_,b = my_list</p>
<p>Since text_1 and text_2 have some spaces I use strip method to clean those values.</p>
<pre><code>a = a.strip()
b = b.strip()
</code></pre>
<p>I want to do this a the moment of the assignment, what I thought to do is:</p>
<pre><code>my_list = [' text_1','ibno0d',' text_2 ' ]
a,_,b = [text.strip() for text in my_list]
</code></pre>
<p>is there a better option to accomplish this?</p>
| -1 | 2016-08-26T23:55:19Z | 39,176,312 | <p>You could iterate only over the indexes you are interested in. It doesn't matter much here, but for longer lists it makes a difference:</p>
<pre><code>a, b = (my_list[i].strip() for i in [0, 2])
</code></pre>
| 1 | 2016-08-27T00:27:03Z | [
"python",
"list"
] |
pandas load data with data type issues | 39,176,131 | <p>Here is the code, output and raw csv file data, the dtypes are all object type from output, is there a way to recognize each column as string (and last column as float type)? Using Python 2.7 with miniconda.</p>
<p>Code,</p>
<pre><code>import pandas as pd
sample=pd.read_csv('123.csv', sep=',',header=None)
print sample.dtypes
</code></pre>
<p>program output,</p>
<pre><code>0 object
1 object
2 object
3 object
</code></pre>
<p>123.csv content,</p>
<pre><code>c_a,c_b,c_c,c_d
hello,python,pandas,1.2
</code></pre>
<p><strong>Edit 1</strong>,</p>
<pre><code>sample = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:str})
print sample.dtypes
0 object
1 object
2 object
3 object
dtype: object
</code></pre>
<p><strong>Edit 2</strong>,</p>
<pre><code>sample = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:str})
sample.columns = pd.Index(data=['c_a', 'c_b', 'c_c', 'c_d'])
sample['c_d'] = sample['c_d'].astype('float32')
print sample.dtypes
c_a object
c_b object
c_c object
c_d float32
</code></pre>
<p>regards,
Lin</p>
| 0 | 2016-08-26T23:55:44Z | 39,176,699 | <p>You have to use the argument <code>dtype</code>. And since you do not want the header, you must skip it with <code>skiprows</code>, because the header value in the float column ('c_d') is not a float.</p>
<pre><code>df = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:float})
</code></pre>
<p>The output is:</p>
<pre><code> 0 1 2 3
0 hello python pandas 1.2
</code></pre>
<p><strong>EDIT:</strong></p>
<p>To add a header with different types to your DataFrame, you can use:</p>
<pre><code>df.columns = pd.Index(data=['c_a', 'c_b', 'c_d', 4.])
</code></pre>
<p>and the output is:</p>
<pre><code> c_a c_b c_d 4.0
0 hello python pandas 1.2
</code></pre>
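<p>A self-contained variant of the same idea using an in-memory buffer instead of a file (assumes pandas is installed and Python 3):</p>

```python
import io
import pandas as pd

data = "c_a,c_b,c_c,c_d\nhello,python,pandas,1.2\n"
df = pd.read_csv(io.StringIO(data), header=None, skiprows=1,
                 dtype={0: str, 1: str, 2: str, 3: float})
print(df.dtypes)  # columns 0-2 report as object (str), column 3 as float64
```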
| 1 | 2016-08-27T01:46:04Z | [
"python",
"python-2.7",
"pandas"
] |
transform string column of a pandas data frame into 0 1 vectors | 39,176,190 | <p><code>LabelEncoder</code> and <code>OneHotEncoder</code> work pretty well for numpy arrays, transforming strings into <code>0,1</code> based vectors.</p>
<p>My question is, is there a neat API to convert a column of a pandas data frame into <code>0, 1</code> vectors? I showed my code and the raw content of the pandas data frame <code>123.csv</code>; suppose I want binary <code>0, 1</code> encodings for columns <code>c_a</code>, <code>c_b</code>, <code>c_c</code>. The 3 columns are independent, so each should be encoded separately.</p>
<p>Code,</p>
<pre><code>import pandas as pd
sample=pd.read_csv('123.csv', sep=',',header=None)
print sample.dtypes
</code></pre>
<p>123.csv content,</p>
<pre><code>c_a,c_b,c_c,c_d
hello,python,pandas,1.2
hi,c++,vector,1.2
</code></pre>
<p>Label Encoder and OneHotEncoder examples for numpy,</p>
<pre><code>from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
S = np.array(['b','a','c'])
le = LabelEncoder()
S = le.fit_transform(S)
print(S)
ohe = OneHotEncoder()
one_hot = ohe.fit_transform(S.reshape(-1,1)).toarray()
print(one_hot)
</code></pre>
<p>which results in:</p>
<pre><code>[1 0 2]
[[ 0.  1.  0.]
 [ 1.  0.  0.]
 [ 0.  0.  1.]]
</code></pre>
<p><strong>Edit 1</strong>, tried <code>get_dummies</code>, and it seems results are <code>0.0</code> and <code>1.0</code> (seems <code>float</code>), is there a way to convert into integer directly?</p>
<pre><code> 0_c_a 0_hello 0_hi 0_ho 1_c++ 1_c_b 1_java 1_python 2_c_c 2_numpy \
0 1.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0 0.0
1 0.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0
2 0.0 0.0 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0
3 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0
</code></pre>
| 1 | 2016-08-27T00:05:21Z | 39,176,249 | <p>Are you looking for <code>get_dummies</code>?</p>
<pre><code>s = pd.Series(["a", "b", "a", "c"])
pd.get_dummies(s)
</code></pre>
<p>If you want <code>ints</code>:</p>
<pre><code>pd.get_dummies(s).astype(np.uint8)
</code></pre>
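<p>Putting both steps together (assumes pandas and numpy are installed; depending on the pandas version, <code>get_dummies</code> may return <code>uint8</code> or <code>bool</code> columns by default, so the explicit cast keeps the result consistent):</p>

```python
import numpy as np
import pandas as pd

s = pd.Series(["a", "b", "a", "c"])
dummies = pd.get_dummies(s).astype(np.uint8)  # one 0/1 column per category
print(dummies)
```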
<p>reference:</p>
<p><a href="http://stackoverflow.com/questions/27468892/pandas-get-dummies-to-output-dtype-integer-bool-instead-of-float">Pandas get_dummies to output dtype integer/bool instead of float</a></p>
| 2 | 2016-08-27T00:14:05Z | [
"python",
"python-2.7",
"pandas",
"numpy",
"one-hot-encoding"
] |
Get distribution name from scipy.stats.distribution object? (SciPy | Python 3) | 39,176,236 | <p>I'm trying out a bunch of different fits for distributions. </p>
<p><strong>Is there any way to get the name of the distribution back from the distribution object?</strong> </p>
<p>I found a way but it doesn't seem very efficient. </p>
<pre><code>distribution = "gamma"
distr = getattr(stats, distribution)
print(distr)
# <scipy.stats._continuous_distns.gamma_gen object at 0x11688f518>
str(distr).split(".")[3].split("_")[0]
# 'gamma'
</code></pre>
| 1 | 2016-08-27T00:12:14Z | 39,176,310 | <p>You can use the <code>name</code> attribute:</p>
<pre><code>from scipy import stats
print(stats.gamma.name)
</code></pre>
| 4 | 2016-08-27T00:26:52Z | [
"python",
"numpy",
"scipy",
"statistics",
"distribution"
] |
Get distribution name from scipy.stats.distribution object? (SciPy | Python 3) | 39,176,236 | <p>I'm trying out a bunch of different fits for distributions. </p>
<p><strong>Is there any way to get the name of the distribution back from the distribution object?</strong> </p>
<p>I found a way but it doesn't seem very efficient. </p>
<pre><code>distribution = "gamma"
distr = getattr(stats, distribution)
print(distr)
# <scipy.stats._continuous_distns.gamma_gen object at 0x11688f518>
str(distr).split(".")[3].split("_")[0]
# 'gamma'
</code></pre>
| 1 | 2016-08-27T00:12:14Z | 39,176,311 | <p>Use the <code>name</code> attribute:</p>
<pre><code>>>> from scipy import stats
>>> distribution = "gamma"
>>> distr = getattr(stats, distribution)
>>> distr.name
'gamma'
</code></pre>
| 2 | 2016-08-27T00:27:01Z | [
"python",
"numpy",
"scipy",
"statistics",
"distribution"
] |
Python: convert numpy array of signs to int and back | 39,176,308 | <p>I'm trying to convert from a numpy array of signs (i.e., a numpy array whose entries are either <code>1.</code> or <code>-1.</code>) to an integer and back through a binary representation. I have something that works, but it's not Pythonic, and I expect it'll be slow.</p>
<pre><code>def sign2int(s):
s[s==-1.] = 0.
bstr = ''
for i in range(len(s)):
bstr = bstr + str(int(s[i]))
return int(bstr, 2)
def int2sign(i, m):
bstr = bin(i)[2:].zfill(m)
s = []
for d in bstr:
s.append(float(d))
s = np.array(s)
s[s==0.] = -1.
return s
</code></pre>
<p>Then </p>
<pre><code>>>> m = 4
>>> s0 = np.array([1., -1., 1., 1.])
>>> i = sign2int(s0)
>>> print i
11
>>> s = int2sign(i, m)
>>> print s
[ 1. -1. 1. 1.]
</code></pre>
<p>I'm concerned about (1) the for loops in each and (2) having to build an intermediate representation as a string. </p>
<p>Ultimately, I will want something that works with a 2-d numpy array, too---e.g.,</p>
<pre><code>>>> s = np.array([[1., -1., 1.], [1., 1., 1.]])
>>> print sign2int(s)
[5, 7]
</code></pre>
| 2 | 2016-08-27T00:26:41Z | 39,176,491 | <p>Here are some vectorized versions of your functions: </p>
<pre><code>def sign2int(s):
return int(''.join(np.where(s == -1., 0, s).astype(int).astype(str)), 2)
def int2sign(i, m):
tmp = np.array(list(bin(i)[2:].zfill(m)))
return np.where(tmp == "0", "-1", tmp).astype(int)
s0 = np.array([1., -1., 1., 1.])
sign2int(s0)
# 11
int2sign(11, 5)
# array([-1, 1, -1, 1, 1])
</code></pre>
<p>To use your functions on 2-d arrays, you can use the <code>map</code> function:</p>
<pre><code>s = np.array([[1., -1., 1.], [1., 1., 1.]])
map(sign2int, s)
# [5, 7]
map(lambda x: int2sign(x, 4), [5, 7])
# [array([-1, 1, -1, 1]), array([-1, 1, 1, 1])]
</code></pre>
| 0 | 2016-08-27T00:58:04Z | [
"python",
"arrays",
"numpy",
"binary"
] |
Python: convert numpy array of signs to int and back | 39,176,308 | <p>I'm trying to convert from a numpy array of signs (i.e., a numpy array whose entries are either <code>1.</code> or <code>-1.</code>) to an integer and back through a binary representation. I have something that works, but it's not Pythonic, and I expect it'll be slow.</p>
<pre><code>def sign2int(s):
s[s==-1.] = 0.
bstr = ''
for i in range(len(s)):
bstr = bstr + str(int(s[i]))
return int(bstr, 2)
def int2sign(i, m):
bstr = bin(i)[2:].zfill(m)
s = []
for d in bstr:
s.append(float(d))
s = np.array(s)
s[s==0.] = -1.
return s
</code></pre>
<p>Then </p>
<pre><code>>>> m = 4
>>> s0 = np.array([1., -1., 1., 1.])
>>> i = sign2int(s0)
>>> print i
11
>>> s = int2sign(i, m)
>>> print s
[ 1. -1. 1. 1.]
</code></pre>
<p>I'm concerned about (1) the for loops in each and (2) having to build an intermediate representation as a string. </p>
<p>Ultimately, I will want something that works with a 2-d numpy array, too---e.g.,</p>
<pre><code>>>> s = np.array([[1., -1., 1.], [1., 1., 1.]])
>>> print sign2int(s)
[5, 7]
</code></pre>
| 2 | 2016-08-27T00:26:41Z | 39,176,524 | <p>For 1d arrays you can use this one-line Numpythonic approach, using <code>np.packbits</code>:</p>
<pre><code>>>> np.packbits(np.pad((s0+1).astype(bool).astype(int), (8-s0.size, 0), 'constant'))
array([11], dtype=uint8)
</code></pre>
<p>And for reversing:</p>
<pre><code>>>> unpack = (np.unpackbits(np.array([11], dtype=np.uint8))[-4:]).astype(float)
>>> unpack[unpack==0] = -1
>>> unpack
array([ 1., -1., 1., 1.])
</code></pre>
<p>And for 2d array:</p>
<pre><code>>>> x, y = s.shape
>>> np.packbits(np.pad((s+1).astype(bool).astype(int), (8-y, 0), 'constant')[-2:])
array([5, 7], dtype=uint8)
</code></pre>
<p>And for reversing:</p>
<pre><code>>>> unpack = (np.unpackbits(np.array([5, 7], dtype='uint8'))).astype(float).reshape(x, 8)[:,-y:]
>>> unpack[unpack==0] = -1
>>> unpack
array([[ 1., -1., 1.],
[ 1., 1., 1.]])
</code></pre>
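<p>A round-trip sanity check of the 1-d case (the same <code>np.packbits</code>/<code>np.unpackbits</code> calls as above, just wrapped with assertions; assumes the sign vector has at most 8 entries):</p>

```python
import numpy as np

s0 = np.array([1., -1., 1., 1.])
bits = ((s0 + 1) / 2).astype(np.uint8)                  # signs -> 0/1 bits
packed = np.packbits(np.pad(bits, (8 - s0.size, 0), 'constant'))
unpacked = np.unpackbits(packed)[-s0.size:].astype(float)
unpacked[unpacked == 0] = -1.
print(packed[0])
```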
| 1 | 2016-08-27T01:05:00Z | [
"python",
"arrays",
"numpy",
"binary"
] |
Python: convert numpy array of signs to int and back | 39,176,308 | <p>I'm trying to convert from a numpy array of signs (i.e., a numpy array whose entries are either <code>1.</code> or <code>-1.</code>) to an integer and back through a binary representation. I have something that works, but it's not Pythonic, and I expect it'll be slow.</p>
<pre><code>def sign2int(s):
s[s==-1.] = 0.
bstr = ''
for i in range(len(s)):
bstr = bstr + str(int(s[i]))
return int(bstr, 2)
def int2sign(i, m):
bstr = bin(i)[2:].zfill(m)
s = []
for d in bstr:
s.append(float(d))
s = np.array(s)
s[s==0.] = -1.
return s
</code></pre>
<p>Then </p>
<pre><code>>>> m = 4
>>> s0 = np.array([1., -1., 1., 1.])
>>> i = sign2int(s0)
>>> print i
11
>>> s = int2sign(i, m)
>>> print s
[ 1. -1. 1. 1.]
</code></pre>
<p>I'm concerned about (1) the for loops in each and (2) having to build an intermediate representation as a string. </p>
<p>Ultimately, I will want something that works with a 2-d numpy array, too---e.g.,</p>
<pre><code>>>> s = np.array([[1., -1., 1.], [1., 1., 1.]])
>>> print sign2int(s)
[5, 7]
</code></pre>
| 2 | 2016-08-27T00:26:41Z | 39,176,895 | <p>I'll start with <code>sign2int</code>: convert from a sign representation to <em>binary</em>.</p>
<pre><code>>>> a
array([ 1., -1., 1., -1.])
>>> (a + 1) / 2
array([ 1., 0., 1., 0.])
>>>
</code></pre>
<p>Then you can simply create an array of powers of two, multiply it by the <em>binary</em> and sum.</p>
<pre><code>>>> powers = np.arange(a.shape[-1])[::-1]
>>> np.power(2, powers)
array([8, 4, 2, 1])
>>> a = (a + 1) / 2
>>> powers = np.power(2, powers)
>>> a * powers
array([ 8., 0., 2., 0.])
>>> np.sum(a * powers)
10.0
>>>
</code></pre>
<p>Then make it operate on rows by adding axis information and rely on broadcasting.</p>
<pre><code>def sign2int(a):
# powers of two
powers = np.arange(a.shape[-1])[::-1]
np.power(2, powers, powers)
# sign to "binary" - add one and divide by two
np.add(a, 1, a)
np.divide(a, 2, a)
# scale by powers of two and sum
np.multiply(a, powers, a)
return np.sum(a, axis = -1)
>>> b = np.array([a, a, a, a, a])
>>> sign2int(b)
array([ 11., 11., 11., 11., 11.])
>>>
</code></pre>
<p>I tried it on a 4 by 100 bit array and it seemed fast</p>
<pre><code>>>> a = a.repeat(100)
>>> b = np.array([a, a, a, a, a])
>>> b
array([[ 1., 1., 1., ..., 1., 1., 1.],
[ 1., 1., 1., ..., 1., 1., 1.],
[ 1., 1., 1., ..., 1., 1., 1.],
[ 1., 1., 1., ..., 1., 1., 1.],
[ 1., 1., 1., ..., 1., 1., 1.]])
>>> sign2int(b)
array([ 2.58224988e+120, 2.58224988e+120, 2.58224988e+120,
2.58224988e+120, 2.58224988e+120])
>>>
</code></pre>
<hr>
<p>The reverse, <code>int2sign</code>: the best I could do relies on some plain Python without any numpy vectorization magic, and I haven't figured out how to make it work with a sequence of ints other than iterating over them and converting one at a time - but the time still seems acceptable.</p>
<pre><code>def foo(n):
    '''yields bits in increasing powers of two
    bit sequence from lsb --> msb
    '''
    while n > 0:
        n, r = divmod(n, 2)
        yield r

def int2sign(n):
    n = int(n)
    a = np.fromiter(foo(n), dtype = np.int8, count = n.bit_length())
    np.multiply(a, 2, a)
    np.subtract(a, 1, a)
    return a[::-1]
</code></pre>
<p>Works on 1324:</p>
<pre><code>>>> bin(1324)
'0b10100101100'
>>> a = int2sign(1324)
>>> a
array([ 1, -1, 1, -1, -1, 1, -1, 1, 1, -1, -1], dtype=int8)
</code></pre>
<p>Seems to work with 1.2e305:</p>
<pre><code>>>> n = int(1.2e305)
>>> n.bit_length()
1014
>>> a = int2sign(n)
>>> a.shape
(1014,)
>>> s = bin(n)
>>> s = s[2:]
>>> all(2 * int(x) -1 == y for x, y in zip(s, a))
True
>>>
</code></pre>
| 1 | 2016-08-27T02:32:19Z | [
"python",
"arrays",
"numpy",
"binary"
] |
Python: convert numpy array of signs to int and back | 39,176,308 | <p>I'm trying to convert from a numpy array of signs (i.e., a numpy array whose entries are either <code>1.</code> or <code>-1.</code>) to an integer and back through a binary representation. I have something that works, but it's not Pythonic, and I expect it'll be slow.</p>
<pre><code>def sign2int(s):
    s[s==-1.] = 0.
    bstr = ''
    for i in range(len(s)):
        bstr = bstr + str(int(s[i]))
    return int(bstr, 2)

def int2sign(i, m):
    bstr = bin(i)[2:].zfill(m)
    s = []
    for d in bstr:
        s.append(float(d))
    s = np.array(s)
    s[s==0.] = -1.
    return s
</code></pre>
<p>Then </p>
<pre><code>>>> m = 4
>>> s0 = np.array([1., -1., 1., 1.])
>>> i = sign2int(s0)
>>> print i
11
>>> s = int2sign(i, m)
>>> print s
[ 1. -1. 1. 1.]
</code></pre>
<p>I'm concerned about (1) the for loops in each and (2) having to build an intermediate representation as a string. </p>
<p>Ultimately, I will want something that works with a 2-d numpy array, too---e.g.,</p>
<pre><code>>>> s = np.array([[1., -1., 1.], [1., 1., 1.]])
>>> print sign2int(s)
[5, 7]
</code></pre>
| 2 | 2016-08-27T00:26:41Z | 39,178,113 | <p>After a bit of testing, the Numpythonic approach of @wwii that doesn't use strings seems to fit what I need best. For the <code>int2sign</code>, I used a for-loop over the exponents with a standard algorithm for the conversion---which will have at most 64 iterations for 64-bit integers. Numpy's broadcasting happens across each integer very efficiently. </p>
<p><code>packbits</code> and <code>unpackbits</code> are restricted to 8-bit integers; otherwise, I suspect that would've been the best (though I didn't try).</p>
<p>Here are the specific implementations I tested that follow the suggestions in the other answers (thanks to everyone!):</p>
<pre><code>def _sign2int_str(s):
    return int(''.join(np.where(s == -1., 0, s).astype(int).astype(str)), 2)

def sign2int_str(s):
    return np.array(map(_sign2int_str, s))

def _int2sign_str(i, m):
    tmp = np.array(list(bin(i)[2:])).astype(int)
    return np.pad(np.where(tmp == 0, -1, tmp), (m - len(tmp), 0), "constant", constant_values = -1)

def int2sign_str(i,m):
    return np.array(map(lambda x: _int2sign_str(x, m), i.astype(int).tolist())).transpose()

def sign2int_np(s):
    p = np.arange(s.shape[-1])[::-1]
    s = (s + 1) / 2  # map signs to bits first; powering the sign array directly goes wrong at 0**0
    return np.sum(s * np.power(2, p), axis = -1).astype(int)

def int2sign_np(i,m):
    N = i.shape[-1]
    S = np.zeros((m, N))
    for k in range(m):
        b = np.power(2, m - 1 - k).astype(int)
        S[k,:] = np.divide(i.astype(int), b).astype(float)
        i = np.mod(i, b)
    S[S==0.] = -1.
    return S
</code></pre>
<p>And here is my test:</p>
<pre><code>X = np.sign(np.random.normal(size=(5000, 20)))
N = 100

t = time.time()
for i in range(N):
    S = sign2int_np(X)
print 'sign2int_np: \t{:10.8f} sec'.format((time.time() - t)/N)

t = time.time()
for i in range(N):
    S = sign2int_str(X)
print 'sign2int_str: \t{:10.8f} sec'.format((time.time() - t)/N)

m = 20
S = np.random.randint(0, high=np.power(2,m), size=(5000,))

t = time.time()
for i in range(N):
    X = int2sign_np(S, m)
print 'int2sign_np: \t{:10.8f} sec'.format((time.time() - t)/N)

t = time.time()
for i in range(N):
    X = int2sign_str(S, m)
print 'int2sign_str: \t{:10.8f} sec'.format((time.time() - t)/N)
</code></pre>
<p>This produced the following results:</p>
<pre><code>sign2int_np: 0.00165325 sec
sign2int_str: 0.04121902 sec
int2sign_np: 0.00318024 sec
int2sign_str: 0.24846984 sec
</code></pre>
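<p>For completeness: when the integers fit in a native 64-bit dtype (unlike the 1000+ bit values discussed above), a fully vectorized <code>int2sign</code> is also possible with integer shifts. This is a sketch that was not part of the benchmark above; the function name is illustrative.</p>

```python
import numpy as np

def int2sign_shift(i, m):
    # shift each integer right by each bit position and mask off the low bit;
    # broadcasting gives one row of m bits per input integer
    bits = (i[:, None] >> np.arange(m - 1, -1, -1)) & 1
    return 2 * bits - 1  # map {0, 1} -> {-1, 1}

print(int2sign_shift(np.array([11, 5]), 4))
```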
| 0 | 2016-08-27T06:19:06Z | [
"python",
"arrays",
"numpy",
"binary"
] |
Python: convert numpy array of signs to int and back | 39,176,308 | <p>I'm trying to convert from a numpy array of signs (i.e., a numpy array whose entries are either <code>1.</code> or <code>-1.</code>) to an integer and back through a binary representation. I have something that works, but it's not Pythonic, and I expect it'll be slow.</p>
<pre><code>def sign2int(s):
    s[s==-1.] = 0.
    bstr = ''
    for i in range(len(s)):
        bstr = bstr + str(int(s[i]))
    return int(bstr, 2)

def int2sign(i, m):
    bstr = bin(i)[2:].zfill(m)
    s = []
    for d in bstr:
        s.append(float(d))
    s = np.array(s)
    s[s==0.] = -1.
    return s
</code></pre>
<p>Then </p>
<pre><code>>>> m = 4
>>> s0 = np.array([1., -1., 1., 1.])
>>> i = sign2int(s0)
>>> print i
11
>>> s = int2sign(i, m)
>>> print s
[ 1. -1. 1. 1.]
</code></pre>
<p>I'm concerned about (1) the for loops in each and (2) having to build an intermediate representation as a string. </p>
<p>Ultimately, I will want something that works with a 2-d numpy array, too---e.g.,</p>
<pre><code>>>> s = np.array([[1., -1., 1.], [1., 1., 1.]])
>>> print sign2int(s)
[5, 7]
</code></pre>
| 2 | 2016-08-27T00:26:41Z | 39,185,245 | <p>I think <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.packbits.html" rel="nofollow"><code>numpy.packbits</code></a> is worth another look. Given a real-valued sign array <code>a</code>, you can use <code>numpy.packbits(a > 0)</code>. Decompression is done by <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.unpackbits.html" rel="nofollow"><code>numpy.unpackbits</code></a>. This implicitly flattens multi-dimensional arrays so you'll need to <code>reshape</code> after <code>unpackbits</code> if you have a multi-dimensional array.</p>
<p>Note that you can combine bit packing with conventional compression (e.g., zlib or lzma). If there is a pattern or bias to your data, you may get a useful compression factor, but for unbiased random data, you'll typically see a moderate size increase.</p>
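<p>A minimal round trip with <code>packbits</code>/<code>unpackbits</code> might look like this (a sketch; note that <code>packbits</code> pads to whole bytes, so the padding bits must be sliced off after unpacking):</p>

```python
import numpy as np

s = np.array([1., -1., 1., 1.])        # sign array from the question
packed = np.packbits(s > 0)            # bits 1011 zero-padded to one byte
bits = np.unpackbits(packed)[:s.size]  # drop the padding bits
restored = np.where(bits == 1, 1., -1.)
print(restored)
```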
| 0 | 2016-08-27T20:02:12Z | [
"python",
"arrays",
"numpy",
"binary"
] |
difference between multithreading and multiprocessing threadpool? | 39,176,517 | <p>I have a list of 20 items in file A; these are passed to file B for processing and the result is returned.</p>
<p>Currently I am doing this with multithreading. I came across the concepts of thread pools and multiprocessing, and was wondering what the difference is between multithreading and a thread pool, and whether my program would benefit from threading or a thread pool?</p>
<p>Thanks</p>
| -1 | 2016-08-27T01:03:35Z | 39,176,578 | <blockquote>
<p>whats the difference between multithreading and threadpool</p>
</blockquote>
<p>Multithreading is the ability of a CPU to execute multiple processes/threads concurrently. See <a href="https://en.wikipedia.org/wiki/Multithreading_(computer_architecture)" rel="nofollow">multithreading</a> for details.
A thread pool is a group of threads which are created in advance which you can reuse over and over to do tasks. See <a href="http://programmers.stackexchange.com/questions/173575/what-is-a-thread-pool">What is a thread pool?</a> for more information.</p>
<blockquote>
<p>will my program benefit from threading or threadpool?</p>
</blockquote>
<p>From your description, you only have 2 files, A and B and there are only 20 items you need to process. Most likely threading and thread pools will provide no benefit. If the processing is extremely io intensive or cpu intensive, you may benefit from threading, but you have to explain what processing is going on to answer that question. As for a thread pool though, you will not benefit either way. Thread pools are used because creating threads is very expensive. They eliminate having to create/destroy threads multiple times. However, your program only has two files, so there will be no benefit.</p>
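<p>For illustration only: if the per-item work did turn out to be IO-bound, reusing a pool of threads via the standard library's <code>concurrent.futures</code> (Python 3) would look roughly like this; <code>process</code> is a placeholder for whatever the real per-item work is.</p>

```python
from concurrent.futures import ThreadPoolExecutor

def process(item):
    # placeholder for the real per-item work (e.g. a network call)
    return item * 2

items = list(range(20))
with ThreadPoolExecutor(max_workers=4) as pool:
    # the same 4 threads are reused across all 20 items
    results = list(pool.map(process, items))
print(results)
```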
| 1 | 2016-08-27T01:17:11Z | [
"python",
"multithreading",
"python-2.7",
"python-3.x",
"python-multithreading"
] |
Django CharField null=False Integrity Error not raised | 39,176,618 | <p>I have a model:</p>
<pre><code>class Discount(models.Model):
    code = models.CharField(max_length=14, unique=True, null=False, blank=False)
    email = models.EmailField(unique=True)
    discount = models.IntegerField(default=10)
</code></pre>
<p>In my shell when I try and save a Discount object with no input, it doesn't raise an error. What am I doing wrong?</p>
<pre><code>> e = Discount()
> e.save()
</code></pre>
| 1 | 2016-08-27T01:26:48Z | 39,176,848 | <p>No default Django behavior will save <code>CHAR</code> or <code>TEXT</code> types as <code>Null</code> - it will always use an empty string (<code>''</code>). <code>null=False</code> has no effect on these types of fields. </p>
<p><code>blank=False</code> means that the field will be required by default when the model is used to render a ModelForm. It does not prevent you from manually saving a model instance without that value.</p>
<p>What you want here is a custom model validator:</p>
<pre><code>from django.core.exceptions import ValidationError

def validate_not_empty(value):
    if value == '':
        raise ValidationError('%(value)s is empty!', params={'value': value})
</code></pre>
<p>Then add the validator to your model:</p>
<pre><code>code = models.CharField(max_length=14, unique=True, validators=[validate_not_empty])
</code></pre>
<p>This will take care of the form validation you want, but validators don't automatically run when a model instance is saved. <a href="https://docs.djangoproject.com/en/1.10/ref/validators/#how-validators-are-run" rel="nofollow">Further reading here.</a> If you want to validate this every time an instance is saved, I suggest overriding the default <code>save</code> behavior, checking the value of your string there, and interrupting the save by raising an error if necessary. <a href="http://stackoverflow.com/questions/9953427/django-custom-save-model">Good post on overriding <code>save</code> here.</a></p>
<p><a href="https://docs.djangoproject.com/en/1.10/ref/models/fields/#null" rel="nofollow">Further reading on <code>null</code>:</a> </p>
<blockquote>
<p>Avoid using null on string-based fields such as CharField and TextField because empty string values will always be stored as empty strings, not as NULL. If a string-based field has null=True, that means it has two possible values for "no data": NULL, and the empty string. In most cases, it's redundant to have two possible values for "no data;" the Django convention is to use the empty string, not NULL.</p>
</blockquote>
<p><a href="https://docs.djangoproject.com/en/1.10/ref/validators/" rel="nofollow">And on validators.</a></p>
| 1 | 2016-08-27T02:19:55Z | [
"python",
"django",
"django-models"
] |
Python raw_input issue | 39,176,638 | <p>Here's my code</p>
<pre><code>def player_input():
    input = raw_input('Choose X or O')
    if input == "X":
        print input
    else:
        print 'Please choose an X or an O'
        player_input()

player_input()
</code></pre>
<p>Not sure why I'm getting this. No matter what I type, it seems to jump to the else statement.</p>
<p>Please Choose an X or an O<br>
Choose X or O>? l<br>
Please Choose an X or an O<br>
Choose X or O>? X<br>
Please Choose an X or an O<br>
Choose X or O<br></p>
| 0 | 2016-08-27T01:31:18Z | 39,176,971 | <p>To your second question answered in the comments</p>
<blockquote>
<p>Thanks all! Turns out it was an issue with PyCharms interpreter :( As soon as I used the python interpreter on my OS it came out fine. My next issue is using the 'or' comparison. This -> if input == "x" or "X" or "o" or "O": yields the result of only accepting lowercase x. Everything else jumps to the else statement. including X o and O – jmleczko</p>
</blockquote>
<p>In python you have to use </p>
<pre><code>if input == 'x' or input == 'X' or input == 'o' or input == 'O':
</code></pre>
<p>Otherwise Python evaluates each bare string like <code>"X"</code> on its own, and a non-empty string is always truthy, so the chained <code>or</code> does not compare it against <code>input</code>.
I also recommend not using <code>input</code> as a variable name, since <code>input</code> is a built-in function.</p>
<pre><code>someInput = input("Enter here:")
>>> Enter here: ABC
print(someInput)
>>> ABC
</code></pre>
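<p>A more compact way to write that check, instead of chaining <code>or</code> clauses, is a membership test (a small sketch; the function name is illustrative):</p>

```python
def is_valid_choice(choice):
    # accepts 'x', 'X', 'o' or 'O' with a single comparison
    return choice.lower() in ('x', 'o')

print(is_valid_choice('X'))
print(is_valid_choice('l'))
```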
| -1 | 2016-08-27T02:48:17Z | [
"python"
] |
ModelForm displays ForeignKey table values, not choices | 39,176,735 | <p>I'm creating a Django app where each site has a single category, selected from a range of possible categories. So I have two models: <code>Site</code>, and <code>Category</code>. </p>
<p>I want a many-to-one relationship between the <code>Site</code> and <code>Category</code> models. The <code>Category</code> table contains a selection of different categories, and each <code>Site</code> record will have a foreign key referencing one of those categories. </p>
<p>models.py:</p>
<pre><code>from django.db import models
from django.utils.encoding import python_2_unicode_compatible

@python_2_unicode_compatible # for python 2 support
class Site(models.Model):
    Category = models.ForeignKey('Category')

    def __str__(self):
        return self.Name

CATEGORY_CHOICES = (
    ('AU', 'Automobiles'),
    ('BE', 'Beauty Products'),
    ('GR', 'Groceries'),
)

class Category(models.Model):
    Category = models.CharField(choices=CATEGORY_CHOICES, max_length=2)

    def __str__(self):
        return '%s' % (self.Category)
</code></pre>
<p>forms.py:</p>
<pre><code>from django.forms import ModelForm
from sites.models import Site

class NewSiteForm(ModelForm):
    class Meta:
        model = Site
        fields = ['Category']
</code></pre>
<p>newsite.html:</p>
<pre><code><!DOCTYPE html>
<form method="post" action="">
{{ form}}
</form>
</code></pre>
<p>Newsite.html gives me a Category dropdown that lists values already stored in my Category database, rather than listing the contents of the choices tuples defined in the Category model. so I'm getting this:</p>
<pre><code><!DOCTYPE html>
<form method="post" action="">
<tr><th><label for="id_Category">Category:</label></th><td><select id="id_Category" name="Category">
<option value="" selected="selected">---------</option>
<option value="1">AU</option>
<option value="2">BA</option>
<option value="3">BE</option>
<option value="4">BO</option>
<option value="5">PH</option>
<option value="6">CO</option>
<option value="7">CC</option>
<option value="8">CB</option>
<option value="9">CE</option>
<option value="10">CS</option>
<option value="11">CA</option>
<option value="12">EA</option>
<option value="13">EC</option>
<option value="14">FA</option>
<option value="15">GR</option>
<option value="16">HA</option>
<option value="17">HE</option>
<option value="18">HG</option>
<option value="19">IN</option>
<option value="20">JE</option>
<option value="21">LT</option>
<option value="22">MS</option>
<option value="23">MU</option>
<option value="24">MI</option>
<option value="25">OF</option>
<option value="26">OU</option>
<option value="27">PC</option>
<option value="28">SH</option>
<option value="29">SO</option>
<option value="30">SP</option>
<option value="31">TO</option>
<option value="32">TY</option>
<option value="33">DV</option>
<option value="34">CL</option>
<option value="35">WA</option>
<option value="36">WI</option>
</select></td></tr>
</form>
</code></pre>
<p>I want the form to just display the three choices defined in the model, rather than the contents of the database, like this:</p>
<pre><code><!DOCTYPE html>
<form method="post" action="">
<tr><th><label for="id_Category">Category:</label></th><td><select id="id_Category" name="Category">
<option value="" selected="selected">---------</option>
<option value="AU">Auto</option>
<option value="BE">Beauty Products</option>
<option value="GR">Groceries</option>
</select></td></tr>
</form>
</code></pre>
<p>What am I not understanding here and how do I make this work?</p>
<p>(Note that there will be a tuple in <code>CATEGORY_CHOICES</code> giving a readable choice for every two-letter entry stored in the <code>Category</code> table, but I am saving space by not listing all of them in this question). </p>
| 1 | 2016-08-27T01:54:18Z | 39,176,771 | <p>You need to specify <code>Category</code> model:</p>
<pre><code>class NewSiteForm(ModelForm):
    class Meta:
        model = Category # <--
        fields = ['Category']
</code></pre>
| 1 | 2016-08-27T02:03:19Z | [
"python",
"django",
"django-models",
"django-forms",
"modelform"
] |
ModelForm displays ForeignKey table values, not choices | 39,176,735 | <p>I'm creating a Django app where each site has a single category, selected from a range of possible categories. So I have two models: <code>Site</code>, and <code>Category</code>. </p>
<p>I want a many-to-one relationship between the <code>Site</code> and <code>Category</code> models. The <code>Category</code> table contains a selection of different categories, and each <code>Site</code> record will have a foreign key referencing one of those categories. </p>
<p>models.py:</p>
<pre><code>from django.db import models
from django.utils.encoding import python_2_unicode_compatible

@python_2_unicode_compatible # for python 2 support
class Site(models.Model):
    Category = models.ForeignKey('Category')

    def __str__(self):
        return self.Name

CATEGORY_CHOICES = (
    ('AU', 'Automobiles'),
    ('BE', 'Beauty Products'),
    ('GR', 'Groceries'),
)

class Category(models.Model):
    Category = models.CharField(choices=CATEGORY_CHOICES, max_length=2)

    def __str__(self):
        return '%s' % (self.Category)
</code></pre>
<p>forms.py:</p>
<pre><code>from django.forms import ModelForm
from sites.models import Site

class NewSiteForm(ModelForm):
    class Meta:
        model = Site
        fields = ['Category']
</code></pre>
<p>newsite.html:</p>
<pre><code><!DOCTYPE html>
<form method="post" action="">
{{ form}}
</form>
</code></pre>
<p>Newsite.html gives me a Category dropdown that lists values already stored in my Category database, rather than listing the contents of the choices tuples defined in the Category model. so I'm getting this:</p>
<pre><code><!DOCTYPE html>
<form method="post" action="">
<tr><th><label for="id_Category">Category:</label></th><td><select id="id_Category" name="Category">
<option value="" selected="selected">---------</option>
<option value="1">AU</option>
<option value="2">BA</option>
<option value="3">BE</option>
<option value="4">BO</option>
<option value="5">PH</option>
<option value="6">CO</option>
<option value="7">CC</option>
<option value="8">CB</option>
<option value="9">CE</option>
<option value="10">CS</option>
<option value="11">CA</option>
<option value="12">EA</option>
<option value="13">EC</option>
<option value="14">FA</option>
<option value="15">GR</option>
<option value="16">HA</option>
<option value="17">HE</option>
<option value="18">HG</option>
<option value="19">IN</option>
<option value="20">JE</option>
<option value="21">LT</option>
<option value="22">MS</option>
<option value="23">MU</option>
<option value="24">MI</option>
<option value="25">OF</option>
<option value="26">OU</option>
<option value="27">PC</option>
<option value="28">SH</option>
<option value="29">SO</option>
<option value="30">SP</option>
<option value="31">TO</option>
<option value="32">TY</option>
<option value="33">DV</option>
<option value="34">CL</option>
<option value="35">WA</option>
<option value="36">WI</option>
</select></td></tr>
</form>
</code></pre>
<p>I want the form to just display the three choices defined in the model, rather than the contents of the database, like this:</p>
<pre><code><!DOCTYPE html>
<form method="post" action="">
<tr><th><label for="id_Category">Category:</label></th><td><select id="id_Category" name="Category">
<option value="" selected="selected">---------</option>
<option value="AU">Auto</option>
<option value="BE">Beauty Products</option>
<option value="GR">Groceries</option>
</select></td></tr>
</form>
</code></pre>
<p>What am I not understanding here and how do I make this work?</p>
<p>(Note that there will be a tuple in <code>CATEGORY_CHOICES</code> giving a readable choice for every two-letter entry stored in the <code>Category</code> table, but I am saving space by not listing all of them in this question). </p>
| 1 | 2016-08-27T01:54:18Z | 39,194,270 | <p>My model structure was bad. It didn't make sense to have separate choices for the <code>Category</code> model. What I should have done was:</p>
<p>1) Either just deleted the <code>Category</code> model altogether and used the <code>CATEGORY_CHOICES</code> directly for the <code>Category</code> field in the <code>Site</code> model, abandoning the idea of a foreign key.</p>
<p>2) Transferred the human-readable values that I was storing in <code>CATEGORY_CHOICES</code> into the database values for the <code>Category</code> model. and deleted the <code>CATEGORY_CHOICES</code>. I eventually opted for this, as there will be other fields associated with <code>Category</code>, like 'Approved'. So it still makes sense to have a separate 'Category' model.</p>
| 0 | 2016-08-28T18:03:05Z | [
"python",
"django",
"django-models",
"django-forms",
"modelform"
] |
Get average grade for 10 students - python | 39,176,762 | <p>I have a program that asks a user to enter a Student NETID and then what grades they got on 5 assignments plus the grade they got on the mid-term and final. It then adds them and divides to get that students average grade which displays in table format.</p>
<p>What I need to do is loop through that whole process 9 more times. So essentially, I'll be asking 9 more students for the same input, which I'll then need to display in the table format.</p>
<p>My question is how would I loop through the process that I have right now 'x' amount of times, then display the average of all students.</p>
<p>This is my code right now:</p>
<pre><code># x holds the list of grades
x = []
# count of assignments
assignments = 5
# Ask for a student ID from user
NETID = int(input('Enter your 4 digit student NET ID: '))
# fill list with grades from console input
x = [int(input('Please enter the grade you got on assignment {}: '.format(i+1))) for i in range(assignments)]
midTermGrade = int(input('Please enter the grade you got on your Mid-Term: '))
finalGrade = int(input('Please enter the grade you got on your Final: '))
# count average
average_assignment_grade = (sum(x) + midTermGrade + finalGrade) / 7

print()
print('NET ID \t Average Final Grade')
print('---------------------------------')
for number in range(1):
    print(NETID, '\t\t', format(average_assignment_grade, '.1f'), '%')

main()
</code></pre>
<p>And this is how it looks on console:</p>
<p><a href="http://i.stack.imgur.com/0g2C2.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/0g2C2.jpg" alt="My Python Program"></a></p>
| 0 | 2016-08-27T02:01:31Z | 39,176,845 | <p>You really did the hardest part. I don't see why you couldn't do the loop for the average. Anyway:</p>
<pre><code>student_count = 5
A = []
for id_student in range(student_count):
    print("STUDENT #", id_student + 1)
    # x holds the list of grades
    x = []
    # count of assignments
    assignments = 5
    # Ask for a student ID from user
    NETID = int(input('Enter your 4 digit student NET ID: '))
    # fill list with grades from console input
    x = [int(input('Please enter the grade you got on assignment {}: '.format(i+1))) for i in range(assignments)]
    midTermGrade = int(input('Please enter the grade you got on your Mid-Term: '))
    finalGrade = int(input('Please enter the grade you got on your Final: '))
    # count average
    average_assignment_grade = (sum(x) + midTermGrade + finalGrade) / 7
    print()
    print('NET ID | Average Final Grade')
    print('---------------------------------')
    print(NETID, " | ", format(average_assignment_grade, '.1f'), '%')
    A.append(average_assignment_grade)

grades_sum = sum(A)
grades_average = grades_sum / student_count
print("SUM OF ALL STUDENTS =", grades_sum)
print("AVERAGE OF ALL STUDENTS =", grades_average)
</code></pre>
<p><strong>Update:</strong> As suggested above, you should write a function that handles a single student and loop over that function in another; since SO is not a coding service I won't do that for you, but I think you got the idea.</p>
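<p>To sketch one possible decomposition along those lines (names are illustrative, and the <code>input()</code> calls are replaced by parameters so the logic can be tested):</p>

```python
def student_average(assignment_grades, midterm, final):
    # 5 assignment grades plus midterm and final: 7 scores total
    return (sum(assignment_grades) + midterm + final) / 7.0

def class_average(averages):
    return sum(averages) / len(averages)

avgs = [student_average([90, 80, 70, 100, 60], 85, 95),
        student_average([100, 100, 100, 100, 100], 100, 100)]
print(class_average(avgs))
```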
| 2 | 2016-08-27T02:19:46Z | [
"python",
"loops"
] |
Am I overengineering it with websockets for an IoT device pushing data to an endpoint? | 39,176,800 | <p>I have a Raspberry Pi running Unix with a GPS chip that sends text over its serial COM ports.</p>
<p>I want to forward this data without doing any kind of parsing, just forward the data stream directly to an endpoint (running ASP.NET Core, Web API and SignalR).</p>
<p>Just as if I were doing <code>sudo cat < /dev/ttyUSB0</code>.</p>
<p>To broadcast the data I will use a Python script, in place of cat in the command above, to read the data coming from USB0.</p>
<p>Since text messages come from the USB port at a decent rate, I don't want to make an HTTP request for each message. Instead I want to open a connection to the backend and just push data.</p>
<p>I set up a SignalR raw connection very easily, and there's a SignalR client for Python, so it's not a big task to make it all work.</p>
<p>I am concerned about whether there is overhead in using SignalR (websockets) for this. Is the alternative to just open an HTTP POST request and keep it alive?</p>
<p>I am guessing that SignalR can provide me with some connection monitoring and help keep the connection alive in case of failures. But are there any other benefits of using websockets for something like this?</p>
<p>Are the benefits higher than the costs, and what are the costs?</p>
| 1 | 2016-08-27T02:10:08Z | 39,177,450 | <p>As compared to HTTP, a websocket is generally a good choice for such cases.</p>
<p><a href="https://www.engineyard.com/articles/websocket" rel="nofollow">Benefits of websockets</a></p>
<p>But for IoT-related work, MQTT is often preferred.</p>
<p>It is articulated really well here: <a href="https://systembash.com/mqtt-vs-websockets-vs-http2-the-best-iot-messaging-protocol/" rel="nofollow">https://systembash.com/mqtt-vs-websockets-vs-http2-the-best-iot-messaging-protocol/</a></p>
| 1 | 2016-08-27T04:25:38Z | [
"python",
"websocket",
"signalr",
"asp.net-web-api2"
] |