title stringlengths 10 172 | question_id int64 469 40.1M | question_body stringlengths 22 48.2k | question_score int64 -44 5.52k | question_date stringlengths 20 20 | answer_id int64 497 40.1M | answer_body stringlengths 18 33.9k | answer_score int64 -38 8.38k | answer_date stringlengths 20 20 | tags list |
|---|---|---|---|---|---|---|---|---|---|
Different image sizes in tensorflow with batch_size =1 | 38,966,533 | <p>I want to write a Python class that loads a TensorFlow model and runs inference, but I have no idea how to feed it images of variable size. :(</p>
<pre><code>class ArtGenerater():
    def __init__(self, model_path):
        self.model_path = model_path
        # vary shape?
        self.x = tf.placeholder(tf.float32, shape=(1, 512, 512, 3))
        self.gen = model.resnet(self.x)
        self.out = tf.saturate_cast(self.gen, tf.uint8)
        self.sess = tf.Session()
        file = tf.train.latest_checkpoint(self.model_path)
        saver = tf.train.Saver()
        saver.restore(self.sess, file)

    def pic(self, image_path):
        img = np.asarray(Image.open(image_path)).astype(np.float32)
        img = np.expand_dims(img, 0)
        output_t = self.sess.run(self.out, feed_dict={self.x: img})
        return output_t
</code></pre>
<p>Now i just use <code>tf.placeholder(tf.float32,shape=(1,512,512,3))</code>,
but my images have different sizes (e.g. 1000×900). How can I achieve this?
Thank you.</p>
<p>EDIT:
Thank you everyone. I have solved the problem by using
<code>
x = tf.placeholder(tf.string)
img = tf.image.decode_jpeg(x,channels=3)
</code>
and this can feed the network (my ConvNet includes many conv2d & conv2d_transpose layers) with different image sizes. :)</p>
 | 0 | 2016-08-16T04:09:54Z | 38,968,615 | <p>Basically, you can define a variable-size input by using <code>None</code> for the unknown dimensions, as follows:</p>
<pre><code>self.x = tf.placeholder(tf.float32, [None, 784])
</code></pre>
<p>and then you can feed different input</p>
<pre><code>feed_dict={self.x: current_data} etc..
</code></pre>
<p>But be careful about the neural-net structure: fully connected layers require a fixed input size, so you should probably fit your data (resize/reshape and so on) to the network rather than modifying it. For variable image height and width with a fully convolutional network, leave the spatial dimensions undefined, e.g. <code>shape=(1, None, None, 3)</code>.</p>
| 0 | 2016-08-16T07:08:53Z | [
"python",
"tensorflow"
] |
Python write() function not working | 38,966,576 | <pre><code>f = open('path.txt','w')
text = f.write('This is a test\n')
print(text)
</code></pre>
<p>This is supposed to write "This is a test" to the file path.txt, but it doesn't.</p>
| -3 | 2016-08-16T04:16:18Z | 38,966,597 | <p>Try doing </p>
<pre><code>f.close()
</code></pre>
<p>and delete</p>
<pre><code>print(text)
</code></pre>
<p>Hope I helped. :)</p>
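<p>For context: the text never appears because <code>open()</code> buffers writes and only flushes them to disk when the file is closed. The idiomatic fix is a <code>with</code> statement, which closes the file automatically. Note also that in Python 3 <code>write()</code> returns the number of characters written, not the text, which is why <code>print(text)</code> shows <code>15</code>. A small sketch (using a temp path as a stand-in for path.txt so it is self-contained):</p>

```python
import os
import tempfile

# hypothetical stand-in for 'path.txt' so the example is self-contained
path = os.path.join(tempfile.gettempdir(), "path.txt")

# The with-statement closes the file when the block ends, flushing
# the write buffer to disk -- even if an exception occurs.
with open(path, "w") as f:
    n = f.write("This is a test\n")

print(n)  # 15 -- write() returns the character count, not the text

with open(path) as f:
    print(f.read())  # This is a test
```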
| 3 | 2016-08-16T04:19:32Z | [
"python",
"python-3.x"
] |
Python equivalent of R's rnbinom parametrized with mu | 38,966,622 | <p>R has a <a href="https://stat.ethz.ch/R-manual/R-devel/library/stats/html/NegBinomial.html" rel="nofollow">negative binomial function</a> that can be parameterized with <code>mu</code> rather than a probability (e.g., a float >= 0 and <=1). I am trying to replicate this distribution:</p>
<pre><code>rnbinom(1000,size=0.3,mu=15)
</code></pre>
<p>in Python. From what I can see, <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.negative_binomial.html" rel="nofollow">Numpy's negative binomial function</a> only allows a probability. Also, I am unclear as to what the <code>size</code> parameter would be in Numpy.</p>
| 1 | 2016-08-16T04:23:42Z | 38,967,045 | <p>Here are the parameters that you passed to <code>rnbinom</code>:</p>
<pre><code>In [131]: num_samples = 10000
In [132]: size = 0.3
In [133]: mu = 15
</code></pre>
<p>As explained in the R documentation that you linked to, you can compute the probability as follows:</p>
<pre><code>In [134]: prob = size/(size + mu)
</code></pre>
<p>The first two arguments of <code>numpy.random.negative_binomial</code> correspond to the <code>size</code> and <code>prob</code> arguments of the R functions. The third argument of <code>negative_binomial</code> is the number of samples. (Be careful--numpy calls this argument <code>size</code>; it refers to the size of the sample to generate. All the numpy random functions take a <code>size</code> argument.)</p>
<pre><code>In [135]: sample = np.random.negative_binomial(size, prob, num_samples)
</code></pre>
<p>The mean of the sample should be close to 15.</p>
<pre><code>In [136]: sample.mean()
Out[136]: 14.9032
</code></pre>
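<p>As an aside, if you prefer to avoid NumPy, the same <code>mu</code> parametrization can be sampled with only the standard library via the gamma-Poisson mixture: NB(size, mu) is a Poisson whose rate is drawn from Gamma(shape=size, scale=mu/size). A rough sketch (the helper names <code>rpoisson</code>/<code>rnbinom</code> are my own, and the Poisson sampler is Knuth's simple method):</p>

```python
import math
import random

def rpoisson(lam):
    # Knuth's multiplicative method: O(lam) per draw, fine for moderate rates.
    if lam > 500:
        # normal approximation keeps very large rates fast and stable
        return max(0, int(random.gauss(lam, math.sqrt(lam)) + 0.5))
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def rnbinom(n, size, mu):
    # NB(size, mu) == Poisson(rate), where rate ~ Gamma(shape=size, scale=mu/size)
    scale = mu / size
    return [rpoisson(random.gammavariate(size, scale)) for _ in range(n)]

random.seed(42)
sample = rnbinom(10000, size=0.3, mu=15)
print(sum(sample) / float(len(sample)))  # should be close to 15
```

<p>The sample variance should likewise approximate <code>mu + mu**2/size = 765</code>.</p>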
| 3 | 2016-08-16T05:11:49Z | [
"python",
"numpy"
] |
URL Separation in Flask | 38,966,653 | <p>I am listening to a <code>Flask</code> URL of the form,</p>
<pre><code>http://example.com:8080/v1/api?param1=value1&param2=value2&param3=value3&param4=value4
</code></pre>
<p>Now, I want to achieve <code>URL</code> separation of the parameters in either of the below forms, (<strong>ForwardSlash</strong>)</p>
<pre><code>http://example.com:8080/v1/api?param1=value1&param2=value2/parameters?param3=value3&param4=value4
</code></pre>
<p>OR (<strong>Semicolon</strong>)</p>
<pre><code>http://example.com:8080/v1/api?param1=value1&param2=value2;parameters?param3=value3&param4=value4
</code></pre>
<p>I know these are not clean <code>URLs</code> and should be avoided, but such is the use case.</p>
<p>I am currently listening to the URL as,</p>
<pre><code>@app.route('/v1/api', methods=['GET','POST'])
def api_call():
    #....code for listening ...
</code></pre>
<p>How do I modify my code to get the <code>URL</code> separation as desired above?</p>
<p>I understand I am not following good principles of URL formation or other design principles; this is a use-case requirement, and I am stuck on achieving it through Flask.</p>
 | 1 | 2016-08-16T04:27:34Z | 38,966,749 | <p>Instead of passing all the values as query params, can you use a form POST to achieve this? You can pass an object using this method, which will give you more flexibility in the type of data structure you can send.</p>
| 0 | 2016-08-16T04:39:07Z | [
"python",
"rest",
"url",
"flask",
"restful-url"
] |
Writing safe, enforced Python classes | 38,966,762 | <p>I am trying to implement a class with the best and safest conventions possible. Are there better ways to </p>
<p>a) prevent external edits to properties, and </p>
<p>b) enforce certain constraints, such as valid rank and suit, on these properties?</p>
<pre><code>class Card:
    __ranks = ['Ace', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'Jack', 'Queen', 'King']
    __suits = ['Heart', 'Club', 'Diamond', 'Spade']

    def __init__(self, rank, suit):
        assert rank in self.__ranks
        assert suit in self.__suits
        self.__rank = rank
        self.__suit = suit

    def getRank(self):
        return self.__rank

    def getSuit(self):
        return self.__suit
</code></pre>
| 0 | 2016-08-16T04:40:01Z | 38,966,864 | <p>It's more common to control this type of behavior through the property decorator. You can essentially create a read-only attribute by not implementing a setter:</p>
<pre><code>class Foo:
    def __init__(self, bar):
        self._bar = bar

    @property
    def bar(self):
        """Read-only access to bar."""
        return self._bar
</code></pre>
<p>I wouldn't bother with the double underscore name mangling. That still won't (an isn't intended to) make attributes inaccessible from the outside.</p>
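<p>To make the read-only behavior concrete, here is a short (hypothetical) usage demo of the class above: assigning to a property that has no setter raises <code>AttributeError</code>:</p>

```python
class Foo:
    def __init__(self, bar):
        self._bar = bar

    @property
    def bar(self):
        """Read-only access to bar."""
        return self._bar

foo = Foo(42)
print(foo.bar)  # 42

try:
    foo.bar = 99  # no setter defined, so this fails
except AttributeError:
    print("bar is read-only")
```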
| 0 | 2016-08-16T04:51:42Z | [
"python",
"class"
] |
Writing safe, enforced Python classes | 38,966,762 | <p>I am trying to implement a class with the best and safest conventions possible. Are there better ways to </p>
<p>a) prevent external edits to properties, and </p>
<p>b) enforce certain constraints, such as valid rank and suit, on these properties?</p>
<pre><code>class Card:
    __ranks = ['Ace', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'Jack', 'Queen', 'King']
    __suits = ['Heart', 'Club', 'Diamond', 'Spade']

    def __init__(self, rank, suit):
        assert rank in self.__ranks
        assert suit in self.__suits
        self.__rank = rank
        self.__suit = suit

    def getRank(self):
        return self.__rank

    def getSuit(self):
        return self.__suit
</code></pre>
| 0 | 2016-08-16T04:40:01Z | 38,967,556 | <h1>Prevent external edits to properties</h1>
<p>Any attribute can be changed when you have a class instance, but you can follow the convention that attributes starting with a single or double underscore are private and should not be accessed directly unless you know what you are doing.</p>
<p>To provide a public interface, the <code>@property</code> decorator is really what you want.</p>
<h1>Enforce certain constraints</h1>
<p>Quote from <a href="https://docs.python.org/2/reference/simple_stmts.html#the-assert-statement" rel="nofollow">docs</a>:</p>
<blockquote>
<p>In the current implementation, the built-in variable <code>__debug__</code> is <code>True</code> under normal circumstances, <code>False</code> when optimization is requested (command line option <code>-O</code>). <strong>The current code generator emits no code for an assert statement when optimization is requested at compile time.</strong></p>
</blockquote>
<p>Asserts are for development only. They can be used for test checks, etc. If you need to verify that appropriate values are passed into the <code>__init__</code> method, raise a <code>ValueError</code> or a custom error derived from it.</p>
<pre><code>class Card(object):
    class CardValueError(ValueError):
        pass

    __ranks = ['Ace', '2', '3', '4', '5', '6', '7', '8', '9', '10',
               'Jack', 'Queen', 'King']
    __suits = ['Heart', 'Club', 'Diamond', 'Spade']

    def __init__(self, rank, suit):
        if rank not in self.__ranks or suit not in self.__suits:
            raise Card.CardValueError()
        self.__rank = rank
        self.__suit = suit

    @property
    def rank(self):
        return self.__rank

    @property
    def suit(self):
        return self.__suit
</code></pre>
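<p>And a quick usage sketch of the class above, repeating the class so the example is runnable (the error message string is my own addition): valid arguments construct a card, invalid ones raise the custom error, which callers can also catch as a plain <code>ValueError</code>:</p>

```python
class Card(object):
    class CardValueError(ValueError):
        pass

    __ranks = ['Ace', '2', '3', '4', '5', '6', '7', '8', '9', '10',
               'Jack', 'Queen', 'King']
    __suits = ['Heart', 'Club', 'Diamond', 'Spade']

    def __init__(self, rank, suit):
        if rank not in self.__ranks or suit not in self.__suits:
            raise Card.CardValueError('invalid card: %r of %r' % (rank, suit))
        self.__rank = rank
        self.__suit = suit

    @property
    def rank(self):
        return self.__rank

    @property
    def suit(self):
        return self.__suit

card = Card('Queen', 'Heart')
print(card.rank, card.suit)  # Queen Heart

try:
    Card('11', 'Heart')  # '11' is not a valid rank
except ValueError:       # CardValueError subclasses ValueError
    print('rejected invalid rank')
```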
| 0 | 2016-08-16T05:58:47Z | [
"python",
"class"
] |
Writing safe, enforced Python classes | 38,966,762 | <p>I am trying to implement a class with the best and safest conventions possible. Are there better ways to </p>
<p>a) prevent external edits to properties, and </p>
<p>b) enforce certain constraints, such as valid rank and suit, on these properties?</p>
<pre><code>class Card:
    __ranks = ['Ace', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'Jack', 'Queen', 'King']
    __suits = ['Heart', 'Club', 'Diamond', 'Spade']

    def __init__(self, rank, suit):
        assert rank in self.__ranks
        assert suit in self.__suits
        self.__rank = rank
        self.__suit = suit

    def getRank(self):
        return self.__rank

    def getSuit(self):
        return self.__suit
</code></pre>
| 0 | 2016-08-16T04:40:01Z | 38,973,074 | <p>You could use a named tuple, so that the object is immutable</p>
<pre><code>>>> from collections import namedtuple
>>> Card = namedtuple('Card','rank,suit')
>>> acard = Card('10','D')
>>> acard.suit
'D'
>>> acard.rank
'10'
>>> acard.rank='H'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
>>>
</code></pre>
<p>For more information see the <code>namedtuple</code> documentation (in the <code>collections</code> module, <a href="https://docs.python.org/2/library/collections.html" rel="nofollow">https://docs.python.org/2/library/collections.html</a>)</p>
| 1 | 2016-08-16T10:56:53Z | [
"python",
"class"
] |
How to return correctly formatted pandas dataframe from apply? | 38,966,779 | <p>Say we have the following dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
years = [2005, 2006]
location = ['city', 'suburb']
dft = pd.DataFrame({
    'year': [years[np.random.randint(0, 1+1)] for _ in range(100)],
    'location': [location[np.random.randint(0, 1+1)] for _ in range(100)],
    'days_to_complete': np.random.randint(100, high=600, size=100),
    'cost_in_millions': np.random.randint(1, high=10, size=100)
})
</code></pre>
<p>Groupby year and location and then apply a function like the following:</p>
<pre><code>def get_custom_summary(group):
    gt_200 = group.days_to_complete > 200
    lt_200 = group.days_to_complete < 200
    avg_days_gt200 = group[gt_200].days_to_complete.mean()
    avg_cost_gt200 = group[gt_200].cost_in_millions.mean()
    avg_days_lt200 = group[lt_200].days_to_complete.mean()
    avg_cost_lt200 = group[lt_200].cost_in_millions.mean()
    lt_200_prop = lt_200.sum() / (gt_200.sum() + lt_200.sum())
    return pd.DataFrame({
        'gt_200': {'AVG_DAYS': avg_days_gt200, 'AVG_COST': avg_cost_gt200},
        'lt_200': {'avg_days': avg_days_lt200, 'avg_cost': avg_cost_lt200},
        'lt_200_prop': lt_200_prop
    })
result = dft.groupby(['year', 'location']).apply(get_custom_summary)
</code></pre>
<p>Calling unstack(2) on the result we get the following output:</p>
<pre><code>print(result.unstack(2))
gt_200 lt_200 lt_200_prop
AVG_COST AVG_DAYS avg_cost avg_days AVG_COST AVG_DAYS avg_cost avg_days AVG_COST AVG_DAYS avg_cost avg_days
year location
2005 city 4.818182 415.636364 NaN NaN NaN NaN 7.250000 165.50 0.153846 0.153846 0.153846 0.153846
suburb 5.631579 336.631579 NaN NaN NaN NaN 5.166667 140.50 0.240000 0.240000 0.240000 0.240000
2006 city 4.130435 396.913043 NaN NaN NaN NaN 5.750000 150.75 0.258065 0.258065 0.258065 0.258065
suburb 5.294118 392.823529 NaN NaN NaN NaN 1.000000 128.00 0.055556 0.055556 0.055556 0.055556
</code></pre>
<p>For the columns <code>gt_200</code> and <code>lt_200</code> a call to <code>dropna(axis=1)</code> will remove the columns filled with NaN, but the <code>lt_200_prop</code> column is still stuck with the wrong column names. How could I return a DataFrame from get_custom_summary that doesn't broadcast (if that's the right word) the subcolumns (<code>AVG_COST</code>, <code>AVG_DAYS</code>, <code>avg_cost</code>, <code>avg_days</code>) to the columns (<code>gt_200</code>, <code>lt_200</code>, <code>lt_200_prop</code>)?</p>
<p>EDIT:</p>
<p>Desired output:</p>
<pre><code> gt_200 lt_200 lt_200_prop
AVG_COST AVG_DAYS avg_cost avg_days
year location
2005 city 4.818182 415.636364 7.250000 165.50 0.153846
suburb 5.631579 336.631579 5.166667 140.50 0.240000
2006 city 4.130435 396.913043 5.750000 150.75 0.258065
suburb 5.294118 392.823529 1.000000 128.00 0.055556
</code></pre>
| 1 | 2016-08-16T04:42:10Z | 38,968,099 | <p>My solution is use same column names in function <code>get_custom_summary</code> in <code>gt_200</code> and <code>lt_200</code> and then rename it by function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.lower.html" rel="nofollow"><code>str.lower</code></a> and add last custom column name <code>col</code>.</p>
<p>But the result has a <code>MultiIndex</code>, so you need to create a new one with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_tuples.html" rel="nofollow"><code>MultiIndex.from_tuples</code></a>:</p>
<pre><code>years = [2005, 2006]
location = ['city', 'suburb']
np.random.seed(1234)
dft = pd.DataFrame({
    'year': [years[np.random.randint(0, 1+1)] for _ in range(100)],
    'location': [location[np.random.randint(0, 1+1)] for _ in range(100)],
    'days_to_complete': np.random.randint(100, high=600, size=100),
    'cost_in_millions': np.random.randint(1, high=10, size=100)
})
def get_custom_summary(group):
    gt_200 = group.days_to_complete > 200
    lt_200 = group.days_to_complete < 200
    avg_days_gt200 = group[gt_200].days_to_complete.mean()
    avg_cost_gt200 = group[gt_200].cost_in_millions.mean()
    avg_days_lt200 = group[lt_200].days_to_complete.mean()
    avg_cost_lt200 = group[lt_200].cost_in_millions.mean()
    lt_200_prop = (lt_200).sum() / ((gt_200).sum() + (lt_200).sum())
    return pd.DataFrame({
        'gt_200': {'AVG_DAYS': avg_days_gt200, 'AVG_COST': avg_cost_gt200},
        'lt_200': {'AVG_DAYS': avg_days_lt200, 'AVG_COST': avg_cost_lt200},
        'lt_200_prop': lt_200_prop
    })
</code></pre>
<pre><code>result = dft.groupby(['year', 'location']).apply(get_custom_summary).unstack(2)
#drop last column with duplicate values
result = result.drop(result.columns[[-1]], axis=1)
#rename column names in level 1
a = (result.columns.get_level_values(1))
level1 = a[:2].union(a[2:4].str.lower().union(['col']))
cols = list(zip(result.columns.get_level_values(0),level1))
result.columns = pd.MultiIndex.from_tuples(cols)
print (result)
gt_200 lt_200 lt_200_prop
AVG_COST AVG_DAYS avg_cost avg_days col
year location
2005 city 5.238095 392.095238 5.500000 144.666667 0.222222
suburb 4.428571 427.095238 4.000000 167.666667 0.125000
2006 city 4.368421 406.789474 4.571429 150.142857 0.269231
suburb 4.000000 439.062500 4.142857 145.142857 0.304348
</code></pre>
<hr>
<p>Simplier solution is remove columns:</p>
<pre><code>result = dft.groupby(['year', 'location']).apply(get_custom_summary).unstack(2)
#drop last 3 columns, then drop NaN columns
result = result.drop(result.columns[[-1, -2, -3]], axis=1).dropna(axis=1)
print (result)
gt_200 lt_200 lt_200_prop
AVG_COST AVG_DAYS avg_cost avg_days AVG_COST
year location
2005 city 5.238095 392.095238 5.500000 144.666667 0.222222
suburb 4.428571 427.095238 4.000000 167.666667 0.125000
2006 city 4.368421 406.789474 4.571429 150.142857 0.269231
suburb 4.000000 439.062500 4.142857 145.142857 0.304348
</code></pre>
| 1 | 2016-08-16T06:36:53Z | [
"python",
"pandas",
"dataframe",
"multiple-columns",
null
] |
How to return correctly formatted pandas dataframe from apply? | 38,966,779 | <p>Say we have the following dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
years = [2005, 2006]
location = ['city', 'suburb']
dft = pd.DataFrame({
    'year': [years[np.random.randint(0, 1+1)] for _ in range(100)],
    'location': [location[np.random.randint(0, 1+1)] for _ in range(100)],
    'days_to_complete': np.random.randint(100, high=600, size=100),
    'cost_in_millions': np.random.randint(1, high=10, size=100)
})
</code></pre>
<p>Groupby year and location and then apply a function like the following:</p>
<pre><code>def get_custom_summary(group):
    gt_200 = group.days_to_complete > 200
    lt_200 = group.days_to_complete < 200
    avg_days_gt200 = group[gt_200].days_to_complete.mean()
    avg_cost_gt200 = group[gt_200].cost_in_millions.mean()
    avg_days_lt200 = group[lt_200].days_to_complete.mean()
    avg_cost_lt200 = group[lt_200].cost_in_millions.mean()
    lt_200_prop = lt_200.sum() / (gt_200.sum() + lt_200.sum())
    return pd.DataFrame({
        'gt_200': {'AVG_DAYS': avg_days_gt200, 'AVG_COST': avg_cost_gt200},
        'lt_200': {'avg_days': avg_days_lt200, 'avg_cost': avg_cost_lt200},
        'lt_200_prop': lt_200_prop
    })
result = dft.groupby(['year', 'location']).apply(get_custom_summary)
</code></pre>
<p>Calling unstack(2) on the result we get the following output:</p>
<pre><code>print(result.unstack(2))
gt_200 lt_200 lt_200_prop
AVG_COST AVG_DAYS avg_cost avg_days AVG_COST AVG_DAYS avg_cost avg_days AVG_COST AVG_DAYS avg_cost avg_days
year location
2005 city 4.818182 415.636364 NaN NaN NaN NaN 7.250000 165.50 0.153846 0.153846 0.153846 0.153846
suburb 5.631579 336.631579 NaN NaN NaN NaN 5.166667 140.50 0.240000 0.240000 0.240000 0.240000
2006 city 4.130435 396.913043 NaN NaN NaN NaN 5.750000 150.75 0.258065 0.258065 0.258065 0.258065
suburb 5.294118 392.823529 NaN NaN NaN NaN 1.000000 128.00 0.055556 0.055556 0.055556 0.055556
</code></pre>
<p>For the columns <code>gt_200</code> and <code>lt_200</code> a call to <code>dropna(axis=1)</code> will remove the columns filled with NaN, but the <code>lt_200_prop</code> column is still stuck with the wrong column names. How could I return a DataFrame from get_custom_summary that doesn't broadcast (if that's the right word) the subcolumns (<code>AVG_COST</code>, <code>AVG_DAYS</code>, <code>avg_cost</code>, <code>avg_days</code>) to the columns (<code>gt_200</code>, <code>lt_200</code>, <code>lt_200_prop</code>)?</p>
<p>EDIT:</p>
<p>Desired output:</p>
<pre><code> gt_200 lt_200 lt_200_prop
AVG_COST AVG_DAYS avg_cost avg_days
year location
2005 city 4.818182 415.636364 7.250000 165.50 0.153846
suburb 5.631579 336.631579 5.166667 140.50 0.240000
2006 city 4.130435 396.913043 5.750000 150.75 0.258065
suburb 5.294118 392.823529 1.000000 128.00 0.055556
</code></pre>
 | 1 | 2016-08-16T04:42:10Z | 38,987,486 | <p>Return a DataFrame with its columns set to a <code>MultiIndex</code>.</p>
<pre><code>from collections import OrderedDict
def get_multi_index(ordered_dict):
    length = len(list(ordered_dict.values())[0])
    for k in ordered_dict:
        assert(len(ordered_dict[k]) == length)
    names = list()
    arrays = list()
    for k in ordered_dict:
        names.append(k)
        arrays.append(np.array(ordered_dict[k]))
    tuples = list(zip(*arrays))
    return pd.MultiIndex.from_tuples(tuples, names=names)

def get_custom_summary(group):
    gt_200 = group.days_to_complete > 200
    lt_200 = group.days_to_complete < 200
    avg_days_gt_200 = group[gt_200].days_to_complete.mean()
    avg_cost_gt_200 = group[gt_200].cost_in_millions.mean()
    avg_days_lt_200 = group[lt_200].days_to_complete.mean()
    avg_cost_lt_200 = group[lt_200].cost_in_millions.mean()
    lt_200_prop = lt_200.sum() / (gt_200.sum() + lt_200.sum())
    ordered_dict = OrderedDict()
    ordered_dict['first'] = ['lt_200', 'lt_200', 'gt_200', 'gt_200', 'lt_200_prop']
    ordered_dict['second'] = ['avg_cost', 'avg_days', 'AVG_COST', 'AVG_DAYS', 'prop']
    data = [[avg_cost_lt_200, avg_days_lt_200, avg_cost_gt_200, avg_days_gt_200, lt_200_prop]]
    return pd.DataFrame(data, columns=get_multi_index(ordered_dict))
</code></pre>
<p>Get and print result:</p>
<pre><code>result = dft.groupby(['year', 'location']).apply(get_custom_summary).xs(0, level=2)
print(result)
</code></pre>
<p>Output:</p>
<pre><code>first lt_200 gt_200 lt_200_prop
second avg_cost avg_days AVG_COST AVG_DAYS prop
year location
2005 city 7.555556 135.444444 5.300000 363.750000 0.310345
suburb 5.000000 137.333333 5.555556 444.222222 0.250000
2006 city 6.250000 169.000000 4.714286 422.380952 0.160000
suburb 4.428571 133.142857 4.333333 445.666667 0.318182
</code></pre>
| 0 | 2016-08-17T03:08:41Z | [
"python",
"pandas",
"dataframe",
"multiple-columns",
null
] |
Is it possible to skip/fail tests in setup using pytest? | 38,966,785 | <p>I am using pytest and would like to know:</p>
<p>Is it possible to conditionally skip/fail tests (grouped in class) in setup?</p>
| 0 | 2016-08-16T04:42:58Z | 38,971,509 | <p>You can call <code>pytest.skip(...)</code> and <code>pytest.fail(...)</code> there. See "<a href="http://doc.pytest.org/en/latest/skipping.html#imperative-xfail-from-within-a-test-or-setup-function" rel="nofollow">Imperative xfail from within a test or setup function</a>" in the docs.</p>
| 1 | 2016-08-16T09:41:22Z | [
"python",
"py.test"
] |
How to deal with this logic in pandas | 38,966,912 | <p>I have a data frame like the one below.</p>
<pre><code> coutry flag
0 China red
1 Russia green
2 China yellow
3 Britain yellow
4 Russia green
......................
</code></pre>
<p>In df['coutry'] there are many different country names. I want to encode the first country that appears as 1, the second as 2, and so on; the flag column follows the same logic. So the result would be:</p>
<pre><code> coutry flag
0 1 1
1 2 2
2 1 3
3 3 3
4 2 2
</code></pre>
<p>But I don't know how to achieve this logic in Python. Thank you.
Moreover, once I have the resulting data frame, I want a function to convert it back to the original.</p>
| 1 | 2016-08-16T04:57:46Z | 38,966,939 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="nofollow"><code>factorize</code></a> and add <code>1</code>:</p>
<pre><code>df['coutry'] = pd.factorize(df.coutry)[0] + 1
df['flag'] = pd.factorize(df.flag)[0] + 1
print (df)
coutry flag
0 1 1
1 2 2
2 1 3
3 3 3
4 2 2
</code></pre>
<p>Then you can convert the columns to <code>categories</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Categorical.html" rel="nofollow"><code>Categorical</code></a> if you need to save memory:</p>
<pre><code>df['coutry'] = pd.Categorical(pd.factorize(df.coutry)[0] + 1)
df['flag'] = pd.Categorical(pd.factorize(df.flag)[0] + 1)
print (df)
coutry flag
0 1 1
1 2 2
2 1 3
3 3 3
4 2 2
print (df.dtypes)
coutry category
flag category
dtype: object
</code></pre>
<hr>
<pre><code>#1000 times larger df
df = pd.concat([df]*1000).reset_index(drop=True)
df['coutry'] = pd.Categorical(pd.factorize(df.coutry)[0] + 1)
df['flag'] = pd.factorize(df.flag)[0] + 1
print (df)
coutry flag
0 1 1
1 2 2
2 1 3
3 3 3
4 2 2
5 1 1
6 2 2
...
...
print (df['coutry'].nbytes)
5024
print (df['flag'].nbytes)
20000
</code></pre>
<p>If you need to convert back, you can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow"><code>map</code></a> the values using dictionaries:</p>
<pre><code>b = [list(x) for x in pd.factorize(df.coutry.drop_duplicates())]
d1 = dict(zip(b[0], b[1]))
print (d1)
{0: 'China', 1: 'Russia', 2: 'Britain'}
b = [list(x) for x in pd.factorize(df.flag.drop_duplicates())]
d2 = dict(zip(b[0], b[1]))
print (d2)
{0: 'red', 1: 'green', 2: 'yellow'}
df['coutry'] = pd.Categorical(pd.factorize(df.coutry)[0])
df['flag'] = pd.Categorical(pd.factorize(df.flag)[0])
print (df)
coutry flag
0 0 0
1 1 1
2 0 2
3 2 2
4 1 1
df['coutry'] = df.coutry.map(d1)
df['flag'] = df.flag.map(d2)
print (df)
coutry flag
0 China red
1 Russia green
2 China yellow
3 Britain yellow
4 Russia green
</code></pre>
| 3 | 2016-08-16T05:00:21Z | [
"python",
"pandas",
"dataframe",
"categorical-data"
] |
Access to Google analytic management API | 38,967,004 | <p>I wish to use the Google Analytics Management API and I have a very basic problem that I am struggling with. For both the JavaScript and Python examples from this <a href="https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtReference/management/filters/list#examples" rel="nofollow">instruction</a>, it says there is no module named 'analytics'. Do I need some code to connect to my Google Analytics account first?</p>
<pre><code>function listFilters() {
  var request = gapi.client.analytics.management.filters.list({
    'accountId': '123456'
  });
  request.execute(printFilters);
}
</code></pre>
<p>Would you please help me to know what I'm missing?</p>
 | 0 | 2016-08-16T05:07:29Z | 38,969,054 | <p>It's mentioned here that you need an authorized <code>analytics</code> client object, which is why you're getting the "no analytics module" error:
<a href="http://take.ms/Gk7z6" rel="nofollow">http://take.ms/Gk7z6</a></p>
<p>Here's how you can get your authorized analytics object:
<a href="https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtReference/management/filters/list#auth" rel="nofollow">https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtReference/management/filters/list#auth</a>
<a href="https://developers.google.com/analytics/devguides/config/mgmt/v3/authorization" rel="nofollow">https://developers.google.com/analytics/devguides/config/mgmt/v3/authorization</a></p>
<p>Here's how you can get started with the JavaScript API client library:
<a href="https://developers.google.com/api-client-library/javascript/start/start-js" rel="nofollow">https://developers.google.com/api-client-library/javascript/start/start-js</a></p>
| 1 | 2016-08-16T07:34:06Z | [
"javascript",
"python",
"google-analytics",
"google-analytics-api"
] |
Tensorflow: how to install roi_pooling user_op | 38,967,048 | <p>I read in <a href="http://stackoverflow.com/questions/38618960/tensorflow-how-to-insert-custom-input-to-existing-graph">this</a> post by HediBy that this ROI_POOLING user_op implementation works: <a href="https://github.com/yuxng/tensorflow/" rel="nofollow">LINK</a> </p>
<p>I used</p>
<pre><code>bazel build -c opt //tensorflow/core/user_ops:roi_pooling.so
</code></pre>
<p>to generate the .so file (after installing TensorFlow).</p>
<p>But when I use <code>tf.load_op_library</code> to load <code>roi_pooling.so</code>, I get this weird error:</p>
<pre><code> tensorflow.python.framework.errors.NotFoundError: /home/fishdrop/tensorflow/bazel-bin/tensorflow/core/user_ops/roi_pooling.so: undefined symbol: _Z21ROIPoolForwardLaucherPKffiiiiiiS0_PfPiRKN5Eigen9GpuDeviceE
</code></pre>
<p>Any ideas? has anyone else been successful with this user_op?</p>
 | 1 | 2016-08-16T05:12:15Z | 38,986,829 | <p>Thanks for the tip. I just found out that the error occurred when I built the user_op without GPU support. I reinstalled TensorFlow v0.10 with GPU support and placed all user_op files inside //tensorflow/core/user_ops.</p>
<p>If I compile the user_op using <code>bazel build -c opt --config=cuda //tensorflow/core/user_ops:roi_pooling.so</code> (the addition of --config=cuda isn't in the TensorFlow user_op documentation), the user_op now works. I guess this issue can now be closed.</p>
| 1 | 2016-08-17T01:32:24Z | [
"python",
"tensorflow",
"detection",
"roi"
] |
parsing quotes in json | 38,967,121 | <p>I had a problem parsing quotes in JSON. I use Python 2.7.
My JSON file is here:</p>
<pre><code>{
"table": "test",
"rows":
[
{
"comment_id" : "11111",
"title" : "Worked great with limited space",
"comment" : "We have a very "small" kitchen but wanted a stylish refrigerator that was counter depth. This model made great use of the limited space. The deep door shelves are great and the touch controls on the door make it easy."
},
{
"comment_id" : "22222",
"title" : "Amazing Refrigerator",
"comment" : "Customer Service was "FANTASTIC" when I was shopping for this refrigerator. This refrigerator fit perfectly in our space, it only took 2 hours to cool from delivery. Has a ton of space and the lighting is great in it"
}
]
}
</code></pre>
<p>and my source is here:</p>
<pre><code>def create_file(from_file, to_file):
    with open(from_file, "r") as f:
        result = f.read().replace('\\', '').replace('&amp;', '&').replace('&gt;', '>').replace('&lt;', '<')
    res = json.loads(result, strict=False, encoding="ISO-8859-1")
    f = open(to_file, "w")
    f.write("id" + '\t' + "title" + '\t' + "review" + '\n')  # write a first line.
    pattern = re.compile("[^a-zA-Z0-9_.;:,!?&]")
    for data in res['rows']:
        output = "\""
        output += str(data['comment_id'] + '"\t"')
        output += str(pattern.sub(' ', data['title']) + '"\t"')
        output += str(pattern.sub(' ', data['comment']) + '""\n')
        f.write(output)
    f.close()
</code></pre>
<p>the error code is here:</p>
<pre><code>ValueError: Expecting , delimiter: line 20119 column 154 (char 1495987)
Process finished with exit code 1
</code></pre>
<p>The error occurs when quotes ("") are included in the comment fields of the JSON.
How can I fix it?</p>
 | -1 | 2016-08-16T05:19:11Z | 38,967,619 | <p>Your JSON is not valid: the unescaped double quotes inside the comment strings terminate the strings early. Escape them with a backslash (\") so the document looks like this: :-)
Use valid JSON.</p>
<pre><code>{
"table": "test",
"rows": [{
"comment_id": "11111",
"title": "Worked great with limited space",
"comment": "We have a very \"small\" kitchen but wanted a stylish refrigerator that was counter depth. This model made great use of the limited space. The deep door shelves are great and the touch controls on the door make it easy."
}, {
"comment_id": "22222",
"title": "Amazing Refrigerator",
"comment": "Customer Service was \"FANTASTIC\" when I was shopping for this refrigerator. This refrigerator fit perfectly in our space, it only took 2 hours to cool from delivery. Has a ton of space and the lighting is great in it"
}]
}
</code></pre>
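<p>If the JSON is produced from Python in the first place, the cleanest fix is to let <code>json.dumps</code> generate it: it escapes embedded quotes automatically, and <code>json.loads</code> round-trips them:</p>

```python
import json

comment = 'We have a very "small" kitchen'
encoded = json.dumps({"comment": comment})
print(encoded)  # {"comment": "We have a very \"small\" kitchen"}

decoded = json.loads(encoded)
print(decoded["comment"])  # We have a very "small" kitchen
```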
| 0 | 2016-08-16T06:03:17Z | [
"python",
"json"
] |
Add a new node at the beginning of linkedlist in python? | 38,967,232 | <p>This is my code. I am trying to add a new node at the beginning of the linked list, but the first node gets overwritten. How should I add a new node without overwriting the first node?
Sample output: <code>10 15 20</code>.
When I try to add 5 at the beginning, it comes out as <code>5 15 20</code>.
I need output like this: <code>5 10 15 20</code>.</p>
<pre><code>def push(self, new_data):
    new_node = Node(new_data)
    new_node.next = self.head
    self.head = new_node

llist.push(5)
</code></pre>
<p>This is the code in full:</p>
<pre><code>class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class Linkedlist:
    def __init__(self):
        self.head = None

    def printlist(self):
        temp = self.head
        while(temp):
            print temp.data,
            temp = temp.next

    def push(self, new_data):
        new_node = Node(new_data)
        new_node.next = self.head
        self.head = new_node

if __name__ == '__main__':
    llist = Linkedlist()
    llist.head = Node(10)
    second = Node(15)
    third = Node(20)

    llist.push(5)
    llist.head.next = second
    second.next = third

    llist.printlist()
</code></pre>
| -1 | 2016-08-16T05:31:01Z | 38,967,551 | <p>The problem is in your main program:</p>
<pre><code>llist.head.next = second
</code></pre>
<p>This explicitely sets the list's head to point to the second element (value 15), effectively losing the previous first node (10).</p>
<blockquote>
<p>But the first node should be overwritten. </p>
</blockquote>
<p>This part of your question is unclear - your example says the opposite. If you want to <em>replace</em> the previous first node, then all you have to do is <code>self.head = self.head.next</code> and then push, as you do in <code>__main__</code>. However that's not what your example shows.</p>
<p>If on the other hand you want to actually <em>overwrite</em> the value of the first node, you could do e.g.:</p>
<pre><code>self.head.value = 99
</code></pre>
<blockquote>
<p>So how should i add a new node without overwrite a first node. </p>
</blockquote>
<p>Your code for <code>push</code> looks correct. I'm assuming your <code>self.head</code> points to the wrong node to begin with. Here's your code embedded in a minimalistic linked list implementation that works. For comparison I'm also adding the <code>push_replace</code> method that loses the first element:</p>
<pre><code>class Node(object):
def __init__(self, value):
self.value = value
self.next = None
class List(object):
def __init__(self):
self.head = None
def push(self, new_data):
# this is your actual code
new_node = Node(new_data)
new_node.next = self.head
self.head = new_node
def push_replace(self, new_data):
# replace the previous first node
new_node = Node(new_data)
new_node.next = self.head.next
self.head = new_node
def __iter__(self):
node = self.head
while node:
yield node.value
node = node.next
l = List()
l.push(20)
l.push(15)
l.push(10)
l.push(5)
list(l)
=>
[5, 10, 15, 20]
# now lose the first item
l.push_replace(99)
list(l)
=>
[99, 10, 15, 20]
</code></pre>
<p>Note as @FujiApple has pointed out you should avoid modifying the list from outside the <code>List</code> code. In other words, always implement list modification as a new method. This localizes knowledge about how the list works and makes your code more stable and easier to debug.</p>
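As an illustration of that advice (a hypothetical extension, not part of the answer's original code), other list modifications such as appending at the tail can be added as further methods on the class, so all pointer surgery stays in one place:

```python
class Node(object):
    def __init__(self, value):
        self.value = value
        self.next = None

class List(object):
    def __init__(self):
        self.head = None

    def push(self, new_data):
        # prepend, exactly as above
        new_node = Node(new_data)
        new_node.next = self.head
        self.head = new_node

    def append(self, new_data):
        # walk to the tail and attach there, keeping all
        # pointer surgery inside the List class
        new_node = Node(new_data)
        if self.head is None:
            self.head = new_node
            return
        node = self.head
        while node.next:
            node = node.next
        node.next = new_node

    def __iter__(self):
        node = self.head
        while node:
            yield node.value
            node = node.next

l = List()
l.push(10)
l.push(5)
l.append(15)
l.append(20)
print(list(l))  # -> [5, 10, 15, 20]
```

Callers never touch `head` or `next` directly, which is exactly the kind of encapsulation described above.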
| 2 | 2016-08-16T05:58:35Z | [
"python"
] |
Add a new node at the beginning of a linked list in python? | 38,967,232 | <p>This is my code. I am trying to add a new node at the beginning of the linked-list. But the first node should be overwritten. So how should i add a new node without overwrite a first node.
Sample output: <code>10 15 20</code>.
I am trying to add 5 at the beginning. It comes out like: <code>5 15 20</code>.
I need an output like this: <code>5 10 15 20</code>. </p>
<pre><code>def push(self, new_data):
new_node = Node(new_data)
new_node.next = self.head
self.head = new_node
llist.push(5)
</code></pre>
<p>This is the code in full:</p>
<pre><code>class Node:
    def __init__(self, data):
self.data = data
self.next = None
class Linkedlist:
def __init__(self):
self.head = None
def printlist(self):
temp = self.head
while(temp):
print temp.data,
temp = temp.next
def push(self, new_data):
new_node = Node(new_data)
        new_node.next = self.head
self.head = new_node
if __name__ == '__main__':
llist = Linkedlist()
llist.head = Node(10)
second = Node(15)
third = Node(20)
llist.push(5)
llist.head.next = second
second.next = third
llist.printlist()
</code></pre>
| -1 | 2016-08-16T05:31:01Z | 38,968,817 | <p>There's a small difference in your code: you have to use <code>head</code> as a class variable, whose value is shared among all the instances, rather than as an instance attribute. Please refer to the code below for better understanding. Thank you</p>
<pre><code>class Node:
def __init__(self, data):
self.data = data
self.next = None
class Linkedlist:
head = None
def __init__(self):
pass
def printlist(self):
temp = Linkedlist.head
while(temp):
print temp.data,
temp = temp.next
def push(self, new_data):
new_node = Node(new_data)
new_node.next = Linkedlist.head
Linkedlist.head = new_node
if __name__ == '__main__':
llist = Linkedlist()
Linkedlist.head = Node(10)
second = Node(15)
third = Node(20)
#llist.push(5)
Linkedlist.head.next = second
second.next = third
llist.printlist()
llist.push(5)
print ''
llist.printlist()
</code></pre>
<p>OUTPUT:</p>
<pre><code>10 15 20
5 10 15 20
</code></pre>
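One consequence of the class-variable approach is worth a quick sketch (illustrative only): because <code>head</code> lives on the class rather than on each instance, every <code>Linkedlist</code> object sees the same list:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class Linkedlist:
    head = None  # class variable: shared by ALL instances

    def push(self, new_data):
        new_node = Node(new_data)
        new_node.next = Linkedlist.head
        Linkedlist.head = new_node

a = Linkedlist()
b = Linkedlist()
a.push(1)
print(b.head.data)  # -> 1, because b sees the node pushed through a
```

That is fine for a single list, but if two independent lists are ever needed, an instance attribute (as in the other answer) is the safer design.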
| 0 | 2016-08-16T07:21:32Z | [
"python"
] |
Pyspark extracting four tuples from RDD | 38,967,281 | <p>I have an RDD which contains five-tuples, as shown below:</p>
<pre><code>return [word_val+'&'+f_val+'&'+N_val+'&'+n_val+'&'+str(1)]
</code></pre>
<p>I want to map these values to compute the result; I was expecting the mapping to work like:</p>
<pre><code>reducer_3 = add_m.map(lambda word: (word[0],word[1],word[2],word[3],1)).reduceByKey(lambda word[0],1: word[0]+1)
</code></pre>
<p>And the reducer_3 should return an rdd containing:</p>
<pre><code>word[0] & summation_of_1's & word[1] & word[2] & word[3]
</code></pre>
| 0 | 2016-08-16T05:35:30Z | 38,978,838 | <p>You need to map into a pair tuple before the reduceByKey, e.g.:</p>
<pre><code>reducer_3 = add_m.map(lambda word: ((word[0],word[1],word[2],word[3]),1)).reduceByKey(lambda x,y: x+y)
</code></pre>
<p>This will return a set of 4-tuples and their counts. The original code you showed seemed to be missing the extra parens around the 4-tuple in the map step.
Your question doesn't make it perfectly clear what you're trying to achieve, but hopefully the example above will help...</p>
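Since the fix is really about the shape of the (key, value) pairs, the same map-to-pair-then-reduce-by-key pattern can be sketched in plain Python without Spark (purely illustrative; the real job goes through the RDD API as above):

```python
from collections import defaultdict

# toy records standing in for the RDD rows: four key fields per record
records = [
    ("w1", "f1", "N1", "n1"),
    ("w1", "f1", "N1", "n1"),
    ("w2", "f2", "N2", "n2"),
]

# map step: pair each 4-tuple key with a count of 1
pairs = [((w[0], w[1], w[2], w[3]), 1) for w in records]

# reduceByKey step: combine counts per key, equivalent to lambda x, y: x + y
counts = defaultdict(int)
for key, value in pairs:
    counts[key] += value

print(dict(counts))
# -> {('w1', 'f1', 'N1', 'n1'): 2, ('w2', 'f2', 'N2', 'n2'): 1}
```

The key point is the extra parentheses: the key must be a single tuple object paired with the count, not five loose values.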
| 0 | 2016-08-16T15:27:39Z | [
"python",
"apache-spark",
"pyspark"
] |
django image field - magic | 38,967,288 | <p>I define upload_to = 'products'; my MEDIA_ROOT is /var/www/work.sanremo-dv.ru/media/</p>
<p>In the debugger, image.path equals /var/www/work.sanremo-dv.ru/media/imagename.ext - without the upload_to dir. The physical image is stored at /var/www/work.sanremo-dv.ru/media/products/imagename.ext. Why?</p>
<hr>
<p><a href="http://i.stack.imgur.com/C8W4x.png" rel="nofollow"><img src="http://i.stack.imgur.com/C8W4x.png" alt="enter image description here"></a></p>
<hr>
<p><a href="http://i.stack.imgur.com/ku0WO.png" rel="nofollow"><img src="http://i.stack.imgur.com/ku0WO.png" alt="enter image description here"></a></p>
<hr>
<p><a href="http://i.stack.imgur.com/xHO4w.png" rel="nofollow"><img src="http://i.stack.imgur.com/xHO4w.png" alt="enter image description here"></a></p>
| 0 | 2016-08-16T05:35:48Z | 38,977,809 | <p>maybe you should set <code>upload_to</code> with a trailing slash?</p>
<pre><code>upload_to = 'products/'
</code></pre>
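For illustration only (Django's FileSystemStorage does this joining internally, so treat this as a sketch of the mechanics, not Django API): the field stores a name relative to MEDIA_ROOT, and .path is essentially a join of the two - so if the stored name is missing the products/ prefix, the directory disappears from .path:

```python
import os

MEDIA_ROOT = "/var/www/work.sanremo-dv.ru/media/"

# what ends up in the database is a name relative to MEDIA_ROOT;
# .path is roughly os.path.join(MEDIA_ROOT, stored_name)
stored_with_dir = "products/imagename.ext"
stored_without_dir = "imagename.ext"

print(os.path.join(MEDIA_ROOT, stored_with_dir))
# -> /var/www/work.sanremo-dv.ru/media/products/imagename.ext
print(os.path.join(MEDIA_ROOT, stored_without_dir))
# -> /var/www/work.sanremo-dv.ru/media/imagename.ext
```

So the thing to inspect is the name actually stored on the field: if it lacks the directory prefix, .path will too, regardless of where the file physically sits.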
| 0 | 2016-08-16T14:37:57Z | [
"python",
"django"
] |
How to multiply every column of one Pandas Dataframe with every column of another Dataframe efficiently? | 38,967,402 | <p>I'm trying to multiply two pandas dataframes with each other. Specifically, I want to multiply every column with every column of the other df. </p>
<p>The dataframes are one-hot encoded, so they look like this: </p>
<pre><code>col_1, col_2, col_3, ...
0 1 0
1 0 0
0 0 1
...
</code></pre>
<p>I could just iterate through each of the columns using a for loop, but in python that is computationally expensive, and I'm hoping there's an easier way. </p>
<p>One of the dataframes has 500 columns, the other has 100 columns. </p>
<p>This is the fastest version that I've been able to write so far: </p>
<pre><code>interact_pd = pd.DataFrame(index=df_1.index)
df1_columns = [column for column in df_1]
for column in df_2:
col_pd = df_1[df1_columns].multiply(df_2[column], axis="index")
interact_pd = interact_pd.join(col_pd, lsuffix='_' + column)
</code></pre>
<p>I iterate over each column in df_2 and multiply all of df_1 by that column, then I append the result to interact_pd. I would rather not do it using a for loop however, as this is very computationally costly. Is there a faster way of doing it?</p>
<p>EDIT: example </p>
<p>df_1: </p>
<pre><code>1col_1, 1col_2, 1col_3
0 1 0
1 0 0
0 0 1
</code></pre>
<p>df_2: </p>
<pre><code>2col_1, 2col_2
0 1
1 0
0 0
</code></pre>
<p>interact_pd:</p>
<pre><code>1col_1_2col_1, 1col_2_2col_1,1col_3_2col_1, 1col_1_2col_2, 1col_2_2col_2,1col_3_2col_2
0 0 0 0 1 0
1 0 0 0 0 0
0 0 0 0 0 0
</code></pre>
| 4 | 2016-08-16T05:46:03Z | 38,970,257 | <p>You can use numpy.</p>
<p>Consider this example code; I modified the variable names, but <code>Test1()</code> is essentially your code. I didn't bother creating the correct column names in that function, though:</p>
<pre><code>import pandas as pd
import numpy as np
A = [[1,0,1,1],[0,1,1,0],[0,1,0,1]]
B = [[0,0,1,0],[1,0,1,0],[1,1,0,0],[1,0,0,1],[1,0,0,0]]
DA = pd.DataFrame(A).T
DB = pd.DataFrame(B).T
def Test1(DA,DB):
E = pd.DataFrame(index=DA.index)
DAC = [column for column in DA]
for column in DB:
C = DA[DAC].multiply(DB[column], axis="index")
E = E.join(C, lsuffix='_' + str(column))
return E
def Test2(DA,DB):
MA = DA.as_matrix()
MB = DB.as_matrix()
MM = np.zeros((len(MA),len(MA[0])*len(MB[0])))
Col = []
for i in range(len(MB[0])):
for j in range(len(MA[0])):
MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i]
Col.append('1col_'+str(i+1)+'_2col_'+str(j+1))
return pd.DataFrame(MM,dtype=int,columns=Col)
print Test1(DA,DB)
print Test2(DA,DB)
</code></pre>
<p>Output:</p>
<pre><code> 0_1 1_1 2_1 0 1 2 0_3 1_3 2_3 0 1 2 0 1 2
0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0
1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0
2 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0
1col_1_2col_1 1col_1_2col_2 1col_1_2col_3 1col_2_2col_1 1col_2_2col_2 \
0 0 0 0 1 0
1 0 0 0 0 0
2 1 1 0 1 1
3 0 0 0 0 0
1col_2_2col_3 1col_3_2col_1 1col_3_2col_2 1col_3_2col_3 1col_4_2col_1 \
0 0 1 0 0 1
1 0 0 1 1 0
2 0 0 0 0 0
3 0 0 0 0 1
1col_4_2col_2 1col_4_2col_3 1col_5_2col_1 1col_5_2col_2 1col_5_2col_3
0 0 0 1 0 0
1 0 0 0 0 0
2 0 0 0 0 0
3 0 1 0 0 0
</code></pre>
<p>Performance of your function:</p>
<pre><code>%timeit(Test1(DA,DB))
100 loops, best of 3: 11.1 ms per loop
</code></pre>
<p>Performance of my function:</p>
<pre><code>%timeit(Test2(DA,DB))
1000 loops, best of 3: 464 µs per loop
</code></pre>
<p>It's not beautiful, but it's efficient.</p>
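As a further sketch (assuming only NumPy broadcasting; same toy data as above), the inner double loop can be collapsed into a single broadcasted multiply that produces the columns in the same `i*len(MA[0])+j` order:

```python
import numpy as np

A = np.array([[1, 0, 1, 1], [0, 1, 1, 0], [0, 1, 0, 1]]).T  # plays the role of MA
B = np.array([[0, 0, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0],
              [1, 0, 0, 1], [1, 0, 0, 0]]).T                # plays the role of MB

# element [k, i, j] = B[k, i] * A[k, j], so a row-major reshape
# reproduces the loop's column order i * A.shape[1] + j
MM = (B[:, :, None] * A[:, None, :]).reshape(A.shape[0], -1)
print(MM.shape)  # -> (4, 15)
```

This keeps the efficiency while dropping the explicit Python loops; the column-name bookkeeping would still be done separately, as in the function above.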
| 2 | 2016-08-16T08:41:57Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
How to multiply every column of one Pandas Dataframe with every column of another Dataframe efficiently? | 38,967,402 | <p>I'm trying to multiply two pandas dataframes with each other. Specifically, I want to multiply every column with every column of the other df. </p>
<p>The dataframes are one-hot encoded, so they look like this: </p>
<pre><code>col_1, col_2, col_3, ...
0 1 0
1 0 0
0 0 1
...
</code></pre>
<p>I could just iterate through each of the columns using a for loop, but in python that is computationally expensive, and I'm hoping there's an easier way. </p>
<p>One of the dataframes has 500 columns, the other has 100 columns. </p>
<p>This is the fastest version that I've been able to write so far: </p>
<pre><code>interact_pd = pd.DataFrame(index=df_1.index)
df1_columns = [column for column in df_1]
for column in df_2:
col_pd = df_1[df1_columns].multiply(df_2[column], axis="index")
interact_pd = interact_pd.join(col_pd, lsuffix='_' + column)
</code></pre>
<p>I iterate over each column in df_2 and multiply all of df_1 by that column, then I append the result to interact_pd. I would rather not do it using a for loop however, as this is very computationally costly. Is there a faster way of doing it?</p>
<p>EDIT: example </p>
<p>df_1: </p>
<pre><code>1col_1, 1col_2, 1col_3
0 1 0
1 0 0
0 0 1
</code></pre>
<p>df_2: </p>
<pre><code>2col_1, 2col_2
0 1
1 0
0 0
</code></pre>
<p>interact_pd:</p>
<pre><code>1col_1_2col_1, 1col_2_2col_1,1col_3_2col_1, 1col_1_2col_2, 1col_2_2col_2,1col_3_2col_2
0 0 0 0 1 0
1 0 0 0 0 0
0 0 0 0 0 0
</code></pre>
| 4 | 2016-08-16T05:46:03Z | 38,970,625 | <pre><code># use numpy to get a pair of indices that map out every
# combination of columns from df_1 and columns of df_2
pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1)
# use pandas MultiIndex to create a nice MultiIndex for
# the final output
lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns],
names=[df_1.columns.name, df_2.columns.name])
# df_1.values[:, pidx[0]] slices df_1 values for every combination
# like wise with df_2.values[:, pidx[1]]
# finally, I marry up the product of arrays with the MultiIndex
pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]],
columns=lcol)
</code></pre>
<p><a href="http://i.stack.imgur.com/YaMNM.png" rel="nofollow"><img src="http://i.stack.imgur.com/YaMNM.png" alt="enter image description here"></a></p>
<hr>
<h3>Timing</h3>
<p><strong>code</strong></p>
<pre><code>from string import ascii_letters
df_1 = pd.DataFrame(np.random.randint(0, 2, (1000, 26)), columns=list(ascii_letters[:26]))
df_2 = pd.DataFrame(np.random.randint(0, 2, (1000, 52)), columns=list(ascii_letters))
def pir1(df_1, df_2):
pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1)
lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns],
names=[df_1.columns.name, df_2.columns.name])
return pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]],
columns=lcol)
def Test2(DA,DB):
MA = DA.as_matrix()
MB = DB.as_matrix()
MM = np.zeros((len(MA),len(MA[0])*len(MB[0])))
Col = []
for i in range(len(MB[0])):
for j in range(len(MA[0])):
MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i]
Col.append('1col_'+str(i+1)+'_2col_'+str(j+1))
return pd.DataFrame(MM,dtype=int,columns=Col)
</code></pre>
<p><strong>results</strong></p>
<p><a href="http://i.stack.imgur.com/WJ7KH.png" rel="nofollow"><img src="http://i.stack.imgur.com/WJ7KH.png" alt="enter image description here"></a></p>
| 6 | 2016-08-16T08:59:19Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
How to multiply every column of one Pandas Dataframe with every column of another Dataframe efficiently? | 38,967,402 | <p>I'm trying to multiply two pandas dataframes with each other. Specifically, I want to multiply every column with every column of the other df. </p>
<p>The dataframes are one-hot encoded, so they look like this: </p>
<pre><code>col_1, col_2, col_3, ...
0 1 0
1 0 0
0 0 1
...
</code></pre>
<p>I could just iterate through each of the columns using a for loop, but in python that is computationally expensive, and I'm hoping there's an easier way. </p>
<p>One of the dataframes has 500 columns, the other has 100 columns. </p>
<p>This is the fastest version that I've been able to write so far: </p>
<pre><code>interact_pd = pd.DataFrame(index=df_1.index)
df1_columns = [column for column in df_1]
for column in df_2:
col_pd = df_1[df1_columns].multiply(df_2[column], axis="index")
interact_pd = interact_pd.join(col_pd, lsuffix='_' + column)
</code></pre>
<p>I iterate over each column in df_2 and multiply all of df_1 by that column, then I append the result to interact_pd. I would rather not do it using a for loop however, as this is very computationally costly. Is there a faster way of doing it?</p>
<p>EDIT: example </p>
<p>df_1: </p>
<pre><code>1col_1, 1col_2, 1col_3
0 1 0
1 0 0
0 0 1
</code></pre>
<p>df_2: </p>
<pre><code>2col_1, 2col_2
0 1
1 0
0 0
</code></pre>
<p>interact_pd:</p>
<pre><code>1col_1_2col_1, 1col_2_2col_1,1col_3_2col_1, 1col_1_2col_2, 1col_2_2col_2,1col_3_2col_2
0 0 0 0 1 0
1 0 0 0 0 0
0 0 0 0 0 0
</code></pre>
| 4 | 2016-08-16T05:46:03Z | 38,970,709 | <p>You can multiply your first <code>df</code> along the <code>index</code> axis with each column of the second <code>df</code>; this is the <strong><em>fastest method</em></strong> for big datasets (see below):</p>
<pre><code>df = pd.concat([df_1.mul(col[1], axis="index") for col in df_2.iteritems()], axis=1)
# Change the name of the columns
df.columns = ["_".join([i, j]) for j in df_2.columns for i in df_1.columns]
df
1col_1_2col_1 1col_2_2col_1 1col_3_2col_1 1col_1_2col_2 \
0 0 0 0 0
1 1 0 0 0
2 0 0 0 0
1col_2_2col_2 1col_3_2col_2
0 1 0
1 0 0
2 0 0
</code></pre>
<h3>--> See benchmark for comparison with other answers to adapt to your dataset.</h3>
<hr>
<h1>Benchmark</h1>
<h2>Functions:</h2>
<pre><code>def Test2(DA,DB):
MA = DA.as_matrix()
MB = DB.as_matrix()
MM = np.zeros((len(MA),len(MA[0])*len(MB[0])))
Col = []
for i in range(len(MB[0])):
for j in range(len(MA[0])):
MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i]
Col.append('1col_'+str(i+1)+'_2col_'+str(j+1))
return pd.DataFrame(MM,dtype=int,columns=Col)
def Test3(df_1, df_2):
df = pd.concat([df_1.mul(i[1], axis="index") for i in df_2.iteritems()], axis=1)
df.columns = ["_".join([i,j]) for j in df_2.columns for i in df_1.columns]
return df
def Test4(df_1,df_2):
pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1)
lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns],
names=[df_1.columns.name, df_2.columns.name])
return pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]],
columns=lcol)
def jeanrjc_imp(df_1, df_2):
    df = pd.concat([df_1.mul(i[1], axis="index") for i in df_2.iteritems()], axis=1, keys=df_2.columns)
return df
</code></pre>
<h2>Code:</h2>
<p>Sorry, ugly code, the plot at the end matters :</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df_1 = pd.DataFrame(np.random.randint(0, 2, (1000, 600)))
df_2 = pd.DataFrame(np.random.randint(0, 2, (1000, 600)))
df_1.columns = ["1col_"+str(i) for i in range(len(df_1.columns))]
df_2.columns = ["2col_"+str(i) for i in range(len(df_2.columns))]
resa = {}
resb = {}
resc = {}
for f, r in zip([Test2, Test3, Test4, jeanrjc_imp], ["T2", "T3", "T4", "T3bis"]):
resa[r] = []
resb[r] = []
resc[r] = []
for i in [5, 10, 30, 50, 150, 200]:
a = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :10])
b = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :50])
c = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :200])
resa[r].append(a.best)
resb[r].append(b.best)
resc[r].append(c.best)
X = [5, 10, 30, 50, 150, 200]
fig, ax = plt.subplots(1, 3, figsize=[16,5])
for j, (a, r) in enumerate(zip(ax, [resa, resb, resc])):
for i in r:
a.plot(X, r[i], label=i)
a.set_xlabel("df_1 columns #")
a.set_title("df_2 columns # = {}".format(["10", "50", "200"][j]))
ax[0].set_ylabel("time(s)")
plt.legend(loc=0)
plt.tight_layout()
</code></pre>
<p><a href="http://i.stack.imgur.com/P9wls.png" rel="nofollow"><img src="http://i.stack.imgur.com/P9wls.png" alt="Pandas column multiplication"></a></p>
<p>With <code>T3b <=> jeanrjc_imp</code>, which is a bit faster than Test3.</p>
<h2>Conclusion:</h2>
<p>Depending on your dataset size, pick the right function, between Test4 and Test3(b). Given the OP's dataset, <code>Test3</code> or <code>jeanrjc_imp</code> should be the fastest, and also the shortest to write!</p>
<p>HTH</p>
| 4 | 2016-08-16T09:03:53Z | [
"python",
"pandas",
"numpy",
"dataframe"
] |
Margins of matplotlib.pyplot.imshow when used in subplot environment | 38,967,478 | <p>I am trying to plot the values of several arrays in separate plots of a figure using imshow.</p>
<p>When I plot one image only, and use the plt.imshow() command with the correct extents, the figure comes out perfectly.
However, when I try to create multiple plots of this image in the same figure, using plt.subplot(), each of these plots ends up with incorrect x-axis settings, and there are white margins. I tried correcting the x-axis range with the set_xlim() command, but it has no effect (which I also don't understand). </p>
<p>The minimal working sample is below - any help would be appreciated!</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as n
image = n.array([[ 1., 2., 2., 5.],
[ 1., 0., 0., 3.],
[ 1., 2., 0., 2.],
[ 4., 2., 3., 2.]])
xextent, yextent= n.shape(image)
fig, ax = plt.subplots(2,sharex=True, sharey=True)
im0 = ax[0].imshow(image, extent=(0,xextent,yextent,0),interpolation='nearest');
ax[0].set_xlim([0,4])
im1 = ax[1].imshow(image, extent=(0,xextent,yextent,0),interpolation='nearest');
ax[1].set_xlim([0,4])
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/eZkQa.png" rel="nofollow"><img src="http://i.stack.imgur.com/eZkQa.png" alt="enter image description here"></a></p>
| 0 | 2016-08-16T05:53:08Z | 38,967,970 | <p>I believe the reason for the whitespace is the size of the window. You can either change the window size (you'd have to figure out the numbers) or you can adjust the subplot. I found this out by playing with the "configure subplots" button in the image popup.</p>
<pre><code>plt.subplots_adjust(right=0.4)
</code></pre>
<p>With this line the plot will have no whitespace, but still some empty space (which you can fix by adjusting the window size).</p>
| 1 | 2016-08-16T06:28:53Z | [
"python",
"matplotlib",
"subplot",
"imshow"
] |
Margins of matplotlib.pyplot.imshow when used in subplot environment | 38,967,478 | <p>I am trying to plot the values of several arrays in separate plots of a figure using imshow.</p>
<p>When I plot one image only, and use the plt.imshow() command with the correct extents, the figure comes out perfectly.
However, when I try to create multiple plots of this image in the same figure, using plt.subplot(), each of these plots ends up with incorrect x-axis settings, and there are white margins. I tried correcting the x-axis range with the set_xlim() command, but it has no effect (which I also don't understand). </p>
<p>The minimal working sample is below - any help would be appreciated!</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as n
image = n.array([[ 1., 2., 2., 5.],
[ 1., 0., 0., 3.],
[ 1., 2., 0., 2.],
[ 4., 2., 3., 2.]])
xextent, yextent= n.shape(image)
fig, ax = plt.subplots(2,sharex=True, sharey=True)
im0 = ax[0].imshow(image, extent=(0,xextent,yextent,0),interpolation='nearest');
ax[0].set_xlim([0,4])
im1 = ax[1].imshow(image, extent=(0,xextent,yextent,0),interpolation='nearest');
ax[1].set_xlim([0,4])
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/eZkQa.png" rel="nofollow"><img src="http://i.stack.imgur.com/eZkQa.png" alt="enter image description here"></a></p>
| 0 | 2016-08-16T05:53:08Z | 39,013,209 | <p>So the options are:</p>
<ol>
<li><p>Remove the <code>sharex</code>/<code>sharey</code> keywords - seems to clash with imshow in the subplot environment. (Suggested by <a href="http://stackoverflow.com/users/293594/xnx">xnx</a>)</p></li>
<li><p>Use <code>plt.subplots_adjust</code> with appropriate settings, in combination with <code>plt.gcf().tight_layout()</code> (Suggested by <a href="http://stackoverflow.com/users/5528308/mwormser">mwormser</a>)</p></li>
<li><p>Use pcolormesh instead of imshow in the subplot environment.</p></li>
</ol>
| 0 | 2016-08-18T08:22:03Z | [
"python",
"matplotlib",
"subplot",
"imshow"
] |
Retrieve company name with ticker symbol input, yahoo or google API | 38,967,533 | <p>Just looking for a simple api return, where I can input a ticker symbol and receive the full company name:</p>
<p>ticker('MSFT')
will return
"Microsoft"</p>
| 1 | 2016-08-16T05:57:17Z | 38,968,465 | <p>You need to first find a website / API which allows you to look up stock symbols and provides information. Then you can query that API for information. </p>
<p>I came up with a quick and dirty solution here: </p>
<pre><code>import requests
def get_symbol(symbol):
symbol_list = requests.get("http://chstocksearch.herokuapp.com/api/{}".format(symbol)).json()
for x in symbol_list:
if x['symbol'] == symbol:
return x['company']
company = get_symbol("MSFT")
print(company)
</code></pre>
<p>This website only provides company name. I didn't put any error checks. And you need the <code>requests</code> module for it to work. Please install it using <code>pip install requests</code>. </p>
<p><strong>Update:</strong> Here's the code sample using Yahoo! Finance API: </p>
<pre><code>import requests
def get_symbol(symbol):
url = "http://d.yimg.com/autoc.finance.yahoo.com/autoc?query={}&region=1&lang=en".format(symbol)
result = requests.get(url).json()
for x in result['ResultSet']['Result']:
if x['symbol'] == symbol:
return x['name']
company = get_symbol("MSFT")
print(company)
</code></pre>
| 1 | 2016-08-16T07:00:18Z | [
"python",
"json",
"yahoo-finance",
"stock",
"google-finance-api"
] |
Joining two querysets in Django | 38,967,599 | <p>Suppose I have the following models</p>
<pre><code>class Award(models.Model):
user = models.ForeignKey(User)
class AwardReceived(models.Model):
award = models.ForeignKey(award)
date = models.DateField()
units = models.IntegerField()
class AwardUsed(models.Model):
award = models.ForeignKey(award)
date = models.DateField()
units = models.IntegerField()
</code></pre>
<p>Now, suppose I want to get the number of awards for all users and the number of awards used for all users (ie, a queryset containing both). I prefer to do it in one query for each calculation - when I combined it in my code I had some unexpected results. Also, for some of my queries it won't be possible to do it in one query, since the query will get too complex - I'm calculating 8 fields. This is how I solved it so far:</p>
<pre><code>def get_summary(query_date):
    summary = (Award.objects.filter(awardreceived__date__lte=query_date)
               .annotate(awarded=Sum('awardissuedactivity__units_awarded')))
    awards_used = (Award.objects.filter(awardused__date__lte=query_date)
                   .annotate(used=Sum('awardused__date__lte__units')))
award_used_dict = {}
for award in awards_used:
award_used_dict[award] = award.used
for award in summary:
award.used = award_used_dict.get(award, 0)
return summary
</code></pre>
<p>I'm sure there must be a way to solve this without the dictionary approach? For instance, something like this: <code>awards_used.get(award=award)</code>, but this causes a db lookup every loop.</p>
<p>Or some other fancy way to join the querysets?</p>
<p>Note this is a simplified example and I know for this example the DB structure can be improved, I'm just trying to illustrate my question.</p>
| 1 | 2016-08-16T06:01:36Z | 38,967,621 | <p><strong>SOLUTION 1</strong></p>
<p>Just try to concatenate your queryset using <code>|</code> </p>
<pre><code>final_q = q1 | q2
</code></pre>
<p>In your example</p>
<pre><code>final_q = summary | awards_used
</code></pre>
<p>UPDATED:</p>
<p><code>|</code> does not work with calculated attributes, so we can select our querysets first and then map our extra attributes:</p>
<pre><code>summary = Award.objects.filter(awardreceived__date__lte=query_date)
awards_used = Award.objects.filter(awardused__date__lte=query_date)
final_q = summary | awards_used
final_q = final_q.annotate(used=Sum('awardused__date__lte__units')).annotate(awarded=Sum('awardissuedactivity__units_awarded'))
</code></pre>
<p><strong>SOLUTION 2</strong> </p>
<p>Using chain built-in function </p>
<pre><code>from itertools import chain
final_list = list(chain(summary, awards_used))
</code></pre>
<p>There is an issue with this approach: you won't get a queryset, you will get a list containing instances. </p>
| 2 | 2016-08-16T06:03:57Z | [
"python",
"django"
] |
ODO command in python and how to upload missing files from CSV as NULL with simple command | 38,967,666 | <p>I am trying to load data from a CSV file into a MySQL table using odo in Python.</p>
<p>The CSV file contains blank cells. The odo command fails when it encounters blank cells.</p>
<p>How can I use the odo command to load the data and insert a null value by default for missing data?</p>
<p>I'm trying to import a simple CSV file that I downloaded from Quandl into a MySQL table with the odo python package</p>
<pre><code>t = odo(csvpathName)
</code></pre>
<p>The rows look like this in the CSV. The second line has a value missing.</p>
<pre><code>A 7/25/2016 46.49 46.52 45.92 46.14 1719772 0 1 46.49 46.52 45.92 46.14 1719772
B 7/25/2016 46.49 46.52 45.92 1719772 0 1 46.49 46.52 45.92 46.14 1719772
</code></pre>
<p>The MySQL table is defined as follows:</p>
<pre><code>Ticker varchar(255) NOT NULL,
Date varchar(255) NOT NULL,
Open numeric(15,2) NULL,
High numeric(15,2) NULL,
Low numeric(15,2) NULL,
Close numeric(15,2) NULL,
Volume bigint NULL,
ExDividend numeric(15,2),
SplitRatio int NULL,
OpenAdj numeric(15,2) NULL,
HighAdj numeric(15,2) NULL,
LowAdj numeric(15,2) NULL,
CloseAdj numeric(15,2) NULL,
VolumeAdj bigint NULL,
PRIMARY KEY(Ticker,Date)
</code></pre>
<p>It throws an exception 1366 with the following info:</p>
<p>sqlalchemy.exc.InternalError: (pymysql.err.InternalError) (1366, "Incorrect decimal value: '' for column 'High' at row 185") [SQL: 'LOAD DATA INFILE %(path)s\n INTO TABLE <code>QUANDL_DATA_WIKI</code>\n CHARACTER SET %(encoding)s\n FIELDS\n TERMINATED BY %(delimiter)s\n ENCLOSED BY %(quotechar)s\n ESCAPED BY %(escapechar)s\n LINES TERMINATED BY %(lineterminator)s\n IGNORE %(skiprows)s LINES\n '] [parameters: {'quotechar': '"', 'encoding': 'utf8', 'path': 'C:\ProgramData\MySQL\MySQL Server 5.6\Uploads\WIKI_20160725.partial.csv', 'lineterminator': '\n', 'escapechar': '\', 'skiprows': 0, 'delimiter': ','}]</p>
<p>Does anyone know how to configure ODO so I can upload missing values as NULL values with the simple command?</p>
| 0 | 2016-08-16T06:08:00Z | 38,984,224 | <p>If I make all fields varchar(255) then it reads missing fiels as ''. sqlalchemy cannot force a '' from the csv file into another datatype.</p>
<p>Best is to use varchar to purely reads the csv file and then afterwards convert it to the proper formats</p>
| 0 | 2016-08-16T20:49:31Z | [
"python",
"mysql",
null,
"odo"
] |
Recursion seems not working in Python | 38,967,678 | <p>I am writing a piece of code to recursively processing *.py files. The code block is as the following:</p>
<pre><code>class FileProcessor(object):
def convert(self,file_path):
if os.path.isdir(file_path):
""" If the path is a directory,then process it recursively
untill a file is met"""
dir_list=os.listdir(file_path)
print("Now Processing Directory:",file_path)
i=1
for temp_dir in dir_list:
print(i,":",temp_dir)
i=i+1
self.convert(temp_dir)
else:
""" if the path is not a directory"""
""" TODO something meaningful """
if __name__ == '__main__':
tempObj=FileProcessor()
tempObj.convert(sys.argv[1])
</code></pre>
<p>When I run the script with a directory path as argument, it only runs the first layer of the directory, the line:</p>
<pre><code>self.convert(temp_dir)
</code></pre>
<p>seems never get called. I'm using Python 3.5.</p>
| 0 | 2016-08-16T06:09:03Z | 38,967,808 | <p>The recursion is happening fine, but <code>temp_dir</code> is not a directory so it passes control to your stub <code>else</code> block. You can see this if you put <code>print(file_path)</code> outside your <code>if</code> block.</p>
<p><code>temp_dir</code> is the <strong><em>name</em></strong> of the next directory, not its absolute path. <code>"C:/users/adsmith/tmp/folder"</code> becomes just <code>"folder"</code>. Use <code>os.path.abspath</code> to get that</p>
<pre><code>self.convert(os.path.abspath(temp_dir))
</code></pre>
<p>Although the canonical way to do this (as mentioned in my comment on the question) is to use <code>os.walk</code>.</p>
<pre><code>class FileProcessor(object):
def convert(self, file_path):
for root, dirs, files in os.walk(file_path):
# if file_path is C:/users/adsmith, then:
# root == C:/users/adsmith
# dirs is an iterator of each directory in C:/users/adsmith
# files is an iterator of each file in C:/users/adsmith
# this walks on its own, so your next iteration will be
# the next deeper directory in `dirs`
for i, d in enumerate(dirs):
# this is also preferred to setting a counter var and incrementing
print(i, ":", d)
# no need to recurse here since os.walk does that itself
for fname in files:
# do something with the files? I guess?
</code></pre>
| 4 | 2016-08-16T06:17:20Z | [
"python",
"python-3.x",
"recursion"
] |
Recursion seems not working in Python | 38,967,678 | <p>I am writing a piece of code to recursively processing *.py files. The code block is as the following:</p>
<pre><code>class FileProcessor(object):
def convert(self,file_path):
if os.path.isdir(file_path):
""" If the path is a directory,then process it recursively
untill a file is met"""
dir_list=os.listdir(file_path)
print("Now Processing Directory:",file_path)
i=1
for temp_dir in dir_list:
print(i,":",temp_dir)
i=i+1
self.convert(temp_dir)
else:
""" if the path is not a directory"""
""" TODO something meaningful """
if __name__ == '__main__':
tempObj=FileProcessor()
tempObj.convert(sys.argv[1])
</code></pre>
<p>When I run the script with a directory path as argument, it only runs the first layer of the directory, the line:</p>
<pre><code>self.convert(temp_dir)
</code></pre>
<p>seems never get called. I'm using Python 3.5.</p>
| 0 | 2016-08-16T06:09:03Z | 38,968,143 | <p>As <code>temp_dir</code> has the filename only without parent path, you should change</p>
<pre><code>self.convert(temp_dir)
</code></pre>
<p>to</p>
<pre><code>self.convert(os.path.join(file_path, temp_dir))
</code></pre>
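<p>For illustration, a self-contained sketch of the corrected class, exercised on a throwaway directory tree (the <code>seen</code> list is added here just to make the result checkable):</p>

```python
import os
import tempfile

class FileProcessor(object):
    def __init__(self):
        self.seen = []          # record visited file paths for illustration

    def convert(self, file_path):
        if os.path.isdir(file_path):
            for temp_dir in os.listdir(file_path):
                # join parent and child so the recursive call gets a valid path
                self.convert(os.path.join(file_path, temp_dir))
        else:
            self.seen.append(file_path)

base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "pkg"))
open(os.path.join(base, "top.py"), "w").close()
open(os.path.join(base, "pkg", "mod.py"), "w").close()

fp = FileProcessor()
fp.convert(base)
print(sorted(os.path.basename(p) for p in fp.seen))  # ['mod.py', 'top.py']
```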
| 0 | 2016-08-16T06:40:22Z | [
"python",
"python-3.x",
"recursion"
] |
pymongo: global name 'ReturnDocument' is not defined | 38,967,706 | <p>I'm trying to write a python program that finds and updates a document in mongodb:</p>
<pre><code>db.collection.find_one_and_update({"Machine": "24", "Available": True},
{"$set": {"Overview.Available": False}},
projection= {"_id": 0, "Machine": 1, "Available": 1},
return_document= ReturnDocument.AFTER)
</code></pre>
<p>But I'm getting the following error message (apparently I'm not using return_document correctly but all pymongo documentation says I am)</p>
<blockquote>
<p>NameError: global name 'ReturnDocument' is not defined</p>
</blockquote>
| 0 | 2016-08-16T06:11:03Z | 38,967,956 | <p>You need to import the <code>ReturnDocument</code> class first. Add this to the top of your script: </p>
<pre><code>from pymongo.collection import ReturnDocument
</code></pre>
<p>Detailed docs: <a href="http://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.ReturnDocument" rel="nofollow">http://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.ReturnDocument</a> </p>
| 0 | 2016-08-16T06:27:46Z | [
"python",
"mongodb",
"pymongo",
"nameerror"
] |
Non-Image Bar Codes for Django | 38,967,729 | <p>I was wondering if there's a library that I can use to create bar codes in Django using just HTML + CSS rather than producing the bar codes as images (like in <strong>reportlab</strong> & <strong>pybarcode</strong>). I'm reluctant to use images because I'm creating many bar codes on the same page and I feel images could be a little slow. </p>
<p>P.S <em>This technique has been used by <a href="https://github.com/milon/barcode" rel="nofollow">dinesh/barcode</a> as a laravel library in php</em> </p>
| -1 | 2016-08-16T06:12:03Z | 38,970,441 | <p>I have decided to use <a href="http://www.jqueryscript.net/other/Simple-jQuery-Based-Barcode-Generator-Barcode.html" rel="nofollow">jquery-barcode</a> which is completely works at client end </p>
| 0 | 2016-08-16T08:50:17Z | [
"python",
"django",
"barcode"
] |
Redirection login cookie or pass in data via POST? | 38,967,833 | <p>Iâm trying to use Pythonâs requests library to automatically get my grades from a university website. The URL is <a href="https://acorn.utoronto.ca/sws/transcript/academic/main.do?main.dispatch" rel="nofollow">https://acorn.utoronto.ca/sws/transcript/academic/main.do?main.dispatch</a>, but there are several redirects. I have the following simple code but it doesn't seem to be doing what I want.</p>
<pre><code>import requests
payload = {"user" : "username", "pass" : "password"}
r = requests.post("https://acorn.utoronto.ca/sws/transcript/academic/main.do?main.dispatch", data= payload)
print(r.text)
</code></pre>
<p>The output is as follows:</p>
<pre><code>C:\Users\johnp\AppData\Local\Programs\Python\Python35-32\python.exe
C:/Users/johnp/Desktop/git_stuff/16AugRequests/acorn_requests.py <html> <head> </head> <body onLoad="document.relay.submit()"> <form method=post action="https://weblogin.utoronto.ca/" name=relay> <input type=hidden name=pubcookie_g_req value="b25lPWlkcC51dG9yYXV0aC51dG9yb250by5jYSZ0d289Q0lNRl9TaGliYm9sZXRoX1BpbG90JnRocmVlPTEmZm91cj1hNWEmZml2ZT1HRVQmc2l4PWlkcC51dG9yYXV0aC51dG9yb250by5jYSZzZXZlbj1MMmxrY0M5QmRYUm9iaTlTWlcxdmRHVlZjMlZ5Um05eVkyVkJkWFJvJmVpZ2h0PSZob3N0bmFtZT1pZHAudXRvcmF1dGgudXRvcm9udG8uY2EmbmluZT0xJmZpbGU9JnJlZmVyZXI9KG51bGwpJnNlc3NfcmU9NSZwcmVfc2Vzc190b2s9LTczODQ3MDk2OCZmbGFnPTA=">
</code></pre>
<p> You do not have Javascript turned on, please click the button to continue. </p>
<p>Am I going about this the right way? I feel like I should be trying to pass a cookie instead, but how would I get the cookie?<br>
Thanks in advance. </p>
<p>Edit: this is the stuff I get from Firefox:
<a href="http://i.stack.imgur.com/uIHWl.png" rel="nofollow">Network tab</a></p>
<p>Does this mean I need to fill out the entire form as parameters in the request?</p>
| 2 | 2016-08-16T06:19:17Z | 38,971,901 | <p>You can try logging in and then getting whatever page you want; there is more data to be posted, which you can get with <em>bs4</em>:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = "https://weblogin.utoronto.ca/"
with requests.Session() as s:
s.headers.update({"User-Agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"})
    soup = BeautifulSoup(s.get(url).content, "html.parser")
data = {inp["name"]: inp["value"] for inp in soup.select("#query input[value]")}
data["user"] = "username"
data["pass"] = "password"
post = s.post(url, data=data)
    print(post)
print(post.content)
protect = s.get("protected_page")
</code></pre>
<p>If we run the code and just print the data dict, you can see bs4 populates the required fields:</p>
<pre><code>In [14]: with requests.Session() as s:
....: s.headers.update({"User-Agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"})
....: soup = BeautifulSoup(s.get(url).content,"html.parser")
....: data = {inp["name"]:inp["value"] for inp in soup.select("#query input[value]")}
....: data["user"] = "username"
....: data["pass"] = "password"
....: print(data)
....:
{'seven': '/index.cgi', 'sess_re': '0', 'pre_sess_tok': '0', 'pass': 'password', 'four': 'a5', 'user': 'username', 'reply': '1', 'two': 'pinit', 'hostname': '', 'three': '1', 'pinit': '1', 'relay_url': '', 'nine': 'PInit', 'create_ts': '1471341718', 'referer': '', 'six': 'weblogin.utoronto.ca', 'first_kiss': '1471341718-777129', 'flag': '', 'five': '', 'post_stuff': '', 'creds_from_greq': '1', 'fr': '', 'eight': '', 'one': 'weblogin.utoronto.ca', 'file': ''}
</code></pre>
| 0 | 2016-08-16T10:00:19Z | [
"python",
"html",
"web",
"python-requests"
] |
How to send data from C# to Python | 38,967,979 | <p>How can I send this list to a list in a Python script? It is too big to send as arguments. Thank you. </p>
<pre><code> List<String> cefList= new List<String>();
for(int i=0; i<1000; i++){
cefList.Add("CEF:0|ArcSight|ArcSight|6.0.3.6664.0|agent:030|Agent [test] type [testalertng] started|Low|
eventId=1 mrt=1396328238973 categorySignificance=/Normal categoryBehavior=/Execute/Start
categoryDeviceGroup=/Application catdt=Security Mangement categoryOutcome=/Success
categoryObject=/Host/Application/Service art=1396328241038 cat=/Agent/Started
deviceSeverity=Warning rt=1396328238937 fileType=Agent
cs2=<Resource ID\="3DxKlG0UBABCAA0cXXAZIwA\=\="/> c6a4=fe80:0:0:0:495d:cc3c:db1a:de71
cs2Label=Configuration Resource c6a4Label=Agent
IPv6 Address ahost=SKEELES10 agt=888.99.100.1 agentZoneURI=/All Zones/ArcSight
System/Private Address Space
Zones/RFC1918: 888.99.0.0-888.200.255.255 av=6.0.3.6664.0 atz=Australia/Sydney
aid=3DxKlG0UBABCAA0cXXAZIwA\=\= at=testalertng dvchost=SKEELES10 dvc=888.99.100.1
deviceZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918:
888.99.0.0-888.200.255.255 dtz=Australia/Sydney _cefVer=0.1");
}
</code></pre>
| 0 | 2016-08-16T06:29:24Z | 38,968,056 | <p>You need to serialize the data to a common format that is accessible from both C# and Python. For example - XML or JSON. I would recommend using JSON. </p>
<p>Then you have several options: </p>
<ul>
<li>Use sockets to transfer the data. </li>
<li>Use http to transfer the data. </li>
<li>Write to a file from C# and read that file from Python</li>
</ul>
<p>Sockets would probably be the fastest. Using HTTP might be easier. With files, you will need to have some sort of scheduling or notification system to let your Python program know when you have written to the file. </p>
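<p>For the JSON route, the Python side is only a few lines. A hedged sketch (the strings below stand in for the real CEF lines, and the C# side is assumed to have produced the JSON with some serializer, e.g. Json.NET):</p>

```python
import json

# What the C# side would write out after serializing cefList to JSON
serialized = json.dumps(["CEF:0|ArcSight|ArcSight|event one",
                         "CEF:0|ArcSight|ArcSight|event two"])

# Python side: one call turns the JSON text back into a list of strings
cef_list = json.loads(serialized)
print(len(cef_list), cef_list[0].split("|")[0])  # 2 CEF:0
```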
| 0 | 2016-08-16T06:34:51Z | [
"c#",
"python"
] |
How to send data from C# to Python | 38,967,979 | <p>How can I send this list to a list in a Python script? It is too big to send as arguments. Thank you. </p>
<pre><code> List<String> cefList= new List<String>();
for(int i=0; i<1000; i++){
cefList.Add("CEF:0|ArcSight|ArcSight|6.0.3.6664.0|agent:030|Agent [test] type [testalertng] started|Low|
eventId=1 mrt=1396328238973 categorySignificance=/Normal categoryBehavior=/Execute/Start
categoryDeviceGroup=/Application catdt=Security Mangement categoryOutcome=/Success
categoryObject=/Host/Application/Service art=1396328241038 cat=/Agent/Started
deviceSeverity=Warning rt=1396328238937 fileType=Agent
cs2=<Resource ID\="3DxKlG0UBABCAA0cXXAZIwA\=\="/> c6a4=fe80:0:0:0:495d:cc3c:db1a:de71
cs2Label=Configuration Resource c6a4Label=Agent
IPv6 Address ahost=SKEELES10 agt=888.99.100.1 agentZoneURI=/All Zones/ArcSight
System/Private Address Space
Zones/RFC1918: 888.99.0.0-888.200.255.255 av=6.0.3.6664.0 atz=Australia/Sydney
aid=3DxKlG0UBABCAA0cXXAZIwA\=\= at=testalertng dvchost=SKEELES10 dvc=888.99.100.1
deviceZoneURI=/All Zones/ArcSight System/Private Address Space Zones/RFC1918:
888.99.0.0-888.200.255.255 dtz=Australia/Sydney _cefVer=0.1");
}
</code></pre>
| 0 | 2016-08-16T06:29:24Z | 38,968,841 | <p>Since your C# program runs the python script, I guess the easiest solution would be to redirect the standard input of the python process:</p>
<pre><code> Process pyProc = Process.Start(
new ProcessStartInfo("python.exe", @"/path/to/the/script.py")
{
RedirectStandardInput = true,
UseShellExecute = false
}
);
for (int ii = 0; ii < 100; ++ii)
{
pyProc.StandardInput.WriteLine(string.Format("this is message # {0}", ii));
}
</code></pre>
<p>At the Python script side, you just need to use the built-in function <a href="https://docs.python.org/2/library/functions.html#raw_input" rel="nofollow">raw_input</a> like below (please note the function has been renamed to <code>input</code> in 3.x):</p>
<pre><code>while True:
data = raw_input()
print(data)
</code></pre>
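<p>A self-contained sketch of the same stdin-driven loop, with a Python parent standing in for the C# process (in Python 3 the child would read <code>sys.stdin</code> or use <code>input()</code>):</p>

```python
import subprocess
import sys

# Child script: echoes back every line it receives on stdin
child_code = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    print('got: ' + line.strip())\n"
)

# Parent (the C# program in the real setup) writes lines to the child's stdin
proc = subprocess.run(
    [sys.executable, "-c", child_code],
    input="message # 0\nmessage # 1\n",
    capture_output=True, text=True,
)
print(proc.stdout, end="")  # got: message # 0 / got: message # 1
```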
| 1 | 2016-08-16T07:23:01Z | [
"c#",
"python"
] |
How do you call C++ and\or Java functions from Python 2.7? | 38,968,007 | <p>I am creating a Windows program that so far has a .bat file calling a .pyw file, and I need functions from Java and C++. How can I do this?(I don't mind creating a new batch or python file, and I already have the header file for the C++ section and a .jar file for my java components. (For Java I use Eclipse Java Mars, and it's Java 8u101)) Thanks!!!</p>
| 2 | 2016-08-16T06:30:51Z | 38,968,074 | <p>This is rather simple for C++: you have to compile a library with you function, import it in Python and... call it! Python has a powerful standard library <a href="https://docs.python.org/2/library/ctypes.html" rel="nofollow">ctypes</a> to handle this kind of tasks. </p>
<p>Here is an example of loading <code>print()</code> function from hypothetical <code>libc.dll</code>.</p>
<pre><code>from ctypes import *
libc = cdll.LoadLibrary("libc.dll")
>>> print(libc.time(None))
1150640792
</code></pre>
<p>Calling Java from Python is covered here: <a href="http://stackoverflow.com/questions/10707671/how-to-call-a-java-function-from-python-numpy">How to call a java function from python/numpy?</a></p>
| 1 | 2016-08-16T06:35:38Z | [
"java",
"python",
"c++",
"python-2.7",
"batch-file"
] |
How do you call C++ and\or Java functions from Python 2.7? | 38,968,007 | <p>I am creating a Windows program that so far has a .bat file calling a .pyw file, and I need functions from Java and C++. How can I do this?(I don't mind creating a new batch or python file, and I already have the header file for the C++ section and a .jar file for my java components. (For Java I use Eclipse Java Mars, and it's Java 8u101)) Thanks!!!</p>
| 2 | 2016-08-16T06:30:51Z | 38,968,156 | <p>You can load C++ function and execute it from Python like BasicWolf explained in his answer. For Java, Jython might be a good approach. But then there's a problem - you will need to be dependent on Jython which is not up to date with the latest versions of Python. You will face compatibility issues with different libraries too. </p>
<p>I would recommend compiling your C++ and Java functions to create individual binaries out of them. Then execute these binaries from within Python, passing the arguments as command line parameters. This way you can keep using CPython. You can interoperate with programs written in any language. </p>
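<p>A sketch of that approach, with Python invoking an external executable and reading its stdout (here a second Python process stands in for the compiled C++/Java binary):</p>

```python
import subprocess
import sys

# Stand-in for a compiled binary: doubles the number given as argv[1]
stub = "import sys; print(int(sys.argv[1]) * 2)"

# In the real case this would be ["./my_cpp_binary", "21"] or ["java", "-jar", ...]
result = subprocess.run(
    [sys.executable, "-c", stub, "21"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # 42
```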
| 0 | 2016-08-16T06:41:22Z | [
"java",
"python",
"c++",
"python-2.7",
"batch-file"
] |
Python : Regex capturing generic for 3 cases. | 38,968,187 | <p>Hi, can anyone help me improve my non-working regular expression?</p>
<p><strong><em>Strings Cases:</em></strong></p>
<blockquote>
<p>1) 120 lbs and is intended for riders ages <em>8 years and up</em>. <strong>#catch : 8 years and up</strong></p>
<p>2) 56w x 28d x 32h inches recommended for hobbyists recommended for ages <em>12 and up</em>. <strong>#catch : 12 and up</strong></p>
<p>3) 4 users recorded speech for effective use language tutor pod measures 11l x 9w x 5h inches recommended for ages <em>6 and above</em>. <strong>#catch : 6 and above</strong></p>
</blockquote>
<p>I want a <strong>generic regular expression</strong> which works perfectly for all three strings. </p>
<blockquote>
<p>My regular expression is : </p>
<blockquote>
<p><strong>\b\d+[\w+\s]*?(?:\ban[a-z]*\sup\b|\ban[a-z]*\sabove\b|\ban[a-z]*\sold[a-z]*\b|\b&\sup)</strong></p>
</blockquote>
</blockquote>
<p>But it is not working quite well. Could anyone provide a <strong><em>generic regular expression</em></strong> which works <strong><em>for all 3 cases</em></strong>? <strong>I am using python re.findall()</strong></p>
<p>Can anyone help?</p>
| -1 | 2016-08-16T06:43:08Z | 38,968,471 | <p>Make it a habit and start with verbose regular expressions:</p>
<pre><code>import re
rx = re.compile(r'''
ages\ # look for ages
(\d+(?:\ years)?\ and\ (?:above|up)) # capture a digit, years eventually
# and one of above or up
''', re.VERBOSE)
string = '''
1) 120 lbs and is intended for riders ages 8 years and up. #catch : 8 years and up
2) 56w x 28d x 32h inches recommended for hobbyists recommended for ages 12 and up. #catch : 12 and up
3) 4 users recorded speech for effective use language tutor pod measures 11l x 9w x 5h inches recommended for ages 6 and above. #catch : 6 and above
'''
matches = rx.findall(string)
print(matches)
# ['8 years and up', '12 and up', '6 and above']
</code></pre>
<p><hr>
See <a href="http://ideone.com/vsSu4w" rel="nofollow"><strong>a demo on ideone.com</strong></a> as well as on <a href="https://regex101.com/r/eK6bI0/1" rel="nofollow"><strong>regex101.com</strong></a>.</p>
| 2 | 2016-08-16T07:00:48Z | [
"python",
"regex",
"findall"
] |
Python : Regex capturing generic for 3 cases. | 38,968,187 | <p>Hi, can anyone help me improve my non-working regular expression?</p>
<p><strong><em>Strings Cases:</em></strong></p>
<blockquote>
<p>1) 120 lbs and is intended for riders ages <em>8 years and up</em>. <strong>#catch : 8 years and up</strong></p>
<p>2) 56w x 28d x 32h inches recommended for hobbyists recommended for ages <em>12 and up</em>. <strong>#catch : 12 and up</strong></p>
<p>3) 4 users recorded speech for effective use language tutor pod measures 11l x 9w x 5h inches recommended for ages <em>6 and above</em>. <strong>#catch : 6 and above</strong></p>
</blockquote>
<p>I want a <strong>generic regular expression</strong> which works perfectly for all three strings. </p>
<blockquote>
<p>My regular expression is : </p>
<blockquote>
<p><strong>\b\d+[\w+\s]*?(?:\ban[a-z]*\sup\b|\ban[a-z]*\sabove\b|\ban[a-z]*\sold[a-z]*\b|\b&\sup)</strong></p>
</blockquote>
</blockquote>
<p>But it is not working quite well. Could anyone provide a <strong><em>generic regular expression</em></strong> which works <strong><em>for all 3 cases</em></strong>? <strong>I am using python re.findall()</strong></p>
<p>Can anyone help?</p>
| -1 | 2016-08-16T06:43:08Z | 38,973,995 | <p>(As the suggestion I made in a comment appears to have been what you wanted, I offer it as an answer.)</p>
<p>If your examples illustrate all possible strings (but I fear they don't ;) you could do it as simple as</p>
<pre><code>\d+[^\d]*$
</code></pre>
<p><a href="https://regex101.com/r/gX5cC4/2" rel="nofollow">See it here at regex101</a>.</p>
<p>It matches the last number, and everything after it.</p>
<p>Or a little bit more sophisticated - making sure it's preceded by age - <a href="https://regex101.com/r/gX5cC4/3" rel="nofollow">here</a></p>
| 0 | 2016-08-16T11:41:53Z | [
"python",
"regex",
"findall"
] |
How to use doc2vec with phrases? | 38,968,353 | <p>I want to use phrases in doc2vec, and I use gensim.phrases. In doc2vec we need tagged documents to train the model, and I cannot tag the phrases. How can I do this?</p>
<p>Here is my code:</p>
<pre><code>text = phrases.Phrases(text)
for i in range(len(text)):
string1 = "SENT_" + str(i)
sentence = doc2vec.LabeledSentence(tags=string1, words=text[i])
text[i]=sentence
print "Training model..."
model = Doc2Vec(text, workers=num_workers, \
size=num_features, min_count = min_word_count, \
window = context, sample = downsampling)
</code></pre>
| 0 | 2016-08-16T06:53:20Z | 38,985,945 | <p>The invocation of <code>Phrases()</code> trains a phrase-creating-model. You later use that model on text to get back phrase-combined text. </p>
<p>Don't replace your original <code>text</code> with the trained model, as on your code's first line. Also, don't try to assign into the Phrases model, as happens in your current loop, nor access the Phrases model by integers.</p>
<p>The <a href="https://radimrehurek.com/gensim/models/phrases.html" rel="nofollow">gensim docs for the Phrases class</a> has examples of the proper use of the <code>Phrases</code> class; if you follow that pattern you'll do well. </p>
<p>Further, note that <code>LabeledSentence</code> has been replaced by <code>TaggedDocument</code>, and its <code>tags</code> argument should be a list-of-tags. If you provide a string, it will see that as a list-of-one-character tags (instead of the one tag you intend). </p>
| 0 | 2016-08-16T23:24:42Z | [
"python",
"nlp",
"gensim",
"phrases",
"doc2vec"
] |
How to list the pre-installed Python packages on IBM's Spark service | 38,968,367 | <p>In a Python notebook, I can execute <code>!pip freeze</code> to get a list of installed packages. But the result is an empty list, or shows only a few packages that I installed myself. Until a few weeks ago, the command would return a list of all the packages, including those pre-installed by IBM. How can I get the full list now?</p>
| 0 | 2016-08-16T06:54:08Z | 38,968,368 | <p><code>!PIP_USER= pip freeze</code></p>
<p>IBM sets the environment variable PIP_USER to enable the <code>--user</code> option by default. That's because many users forgot to specify that option for <code>pip install</code>. Unfortunately, this also enables the option for <code>pip freeze</code>, where it might not be desired. Therefore, you have to override the default option to get the full list of installed packages.</p>
<p>Alternative ways to ignore default options from environment variables:</p>
<ul>
<li><code>!pip freeze --isolated</code></li>
<li><code>!env -i pip freeze</code></li>
</ul>
| 3 | 2016-08-16T06:54:08Z | [
"python",
"apache-spark",
"ibm-bluemix"
] |
Python: Changing one object's data changes the other | 38,968,393 | <p>I apologize ahead of time if my explanation is not coherent enough. I am new to python and coding. I'm currently trying to create a Pokemon program that will allow 2 users to battle each other. <code>moveList</code> is a global list that contains all the possible moves in the game.
<code>x</code> is a file that stores which indexes correspond to which pokemon's moves. <code>pokeData.loadMoves</code> loads a preexisting move array inside each pokemon with 4 <code>move</code> objects. When I first add Arcanine's 4 moves, it is fine. However, when I add Eevee's moves, it rewrites over Arcanine's moves. I've spent about 40 minutes already trying to remedy this to no avail.</p>
<p>separate objects:</p>
<pre><code>P1 = None
P2 = None
loadGame()
P1 = inputPokemon('player1')
P2 = inputPokemon('player2')
</code></pre>
<p>function:</p>
<pre><code>for x in file.readlines():
x = x.split(' ')
if(x[0].strip().lower() == string):
i =int(x[1])-1
j =int(x[2])-1
k =int(x[3])-1
l =int(x[4])-1
if(second == True):
print("Arcanine's moves before loading eevee")
for x in range(4):
print(P1.move[x].name)
pokeData.loadMoves(moveList[i],moveList[j],moveList[k],moveList[l])
print("Load %s's moves " %x)
if(second == True):
print("Arcanine's moves after loading eevee")
for x in range(4):
print(P1.move[x].name)
</code></pre>
<p>loadMoves:</p>
<pre><code>move=['','','','']
def loadMoves(self, move1, move2, move3, move4):
self.move[0] = move1
self.move[1] = move2
self.move[2] = move3
self.move[3] = move4
</code></pre>
<p>Output:</p>
<pre><code>player2, please choose your Pokemon: Eevee
Loading moves for eevee
Arcanine's moves before loading eevee
['takedown']
['growl']
['flamethrower']
['confuseray']
Load 3's moves
Arcanine's moves after loading eevee
['watergun']
['doubleteam']
['shockwave']
['tackle']
</code></pre>
| -1 | 2016-08-16T06:56:06Z | 38,968,621 | <p>The culprit is the class variable <code>move</code>. Setting up that list as a class variable means all instances will be pointing to the same list object. Consider the following:</p>
<pre><code>>>> class T:
... move = [] # move defined as class variable
... def update(self, v):
... self.move[0:] = v
...
>>> c = T()
>>> c.move
[]
>>> d = T()
>>> d.update([1,2,3])
>>> c.move
[1, 2, 3] # updates in d are seen in c
>>> c.update([4,5,6])
>>> d.move
[4, 5, 6] # updates in c are equally seen in d
</code></pre>
<hr>
<p>You should consider setting up <code>move</code> on the instance of the class i.e. in the <code>__init__</code> of the class. So the list is bound to each instance and not to the class itself:</p>
<pre><code>class T:
def __init__(self):
self.move = ['','','','']
</code></pre>
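<p>With <code>move</code> created per instance, the two players' Pokemon no longer share state; a minimal sketch (class and method names simplified from the question):</p>

```python
class Pokemon(object):
    def __init__(self):
        # per-instance list: a fresh object for every Pokemon
        self.move = ['', '', '', '']

    def load_moves(self, m1, m2, m3, m4):
        self.move = [m1, m2, m3, m4]

p1 = Pokemon()
p2 = Pokemon()
p1.load_moves('takedown', 'growl', 'flamethrower', 'confuseray')
p2.load_moves('watergun', 'doubleteam', 'shockwave', 'tackle')
print(p1.move[0], p2.move[0])  # takedown watergun
```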
| 0 | 2016-08-16T07:09:10Z | [
"python"
] |
Subtract series value from pandas data frame given multiple index | 38,968,585 | <p>I have a big data frame in Pandas, <em>table A</em>, with structure like below:</p>
<pre>
key1 key2 value1
1 201501 12
2 201502 4
3 201503 3
4 201506 9
5 201507 15
6 201509 nan
</pre>
<p>from <em>table A</em>, colum <em>value1</em>, I want to subtract <em>value2</em> from <em>table B</em> with apperance like below, using <em>key1</em> and <em>key2</em> as joining keys: </p>
<pre>
key1 key2 value2
1 201501 11
3 201503 2
5 201507 14
</pre>
<p>I want the the following in <em>table A</em>:</p>
<pre>
key1 key2 value1
1 201501 1
2 201502 4
3 201503 1
4 201506 9
5 201507 1
6 201509 nan
</pre>
<p>How can I achieve this in a super efficient way? Today I join the two tables together and subtract <em>value2</em> in <code>B</code> from <em>value1</em> in <code>A</code>; my question is whether this can be done in a smarter, more pythonic "look-up" fashion which is more sleek and compact?</p>
<p>Data Frame code below</p>
<pre><code>import pandas as pd
import numpy as np
tableA= pd.DataFrame({'key1':[1,2,3,4,5,6],
'key2':[201501,201502,201503,201506,201507,201509],
'value1':[12,4,3,9,15,np.nan]
})
tableB= pd.DataFrame({'key1':[1,3,5],
'key2':[201501,201503,201507],
'value1':[11,2,14]
})
</code></pre>
| 2 | 2016-08-16T07:07:21Z | 38,968,675 | <p>You can create <code>DataFrames</code> with <code>MultiIndexes</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a>, then substract by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sub.html" rel="nofollow"><code>sub</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow"><code>fillna</code></a> by first <code>DataFrame</code>:</p>
<pre><code>print (tableA.set_index(['key1','key2'])
.sub(tableB.set_index(['key1','key2']))
.fillna(tableA.set_index(['key1','key2']))
.reset_index())
key1 key2 value1
0 1 201501 1.0
1 2 201502 4.0
2 3 201503 1.0
3 4 201506 9.0
4 5 201507 1.0
5 6 201509 NaN
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow"><code>combine_first</code></a>:</p>
<pre><code>print (tableA.set_index(['key1','key2'])
.sub(tableB.set_index(['key1','key2']))
.combine_first(tableA.set_index(['key1','key2']))
.reset_index())
key1 key2 value1
0 1 201501 1.0
1 2 201502 4.0
2 3 201503 1.0
3 4 201506 9.0
4 5 201507 1.0
5 6 201509 NaN
</code></pre>
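<p>The same chain condensed to a toy example (assuming pandas is installed; note how the row missing from <code>tableB</code> keeps its original value and the NaN stays NaN):</p>

```python
import numpy as np
import pandas as pd

A = pd.DataFrame({'key1': [1, 2, 3], 'key2': [201501, 201502, 201503],
                  'value1': [12, 4, np.nan]})
B = pd.DataFrame({'key1': [1, 3], 'key2': [201501, 201503],
                  'value1': [11, 2]})

idx = ['key1', 'key2']
# subtract on the shared MultiIndex, then fall back to A where B had no row
out = (A.set_index(idx)
        .sub(B.set_index(idx))
        .fillna(A.set_index(idx))
        .reset_index())
print(out)  # value1 becomes [1.0, 4.0, NaN]
```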
| 1 | 2016-08-16T07:12:09Z | [
"python",
"pandas",
"join",
"dataframe",
"subtraction"
] |
Subtract series value from pandas data frame given multiple index | 38,968,585 | <p>I have a big data frame in Pandas, <em>table A</em>, with structure like below:</p>
<pre>
key1 key2 value1
1 201501 12
2 201502 4
3 201503 3
4 201506 9
5 201507 15
6 201509 nan
</pre>
<p>from <em>table A</em>, colum <em>value1</em>, I want to subtract <em>value2</em> from <em>table B</em> with apperance like below, using <em>key1</em> and <em>key2</em> as joining keys: </p>
<pre>
key1 key2 value2
1 201501 11
3 201503 2
5 201507 14
</pre>
<p>I want the the following in <em>table A</em>:</p>
<pre>
key1 key2 value1
1 201501 1
2 201502 4
3 201503 1
4 201506 9
5 201507 1
6 201509 nan
</pre>
<p>How can I achieve this in a super efficient way? Today I join the two tables together and subtract <em>value2</em> in <code>B</code> from <em>value1</em> in <code>A</code>; my question is whether this can be done in a smarter, more pythonic "look-up" fashion which is more sleek and compact?</p>
<p>Data Frame code below</p>
<pre><code>import pandas as pd
import numpy as np
tableA= pd.DataFrame({'key1':[1,2,3,4,5,6],
'key2':[201501,201502,201503,201506,201507,201509],
'value1':[12,4,3,9,15,np.nan]
})
tableB= pd.DataFrame({'key1':[1,3,5],
'key2':[201501,201503,201507],
'value1':[11,2,14]
})
</code></pre>
| 2 | 2016-08-16T07:07:21Z | 38,971,858 | <pre><code>tableA.set_index(keys).value1 \
.sub(tableB.set_index(keys).value1, fill_value=0) \
.reset_index()
</code></pre>
<p><a href="http://i.stack.imgur.com/W6CBr.png" rel="nofollow"><img src="http://i.stack.imgur.com/W6CBr.png" alt="enter image description here"></a></p>
| 1 | 2016-08-16T09:58:00Z | [
"python",
"pandas",
"join",
"dataframe",
"subtraction"
] |
Django tables2 external link generation w/ custom parameters | 38,968,790 | <p>I have a music related model with Artist and song Title fields and I'd like to add a column that provides a link to search Amazon's digital music store using the artist and title from the given row on a table using Tables2. Here is what I have but not sure how to add the Amazon column and provide the Artist and Title fields to the Amazon URL?</p>
<p>models.py:</p>
<pre><code>class Artist (models.Model):
name = models.CharField(max_length=100)
class Track (models.Model):
artist = models.ForeignKey(Artist, blank=True, null=True, on_delete=models.SET_NULL, verbose_name="Artist")
title = models.CharField(max_length=100, verbose_name="Title")
</code></pre>
<p>tables.py:</p>
<pre><code>class amazonColumn(tables.Column):
def render(self, value):
return mark_safe('https://www.amazon.com/gp/search?ie=UTF8&index=digital-music&keywords={{artist}}-{{title}}', value) # not sure how to pass artist and title records
class TrackTable(tables.Table):
amazon = amazonColumn()
class Meta:
model = Track
attrs = {"class": "paleblue"}
fields = ('artist', 'title', 'amazon')
</code></pre>
| 0 | 2016-08-16T07:19:48Z | 38,974,129 | <p>I would use <a href="https://docs.djangoproject.com/en/1.10/ref/utils/#django.utils.html.format_html" rel="nofollow"><code>format_html</code></a>. </p>
<p>Furthermore, you'll need to add <code>record</code> as a parameter to the render function, which allows you to access its other attributes:</p>
<pre><code>class AmazonColumn(tables.Column):
amazon_url = '<a href="https://www.amazon.com/gp/search?ie=UTF8&index=digital-music&keywords={artist}-{title}">Amazon</a>'
def render(self, record):
return format_html(self.amazon_url, artist=record.artist.name, title=record.title)
</code></pre>
<p>You might have to set <code>empty_values=()</code> where you instantiate the <code>AmazonColumn</code> in your table.</p>
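<p>Independent of django-tables2, the artist and title should also be URL-encoded so spaces and special characters don't break the link; a small sketch using only the standard library (URL layout taken from the question):</p>

```python
from urllib.parse import urlencode

def amazon_search_url(artist, title):
    # urlencode quotes spaces and special characters for us
    query = urlencode({'ie': 'UTF8', 'index': 'digital-music',
                       'keywords': '{}-{}'.format(artist, title)})
    return 'https://www.amazon.com/gp/search?' + query

print(amazon_search_url('Daft Punk', 'Get Lucky'))
```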
| 1 | 2016-08-16T11:48:42Z | [
"python",
"django",
"django-tables2"
] |
pd.to_datetime or parse datetimes won't work with my csv file (format: dd/mm/yyyy, hh:mm:ss) | 38,968,811 | <p>I extracted the following table from my csv file </p>
<pre><code>Date,Time,CO2(ppm),CellTemp(c),CellPres(kPa)
10/08/2016,13:21:11,356.89,51.07,99.91
10/08/2016,13:21:12,356.89,51.07,99.91
10/08/2016,13:21:13,356.83,51.07,99.91
</code></pre>
<p>I researched the last couple of days and tried different things to make pandas read the <code>Date</code> and <code>Time</code> columns as <code>datetime</code>, but I just can't make it. Here are some of the things I tried:</p>
<pre><code>df = pd.read_csv(myfile)
print(df.dtypes)
</code></pre>
<p>I get:</p>
<pre class="lang-none prettyprint-override"><code>Date object
Time object
CO2(ppm) object
CellTemp(c) object
CellPres(kPa) object
dtype: object
</code></pre>
<p>When I try:</p>
<pre><code>df_2 = pd.read_csv(file, parse_dates=[['Date', 'Time']])
print(df_2.dtypes)
</code></pre>
<p>I get</p>
<pre class="lang-none prettyprint-override"><code>Date_Time object
CO2(ppm) object
CellTemp(c) object
CellPres(kPa) object
dtype: object
</code></pre>
<p>So, now <code>Date</code> and <code>Time</code> are in one column (<code>11/08/2016 14:06:18</code>) (what I want), but not recognized as <code>datetime</code>. </p>
<p>When I then try:</p>
<pre><code>pd.to_datetime(df_2['Date_Time'], format='%d/%m/%Y %H:%M:%S)
</code></pre>
<p>I get the error message: </p>
<pre><code>File "<ipython-input-31-ace4ed1a0aa9>", line 1
pd.to_datetime(df_2['Date_Time'],format='%d/%m/%Y %H:%M:%S
SyntaxError: EOL while scanning string literal
</code></pre>
<p>When I try:</p>
<pre><code>import dateutil.parser
dateutil.parser.parse(df_2['Date_Time'])
</code></pre>
<p>I get (besides some other output) the error message:</p>
<pre><code>AttributeError: 'Series' object has no attribute 'read'
</code></pre>
<p>I also manually changed the date format to <code>yyyy-mm-dd</code> in Excel and tried the same things without any better result. I think it must be a very basic mistake I am making; I am new to scripting and would appreciate any help. Please excuse any formatting errors in my question - I really tried.</p>
| 2 | 2016-08-16T07:20:50Z | 38,968,905 | <p>It looks like there is an invalid datetime, or some value that cannot be converted to datetime, so you can add the parameter <code>errors='coerce'</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>to_datetime</code></a> to convert such values to <code>NaT</code>:</p>
<pre><code>#31.11. does not exist
print (df_2)
Date_Time CO2(ppm) CellTemp(c) CellPres(kPa)
0 10/08/2016 13:21:11 356.89 51.07 99.91
1 10/08/2016 13:21:12 356.89 51.07 99.91
2 31/11/2016 13:21:13 356.83 51.07 99.91
df_2['Date_Time'] = pd.to_datetime(df_2['Date_Time'],
format='%d/%m/%Y %H:%M:%S',
errors='coerce')
print (df_2)
Date_Time CO2(ppm) CellTemp(c) CellPres(kPa)
0 2016-08-10 13:21:11 356.89 51.07 99.91
1 2016-08-10 13:21:12 356.89 51.07 99.91
2 NaT 356.83 51.07 99.91
</code></pre>
<p>You can also check all problematic values by:</p>
<pre><code>print (df_2[pd.to_datetime(df_2['Date_Time'],format='%d/%m/%Y %H:%M:%S', errors='coerce').isnull()])
Date_Time CO2(ppm) CellTemp(c) CellPres(kPa)
2 31/11/2016 13:21:13 356.83 51.07 99.91
</code></pre>
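<p>The same <code>errors='coerce'</code> behaviour on a tiny Series, stripped of the CSV reading (pandas assumed installed):</p>

```python
import pandas as pd

# 31/11 does not exist, so it cannot be parsed
s = pd.Series(['10/08/2016 13:21:11', '31/11/2016 13:21:13'])
parsed = pd.to_datetime(s, format='%d/%m/%Y %H:%M:%S', errors='coerce')
print(parsed.tolist())  # valid row -> Timestamp, invalid row -> NaT
```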
| 3 | 2016-08-16T07:26:34Z | [
"python",
"csv",
"datetime",
"pandas"
] |
Read array of preprocessed length with lldb | 38,968,902 | <p>I have this array in my C code </p>
<pre><code>uint8_t a[LENGTH];
</code></pre>
<p>I want to read the elements of the array with an lldb Python script, but I don't know the <code>LENGTH</code> value when I'm debugging.
What is the solution?</p>
| 0 | 2016-08-16T07:26:27Z | 38,969,473 | <p>you can make it if use a temporary variables</p>
<pre><code>uint8_t a[LENGTH];
uint64_t tmp_len;
tmp_len = LENGTH;
</code></pre>
| 0 | 2016-08-16T07:58:20Z | [
"python",
"c",
"xcode",
"debugging",
"lldb"
] |
Python: How to change variable in http post request | 38,968,943 | <pre><code>data = {'Email':'myemail@gmail.com','Name':'1','Password':'gfgf65jh56456jh67'}
r = requests.post(url, data=json.dumps(data), headers=headers)
</code></pre>
<p>Hello,
I am using the above code to send an HTTP request and it worked well. However, I want to use a for loop to change the "Name" variable. So, this is my code after the change:</p>
<pre><code>for i in range(1,1000):
data = "{'Email':'myemail@gmail.com','Name':'" + str(i) + "','Password':'gfgf65jh56456jh67'}"
r = requests.post(url, data=json.dumps(data), headers=headers)
</code></pre>
<p>However, I got this output from the server: {'Message': 'An error has occurred.'}. It is not a Python error. So, how do I fix my code? Thank you :)</p>
| 0 | 2016-08-16T07:29:04Z | 38,969,045 | <p>In the line that changes <code>data</code>, <code>data</code> is not a dictionary any more but a string, so <code>json.dumps</code> serialises it as one long JSON string rather than as a JSON object.</p>
<p>Change data simply by:</p>
<pre><code>data['Name'] = str(i)
</code></pre>
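<p>A corrected sketch of the loop: keep <code>data</code> a dict and let <code>json.dumps</code> do the serialising (the actual <code>requests.post</code> call is commented out so the example is self-contained):</p>

```python
import json

base = {'Email': 'myemail@gmail.com', 'Password': 'gfgf65jh56456jh67'}

payloads = []
for i in range(1, 4):                  # 1..1000 in the real script
    data = dict(base, Name=str(i))     # a fresh dict, not a string
    payloads.append(json.dumps(data))
    # r = requests.post(url, data=payloads[-1], headers=headers)
```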
| 0 | 2016-08-16T07:33:47Z | [
"python",
"python-3.x",
"http",
"post",
"https"
] |
How to find and leave only doubles in list python? | 38,969,024 | <p>How to find only doubles in list? My version of the algorithm</p>
<pre><code>import collections
a = [1,2,3,4,5,2,4,5]
b = []
for x,y in collections.Counter(a).items():
if y>1:
b.append(x)
print(b) # [2, 4, 5]
c = []
for item in a:
if item in b:
c.append(item)
print(c) # [2, 4, 5, 2, 4, 5]
</code></pre>
<p>need find result such as c</p>
<p>code defects:</p>
<ol>
<li>three list (a,b,c), one collections (dict)</li>
<li>long code</li>
</ol>
<p>I need to keep the duplicated values in the list, for example: x = [1,2,2,2,3,4,5,6,6,7], I need [2,2,2,6,6], not [2,6]</p>
| 5 | 2016-08-16T07:32:54Z | 38,969,122 | <pre><code>from collections import Counter
a = [1, 2, 3, 4, 5, 2, 4, 5]
counts = Counter(a)
print([num for num in a if counts[num] > 1])
</code></pre>
| 13 | 2016-08-16T07:37:59Z | [
"python"
] |
How to find and leave only doubles in list python? | 38,969,024 | <p>How to find only doubles in list? My version of the algorithm</p>
<pre><code>import collections
a = [1,2,3,4,5,2,4,5]
b = []
for x,y in collections.Counter(a).items():
if y>1:
b.append(x)
print(b) # [2, 4, 5]
c = []
for item in a:
if item in b:
c.append(item)
print(c) # [2, 4, 5, 2, 4, 5]
</code></pre>
<p>need find result such as c</p>
<p>code defects:</p>
<ol>
<li>three list (a,b,c), one collections (dict)</li>
<li>long code</li>
</ol>
<p>I need to keep the duplicated values in the list, for example: x = [1,2,2,2,3,4,5,6,6,7], I need [2,2,2,6,6], not [2,6]</p>
| 5 | 2016-08-16T07:32:54Z | 38,969,124 | <p>Not the most efficient way, but very concise:</p>
<pre><code>a = [1,2,3,4,5,2,4,5]
b = [x for x in a if a.count(x) > 1]
print(b)
</code></pre>
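<p>Note that <code>a.count(x)</code> rescans the whole list for every element, making this O(n²). For long lists, counting once up front gives the same result in linear time:</p>

```python
from collections import Counter

a = [1, 2, 3, 4, 5, 2, 4, 5]
counts = Counter(a)                      # one O(n) counting pass
b = [x for x in a if counts[x] > 1]      # keeps original order
```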
| 5 | 2016-08-16T07:38:05Z | [
"python"
] |
How to find and leave only doubles in list python? | 38,969,024 | <p>How to find only doubles in list? My version of the algorithm</p>
<pre><code>import collections
a = [1,2,3,4,5,2,4,5]
b = []
for x,y in collections.Counter(a).items():
if y>1:
b.append(x)
print(b) # [2, 4, 5]
c = []
for item in a:
if item in b:
c.append(item)
print(c) # [2, 4, 5, 2, 4, 5]
</code></pre>
<p>need find result such as c</p>
<p>code defects:</p>
<ol>
<li>three list (a,b,c), one collections (dict)</li>
<li>long code</li>
</ol>
<p>I need to keep the duplicated values in the list, for example: x = [1,2,2,2,3,4,5,6,6,7], I need [2,2,2,6,6], not [2,6]</p>
| 5 | 2016-08-16T07:32:54Z | 38,969,202 | <p>@Karin almost had it I think, but end result will not be a set.</p>
<pre><code>from collections import Counter
a = [1, 2, 3, 4, 5, 2, 4, 5]
counts = Counter(a)
print({k for k, v in counts.items() if v >= 2})
</code></pre>
<p>EDIT: Ahh, "leave only doubles" </p>
<pre><code>print([x for x in a if counts[x] >= 2])
</code></pre>
<p>EDIT2: Per the OP's comment clarification, this keeps any value that occurs twice or more. </p>
| 1 | 2016-08-16T07:43:03Z | [
"python"
] |
speed up slicing dicing of customers mysql | 38,969,135 | <p>Currently we are using AWS RDS (MySQL) + pandas. We have order, customer, product tables and so on. Getting customers and running campaigns based on various filters (18 filters in total) on those customers takes too much time. The orders table alone is on the order of millions of rows. So, to speed things up, we started doing a PoC with Elasticsearch, since our filters involve many text searches, e.g. "product name", "vendor name", etc.</p>
<p>The problems we are facing are:
1) Filtering on the AOV bucket (average order value), with the relevant document details included
2) Filtering on order count
3) Filtering on first_order_date and last_order_date</p>
<p>Our document structure is </p>
<pre><code>{
"order_id":"6",
"customer_id":"1",
"customer_name":"shailendra",
"mailing_addres":"shailendra@gmail.com",
"actual_order_date":"2000-04-30",
"is_veg":"0",
"total_amount":"2499",
"store_id":"276",
"city_id":"12",
"payment_mode":"cod",
"is_elite":"0",
"product":["1","2"],
"coupon_id":"",
"client_source":"1",
"vendor_id":"",
"vendor_name: "",
"brand_id":"",
"third_party_source":""
}
</code></pre>
<p>This is the query:</p>
<pre><code>{
"aggs": {
"customer_ids":{
"terms":{
"field":"customer_id"
}
}
}
}
</code></pre>
<p>It returns results as:</p>
<pre><code>{
"took": 13,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 8,
"max_score": 1,
"hits": [
{
"_index": "customers4",
"_type": "details",
"_id": "5",
"_score": 1,
"_source": {
"order_id": "5",
"customer_id": "5",
"customer_name": "ashish",
"mailing_addres": "ashish@gmail.com",
"actual_order_date": "2016-05-30",
"is_veg": "1",
"total_amount": "300",
"store_id": "2",
"city_id": "",
"payment_mode": "cod",
"is_elite": "0",
"product": [
"1",
"2"
],
"coupon_id": "",
"client_source": "1",
"vendor_id": "",
"brand_id": "",
"third_party_source": ""
}
},
{
"_index": "customers4",
"_type": "details",
"_id": "8",
"_score": 1,
"_source": {
"order_id": "8",
"customer_id": "2",
"customer_name": "nikhil",
"mailing_addres": "nikhil@gmail.com",
"actual_order_date": "2016-05-30",
"is_veg": "0",
"total_amount": "249",
"store_id": "2",
"city_id": "",
"payment_mode": "cod",
"is_elite": "0",
"product": [
"1",
"2"
],
"coupon_id": "",
"client_source": "1",
"vendor_id": "",
"brand_id": "",
"third_party_source": ""
}
},
{
"_index": "customers4",
"_type": "details",
"_id": "2",
"_score": 1,
"_source": {
"order_id": "2",
"customer_id": "2",
"customer_name": "nikhil",
"mailing_addres": "nikhil.01@gmail.com",
"actual_order_date": "2016-01-30",
"is_veg": "1",
"total_amount": "255",
"store_id": "1",
"city_id": "",
"payment_mode": "cod",
"is_elite": "0",
"product": [
"1",
"2",
"3"
],
"coupon_id": "",
"client_source": "1",
"vendor_id": "",
"brand_id": "",
"third_party_source": ""
}
},
{
"_index": "customers4",
"_type": "details",
"_id": "4",
"_score": 1,
"_source": {
"order_id": "4",
"customer_id": "4",
"customer_name": "vivek",
"mailing_addres": "vivek@gmail.com",
"actual_order_date": "2016-04-30",
"is_veg": "0",
"total_amount": "249",
"store_id": "2",
"city_id": "",
"payment_mode": "cod",
"is_elite": "0",
"product": [
"1",
"2"
],
"coupon_id": "",
"client_source": "1",
"vendor_id": "",
"brand_id": "",
"third_party_source": ""
}
},
{
"_index": "customers4",
"_type": "details",
"_id": "6",
"_score": 1,
"_source": {
"order_id": "7",
"customer_id": "1",
"customer_name": "shailendra",
"mailing_addres": "shailendra07121@gmail.com",
"actual_order_date": "2016-05-30",
"is_veg": "0",
"total_amount": "249",
"store_id": "2",
"city_id": "",
"payment_mode": "cod",
"is_elite": "0",
"product": [
"1",
"2"
],
"coupon_id": "",
"client_source": "1",
"vendor_id": "",
"brand_id": "",
"third_party_source": ""
}
},
{
"_index": "customers4",
"_type": "details",
"_id": "1",
"_score": 1,
"_source": {
"order_id": "1",
"customer_id": "1",
"customer_name": "shailendra",
"mailing_addres": "shailendra07121@gmail.com",
"actual_order_date": "2016-01-30",
"is_veg": "1",
"total_amount": "251",
"store_id": "1",
"city_id": "",
"payment_mode": "cod",
"is_elite": "0",
"product": [
"1",
"2",
"3"
],
"coupon_id": "",
"client_source": "1",
"vendor_id": "",
"brand_id": "",
"third_party_source": ""
}
},
{
"_index": "customers4",
"_type": "details",
"_id": "7",
"_score": 1,
"_source": {
"order_id": "6",
"customer_id": "4",
"customer_name": "vivek",
"mailing_addres": "vivek@gmail.com",
"actual_order_date": "2016-05-30",
"is_veg": "0",
"total_amount": "249",
"store_id": "2",
"city_id": "",
"payment_mode": "cod",
"is_elite": "0",
"product": [
"1",
"2"
],
"coupon_id": "",
"client_source": "1",
"vendor_id": "",
"brand_id": "",
"third_party_source": ""
}
},
{
"_index": "customers4",
"_type": "details",
"_id": "3",
"_score": 1,
"_source": {
"order_id": "3",
"customer_id": "3",
"customer_name": "manish",
"mailing_addres": "manish@gmail.com",
"actual_order_date": "2016-03-30",
"is_veg": "0",
"total_amount": "249",
"store_id": "2",
"city_id": "",
"payment_mode": "cod",
"is_elite": "0",
"product": [
"1",
"2"
],
"coupon_id": "",
"client_source": "1",
"vendor_id": "",
"brand_id": "",
"third_party_source": ""
}
}
]
},
"aggregations": {
"customer_ids": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "1",
"doc_count": 2
},
{
"key": "2",
"doc_count": 2
},
{
"key": "4",
"doc_count": 2
},
{
"key": "3",
"doc_count": 1
},
{
"key": "5",
"doc_count": 1
}
]
}
}
}
</code></pre>
<p>Here, as you can see, only the doc count is returned. We want all the fields of the documents along with the doc count.</p>
| 1 | 2016-08-16T07:38:31Z | 38,969,632 | <p>You can use the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-top-hits-aggregation.html" rel="nofollow"><code>top_hits</code> aggregation</a> to retrieve the documents for each customer bucket. </p>
<pre><code>{
"aggs": {
"customer_ids":{
"terms":{
"field":"customer_id"
},
"aggs": {
"docs": {
"top_hits": {
"size": 3
}
}
}
}
}
}
</code></pre>
| 0 | 2016-08-16T08:06:20Z | [
"python",
"mysql",
"pandas",
"elasticsearch"
] |
Python look for variable in JSON | 38,969,179 | <p>I'm expecting a JSON msg to be parsed with Python along these lines to come in through MQTT:</p>
<pre><code>{"OPTION1": "0", "OPTION2": "50", "OPTION3": "0", "OPTION4": "0"}
</code></pre>
<p>Depending on the circumstances, these options may or may not be parsed through Python into the JSON msg, and as such, it may end up looking as:</p>
<pre><code>{"OPTION1": "0", "OPTION3": "0", "OPTION4": "0"}
</code></pre>
<p>And thus skipping OPTION2 and its value entirely. </p>
<p>To avoid my script borking out on me, I was thinking of checking whether the option is there first, before using it, like so:</p>
<pre><code> if data['OPTION1']:
>do something here<
else:
continue
</code></pre>
<p>However, this doesn't seem to work, it comes up with:</p>
<pre><code> File "listen-mqtt.py", line 28
continue
SyntaxError: 'continue' not properly in loop
</code></pre>
<p>Any help would be really appreciated! Thanks.</p>
| -1 | 2016-08-16T07:42:18Z | 38,969,345 | <p>In an if/else block you want <strong>pass</strong>; <strong>continue</strong> only works inside loops:</p>
<pre><code> if data['OPTION1']:
>do something here<
else:
pass
</code></pre>
<p><code>continue</code> is used with loops. Alternatively, you can iterate over the keys:</p>
<pre><code>for dataItem in data:
if "OPTION2" == dataItem:
pass
else:
>do something<
for dataItem in data:
if "OPTION2" == dataItem:
continue
>do something<
</code></pre>
| 2 | 2016-08-16T07:51:01Z | [
"python",
"json"
] |
Python look for variable in JSON | 38,969,179 | <p>I'm expecting a JSON msg to be parsed with Python along these lines to come in through MQTT:</p>
<pre><code>{"OPTION1": "0", "OPTION2": "50", "OPTION3": "0", "OPTION4": "0"}
</code></pre>
<p>Depending on the circumstances, these options may or may not be parsed through Python into the JSON msg, and as such, it may end up looking as:</p>
<pre><code>{"OPTION1": "0", "OPTION3": "0", "OPTION4": "0"}
</code></pre>
<p>And thus skipping OPTION2 and it's value entirely. </p>
<p>To avoid my script borking out on my, I was thinking of scanning if the option is there first, before setting it, like so:</p>
<pre><code> if data['OPTION1']:
>do something here<
else:
continue
</code></pre>
<p>However, this doesn't seem to work, it comes up with:</p>
<pre><code> File "listen-mqtt.py", line 28
continue
SyntaxError: 'continue' not properly in loop
</code></pre>
<p>Any help would be really appreciated! Thanks.</p>
| -1 | 2016-08-16T07:42:18Z | 38,969,356 | <p><code>continue</code> is used with loops, you might need <code>pass</code>ehere. Also, you can use <code>in</code> to check if a key is available in a dictionary:</p>
<pre><code>if 'OPTION1' in data:
# do something
else:
pass
</code></pre>
<p><strong>But I don't think, that is what you want!</strong> You want to have your default values and fill in the blanks if the key is not available in <code>data</code>:</p>
<pre><code>defaults = {"OPTION1": "0", "OPTION2": "50", "OPTION3": "0", "OPTION4": "0"}
defaults.update(data)  # dict.update mutates in place and returns None
finalData = defaults
</code></pre>
<p>Find out more <a href="http://stackoverflow.com/a/26853961/2295964">here</a>.</p>
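<p>A small self-contained sketch of the fill-in-the-blanks idea (remember that <code>dict.update</code> mutates in place and returns <code>None</code>, hence the explicit copy):</p>

```python
data = {"OPTION1": "0", "OPTION3": "0", "OPTION4": "0"}   # OPTION2 missing
defaults = {"OPTION1": "0", "OPTION2": "50", "OPTION3": "0", "OPTION4": "0"}

finalData = dict(defaults)   # copy so the defaults template is not mutated
finalData.update(data)       # incoming values win where present
```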
| 0 | 2016-08-16T07:51:28Z | [
"python",
"json"
] |
Extract complete URL from href using Python | 38,969,205 | <p>I am doing a project on web crawling for which I need to find all links within a given web page. Till now I was using <code>urljoin</code> in <code>urllib.parse</code>. But now I found that some links are not properly joined using the <code>urljoin</code> function. </p>
<p>For e.g. the <code><a></code> tag might be something like <code><a href="a.xml?value=basketball">A</a></code>. The complete address however might be <code>http://www.example.org/main/test/a.xml?value=basketball</code>, but the <code>urljoin</code> function will give wrong results ( something like <code>http://www.example.com/a.xml?value=basketball</code>). </p>
<p>Code which I am using:</p>
<pre><code>parentUrl = urlQueue.get()
html = get_page_source(parentUrl)
bSoup = BeautifulSoup(html, 'html.parser')
aTags = bSoup.find_all('a', href=True)
for aTag in aTags:
childUrl = aTag.get('href')
# just to check if the url is complete or not(for .com only)
if '.com' not in childUrl:
# this urljoin is giving invalid resultsas mentioned above
childUrl = urljoin(parentUrl, childUrl)
</code></pre>
<p>Is there any way through which I can correctly join two URLs, including these cases ?</p>
| 0 | 2016-08-16T07:43:14Z | 38,974,999 | <p>Just some tweaks to get this working. In your case pass base URI with trailing slash. Everything you will need to accomplish this is written in the <a href="https://docs.python.org/2/library/urlparse.html" rel="nofollow">docs of urlparse</a></p>
<pre><code>>>> import urlparse
>>> urlparse.urljoin('http://www.example.org/main/test','a.xml?value=basketball')
'http://www.example.org/main/a.xml?value=basketball'
>>> urlparse.urljoin('http://www.example.org/main/test/','a.xml?value=basketball')
'http://www.example.org/main/test/a.xml?value=basketball'
</code></pre>
<p>BTW: this is a perfect use case to factor out the code for building URLs into a separate function. Then write some unit tests to verify its working as expected and even works with your edge cases. Afterwards use it in your web crawler code.</p>
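<p>A sketch of such a helper, under the assumption that parent URLs are directory-style (if a parent could end in a file name, appending the slash would be wrong):</p>

```python
from urllib.parse import urljoin  # Python 3; on Python 2 use urlparse.urljoin


def absolutize(parent_url, href):
    # keep the parent's last path segment by ensuring a trailing slash
    if not parent_url.endswith('/'):
        parent_url += '/'
    return urljoin(parent_url, href)
```

Absolute hrefs are left untouched by <code>urljoin</code>, so the same helper covers both cases.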
| 1 | 2016-08-16T12:29:14Z | [
"python",
"web-crawler",
"urllib"
] |
Using more that 1 argument in glob() | 38,969,240 | <p>I'm trying to recursively search files ending with .png or .jpg in a folder named 'dataset' using the glob(). Here is the code snippet:</p>
<pre><code> for imagePath in glob.glob(args["dataset"] + "/*.png"):
</code></pre>
<p>I'm setting the image ID using the imagePath inside the loop. How can I search using 2 arguments? I know ',' doesn't work, as glob() accepts exactly one argument.</p>
| 0 | 2016-08-16T07:44:43Z | 38,969,895 | <p>One way could be to loop over the patterns and collect the matching files into one list:</p>
<pre><code>import glob

image_types = ('*.jpg', '*.png')
image_files = []
for pattern in image_types:
    image_files.extend(glob.glob("dataset/" + pattern))
image_files  # list of .jpg and .png files
</code></pre>
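<p>An equivalent self-contained sketch (it creates a throwaway directory so the patterns have something to match; in your case the directory would simply be <code>dataset</code>):</p>

```python
import glob
import itertools
import os
import tempfile

dataset = tempfile.mkdtemp()                        # stand-in for "dataset"
for fname in ("a.png", "b.jpg", "c.txt"):           # illustrative files
    open(os.path.join(dataset, fname), "w").close()

patterns = ("*.png", "*.jpg")
image_files = sorted(itertools.chain.from_iterable(
    glob.glob(os.path.join(dataset, p)) for p in patterns))
```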
| 0 | 2016-08-16T08:22:05Z | [
"python",
"python-2.7",
"recursion",
"arguments",
"glob"
] |
python pandas selecting columns from a dataframe via a list of column names | 38,969,267 | <p>I have a dataframe with a lot of columns in it. Now I want to select only certain columns. I have saved all the names of the columns that I want to select into a Python list and now I want to filter my dataframe according to this list. </p>
<p>I've been trying to do:</p>
<pre><code>df_new = df[[list]]
</code></pre>
<p>where list includes all the column names that I want to select.</p>
<p>However I get the error:</p>
<pre><code>TypeError: unhashable type: 'list'
</code></pre>
<p>Any help on this one?</p>
| 0 | 2016-08-16T07:46:28Z | 38,969,308 | <p>You can remove one <code>[]</code>:</p>
<pre><code>df_new = df[list]
</code></pre>
<p>It is also better to use a name other than <code>list</code> (which shadows the built-in), e.g. <code>L</code>:</p>
<pre><code>df_new = df[L]
</code></pre>
<p>Your filtering code looks like it works; it can be simplified a little:</p>
<pre><code>L = []
for x in df.columns:
if not "_" in x[-3:]:
L.append(x)
print (L)
</code></pre>
<hr>
<p><code>List comprehension</code>:</p>
<pre><code>print ([x for x in df.columns if not "_" in x[-3:]])
</code></pre>
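<p>Putting it together on a tiny frame (the column names here are illustrative):</p>

```python
import pandas as pd

df = pd.DataFrame({'name': [1], 'price': [2], 'vendor_id': [3]})
L = [x for x in df.columns if '_' not in x[-3:]]   # drops columns like *_id
df_new = df[L]
```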
| 1 | 2016-08-16T07:48:11Z | [
"python",
"pandas",
"dataframe"
] |
Prize distribution algorithm to handle ties | 38,969,303 | <p>I'm trying to come up with a prize distribution algorithm that can scalably handle different numbers of players and factor in ties, i.e. cases where contestants fall in the same position.</p>
<p>This is what I have so far:</p>
<pre><code>Distribution Formula
P = ((1-d)/(1-d^n)) * d^(p-1) * A
Where:
P Prize
n Number of winners
A Total amount of money to share
d Distribution constant, 0 < d < 1
p Position or rank of the user
</code></pre>
<p>Modeling it in Excel I get the following results:</p>
<pre><code>Constants
A 50000
d 0.4
n 15
</code></pre>
<p><strong>Sample data</strong></p>
<p><strong>Distribution without ties</strong></p>
<pre><code>Position (p) Player Prize (P)
1 A 30000.03221
2 B 12000.01288
3 C 4800.005154
4 D 1920.002062
5 E 768.0008246
6 F 307.2003299
7 C 122.8801319
8 D 49.15205278
9 E 19.66082111
10 F 7.864328444
11 C 3.145731378
12 D 1.258292551
13 E 0.5033170204
14 F 0.2013268082
15 C 0.08053072327
**Total 50000**
</code></pre>
<p><strong>Distribution with ties</strong></p>
<pre><code>Position (p) Player Prize (P)
1 A 30000.03221
1 B 30000.03221
2 C 12000.01288
3 D 4800.005154
4 E 1920.002062
4 F 1920.002062
5 C 768.0008246
6 D 307.2003299
7 E 122.8801319
8 F 49.15205278
9 C 19.66082111
10 D 7.864328444
11 E 3.145731378
12 F 1.258292551
13 C 0.5033170204
**Total 81919.75242**
</code></pre>
<p><strong>Problem</strong></p>
<p>Note: in my second data set, with ties, the total of distributed prizes is more than the 50000 that I wanted to share.</p>
<p><strong>Desired results</strong></p>
<p>Users falling in the same position should get an equal amount and have the prizes well distributed to the other users. The total amount paid out should not be more than what was intended.</p>
<p>How can I improve the above function to achieve the above results?</p>
| -1 | 2016-08-16T07:47:54Z | 38,971,607 | <ol>
<li>Let <code>MaxT</code> (default value 1) be the maximum tie count across all tied positions </li>
<li><p>Choose <code>d <= 1/MaxT</code> </p>
<p><strong>UPDATE</strong>: For example:</p>
<blockquote>
<pre><code>1 A 30000.03221 |
1 B 30000.03221 | Tie count T1 = 2
2 C 12000.01288
3 D 4800.005154
4 E 1920.002062 |
4 F 1920.002062 | Tie count T2 = 2
</code></pre>
<p>maxT = max of {T1, T2, .. Tn} = max {2, 2} = 2</p>
</blockquote></li>
<li><p>Calculate prize money once for each unique position </p></li>
<li>For tied positions, just divide the prize money calculated in step #2 by no. of ties for the position (Tn) (Example: For position 1: 30000/2.0)</li>
</ol>
<p>This scheme makes sure that the total is A and that each position's per-player prize is less than that of the position above it, independent of ties.</p>
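<p>A sketch of the scheme in code — assuming <code>n</code> in the formula is taken to be the number of <em>unique</em> positions, so the geometric weights sum exactly to <code>A</code>:</p>

```python
def distribute_prizes(tie_counts, total, d):
    """tie_counts: players tied at each unique rank, e.g. [2, 1, 1, 2].
    Returns the per-player prize for each unique rank."""
    m = len(tie_counts)
    scale = (1 - d) / (1 - d ** m) * total
    pools = [scale * d ** p for p in range(m)]            # step 3: one pool per rank
    return [pools[p] / tie_counts[p] for p in range(m)]   # step 4: split among ties


ties = [2, 1, 1, 2]            # max tie count is 2, so any d <= 1/2 is safe
shares = distribute_prizes(ties, total=50000, d=0.4)
paid = sum(s * t for s, t in zip(shares, ties))
```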
| 1 | 2016-08-16T09:46:19Z | [
"python",
"algorithm",
"math",
"statistics",
"game-engine"
] |
Python - Get the value which Satisfies OR Truth Table out of two values | 38,969,307 | <p>I have two objects which return two values, let's say x1 and x2. I just have to check whether the values exist, and return those that are not None, False, or 0. </p>
<p>This is a simple first try, which works OK:</p>
<pre><code>def get_valid_keys(x1, x2):
""" """
a = []
if x1: a.append(x1)
if x2: a.append(x2)
return a
</code></pre>
<p>But I feel there should be something in Python that would do this more concisely,
like:</p>
<pre><code>>>> x1, x2 = 0, 1
>>> x1 or x2
1 # It should return x2
>>> x1, x2 = 1, 0
>>> x1 or x2
1 # It should return x1
>>> x1, x2 = 0, 0
>>> x1 or x2
0 # It should return None
>>> x1, x2 = 1, 1
>>> x1 or x2
1 # It should return x1, x2
</code></pre>
| 0 | 2016-08-16T07:48:10Z | 38,969,374 | <p>Weed out any falsy values:</p>
<pre><code>filter(lambda x: x, [x1, x2])
</code></pre>
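<p><code>filter(None, ...)</code> does the same thing, and on Python 3 <code>filter</code> returns an iterator, so wrap it in <code>list()</code> if you need a list. The original function then collapses to a one-liner:</p>

```python
def get_valid_keys(*values):
    # keep only truthy values (drops None, False, 0, '', ...)
    return [v for v in values if v]
```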
| 2 | 2016-08-16T07:52:25Z | [
"python"
] |
Fuzzy string matching in Python | 38,969,383 | <p>I have 2 lists of over a million names with slightly different naming conventions. The goal here is to match the records that are similar, with a 95% confidence threshold.</p>
<p>I am aware there are libraries I can leverage, such as the FuzzyWuzzy module in Python.</p>
<p>However, in terms of processing it seems this would take too many resources: comparing every string in one list to every string in the other requires on the order of 1 million times 1 million iterations.</p>
<p>Are there any other more efficient methods for this problem?</p>
<p>UPDATE:</p>
<p>So I created a bucketing function and applied a simple normalization of removing whitespace, symbols and converting the values to lowercase etc...</p>
<pre><code>for n in list(dftest['YM'].unique()):
n = str(n)
frame = dftest['Name'][dftest['YM'] == n]
print len(frame)
print n
for names in tqdm(frame):
closest = process.extractOne(names,frame)
</code></pre>
<p>Using Python's pandas, the data is loaded into smaller buckets grouped by year-month, and then <code>process.extractOne</code> from the FuzzyWuzzy module is used to get the best match.</p>
<p>Results are still somewhat disappointing: in testing, the code above runs on a test data frame containing only 5 thousand names and takes almost a whole hour.</p>
<p>The test data is split up by.</p>
<ul>
<li>Name</li>
<li>Year Month of Date of Birth</li>
</ul>
<p>And I am comparing them by buckets where their YMs are in the same bucket.</p>
<p>Could the problem be because of the FuzzyWuzzy module I am using? Appreciate any help.</p>
| 5 | 2016-08-16T07:52:41Z | 38,969,585 | <p>You have to index, or normalize the strings to avoid the O(n^2) run. Basically, you have to map each string to a normal form, and to build a reverse dictionary with all the words linked to corresponding normal forms.</p>
<p>Let's consider that normal forms of 'world' and 'word' are the same. So, first build a reversed dictionary of <code>Normalized -> [word1, word2, word3],</code> e.g.:</p>
<pre><code>"world" -> Normalized("world")
"word"  -> Normalized("word")    # assume both yield the same normal form
giving:
Normalized("world") -> ["world", "word"]
</code></pre>
<p>There you go - all the items (lists) in the Normalized dict which have more than one value - are the matched words.</p>
<p>The normalization algorithm depends on data i.e. the words. Consider one of the many:</p>
<ul>
<li>Soundex</li>
<li>Metaphone</li>
<li>Double Metaphone</li>
<li>NYSIIS</li>
<li>Caverphone</li>
<li>Cologne Phonetic</li>
<li>MRA codex</li>
</ul>
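<p>A minimal sketch of the reverse-dictionary idea, with a deliberately crude normal form (lowercase, letters only) standing in for Soundex, Metaphone and friends:</p>

```python
from collections import defaultdict
import re


def normalize(name):
    # illustrative normal form; a real system would use a phonetic algorithm
    return re.sub(r'[^a-z]', '', name.lower())


names = ["Andrew H. Smith", "andrew h smith", "Jane Doe", "JANE DOE", "Solo Entry"]
buckets = defaultdict(list)
for name in names:                      # a single O(n) pass, no pairwise compare
    buckets[normalize(name)].append(name)

matches = {k: v for k, v in buckets.items() if len(v) > 1}
```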
| 4 | 2016-08-16T08:04:12Z | [
"python",
"algorithm",
"fuzzy-search",
"fuzzywuzzy"
] |
Fuzzy string matching in Python | 38,969,383 | <p>I have 2 lists of over a million names with slightly different naming conventions. The goal here is to match the records that are similar, with a 95% confidence threshold.</p>
<p>I am aware there are libraries I can leverage, such as the FuzzyWuzzy module in Python.</p>
<p>However, in terms of processing it seems this would take too many resources: comparing every string in one list to every string in the other requires on the order of 1 million times 1 million iterations.</p>
<p>Are there any other more efficient methods for this problem?</p>
<p>UPDATE:</p>
<p>So I created a bucketing function and applied a simple normalization of removing whitespace, symbols and converting the values to lowercase etc...</p>
<pre><code>for n in list(dftest['YM'].unique()):
n = str(n)
frame = dftest['Name'][dftest['YM'] == n]
print len(frame)
print n
for names in tqdm(frame):
closest = process.extractOne(names,frame)
</code></pre>
<p>Using Python's pandas, the data is loaded into smaller buckets grouped by year-month, and then <code>process.extractOne</code> from the FuzzyWuzzy module is used to get the best match.</p>
<p>Results are still somewhat disappointing: in testing, the code above runs on a test data frame containing only 5 thousand names and takes almost a whole hour.</p>
<p>The test data is split up by.</p>
<ul>
<li>Name</li>
<li>Year Month of Date of Birth</li>
</ul>
<p>And I am comparing them by buckets where their YMs are in the same bucket.</p>
<p>Could the problem be because of the FuzzyWuzzy module I am using? Appreciate any help.</p>
| 5 | 2016-08-16T07:52:41Z | 38,972,464 | <p>There are several level of optimizations possible here to turn this problem from O(n^2) to a lesser time complexity.</p>
<ul>
<li><p><strong>Preprocessing</strong>: Sort your list in a first pass, creating an output map for each string; the key for the map can be the normalized string.
Normalizations may include:</p>
<ul>
<li>lowercase conversion,</li>
<li>no whitespaces, special characters removal,</li>
<li>transform unicode to ascii equivalents if possible,use <a href="https://docs.python.org/2/library/unicodedata.html#unicodedata.normalize" rel="nofollow">unicodedata.normalize</a> or <a href="https://pypi.python.org/pypi/Unidecode" rel="nofollow">unidecode</a> module )</li>
</ul>
<p>This would result in <code>"Andrew H Smith"</code>, <code>"andrew h. smith"</code>, <code>"ándréw h. smith"</code> generating same key <code>"andrewhsmith"</code>, and would reduce your set of million names to a smaller set of unique/similar grouped names.</p></li>
</ul>
<p>You can use this <a href="https://github.com/dhruvpathak/misc-python-utils/blob/master/helpers.py#L145" rel="nofollow">utility method</a> to normalize your string (it does not include the unicode part, though):</p>
<pre><code>import re

def process_str_for_similarity_cmp(input_str, normalized=False, ignore_list=[]):
""" Processes string for similarity comparisons , cleans special characters and extra whitespaces
if normalized is True and removes the substrings which are in ignore_list)
Args:
input_str (str) : input string to be processed
normalized (bool) : if True , method removes special characters and extra whitespace from string,
and converts to lowercase
ignore_list (list) : the substrings which need to be removed from the input string
Returns:
str : returns processed string
"""
for ignore_str in ignore_list:
input_str = re.sub(r'{0}'.format(ignore_str), "", input_str, flags=re.IGNORECASE)
if normalized is True:
input_str = input_str.strip().lower()
#clean special chars and extra whitespace
input_str = re.sub("\W", "", input_str).strip()
return input_str
</code></pre>
<ul>
<li><p>Now similar strings will already lie in the same bucket if their normalized key is same.</p></li>
<li><p>For further comparison, <strong>you will need to compare the keys only, not the names</strong>. e.g
<code>andrewhsmith</code> and <code>andrewhsmeeth</code>, since this similarity
of names will need fuzzy string matching apart from the normalized
comparison done above.</p></li>
<li><p><strong>Bucketing</strong> : <strong>Do you really need to compare a 5 character key with 9 character key to see if that is 95% match</strong> ? No you do not.
So you can create buckets of matching your strings. e.g. 5 character names will be matched with 4-6 character names, 6 character names with 5-7 characters etc. A n+1,n-1 character limit for a n character key is a reasonably good bucket for most practical matching.</p></li>
<li><p><strong>Beginning match</strong> : Most variations of names will have same first character in the normalized format ( e.g <code>Andrew H Smith</code>, <code>ándréw h. smith</code>, and <code>Andrew H. Smeeth</code> generate keys <code>andrewhsmith</code>,<code>andrewhsmith</code>, and <code>andrewhsmeeth</code> respectively.
They will usually not differ in the first character, so you can run matching for keys starting with <code>a</code> to other keys which start with <code>a</code>, and fall within the length buckets. This would highly reduce your matching time. No need to match a key <code>andrewhsmith</code> to <code>bndrewhsmith</code> as such a name variation with first letter will rarely exist.</p></li>
</ul>
<p>Then you can use something on the lines of this <a href="https://github.com/dhruvpathak/misc-python-utils/blob/master/helpers.py#L115" rel="nofollow">method</a> ( or FuzzyWuzzy module ) to find string similarity percentage, you may exclude one of <a href="https://pypi.python.org/pypi/jellyfish" rel="nofollow">jaro_winkler</a> or difflib to optimize your speed and result quality:</p>
<pre><code>import difflib
import jellyfish

def find_string_similarity(first_str, second_str, normalized=False, ignore_list=[]):
""" Calculates matching ratio between two strings
Args:
first_str (str) : First String
second_str (str) : Second String
normalized (bool) : if True ,method removes special characters and extra whitespace
from strings then calculates matching ratio
ignore_list (list) : list has some characters which has to be substituted with "" in string
Returns:
Float Value : Returns a matching ratio between 1.0 ( most matching ) and 0.0 ( not matching )
using difflib's SequenceMatcher and and jellyfish's jaro_winkler algorithms with
equal weightage to each
Examples:
>>> find_string_similarity("hello world","Hello,World!",normalized=True)
1.0
>>> find_string_similarity("entrepreneurship","entreprenaurship")
0.95625
>>> find_string_similarity("Taj-Mahal","The Taj Mahal",normalized= True,ignore_list=["the","of"])
1.0
"""
first_str = process_str_for_similarity_cmp(first_str, normalized=normalized, ignore_list=ignore_list)
second_str = process_str_for_similarity_cmp(second_str, normalized=normalized, ignore_list=ignore_list)
match_ratio = (difflib.SequenceMatcher(None, first_str, second_str).ratio() + jellyfish.jaro_winkler(unicode(first_str), unicode(second_str)))/2.0
return match_ratio
</code></pre>
| 5 | 2016-08-16T10:26:27Z | [
"python",
"algorithm",
"fuzzy-search",
"fuzzywuzzy"
] |
python: tree = ET.parse('file.xml') use variable (getdata) in place of 'file.xml' | 38,969,404 | <p>this is probably a simple answer.</p>
<p>I can parse and convert data from XML using what's in the title,</p>
<p>however I'm pulling back data from different sources and I don't want to write them all to files before parsing them as I'm going to be combining the data into one file. </p>
<p>So I run an api call to get some info and store it in variable (getdata)</p>
<pre><code> tree = ET.parse(getdata)
</code></pre>
<p>doesn't seem to work.</p>
| 0 | 2016-08-16T07:53:47Z | 38,969,450 | <p>Take a look at the <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xml.etree.ElementTree.fromstring" rel="nofollow"><code>fromstring</code></a> method of ElementTree:</p>
<pre><code>root_element = ET.fromstring(getdata)
</code></pre>
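<p>For example, if <code>getdata</code> holds the raw XML text of an API response (the markup below is illustrative):</p>

```python
import xml.etree.ElementTree as ET

getdata = "<root><item>42</item></root>"   # e.g. the body of an API response
root_element = ET.fromstring(getdata)      # parse from a string, not a file
value = root_element.find("item").text
```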
| 2 | 2016-08-16T07:57:23Z | [
"python",
"xml",
"parsing"
] |
SQLAlchemy bulk_save_objects can't save updated object having a versioning field | 38,969,406 | <p>I'm trying to have a combination of a versioning on my rows, and <code>bulk_save_objects</code>. Here's my code, and it fails when I try to give the function an updated object at the end of the code.</p>
<pre><code>import datetime
import sqlalchemy as sqa
import sqlalchemy.ext
import sqlalchemy.ext.declarative
import sqlalchemy.orm
Base = sqa.ext.declarative.declarative_base()
class Test(Base):
__tablename__ = 'gads_sqlalchemyTest'
id = sqa.Column(sqa.Integer, primary_key = True)
id2 = sqa.Column(sqa.String(50), primary_key = True)
name = sqa.Column(sqa.String(200))
lastUpdated = sqa.Column(sqa.DateTime)
__mapper_args__ = {
'version_id_col': lastUpdated,
'version_id_generator': lambda version: datetime.datetime.now()
}
def __repr__(self):
return('<Test(id: %d, name: %s)>' % (
self.id, self.name))
if __name__ == '__main__':
connection_string = ('mssql+pyodbc://'
'username:password@server:1433/'
'databasename'
'?driver=FreeTDS')
engine = sqa.create_engine(connection_string, echo=True)
Base.metadata.create_all(engine)
Session = sqa.orm.sessionmaker(bind = engine)
session = Session()
objects = []
for i in range(3):
tmp = Test()
tmp.id = i
tmp.id2 = 'SE'
tmp.name = 'name %d' % i
objects.append(tmp)
session.bulk_save_objects(objects)
session.commit()
tmp = session.query(Test).filter(Test.id == 1).one()
tmp.name = 'test'
session.bulk_save_objects([tmp])
session.commit()
</code></pre>
<p>And here's the output:</p>
<pre><code>2016-08-16 09:44:00,710 INFO sqlalchemy.engine.base.Engine
SELECT default_schema_name FROM
sys.database_principals
WHERE principal_id=database_principal_id()
2016-08-16 09:44:00,710 INFO sqlalchemy.engine.base.Engine ()
2016-08-16 09:44:00,729 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2016-08-16 09:44:00,729 INFO sqlalchemy.engine.base.Engine ()
2016-08-16 09:44:00,734 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS NVARCHAR(60)) AS anon_1
2016-08-16 09:44:00,734 INFO sqlalchemy.engine.base.Engine ()
2016-08-16 09:44:00,740 INFO sqlalchemy.engine.base.Engine SELECT [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA], [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME], [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME], [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE], [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE], [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION], [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH], [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION], [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE], [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT], [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS]
WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))
2016-08-16 09:44:00,741 INFO sqlalchemy.engine.base.Engine ('gads_sqlalchemyTest', 'dbo')
2016-08-16 09:44:00,966 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2016-08-16 09:44:00,967 INFO sqlalchemy.engine.base.Engine INSERT INTO [gads_sqlalchemyTest] (id, id2, name, [lastUpdated]) VALUES (?, ?, ?, ?)
2016-08-16 09:44:00,968 INFO sqlalchemy.engine.base.Engine ((0, 'SE', 'as;dkljasdfl;kj 0 1', datetime.datetime(2016, 8, 16, 9, 44, 0, 967306)), (1, 'SE', 'as;dkljasdfl;kj 1 2', datetime.datetime(2016, 8, 16, 9, 44, 0, 967328)), (2, 'SE', 'as;dkljasdfl;kj 2 3', datetime.datetime(2016, 8, 16, 9, 44, 0, 967337)))
2016-08-16 09:44:00,976 INFO sqlalchemy.engine.base.Engine COMMIT
2016-08-16 09:44:00,984 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2016-08-16 09:44:00,986 INFO sqlalchemy.engine.base.Engine SELECT [gads_sqlalchemyTest].id AS [gads_sqlalchemyTest_id], [gads_sqlalchemyTest].id2 AS [gads_sqlalchemyTest_id2], [gads_sqlalchemyTest].name AS [gads_sqlalchemyTest_name], [gads_sqlalchemyTest].[lastUpdated] AS [gads_sqlalchemyTest_lastUpdated]
FROM [gads_sqlalchemyTest]
WHERE [gads_sqlalchemyTest].id = ?
2016-08-16 09:44:00,986 INFO sqlalchemy.engine.base.Engine (1,)
2016-08-16 09:44:00,992 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):
File "tmp.py", line 60, in <module>
session.bulk_save_objects([tmp])
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2264, in bulk_save_objects
return_defaults, update_changed_only, False)
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2428, in _bulk_save_mappings
transaction.rollback(_capture_exception=True)
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2419, in _bulk_save_mappings
isstates, update_changed_only)
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py", line 123, in _bulk_update
bookkeeping=False)
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py", line 642, in _emit_update_statements
lambda rec: (
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py", line 439, in _collect_update_commands
update_version_id in states_to_update:
File "/home/adrin/Projects/venv/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py", line 117, in <genexpr>
for mapping in mappings
KeyError: 'lastUpdated'
</code></pre>
<p>The code runs smoothly if I simply completely remove the <code>lastUpdated</code> field.</p>
<p><strong>EDIT</strong>:
There's a patch to fix the error <a href="https://bitbucket.org/zzzeek/sqlalchemy/issues/3781/implement-version_id-for-bulk_save" rel="nofollow">here</a></p>
| 0 | 2016-08-16T07:54:09Z | 39,168,568 | <p>Bulk operations bypass a lot of SQLAlchemy's functionality in the name of speed. Every bulk operation has a large list of warnings and caveats in the docs. I would not be surprised if the versioning functionality is one of the things bypassed.</p>
| 0 | 2016-08-26T14:23:44Z | [
"python",
"orm",
"sqlalchemy"
] |
Spark 2.0 filter using a UDF after a self-join | 38,969,413 | <p>I need to filter a Spark dataframe using my own User-Defined Function. My dataframe is read from a database using a jdbc connection and then goes through a self-join operation in spark before being filtered. The error occurs when trying to <code>collect</code> the dataframe after the filter.</p>
<p>I have been using this successfully in spark 1.6. However, after upgrading to 2.0 yesterday it fails with the error:</p>
<pre><code>py4j.protocol.Py4JJavaError: An error occurred while calling o400.collectToPython.
: java.lang.UnsupportedOperationException: Cannot evaluate expression:
<lambda>(input[0, string, true])
</code></pre>
<p>Here is a minimal example that produces the error (in my environment):</p>
<pre><code>from pyspark.sql.functions import udf, col
from pyspark.sql.types import BooleanType
spark = SparkSession.builder.master('local').appName('test').getOrCreate()
# this works successfully
df = spark.createDataFrame([('Alice', 1), ('Bob', 2), ('Dan', None)],
['name', 'age'])
df.filter(udf(lambda x: 'i' in x, BooleanType())(df.name)).collect()
>>> [Row(name=u'Alice', age=1)]
# this produces the error
df_emp = spark.createDataFrame([(1, 'Alice', None), (2, 'Bob', 1),
(3, 'Dan', 2), (4, 'Joe', 2)],
['id', 'name', 'manager_id'])
df1 = df_emp.alias('df1')
df2 = df_emp.alias('df2')
cols = df1.columns
# the self-join
result = df1.join(df2, col('df1.id') == col('df2.manager_id'), 'left_outer')
result.collect()
>>> [Row(id=1, name=u'Alice', manager_id=None),
Row(id=3, name=u'Dan', manager_id=2), Row(id=2, name=u'Bob', manager_id=1),
Row(id=2, name=u'Bob', manager_id=1), Row(id=4, name=u'Joe', manager_id=2)]
# simple udf filter
filtered = result.filter(udf(lambda x: 'i' in x, BooleanType())(result.name))
filtered.collect()
# the above error is produced...
</code></pre>
<p>Am I doing anything wrong in this case? Is this a bug in 2.0 or should I consider some change in behavior between the two versions?</p>
| 0 | 2016-08-16T07:54:36Z | 38,988,600 | <p>This is a bug in pyspark.</p>
<p>I have filed a bug for it here <a href="https://issues.apache.org/jira/browse/SPARK-17100" rel="nofollow">https://issues.apache.org/jira/browse/SPARK-17100</a></p>
<p>This problem arises in left_outer, right_outer and outer joins, but not for inner joins.</p>
<p>One workaround is to cache the join result before the filter. </p>
<p>eg: </p>
<p><code>result = df1.join(df2, col('df1.id') == col('df2.manager_id'),
'left_outer').select(df2.name).cache()
</code></p>
| 1 | 2016-08-17T05:18:16Z | [
"python",
"apache-spark",
"pyspark",
"spark-dataframe"
] |
Can not find api to reset virtual guest's password | 38,969,449 | <p>I have a requirement to reset the virtual guest's password when the password is forgotten, but I failed to find a suitable method to do so. Reloading the OS might be a way to reset the password, but it is too crude. Is there any API/method to reset the virtual guest's password?</p>
| 0 | 2016-08-16T07:57:19Z | 38,977,645 | <p>Currently there's no API method to reset the Virtual Guest's password.</p>
| 0 | 2016-08-16T14:31:14Z | [
"python",
"softlayer"
] |
python: replace strings in a nested dictionary | 38,969,466 | <p>I have a nested dictionary and I would like to replace all the strings in the lists that have a space followed by numbers (<code>vlc 2.2</code>, <code>ado 3.4</code> and <code>ultr 3.1</code>) just with their name, i.e. <code>vlc</code>, <code>ado</code> and <code>ultr</code>. Here is the input dictionary: </p>
<pre><code>input = {'cl1': {'to_do': ['ab',
'dir8',
'cop',
'vlc 2.2.2.0',
'7zi',
'7zi',
'ado 3.4']},
'cl2': {'to_do': ['ultr 3.1', 'ab']}}
</code></pre>
<p>This should be the output:</p>
<pre><code>result = {'cl1': {'to_do': ['ab',
'dir8',
'cop',
'vlc',
'7zi',
'7zi',
'ado']},
'cl2': {'to_do': ['ultr', 'ab']}}
</code></pre>
<p>I am trying something like:</p>
<pre><code>for k in input:
for e in input[k]['to_do']:
input[k]['to_do'] = e.replace(e, e.split()[0])
</code></pre>
<p>Getting the wrong output:</p>
<pre><code>{'cl1': {'to_do': 'ado'}, 'cl2': {'to_do': 'ab'}}
</code></pre>
<p>I don't fully understand where is the mistake. Any help? Thank you</p>
| 3 | 2016-08-16T07:58:02Z | 38,969,568 | <p>This solution will work if you want to strip anything following a space within the <code>to_do</code> lists. This seemed implied by your current implementation, but if you want digits only, you'll need to use a regex.</p>
<p><strong>Python 2</strong>:</p>
<pre><code>d = {
'cl1': {
'to_do': [
'ab',
'dir8',
'cop',
'vlc 2.2.2.0',
'7zi',
'7zi',
'ado 3.4'
]
},
'cl2': {
'to_do': [
'ultr 3.1',
'ab'
]
}
}
for inner_dict in d.itervalues():
inner_dict['to_do'] = [x.split()[0] for x in inner_dict['to_do']]
print d # {'cl1': {'to_do': ['ab', 'dir8', 'cop', 'vlc', '7zi', '7zi', 'ado']}, 'cl2': {'to_do': ['ultr', 'ab']}}
</code></pre>
<p><strong>Python 3</strong> (assume the same <code>d</code>):</p>
<pre><code>for inner_dict in d.values():
inner_dict['to_do'] = [x.split()[0] for x in inner_dict['to_do']]
print(d)
</code></pre>
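<p>If you do want the digits-only behaviour mentioned at the top of this answer, a hedged sketch with <code>re</code> (sample data invented) could look like:</p>

```python
import re

# Strip a trailing " <version number>" only when it is digits/dots after a
# space, so entries like 'dir8' (digits but no space) are left alone.
items = ['vlc 2.2.2.0', 'ado 3.4', 'dir8', 'ab']
cleaned = [re.sub(r'\s+[\d.]+$', '', x) for x in items]
print(cleaned)  # ['vlc', 'ado', 'dir8', 'ab']
```

The same list comprehension can replace the <code>x.split()[0]</code> expression in the loops above.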
| 0 | 2016-08-16T08:03:26Z | [
"python",
"dictionary"
] |
python: replace strings in a nested dictionary | 38,969,466 | <p>I have a nested dictionary and I would like to replace all the strings in the lists that have a space followed by numbers (<code>vlc 2.2</code>, <code>ado 3.4</code> and <code>ultr 3.1</code>) just with their name, i.e. <code>vlc</code>, <code>ado</code> and <code>ultr</code>. Here is the input dictionary: </p>
<pre><code>input = {'cl1': {'to_do': ['ab',
'dir8',
'cop',
'vlc 2.2.2.0',
'7zi',
'7zi',
'ado 3.4']},
'cl2': {'to_do': ['ultr 3.1', 'ab']}}
</code></pre>
<p>This should be the output:</p>
<pre><code>result = {'cl1': {'to_do': ['ab',
'dir8',
'cop',
'vlc',
'7zi',
'7zi',
'ado']},
'cl2': {'to_do': ['ultr', 'ab']}}
</code></pre>
<p>I am trying something like:</p>
<pre><code>for k in input:
for e in input[k]['to_do']:
input[k]['to_do'] = e.replace(e, e.split()[0])
</code></pre>
<p>Getting the wrong output:</p>
<pre><code>{'cl1': {'to_do': 'ado'}, 'cl2': {'to_do': 'ab'}}
</code></pre>
<p>I don't fully understand where the mistake is. Any help? Thank you.</p>
| 3 | 2016-08-16T07:58:02Z | 38,969,736 | <p>When you do this :</p>
<pre><code>input[k]['to_do'] = e.replace(e, e.split()[0])
</code></pre>
<p>You replace the initial arrays (<code>['ultr 3.1', 'ab']</code> and <code>['ab', 'dir8', 'cop', 'vlc 2.2.2.0', '7zi', '7zi', 'ado 3.4']</code>) by a <strong>single value</strong> (the last one processed in your loop).</p>
<p>You just have to replace the inner loop by this single line to make it work :</p>
<pre><code>input[k]['to_do'] = [e.split()[0] for e in input[k]['to_do']]
</code></pre>
<p>The correct result is given :</p>
<pre><code> {'cl2': {'to_do': ['ultr', 'ab']},
'cl1': {'to_do': ['ab', 'dir8', 'cop', 'vlc', '7zi', '7zi', 'ado']}}
</code></pre>
| 2 | 2016-08-16T08:12:24Z | [
"python",
"dictionary"
] |
python: replace strings in a nested dictionary | 38,969,466 | <p>I have a nested dictionary and I would like to replace all the strings in the lists that have a space followed by numbers (<code>vlc 2.2</code>, <code>ado 3.4</code> and <code>ultr 3.1</code>) just with their name, i.e. <code>vlc</code>, <code>ado</code> and <code>ultr</code>. Here is the input dictionary: </p>
<pre><code>input = {'cl1': {'to_do': ['ab',
'dir8',
'cop',
'vlc 2.2.2.0',
'7zi',
'7zi',
'ado 3.4']},
'cl2': {'to_do': ['ultr 3.1', 'ab']}}
</code></pre>
<p>This should be the output:</p>
<pre><code>result = {'cl1': {'to_do': ['ab',
'dir8',
'cop',
'vlc',
'7zi',
'7zi',
'ado']},
'cl2': {'to_do': ['ultr', 'ab']}}
</code></pre>
<p>I am trying something like:</p>
<pre><code>for k in input:
for e in input[k]['to_do']:
input[k]['to_do'] = e.replace(e, e.split()[0])
</code></pre>
<p>Getting the wrong output:</p>
<pre><code>{'cl1': {'to_do': 'ado'}, 'cl2': {'to_do': 'ab'}}
</code></pre>
<p>I don't fully understand where the mistake is. Any help? Thank you.</p>
| 3 | 2016-08-16T07:58:02Z | 38,970,181 | <p>An even more generic way to do this would be to create a recursive method, that calls itself again:</p>
<pre class="lang-py prettyprint-override"><code>def recursive_split(input, search):
# check whether it's a dict, list, tuple, or scalar
if isinstance(input, dict):
items = input.items()
elif isinstance(input, (list, tuple)):
items = enumerate(input)
else:
# just a value, split and return
return str(input).split(search)[0]
# now call ourself for every value and replace in the input
for key, value in items:
input[key] = recursive_split(value, search)
return input
</code></pre>
<p>Please note, that this approach uses in-place replacement, but could easily be converted to return a new dictionary instead. This should cover any structure containing any type of values, that can be transformed into strings. In your case you would use it as by simply calling:</p>
<pre><code>d = recursive_split(d, " ")
</code></pre>
| 0 | 2016-08-16T08:37:17Z | [
"python",
"dictionary"
] |
why regular expression match an additional space in Python 2.7? | 38,969,486 | <p>Using Python 2.7. And in a long string, I want to match content which starts and ends with <code>{</code> <code>}</code>. And particularly, I am interested in two parts within <code>{</code> <code>}</code>. The first part is anything in <code>[1J, 2J, ..., 10J]</code> or <code>[1S, 2S, ..., 10S]</code>, and wrapped with <code>()</code> and delimited by <code>,</code>. The 2nd part I am interested in is the remaining text within <code>{</code> <code>}</code>.</p>
<p>In the example below, I want to find <code>(2J,3S)</code> and <code>Hello World</code> in the first <code>{</code> <code>}</code>, and find <code>(1J,2S,3J)</code> and <code>Hello Python</code> in the 2nd <code>{</code> <code>}</code>.</p>
<p>My question is, in my code below, there is an additional space between <code>J</code> and <code>,</code> in <code>2J ,3S</code>, and another additional space between <code>J</code> and <code>,</code> in <code>1J ,2S,3J</code>. I am wondering where the space is coming from and how to fix it.</p>
<pre><code>import re
judgeItemYesRegNew = r'(\((?:(?:10|[1-9])J|S(?:,|\)))+)(.*?)\s?}'
string = "Some content {(2J,3S) Hello World } Some content {(1J,2S,3J) Hello Python }"
result = re.findall(judgeItemYesRegNew, string)
for (num, content) in result:
print num, content
</code></pre>
<p>Output is,</p>
<pre><code>(2J ,3S) Hello World
(1J ,2S,3J) Hello Python
</code></pre>
| 1 | 2016-08-16T07:59:12Z | 38,969,518 | <p><code>print num, content</code> separates the two printed values by a space. Concatenate the two strings if you don't want that space to be printed:</p>
<pre><code>print num + content
</code></pre>
<p>Note that <code>num</code> only consists of <code>'(2J'</code> and <code>(1J'</code>, respectively. The remainder is contained in <code>content</code> (<code>',3S) Hello World'</code> and <code>',2S,3J) Hello Python'</code>, respectively.</p>
<p>That's because you split the group into a <code>J</code> and <code>S</code> part with <code>|</code>; <em>everything before and after</em> within the same parentheses are now part of those two options, not just those two letters. You either match <code>(?:10|[1-9])J</code> or you match <code>S(?:,|\)</code>.</p>
<p>Use <code>[JS]</code> (a character class) instead of alternative grouping:</p>
<pre><code>(\((?:(?:10|[1-9])[JS](?:,|\)))+)
</code></pre>
<p>making the full expression:</p>
<pre><code>judgeItemYesRegNew = r'(\((?:(?:10|[1-9])[JS](?:,|\)))+)(.*?)\s?}'
</code></pre>
<p>This would result in <code>num = '(2J,3S)'</code> and <code>content = ' Hello World'</code>; note the space, you may want to leave spaces after the closing parens out of the second group:</p>
<pre><code>judgeItemYesRegNew = r'(\((?:(?:10|[1-9])[JS](?:,|\)))+)\s*(.*?)\s?}'
</code></pre>
<p>See <a href="https://regex101.com/r/xH5xP9/1" rel="nofollow">https://regex101.com/r/xH5xP9/1</a> for an online regex demo of the pattern.</p>
<p>Python demo:</p>
<pre><code>>>> import re
>>> judgeItemYesRegNew = r'(\((?:(?:10|[1-9])[JS](?:,|\)))+)\s*(.*?)\s?}'
>>> string = "Some content {(2J,3S) Hello World } Some content {(1J,2S,3J) Hello Python }"
>>> result = re.findall(judgeItemYesRegNew, string)
>>> for (num, content) in result:
... print (num, content)
...
('(2J,3S)', 'Hello World')
('(1J,2S,3J)', 'Hello Python')
</code></pre>
| 3 | 2016-08-16T08:00:40Z | [
"python",
"regex",
"python-2.7"
] |
why regular expression match an additional space in Python 2.7? | 38,969,486 | <p>Using Python 2.7. And in a long string, I want to match content which starts and ends with <code>{</code> <code>}</code>. And particularly, I am interested in two parts within <code>{</code> <code>}</code>. The first part is anything in <code>[1J, 2J, ..., 10J]</code> or <code>[1S, 2S, ..., 10S]</code>, and wrapped with <code>()</code> and delimited by <code>,</code>. The 2nd part I am interested in is the remaining text within <code>{</code> <code>}</code>.</p>
<p>In the example below, I want to find <code>(2J,3S)</code> and <code>Hello World</code> in the first <code>{</code> <code>}</code>, and find <code>(1J,2S,3J)</code> and <code>Hello Python</code> in the 2nd <code>{</code> <code>}</code>.</p>
<p>My question is, in my code below, there is an additional space between <code>J</code> and <code>,</code> in <code>2J ,3S</code>, and another additional space between <code>J</code> and <code>,</code> in <code>1J ,2S,3J</code>. I am wondering where the space is coming from and how to fix it.</p>
<pre><code>import re
judgeItemYesRegNew = r'(\((?:(?:10|[1-9])J|S(?:,|\)))+)(.*?)\s?}'
string = "Some content {(2J,3S) Hello World } Some content {(1J,2S,3J) Hello Python }"
result = re.findall(judgeItemYesRegNew, string)
for (num, content) in result:
print num, content
</code></pre>
<p>Output is,</p>
<pre><code>(2J ,3S) Hello World
(1J ,2S,3J) Hello Python
</code></pre>
| 1 | 2016-08-16T07:59:12Z | 38,969,681 | <p>You placed the <code>+</code> after your group #1, but you forgot to make sure a comma is also matched. Add it as an optional symbol to match. Also, the <code>(?:,|\))</code> part is put inside an alternative branch, while the <code>)</code> should be placed outside as the trailing char in Group 1, and the comma is the one that should be alternated as an optional subpattern.</p>
<pre><code>(\((?:,?(?:10|[1-9])[JS])+\))\s*(.*?)\s*}
^^
</code></pre>
<p>See the <a href="https://regex101.com/r/eH2cJ1/2" rel="nofollow">regex demo</a></p>
<p>I also modified the pattern to match:</p>
<ul>
<li><p><code>(\((?:,?(?:10|[1-9])[JS])+\))</code> - Group 1: </p>
<ul>
<li><code>\(</code> - a literal <code>(</code></li>
<li><code>(?:,?(?:10|[1-9])[JS])+</code> - 1 or more sequences of:
<ul>
<li><code>,?</code> - an optional comma</li>
<li><code>(?:10|[1-9])[JS]</code> - <code>10</code> or a single digit followed with either <code>J</code> or <code>S</code></li>
</ul></li>
<li><code>\)</code> - a literal <code>)</code></li>
</ul></li>
<li><p><code>\s*</code> - zero or more whitespaces</p></li>
<li><code>(.*?)</code> - Group 2: zero or more chars other than a newline up to the first</li>
<li><code>\s*</code> - 0+ whitespaces</li>
<li><code>}</code> - a literal <code>}</code>.</li>
</ul>
<p>And a <a href="https://ideone.com/UOVjXk" rel="nofollow">Python demo</a>:</p>
<pre><code>import re
p = re.compile(r'(\((?:,?(?:10|[1-9])[JS])+\))\s*(.*?)\s*}')
s = "Some content {(2J,3S) Hello World } Some content {(1J,2S,3J) Hello Python }"
print(p.findall(s))
</code></pre>
| 2 | 2016-08-16T08:09:15Z | [
"python",
"regex",
"python-2.7"
] |
Python Yandex translate example | 38,969,543 | <p>So, for example, I am trying to use <code>unirest</code> for this,
so I put:</p>
<pre><code>base = 'translate.yandex.net'
post = '/api/v1.5/tr./getLangs?ui=en&key=' + api_key
request = unirest.get(base+post, headers={'accept' : "json"})
</code></pre>
<p>and the code says something about not a valid URL; this is directly from the docs. </p>
<p>What I am asking for is a working example of how to get this API to work with the <code>unirest</code> module. If that is not possible, how would I do this with another package? </p>
<p>This may be a stupid question but maybe I just don't comprehend the docs from Yandex. </p>
<p><strong>Update:</strong> a link to the docs is here.
<a href="https://tech.yandex.com/translate/doc/dg/reference/translate-docpage/" rel="nofollow">https://tech.yandex.com/translate/doc/dg/reference/translate-docpage/</a></p>
| 2 | 2016-08-16T08:01:56Z | 38,990,798 | <p>Try adding <code>http://</code> or <code>https://</code> in the base url:</p>
<pre><code>base = 'http://translate.yandex.net'
post = '/api/v1.5/tr.json/getLangs?ui=en&key=' + api_key
request = unirest.get(base+post, headers={'accept' : "json"})
</code></pre>
<p>and it should be fine.</p>
<p>This is based on <a href="https://tech.yandex.com/translate/doc/dg/reference/getLangs-docpage/" rel="nofollow" title="yandex documentation">yandex documentation</a>.</p>
| 2 | 2016-08-17T07:41:50Z | [
"python",
"json",
"api",
"unirest",
"yandex"
] |
Program to check if list is in alternating form | 38,969,629 | <p>I am trying to write a function in Python that checks if a given list of integers is strictly alternating, i.e. alternately goes up and down strictly.</p>
<p>So for example:</p>
<pre><code>alternating([]) = True
alternating([1,3,2,3,1,5]) = True
alternating([3,2,2,1,5])= False
alternating([3,2,1,3,5]) = False
</code></pre>
<p>This is the code I could think of, but it isn't working for the empty list; it does work for the rest, and any changes I make give me errors for all the rest.</p>
<pre><code>def alternating(list):
for i in range(1,len(list)):
if (((list[i]>list[i+1]) and (list[i]>list[i-1])) or ((list[i] <list[i+1]) and (list[i]<list[i-1]))) :
return True
else:
return False
</code></pre>
| 0 | 2016-08-16T08:06:10Z | 38,969,766 | <p>Here is how I would do it:</p>
<pre><code>def alternating(l):
return all(cmp(a, b)*cmp(b, c) == -1 for a, b, c in zip(l, l[1:], l[2:]))
assert alternating([]) is True
assert alternating([1, 3, 2, 3, 1, 5]) is True
assert alternating([3, 2, 2, 1, 5]) is False
assert alternating([3, 2, 1, 3, 5]) is False
assert alternating([1, 3, 2, 3, 2, 1]) is False
</code></pre>
<p>Here are two simpler versions:</p>
<pre><code>def alternating(l):
for i in range(len(l)-2):
if (l[i] < l[i+1]) and (l[i+1] > l[i+2]):
continue
if (l[i] > l[i+1]) and (l[i+1] < l[i+2]):
continue
return False
return True
</code></pre>
<p> </p>
<pre><code>def alternating(l):
for i in range(len(l)-2):
if (l[i] < l[i+1]) and (l[i+1] < l[i+2]):
return False
if (l[i] > l[i+1]) and (l[i+1] > l[i+2]):
return False
if (l[i] == l[i+1]) or (l[i+1] == l[i+2]):
return False
return True
</code></pre>
| 5 | 2016-08-16T08:13:50Z | [
"python"
] |
Program to check if list is in alternating form | 38,969,629 | <p>I am trying to write a function in Python that checks if a given list of integers is strictly alternating, i.e. alternately goes up and down strictly.</p>
<p>So for example:</p>
<pre><code>alternating([]) = True
alternating([1,3,2,3,1,5]) = True
alternating([3,2,2,1,5])= False
alternating([3,2,1,3,5]) = False
</code></pre>
<p>This is the code I could think of, but it isn't working for the empty list; it does work for the rest, and any changes I make give me errors for all the rest.</p>
<pre><code>def alternating(list):
for i in range(1,len(list)):
if (((list[i]>list[i+1]) and (list[i]>list[i-1])) or ((list[i] <list[i+1]) and (list[i]<list[i-1]))) :
return True
else:
return False
</code></pre>
| 0 | 2016-08-16T08:06:10Z | 38,969,847 | <p>Basic error #1:</p>
<pre><code>for i in range(1,len(list)):
if (((list[i]>list[i+1]) and (list[i]>list[i-1])) or ...:
return True
</code></pre>
<p>On the algorithmic perspective, you cannot possibly determine that a list of <code>n</code> elements is alternating by checking only <code>O(1)</code> elements.</p>
<hr>
<p>Basic error #2:</p>
<p>Your function doesn't always return a value. Although this is not mandatory in Python, in your specific case you probably want it to return a value, and even more so - a Boolean value.</p>
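<p>Putting the two fixes together, one possible repair (a sketch, not the only approach) checks every interior element and falls through to a single <code>return True</code>:</p>

```python
def alternating(lst):
    # check every interior element against both neighbours
    for i in range(1, len(lst) - 1):
        if not (lst[i - 1] < lst[i] > lst[i + 1] or
                lst[i - 1] > lst[i] < lst[i + 1]):
            return False
    return True  # vacuously True for lists shorter than 3, including []

print(alternating([]))                  # True
print(alternating([1, 3, 2, 3, 1, 5]))  # True
print(alternating([3, 2, 2, 1, 5]))     # False
print(alternating([3, 2, 1, 3, 5]))     # False
```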
| 1 | 2016-08-16T08:18:49Z | [
"python"
] |
Program to check if list is in alternating form | 38,969,629 | <p>I am trying to write a function in Python that checks if a given list of integers is strictly alternating, i.e. alternately goes up and down strictly.</p>
<p>So for example:</p>
<pre><code>alternating([]) = True
alternating([1,3,2,3,1,5]) = True
alternating([3,2,2,1,5])= False
alternating([3,2,1,3,5]) = False
</code></pre>
<p>This is the code I could think of, but it isn't working for the empty list; it does work for the rest, and any changes I make give me errors for all the rest.</p>
<pre><code>def alternating(list):
for i in range(1,len(list)):
if (((list[i]>list[i+1]) and (list[i]>list[i-1])) or ((list[i] <list[i+1]) and (list[i]<list[i-1]))) :
return True
else:
return False
</code></pre>
| 0 | 2016-08-16T08:06:10Z | 38,969,910 | <p>Here is a straightforward way to do it:</p>
<pre><code>def alternating(l):
for i in range(1, len(l) - 1):
if not (l[i - 1] < l[i] > l[i + 1] or
l[i - 1] > l[i] < l[i + 1]):
return False
return True
print alternating([])
print alternating([1,3,2,3,1,5])
print alternating([3,2,2,1,5])
print alternating([3,2,1,3,5])
</code></pre>
| 3 | 2016-08-16T08:22:56Z | [
"python"
] |
How to use variable from looping as dictionary key in Django template? | 38,969,637 | <p>Assume that I have returned a dictionary like this in my views or template_tags:</p>
<pre><code>data = {key1:value1,key2:value2,......,keyn:valuen}
</code></pre>
<p>How can I generate something like this in my HTML template?</p>
<pre><code>value1
value2
value3
.
.
.
valuen
</code></pre>
<p>This is what I've got so far:</p>
<pre><code> {% for i in data %}
{% for j in data.i %}
{{ j }} <br>
{% endfor %}
{% endfor %}
</code></pre>
| -1 | 2016-08-16T08:06:45Z | 38,969,714 | <p>According to this documentation page : <a href="https://docs.djangoproject.com/en/1.9/ref/templates/builtins/#for" rel="nofollow">https://docs.djangoproject.com/en/1.9/ref/templates/builtins/#for</a> this should work as you wish :</p>
<pre><code>{% for key, value in data.items %}
{{ value }} <br />
{% endfor %}
</code></pre>
| 1 | 2016-08-16T08:11:03Z | [
"python",
"django",
"django-templates"
] |
Clearing LRU cache on all object properties | 38,969,823 | <p>So I'm using <code>@lru_cache</code> on my objects in different parts, and I'm just wondering how to flush the cache on all functions on an object where @lru_cache is used, something like:</p>
<pre><code>for i in dir(self):
if 'cache_clear' in dir(i):
self.get_attr(i).cache_clear()
</code></pre>
<p>The problem is:</p>
<ul>
<li>I'm not sure if this is really a very elegant way to do it</li>
<li>cache_clear doesn't actually appear when I do dir() on the function which it decorates</li>
</ul>
<p>What's the best way to do this?</p>
| 0 | 2016-08-16T08:17:23Z | 38,970,023 | <p>When you request an instance method from <code>self</code>, Python returns a bound method object that does not have the <code>cache_clear</code> method.</p>
<p>You need to avoid triggering instance method lookup:</p>
<pre><code>for value in vars(self).values():
attr = getattr(value, 'cache_clear', None)
if callable(attr):
attr()
</code></pre>
<p>Keep in mind that <code>@lru_cache</code> cache is shared among instances of the class, which means that calling <code>cache_clear</code> from within one instance will empty the cache for all instances (<a href="http://stackoverflow.com/a/14946506/2301450">possible solution</a>).</p>
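<p>For a concrete illustration, here is one possible alternative (class and helper names are made up) that scans the class, where the decorated functions actually live:</p>

```python
from functools import lru_cache

class Model(object):
    @lru_cache(maxsize=None)
    def expensive(self, x):
        return x * x

def clear_all_caches(obj):
    # decorated wrappers are class attributes, so scan type(obj), not the instance
    for name in dir(type(obj)):
        attr = getattr(type(obj), name)
        if callable(getattr(attr, 'cache_clear', None)):
            attr.cache_clear()

m = Model()
m.expensive(3)
print(m.expensive.cache_info().currsize)  # 1
clear_all_caches(m)
print(m.expensive.cache_info().currsize)  # 0
```

Because the cache is keyed on <code>self</code> as well as the arguments, clearing it still drops entries for every instance, as noted above.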
| 0 | 2016-08-16T08:29:31Z | [
"python"
] |
how to step into C++ code while debugging python using DDD(or gdb) | 38,969,859 | <p>As a test example, I have this C++ class, which I exported to Python using Boost (from the Boost website):</p>
<pre><code>#include <boost/python.hpp>
using namespace boost::python;
struct WorldC
{
void set(std::string msg) { this->msg = msg; }
std::string greet() { return msg; }
std::string msg;
};
BOOST_PYTHON_MODULE(hello)
{
class_<WorldC>("WorldP")
.def("greet", &WorldC::greet)
.def("set", &WorldC::set)
;
}
</code></pre>
<p>I compiled this code by <code>g++ -g -shared -o hello.so -fPIC hello.cpp -lboost_python -lpython2.7 -I/usr/local/include/python2.7</code> and tested it ok. The test script <code>pp1.py</code> is like this : </p>
<pre><code>import hello
a = hello.WorldP()
a.set('ahahahah') # <-- line 3
print a.greet()
print('print1')
b = hello.WorldP()
b.set('bhbhbhbh')
print b.greet()
print('print2')
print('program done')
</code></pre>
<p>This code runs OK both in interactive mode and as a script.</p>
<pre><code>ckim@stph45:~/python/hello] python pp1.py
ahahahah
print1
bhbhbhbh
print2
program done
</code></pre>
<p>I'm using DDD for visual debugging. When I give the command <code>ddd -pydb pp1.py</code>, I can do Python code debugging. When I'm inside the debugger, I can give <code>next</code> command and see the result. But when the debugger is for example in line 3, when I give <code>step</code> command, it just passes the line not entering into the c++ code. How can I make this work?
(I tried with just gdb, but it's the same- not entering into c++ code.)</p>
| 4 | 2016-08-16T08:19:31Z | 39,012,199 | <p>I posted an answer for debugging C++ while running a Python program. </p>
<p><a href="http://stackoverflow.com/questions/38898459/debugging-python-and-c-exposed-by-boost-together/39012185#39012185">debugging python and c++ exposed by boost together</a></p>
| 0 | 2016-08-18T07:24:16Z | [
"python",
"c++",
"boost",
"gdb",
"ddd-debugger"
] |
Receive return value from different thread in another module | 38,969,928 | <p>I am trying to replicate <code>C#</code> code in <code>python</code> which executes a thread, waits for it to finish and returns a value. Essentially the method <code>RunAndWait</code> is in a helper class because a call to that method is being made multiple times.</p>
<p><code>C#</code> code is as follows:</p>
<pre><code>public static bool RunAndWait(Action _action, long _timeout)
{
Task t = Task.Run(() =>
{
Log.Message(Severity.MESSAGE, "Executing " + _action.Method.Name);
_action();
});
if (!t.Wait(Convert.ToInt32(_timeout)))
{
Log.Message(Severity.ERROR, "Executing " + _action.Method.Name + " timedout. Could not execute MCS command.");
throw new AssertFailedException();
}
t.Dispose();
t = null;
return true;
}
</code></pre>
<p>In <code>python</code> I have been struggling with a few things. Firstly, there seem to be different types of Queue's where I simply picked the import that seemed to be working <code>import Queue</code>. Secondly, I receive a TypeError as below.</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:/Users/JSC/Documents/Git/EnterprisePlatform/Enterprise/AI.App.Tool.AutomatedMachineTest/Scripts/monkey.py",
line 9, in
File "C:\Users\JSC\Documents\Git\EnterprisePlatform\Enterprise\AI.App.Tool.AutomatedMachineTest\Scripts\Libs\MonkeyHelper.py",
line 4, in RunCmdAndWait
TypeError: module is not callable</p>
</blockquote>
<p>Here is the <code>python</code> code for monkey:</p>
<pre><code>from Libs.CreateConnection import CreateMcsConnection
import Libs.MonkeyHelper as mh
import Queue
q = Queue.Queue()
to = 5000 #timeout
mh.RunCmdAndWait(CreateMcsConnection, to, q)
serv, con = q.get()
</code></pre>
<p>and <code>MonkeyHelper.py</code>:</p>
<pre><code>import threading
def RunCmdAndWait(CmdToRun, timeout, q):
t = threading(group=None, target=CmdToRun, arg=q)
t.start()
t.join(timeout=timeout)
</code></pre>
<p>I am not sure what I am doing wrong. I am fairly new to python. Could someone please help me out?</p>
<p><strong>Edit</strong></p>
<pre><code>t = threading.Thread(group=None, target=CmdToRun, args=q)
</code></pre>
<p>correcting the line above brought up another error:</p>
<blockquote>
<p>Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Program Files (x86)\IronPython 2.7\Lib\threading.py", line 552, in _Thread__bootstrap_inner
self.run()
File "C:\Program Files (x86)\IronPython 2.7\Lib\threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
AttributeError: Queue instance has no attribute '__len__'</p>
</blockquote>
<p>Is that because <code>Thread</code> expects multiple args or because the <code>queue</code> is still empty at this point? From what I've seen is that the <code>queue</code> is just being passed as an argument to receive the return value. Is that the right way to go?</p>
<p><strong>Edit2</strong></p>
<p>Changed <code>t = threading.Thread(group=None, target=CmdToRun, args=q)</code> to <code>t = threading.Thread(group=None, target=CmdToRun, args=(q,))</code></p>
<p>The change yields in a TypeError below, seems weird to me since Thread is expecting a tuple.</p>
<blockquote>
<p>Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Program Files (x86)\IronPython 2.7\Lib\threading.py", line 552, in _Thread__bootstrap_inner
self.run()
File "C:\Program Files (x86)\IronPython 2.7\Lib\threading.py", line 505, in run
self.__target(*self.__args, **self.__kwargs)
TypeError: tuple is not callable</p>
</blockquote>
| 0 | 2016-08-16T08:23:46Z | 38,970,155 | <p><code>threading</code> is a module. You likely mean to replace</p>
<pre><code>t = threading(group=None, target=CmdToRun, arg=q)
</code></pre>
<p>with </p>
<pre><code>t = threading.Thread(group=None, target=CmdToRun, args=(q,))
</code></pre>
<p><code>args</code> is an argument tuple.</p>
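<p>Putting the corrected call together with the queue-based hand-off from the question, here is a minimal runnable sketch (Python 3 names are used; on Python 2 the module is <code>Queue</code> with a capital Q — also note that <code>join()</code> takes seconds, while the C# <code>Wait()</code> timeout was in milliseconds):</p>

```python
import threading
import queue  # named "Queue" on Python 2

def run_and_wait(cmd_to_run, timeout_ms, q):
    # target is the callable itself; args must be a tuple, hence the (q,)
    t = threading.Thread(group=None, target=cmd_to_run, args=(q,))
    t.start()
    t.join(timeout=timeout_ms / 1000.0)  # join() takes seconds, unlike C#'s Wait(ms)
    return not t.is_alive()              # False would mean the timeout was hit

def worker(q):
    # stand-in for CreateMcsConnection: put the "return value" on the queue
    q.put(("serv", "con"))

result_q = queue.Queue()
finished = run_and_wait(worker, 5000, result_q)
serv, con = result_q.get()
```

<p>This mirrors the C# pattern: the thread communicates its result through the queue, and the boolean return value tells you whether it finished within the timeout.</p>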
| 1 | 2016-08-16T08:35:23Z | [
"python",
"queue",
"ironpython",
"python-multithreading"
] |
Search and replace multiple specific sequences of elements in Python list/array | 38,969,984 | <p>I currently have 6 separate for loops which iterate over a list of numbers looking to match specific sequences of numbers within larger sequences, and replace them like this:</p>
<pre><code>[...0,1,0...] => [...0,0,0...]
[...0,1,1,0...] => [...0,0,0,0...]
[...0,1,1,1,0...] => [...0,0,0,0,0...]
</code></pre>
<p>And their inverse:</p>
<pre><code>[...1,0,1...] => [...1,1,1...]
[...1,0,0,1...] => [...1,1,1,1...]
[...1,0,0,0,1...] => [...1,1,1,1,1...]
</code></pre>
<p>My existing code is like this:</p>
<pre><code>for i in range(len(output_array)-2):
if output_array[i] == 0 and output_array[i+1] == 1 and output_array[i+2] == 0:
output_array[i+1] = 0
for i in range(len(output_array)-3):
if output_array[i] == 0 and output_array[i+1] == 1 and output_array[i+2] == 1 and output_array[i+3] == 0:
        output_array[i+1] = output_array[i+2] = 0
</code></pre>
<p>In total I'm iterating over the same output_array 6 times, using brute force checking. Is there a faster method?</p>
| 0 | 2016-08-16T08:27:34Z | 38,970,350 | <pre><code># I would create a map between the string searched and the new one.
patterns = {}
patterns['010'] = '000'
patterns['0110'] = '0000'
patterns['01110'] = '00000'
# I would loop over the lists
lists = [[0,1,0,0,1,1,0,0,1,1,1,0]]
for lista in lists:
# i would join the list elements as a string
string_list = ''.join(map(str,lista))
# we loop over the patterns
for pattern,value in patterns.items():
# if a pattern is detected, we replace it
string_list = string_list.replace(pattern, value)
    lista = [int(c) for c in string_list] # convert the characters back to ints
print lista
</code></pre>
| 2 | 2016-08-16T08:46:06Z | [
"python",
"arrays",
"list",
"design-patterns",
"iterator"
] |
Search and replace multiple specific sequences of elements in Python list/array | 38,969,984 | <p>I currently have 6 separate for loops which iterate over a list of numbers looking to match specific sequences of numbers within larger sequences, and replace them like this:</p>
<pre><code>[...0,1,0...] => [...0,0,0...]
[...0,1,1,0...] => [...0,0,0,0...]
[...0,1,1,1,0...] => [...0,0,0,0,0...]
</code></pre>
<p>And their inverse:</p>
<pre><code>[...1,0,1...] => [...1,1,1...]
[...1,0,0,1...] => [...1,1,1,1...]
[...1,0,0,0,1...] => [...1,1,1,1,1...]
</code></pre>
<p>My existing code is like this:</p>
<pre><code>for i in range(len(output_array)-2):
if output_array[i] == 0 and output_array[i+1] == 1 and output_array[i+2] == 0:
output_array[i+1] = 0
for i in range(len(output_array)-3):
if output_array[i] == 0 and output_array[i+1] == 1 and output_array[i+2] == 1 and output_array[i+3] == 0:
output_array[i+1], output_array[i+2] = 0
</code></pre>
<p>In total I'm iterating over the same output_array 6 times, using brute force checking. Is there a faster method?</p>
| 0 | 2016-08-16T08:27:34Z | 38,970,429 | <p>While this question related to the questions <a href="http://stackoverflow.com/questions/425604/best-way-to-determine-if-a-sequence-is-in-another-sequence-in-python">Here</a> and <a href="http://stackoverflow.com/questions/2250633/python-find-a-list-within-members-of-another-listin-order">Here</a>, the question from OP relates to fast searching of multiple sequences at once. While the accepted answer works well, we may not want to loop through all the search sequences for every sub-iteration of the base sequence.</p>
<p>Below is an algo which checks for a sequence of i ints only if the sequence of (i-1) ints is present in the base sequence</p>
<pre><code># This is the driver function which takes in a) the search sequences and
# replacements as a dictionary and b) the full sequence list in which to search
def findSeqswithinSeq(searchSequences,baseSequence):
seqkeys = [[int(i) for i in elem.split(",")] for elem in searchSequences]
maxlen = max([len(elem) for elem in seqkeys])
decisiontree = getdecisiontree(seqkeys)
i = 0
while i < len(baseSequence):
(increment,replacement) = get_increment_replacement(decisiontree,baseSequence[i:i+maxlen])
if replacement != -1:
baseSequence[i:i+len(replacement)] = searchSequences[",".join(map(str,replacement))]
i +=increment
return baseSequence
#the following function gives the dictionary of intermediate sequences allowed
def getdecisiontree(searchsequences):
dtree = {}
for elem in searchsequences:
for i in range(len(elem)):
if i+1 == len(elem):
dtree[",".join(map(str,elem[:i+1]))] = True
else:
dtree[",".join(map(str,elem[:i+1]))] = False
return dtree
# the following is the function does most of the work giving us a) how many
# positions we can skip in the search and b)whether the search seq was found
def get_increment_replacement(decisiontree,sequence):
if str(sequence[0]) not in decisiontree:
return (1,-1)
for i in range(1,len(sequence)):
key = ",".join(map(str,sequence[:i+1]))
if key not in decisiontree:
return (1,-1)
elif decisiontree[key] == True:
key = [int(i) for i in key.split(",")]
return (len(key),key)
return 1, -1
</code></pre>
<p>You can test the above code with this snippet:</p>
<pre><code>if __name__ == "__main__":
inputlist = [5,4,0,1,1,1,0,2,0,1,0,99,15,1,0,1]
patternsandrepls = {'0,1,0':[0,0,0],
'0,1,1,0':[0,0,0,0],
'0,1,1,1,0':[0,0,0,0,0],
'1,0,1':[1,1,1],
'1,0,0,1':[1,1,1,1],
'1,0,0,0,1':[1,1,1,1,1]}
print(findSeqswithinSeq(patternsandrepls,inputlist))
</code></pre>
<p>The proposed solution represents the sequences to be searched as a decision tree. </p>
<p>Due to skipping many of the search points, we should be able to do better than O(m*n) with this method (where m is the number of search sequences and n is the length of the base sequence).</p>
<p>EDIT: Changed answer based on more clarity in edited question.</p>
| 1 | 2016-08-16T08:49:54Z | [
"python",
"arrays",
"list",
"design-patterns",
"iterator"
] |
Change value of the selection field using on_change | 38,969,990 | <p>I am trying to change the value of my selection field using on_change. I have the code below.</p>
<p>.xml</p>
<pre><code><field name="product_id" on_change="onchange_partner_id_override(product_id, context)"/>
</code></pre>
<p>.py</p>
<pre><code>class sale_order_line(osv.osv):
_inherit = "sale.order.line"
_columns = {
'product_id': fields.many2one('product.product', "Product"),
'price_select': fields.selection(SELECT_PRICE, "Unit Price"),
}
def product_id_change_override(self, cr, uid, ids, product, context=None):
result = []
product_obj = self.pool.get('product.product')
product_obj = product_obj.browse(cr, uid, product, context=context_partner)
global SELECT_PRICE
SELECT_PRICE = [
('sale_price', product_obj.list_price),
('dist_price', product_obj.distributor_price),
('emp_price', product_obj.employee_price),
]
# How could I change the value of my selection field 'price_select'
return {'value': result}
</code></pre>
<p>But I don't know the syntax for appending this data to my selection field.
Could someone help me, please!</p>
| 1 | 2016-08-16T08:27:49Z | 38,972,424 | <p>You need to override <code>product_id_change</code> method and specify the value of <code>price_select</code> field in <code>value</code> dict: </p>
<pre><code>def product_id_change(self, cr, uid, ids, pricelist, product, qty=0,
uom=False, qty_uos=0, uos=False, name='', partner_id=False,
lang=False, update_tax=True, date_order=False, packaging=False, fiscal_position=False, flag=False, context=None):
res = super(sale_order_line, self).product_id_change(cr, uid, ids, pricelist, product, qty,
uom, qty_uos, uos, name, partner_id,
lang, update_tax, date_order, packaging, fiscal_position, flag, context)
res['value'].update({'price_select': 'emp_price'})
return res
</code></pre>
| 2 | 2016-08-16T10:24:47Z | [
"python",
"openerp"
] |
Automatically update title of chart with matplotlib and pylab | 38,970,033 | <pre><code>import numpy as np
import matplotlib.pyplot as plt
%pylab inline
def fun (x): #piecewise functions
if x< -1:
return 2*x + 4
elif -1<= x <= 1:
return 2*x**2
elif x>1:
return 2
vfun = np.vectorize(fun)
a=-4 #as provided in question paper
b=5
N=50
x = np.linspace(a, b, N)
pylab.xlim(a, b)
pylab.ylim(vfun(x).min(), vfun(x).max())
axvline(x=-1.,color='k',ls='dashed')
axvline(x=1.,color='k',ls='dashed')
y= vfun(x)
pylab.xlabel('x') #labeling
pylab.ylabel('y')
pylab.title('My First Plot')
plt.plot(x, y, '.') # dotted style of line
plt.show()
</code></pre>
<p>How do I update the title if changes are made to the interval? E.g. if my title is <code>"f(x) E [-4,5], N=50"</code> and the interval is changed to <code>[-2,3]</code>, how do I make the title update automatically?</p>
| -1 | 2016-08-16T08:30:02Z | 38,970,327 | <p>What about this?</p>
<pre><code>pylab.title("f(x) E ["+str(a)+","+str(b)+"], N="+str(N))
</code></pre>
<p>instead of</p>
<pre><code>pylab.title('My First Plot')
</code></pre>
| -1 | 2016-08-16T08:44:50Z | [
"python",
"matplotlib"
] |
Automatically update title of chart with matplotlib and pylab | 38,970,033 | <pre><code>import numpy as np
import matplotlib.pyplot as plt
%pylab inline
def fun (x): #piecewise functions
if x< -1:
return 2*x + 4
elif -1<= x <= 1:
return 2*x**2
elif x>1:
return 2
vfun = np.vectorize(fun)
a=-4 #as provided in question paper
b=5
N=50
x = np.linspace(a, b, N)
pylab.xlim(a, b)
pylab.ylim(vfun(x).min(), vfun(x).max())
axvline(x=-1.,color='k',ls='dashed')
axvline(x=1.,color='k',ls='dashed')
y= vfun(x)
pylab.xlabel('x') #labeling
pylab.ylabel('y')
pylab.title('My First Plot')
plt.plot(x, y, '.') # dotted style of line
plt.show()
</code></pre>
<p>How do I update the title if changes are made to the interval? E.g. if my title is <code>"f(x) E [-4,5], N=50"</code> and the interval is changed to <code>[-2,3]</code>, how do I make the title update automatically?</p>
| -1 | 2016-08-16T08:30:02Z | 38,970,450 | <p>You can use <a href="https://docs.python.org/2/library/stdtypes.html#str.format" rel="nofollow">str.format()</a> to insert the current values of a, b and N:</p>
<pre><code>pylab.title('f(x) E [{0},{1}], N={2}'.format(a,b,N))
</code></pre>
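<p>For example, wrapping the call in a small helper keeps the title in sync whenever the interval changes (a sketch — pass the result to <code>pylab.title(...)</code> as above):</p>

```python
def make_title(a, b, N):
    # build the title string from the current interval and sample count
    return 'f(x) E [{0},{1}], N={2}'.format(a, b, N)

title = make_title(-4, 5, 50)      # 'f(x) E [-4,5], N=50'
new_title = make_title(-2, 3, 50)  # updates automatically with the interval
```

<p>Any change to <code>a</code>, <code>b</code>, or <code>N</code> before the plotting call is then reflected in the title without touching the string itself.</p>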
| 1 | 2016-08-16T08:50:50Z | [
"python",
"matplotlib"
] |
Keeping format of text (.txt) files when reading and rewriting | 38,970,141 | <p>I have a <code>.txt</code> file containing formatting elements as <code>\n</code> for line breaks which I want to read and then rewrite its data until a specific line back to a new <code>.txt</code> file. My code looks like this:</p>
<pre><code>with open (filename) as f:
content=f.readlines()
with open("lf.txt", "w") as file1:
file1.write(str(content))
file1.close
</code></pre>
<p>The output file <code>lf.txt</code> is produced correctly but it throws away the formatting of the input file. Is there a way to keep the formatting of file 1 when rewriting it to a new file?</p>
| 1 | 2016-08-16T08:34:47Z | 38,970,454 | <p>You converted <code>content</code> to a string, while it's really a list of strings (lines).</p>
<p>Use <code>join</code> to convert the lines back to a string:</p>
<pre><code>file1.write(''.join(content))
</code></pre>
<p><code>join</code> is a string method, called here on an empty string object. The string it is called on is used as the separator between the joined items; in this case no separator is needed, so the lines are simply concatenated.</p>
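<p>A quick illustration of the difference — the <code>str(...)</code> form is what produced the mangled output file:</p>

```python
content = ['first line\n', 'second line\n', 'third line\n']  # as from readlines()

as_text = ''.join(content)  # empty separator: the '\n' endings are already there
as_repr = str(content)      # what the original code wrote: a list literal

# joining with a non-empty string puts that string between the items
csv_like = ','.join(['a', 'b', 'c'])
```

<p>Writing <code>as_text</code> reproduces the file exactly, while <code>as_repr</code> writes the brackets, quotes, and escaped <code>\n</code> characters of the list's representation.</p>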
| 3 | 2016-08-16T08:51:06Z | [
"python"
] |
win32gui.FindWindow Not finding window | 38,970,354 | <p>I'm trying to send a keystroke to an inactive TeraTerm Window using Pywin32. </p>
<p><a href="http://stackoverflow.com/a/38888131/3714940">This</a> answer led me to write this code:</p>
<pre><code>import win32gui
import win32con
import win32api
hwndMain = win32gui.FindWindow("Tera Term VT", None)
print hwndMain
hwndChild = win32gui.GetWindow(hwndMain, win32con.GW_CHILD)
win32api.PostMessage(hwndChild, win32con.WM_CHAR, 0x5b, 0)
</code></pre>
<p>but:<br>
<code>hwndMain = win32gui.FindWindow("Tera Term VT", None)</code> returns <code>0</code>, it can't find the window. </p>
<p>If I change <code>"Tera Term VT"</code> to <code>"Notepad"</code>, I can happily send keystrokes to an active Notepad window all day long. So, why can't I get the TeraTerm window?</p>
<p>According to the <a href="http://docs.activestate.com/activepython/2.7/pywin32/win32gui__FindWindow_meth.html" rel="nofollow">ActiveState documentation</a>:</p>
<blockquote>
<p>PyHANDLE = FindWindow(ClassName, WindowName)</p>
<p>ClassName : PyResourceId
Name or atom of window class to find, can be None<br>
WindowName : string
Title of window to find, can be None</p>
</blockquote>
<p>So how can I get the correct ClassName to use?</p>
<p>I've tried just about every variation of <code>Tera Term VT</code>, escaping the spaces: <code>"Tera\ Term\ VT"</code>, enclosing the whole in single quotes: <code>"'Tera Term VT'"</code>, but nothing works. I've even tried using the name of the process:<code>"ttermpro.exe"</code>, and included the child name in the string <code>"COM11:115200baud - Tera Term VT"</code> in my desperation, but nothing works.</p>
<p>Interestingly, this:</p>
<pre><code>import win32com.client
shell = win32com.client.Dispatch("WScript.Shell")
shell.AppActivate("Tera Term VT")
shell.SendKeys("\%i", 0)
</code></pre>
<p>works just fine, but brings the window to the foreground, which I don't wan't. The <code>Tera Term VT</code> string works fine in this instance though.</p>
| 1 | 2016-08-16T08:46:16Z | 38,970,546 | <p>The line </p>
<pre><code>shell.AppActivate("Tera Term VT")
</code></pre>
<p>works on the window title and therefore it works.<br>
You should be able to do the same with </p>
<pre><code>hwndMain = win32gui.FindWindow(None, "Tera Term VT")
</code></pre>
<p>that is, swapping the arguments so that it also works based on the window title </p>
<p>If you want to work based on the window class name you could use a tool like Spy++ with its <a href="https://msdn.microsoft.com/en-us/library/dd460750.aspx" rel="nofollow">Finder Tool</a> to target the Tera Term window and get its window class name from the properties</p>
| 1 | 2016-08-16T08:56:03Z | [
"python",
"windows",
"winapi",
"pywin32",
"teraterm"
] |
Heroku mLab MongoDB admin user not authorized for query in Flask application | 38,970,357 | <p>I have this section of code, which is part of my Flask application. I am using <code>flask_mongoengine</code>.</p>
<pre><code>app = Flask(__name__)
app.config.from_object('config')
db = MongoEngine(app)
from .models import *
@app.context_processor
def inject_config():
return dict(Config.objects.first(), version=version)
</code></pre>
<p><code>Config</code> is a class within <code>.models</code> that extends Document.</p>
<pre><code>class Config(Document):
title = StringField()
description = StringField()
keywords = StringField()
author = StringField()
version = StringField()
meta = {"collection": "web_config"}
</code></pre>
<p>Upon calling <code>Config.objects</code>, it's returning an error:</p>
<pre><code>pymongo.errors.OperationFailure: database error: not authorized for query on heroku_dptwtq1j.web_config
</code></pre>
<p>I'm logged in through the admin user. Why am I not authorized for query? Also, how do I authorize myself to query?</p>
<p>I have no trouble querying through another application that uses PyMongo, so why is it not working in Flask?</p>
| 0 | 2016-08-16T08:46:20Z | 39,030,871 | <p>So, answering my own question: the issue was probably in the flask_mongoengine library. I switched to just mongoengine and it worked fine.</p>
| 0 | 2016-08-19T03:56:11Z | [
"python",
"mongodb",
"heroku",
"flask",
"flask-mongoengine"
] |
netCDF convert to NaN for conditional | 38,970,423 | <p>I want to convert values from a netCDF file into NaN called <code>LandMask_NaN</code> when they are greater than zero. However, there seems to be a type mismatch between <code>LandMask</code> and what numpy will convert to NaNs. Any help much appreciated, code and info below:</p>
<pre><code> import netCDF4 as nc
import numpy as np
import matplotlib.pyplot as plt
import csv as cs
import pandas as pd
ncfile = nc.Dataset('C:\Users\mmso2\Google Drive\ENVI_I-PAC_2007_10_21_21_22_47.nc')#office machine
SARwind = ncfile.variables['sar_wind']
ModelWind = ncfile.variables['model_speed']
LON = ncfile.variables['longitude']
LAT = ncfile.variables['latitude']
LandMask = ncfile.variables['mask']
#clean the data of values = 70
SARwind_nan = SARwind[:].copy()
SARwind_nan[SARwind_nan == 0.0] = np.nan
SARwind_nan[SARwind_nan == 70.0] = np.nan
#clear the data of values where there is land
# % pos = land; neg = water
LandMask_NaN = LandMask[:].copy()
#LandMask_NaN[int(float(LandMask_NaN))]### will not convert
LandMask_NaN[LandMask_NaN >0.0] = np.nan #error here
</code></pre>
<p>The error I get is</p>
<pre><code>#error
line 37, in <module>
LandMask_NaN[LandMask_NaN >= 0.0] = np.nan
ValueError: cannot convert float NaN to integer
</code></pre>
<p>When trying </p>
<pre><code>LandMask_NaN[int(float(LandMask_NaN))]
</code></pre>
<p>or</p>
<pre><code>LandMask_NaN[float(int(LandMask_NaN))]
</code></pre>
<p>before trying to convert to NaN, I get</p>
<pre><code>TypeError: only length-1 arrays can be converted to Python scalars
</code></pre>
<p>When checking for the type of LandMask I get </p>
<p><code><type 'netCDF4._netCDF4.Variable'></code></p>
<p>I am not sure how to find out the variable type?</p>
<p>Update: Details of nc variables</p>
<pre><code>NetCDF dimension information:
Name: x
size: 848
type: WARNING: x does not contain variable attributes
Name: y
size: 972
type: WARNING: y does not contain variable attributes
Name: xfit
size: 6
type: WARNING: xfit does not contain variable attributes
NetCDF variable information:
Name: acquisition_time
dimensions: ()
size: 1.0
type: dtype('float64')
units: u'seconds since 2000-01-01 00:00:00'
long_name: u'Acqusition time in Julian seconds since 2000-01-01T00:00:00Z'
standard_name: u'time'
calendar: u'gregorian'
Name: nx
dimensions: ()
size: 1.0
type: dtype('int32')
units: u'1'
long_name: u'Number of elements in this file'
Name: ny
dimensions: ()
size: 1.0
type: dtype('int32')
units: u'1'
long_name: u'Number of lines in this file'
Name: nx0
dimensions: ()
size: 1.0
type: dtype('int32')
units: u'1'
long_name: u'Number of elements in SIO file'
Name: ny0
dimensions: ()
size: 1.0
type: dtype('int32')
units: u'1'
long_name: u'Number of lines in SIO file'
Name: nx00
dimensions: ()
size: 1.0
type: dtype('int32')
units: u'1'
long_name: u'Number of elements in original SAR file'
Name: ny00
dimensions: ()
size: 1.0
type: dtype('int32')
units: u'1'
long_name: u'Number of lines in original SAR file'
Name: xn
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'1'
long_name: u'1'
Name: yn
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'1'
long_name: u'1'
Name: line_size
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'm'
long_name: u'Line size'
Name: pixel_size
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'm'
long_name: u'Pixel size'
Name: model_time_js
dimensions: ()
size: 1.0
type: dtype('float64')
units: u'seconds since 2000-01-01 00:00:00'
long_name: u'Model time julian seconds since 2000-01-01T00:00:00Z'
Name: model_time_js_tau
dimensions: ()
size: 1.0
type: dtype('float64')
units: u'seconds since 2000-01-01 00:00:00'
long_name: u'Model time plus tau julian seconds since 2000-01-01T00:00:00Z'
Name: upper_left_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: upper_right_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: upper_left_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: upper_right_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: start_center_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: start_center_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: scene_center_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: scene_center_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: lower_left_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: lower_right_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: lower_left_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: lower_right_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: end_center_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: end_center_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: northernmost_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: southernmost_latitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_north'
long_name: u'degrees'
Name: easternmost_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: westernmost_longitude
dimensions: ()
size: 1.0
type: dtype('float32')
units: u'degrees_east'
long_name: u'degrees'
Name: nrcs_slope
dimensions: ()
size: 1.0
type: dtype('float32')
Name: nrcs_bias
dimensions: ()
size: 1.0
type: dtype('float32')
Name: sigma
dimensions: (u'y', u'x')
size: 824256
type: dtype('float32')
units: u'1'
long_name: u'Normalized Radar Cross Section.'
coordinates: u'longitude latitude'
Name: sar_wind
dimensions: (u'y', u'x')
size: 824256
type: dtype('float32')
units: u'm s-1'
long_name: u'SAR-derived wind speed at 10-m height neutral stability'
standard_name: u'wind_speed'
coordinates: u'longitude latitude'
Name: input_dir
dimensions: (u'y', u'x')
size: 824256
type: dtype('float32')
units: u'degrees'
long_name: u'Interpolated directions used for wind inversion'
coordinates: u'longitude latitude'
Name: model_speed
dimensions: (u'y', u'x')
size: 824256
type: dtype('float32')
units: u'm s-1'
long_name: u'Interpolated model wind speed (=1 for non model directions)'
standard_name: u'wind_speed'
coordinates: u'longitude latitude'
Name: mask
dimensions: (u'y', u'x')
size: 824256
type: dtype('int16')
units: u'1'
long_name: u'Interpolated land mask distance from shore line. Positive values land / Negative value water'
flag_values: array([-1, 0, 1], dtype=int16)
flag_meanings: u'water shore land'
coordinates: u'longitude latitude'
Name: longitude
dimensions: (u'y', u'x')
size: 824256
type: dtype('float32')
units: u'degrees_east'
long_name: u'Longitude array in decimal degrees'
standard_name: u'longitude'
Name: latitude
dimensions: (u'y', u'x')
size: 824256
type: dtype('float32')
units: u'degrees_north'
long_name: u'Latitude array in decimal degrees'
standard_name: u'latitude'
Name: rlook
dimensions: (u'y', u'x')
size: 824256
type: dtype('float32')
units: u'degrees'
long_name: u'Radar look direction array in decimal degrees from North'
coordinates: u'longitude latitude'
Name: incid
dimensions: (u'y', u'x')
size: 824256
type: dtype('float32')
units: u'degrees'
long_name: u'Incident angle array in degrees from nadir'
coordinates: u'longitude latitude'
Name: icemask
dimensions: (u'y', u'x')
size: 824256
type: dtype('int16')
units: u'1'
long_name: u'Ice mask 0=no_data 1=water 2=land 3=sea_ice 4=snow'
flag_values: array([0, 1, 2, 3, 4], dtype=int16)
flag_meanings: u'no_data water land sea_ice snow'
coordinates: u'longitude latitude'
Name: lon_coef
dimensions: (u'xfit',)
size: 6
type: dtype('float64')
units: u'1'
long_name: u'Coefficients to compute longitude in degs from pixel/lines'
Name: lon_xexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Longitude pixel exponents'
Name: lon_yexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Longitude line exponents'
Name: lat_coef
dimensions: (u'xfit',)
size: 6
type: dtype('float64')
units: u'1'
long_name: u'Coefficients to compute latitude in degs from pixel/lines'
Name: lat_xexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Latitude pixel exponents'
Name: lat_yexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Latitude line exponents'
Name: i_coef
dimensions: (u'xfit',)
size: 6
type: dtype('float64')
units: u'1'
long_name: u'Coefficients to compute pixel from longitue/latitude'
Name: i_xexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Pixel longitude exponents'
Name: i_yexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Pixel latitude exponents'
Name: j_coef
dimensions: (u'xfit',)
size: 6
type: dtype('float64')
units: u'1'
long_name: u'Coefficients to compute line from longitue/latitude'
Name: j_xexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Line longitude exponents'
Name: j_yexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Line latitude exponents'
Name: incid_coef
dimensions: (u'xfit',)
size: 6
type: dtype('float64')
units: u'1'
long_name: u'Coefficients to compute incid in degs from pixel/lines'
Name: incid_xexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Incid pixel exponents'
Name: incid_yexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Incid line exponents'
Name: rlook_coef
dimensions: (u'xfit',)
size: 6
type: dtype('float64')
units: u'1'
long_name: u'Coefficients to compute radar look direction in degs from pixel/lines'
Name: rlook_xexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Radar look direction pixel exponents'
Name: rlook_yexp
dimensions: (u'xfit',)
size: 6
type: dtype('float32')
units: u'1'
long_name: u'Radar look direction line exponents'
</code></pre>
| 1 | 2016-08-16T08:49:45Z | 38,977,628 | <p>It'll be helpful if you share the netcdf file, but here are a few ideas of what's going on:</p>
<p>Variables are not currently being read-in as numpy arrays. You need to add indexing parameters to cast them to arrays. Without the file, I'm not sure what they are, but surely some are multi-dimensional. For example: </p>
<pre><code>SARwind = ncfile.variables['sar_wind'][:,:]
ModelWind = ncfile.variables['model_speed'][:,:]
LON = ncfile.variables['longitude'][:]
LAT = ncfile.variables['latitude'][:]
LandMask = ncfile.variables['mask'][:,:]
</code></pre>
<p>Then you can simply assign <code>SARwind</code> to a new variable <code>SARwind_nan</code> and input <code>nan</code> values as you have been doing.</p>
<pre><code>SARwind_nan = SARwind
SARwind_nan[SARwind_nan == 0.0] = np.nan
SARwind_nan[SARwind_nan == 70.0] = np.nan
</code></pre>
<p>With <code>LandMask</code> properly read-in (again, I think this is very likely 2D, not 1D as you have), you can similarly assign to a new variable <code>LandMask_nan</code> and input <code>nan</code> values. Double check the type of <code>LandMask</code>, it's possibly an integer. <a href="http://www.unidata.ucar.edu/software/netcdf/workshops/2011/utilities/Ncdump.html" rel="nofollow">ncdump</a> or <a href="http://nco.sourceforge.net/nco.html#ncks-netCDF-Kitchen-Sink" rel="nofollow">ncks</a> are good tools for examining netcdf contents. </p>
<pre><code>LandMask_NaN = LandMask
LandMask_NaN[LandMask_NaN > 0.0] = np.nan
</code></pre>
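<p>On the integer point: the variable listing above shows <code>mask</code> has dtype <code>int16</code>, and NaN only exists for floating-point dtypes — which is exactly the <code>ValueError: cannot convert float NaN to integer</code> in the question. A minimal sketch with a stand-in array:</p>

```python
import numpy as np

# stand-in for the 'mask' variable, whose dtype in the file listing is int16
land_mask = np.array([[-1, 0], [1, 1]], dtype=np.int16)

# assigning np.nan into an int16 array raises
# "cannot convert float NaN to integer", so cast to float first
land_mask_nan = land_mask.astype(np.float64)
land_mask_nan[land_mask_nan > 0.0] = np.nan
```

<p>After the cast, positive (land) cells become NaN while the water/shore values are preserved as floats.</p>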
| 1 | 2016-08-16T14:30:28Z | [
"python",
"numpy",
null,
"netcdf4"
] |
Need to print the values of a dictionary by calling the keys within a range | 38,970,435 | <p>I made a dictionary with </p>
<pre><code>key = integer
value = string
</code></pre>
<p>I would like to make an .exe (probably it's a bit early for my level) where I can input 2 integers and it would print the values of the keys within the range I specified. </p>
<p>eg. </p>
<pre><code>{1: 'General 001-002', 3: 'Computer science', 7: 'Bibliography', 20: 'Library & Information', 30: 'General 030-068', 69: 'Museum Science', 70: 'Journalism & News Media', 80: 'General 080-099', 100: 'Philosophy 100-149', 150: 'Psychology',}
</code></pre>
<p>I'd like to input 5 and 85 and want it to print </p>
<pre><code>Bibliography
Library & Information
General 030-068
Museum Science
Journalism & News Media
General 080-099
</code></pre>
<p>EDIT:</p>
<p>Thank you all for the suggestions.</p>
<p>My code looks like this right now and is working as it should:</p>
<pre><code>text_file = open("dewey.txt", "rt") #Open the list containing the values
text = text_file.read().split('\n')
text_file.close()
num = []
with open ('dewey num.txt', 'rt') as in_file: #Open the list containing the keys
for line in in_file:
num.append(line[:3])
intenum = []
for i in num:
intenum.append(int(i)) #Make the keys as integers
dict1= dict(zip(intenum,text)) #Merge the 2 lists in a dictionary
print ('This software will give you the subjects for each stack.') #Instructions for the user
print ('I need to know the range first, please only use the first three numbers of the classmark.') #Instructions for the user
x=input('Please key in the starting number for this stack: ') #Input for start
y=input('Please key in the last number for this stack: ') #Input for stop
start = int(x)
stop = int(y)
values = [dict1[k] for k in range(start, stop + 1) if k in dict1] #Call the values in range
print('The subject(s) for this section: ')
for i in values: #Print the values in range
print (i)
</code></pre>
<p>My next step is to make it an .exe so I'm studying py2exe, pyinstaller and cx_freeze. If you have any insight on what's better for my script it would be much appreciated.</p>
<p>I'm using Python 3.5.2</p>
<p>Alessio</p>
| 0 | 2016-08-16T08:50:09Z | 38,970,536 | <p>The solution is quite short:</p>
<pre><code># this is your dictionary
d = {1: 'General 001-002', 3: 'Computer science', 7: .....}
start = 5
stop = 85
values = [d[k] for k in range(start, stop + 1) if k in d]
</code></pre>
<p>You get what you need in <code>values</code> - a list.</p>
<p>I omitted most of the dictionary, but it's stored in <code>d</code>.</p>
<p>I used a list comprehension, taking the dictionary value at the key <code>k</code> for <code>k</code> running over the range (<code>stop + 1</code> because <code>range</code> stops one short of its end value), provided that <code>k</code> is among the dictionary's keys.</p>
<p>The expression</p>
<pre><code>k in d
</code></pre>
<p>tests whether the key <code>k</code> is in the keys of the dict <code>d</code>, and it's the same as testing:</p>
<pre><code>k in d.keys()
</code></pre>
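<p>Putting it together with the dictionary from the question, as a runnable sketch:</p>

```python
d = {1: 'General 001-002', 3: 'Computer science', 7: 'Bibliography',
     20: 'Library & Information', 30: 'General 030-068', 69: 'Museum Science',
     70: 'Journalism & News Media', 80: 'General 080-099',
     100: 'Philosophy 100-149', 150: 'Psychology'}

start, stop = 5, 85
values = [d[k] for k in range(start, stop + 1) if k in d]
for v in values:
    print(v)
```

<p>This prints the six subjects from "Bibliography" through "General 080-099", exactly as in the question's expected output.</p>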
| 1 | 2016-08-16T08:55:27Z | [
"python",
"dictionary",
"exe"
] |
Need to print the values of a dictionary by calling the keys within a range | 38,970,435 | <p>I made a dictionary with </p>
<pre><code>key = integer
value = string
</code></pre>
<p>I would like to make an .exe (probably it's a bit early for my level) where I can input 2 integers and it would print the values of the keys within the range I specified. </p>
<p>eg. </p>
<pre><code>{1: 'General 001-002', 3: 'Computer science', 7: 'Bibliography', 20: 'Library & Information', 30: 'General 030-068', 69: 'Museum Science', 70: 'Journalism & News Media', 80: 'General 080-099', 100: 'Philosophy 100-149', 150: 'Psychology',}
</code></pre>
<p>I'd like to input 5 and 85 and want it to print </p>
<pre><code>Bibliography
Library & Information
General 030-068
Museum Science
Journalism & News Media
General 080-099
</code></pre>
<p>EDIT:</p>
<p>Thank you all for the suggestions.</p>
<p>My code looks like this right now and is working as it should:</p>
<pre><code>text_file = open("dewey.txt", "rt") #Open the list containing the values
text = text_file.read().split('\n')
text_file.close()
num = []
with open ('dewey num.txt', 'rt') as in_file: #Open the list containing the keys
for line in in_file:
num.append(line[:3])
intenum = []
for i in num:
intenum.append(int(i)) #Make the keys as integers
dict1= dict(zip(intenum,text)) #Merge the 2 lists in a dictionary
print ('This software will give you the subjects for each stack.') #Instructions for the user
print ('I need to know the range first, please only use the first three numbers of the classmark.') #Instructions for the user
x=input('Please key in the starting number for this stack: ') #Input for start
y=input('Please key in the last number for this stack: ') #Input for stop
start = int(x)
stop = int(y)
values = [dict1[k] for k in range(start, stop + 1) if k in dict1] #Call the values in range
print('The subject(s) for this section: ')
for i in values: #Print the values in range
print (i)
</code></pre>
<p>My next step is to make it an .exe so I'm studying py2exe, pyinstaller and cx_freeze. If you have any insight on what's better for my script it would be much appreciated.</p>
<p>I'm using Python 3.5.2</p>
<p>Alessio</p>
| 0 | 2016-08-16T08:50:09Z | 38,972,718 | <p>If the key integers are much larger than the number of items in the dictionary it may be inefficient to loop over every integer in the range looking for keys that are present in the dictionary. An alternative strategy is to filter the keys of the dictionary, sort them, and then build a list of the desired values. Eg,</p>
<pre><code>def key_range(d, lo, hi):
r = range(lo, hi + 1)
return [d[k] for k in sorted(k for k in d.keys() if k in r)]
data = {
1: 'General 001-002',
3: 'Computer science',
7: 'Bibliography',
20: 'Library & Information',
30: 'General 030-068',
69: 'Museum Science',
70: 'Journalism & News Media',
80: 'General 080-099',
100: 'Philosophy 100-149',
150: 'Psychology',
}
# Test
for item in key_range(data, 5, 85):
print(item)
</code></pre>
<p><strong>output</strong></p>
<pre class="lang-none prettyprint-override"><code>Bibliography
Library & Information
General 030-068
Museum Science
Journalism & News Media
General 080-099
</code></pre>
<hr>
<p>FWIW, here's a "functional" implementation of this algorithm.</p>
<pre><code>def key_range(d, lo, hi):
r = range(lo, hi + 1)
return list(map(d.get, sorted(filter(lambda k: k in r, d.keys()))))
</code></pre>
<p>In Python 2, that <code>list</code> call can be eliminated, since the Python 2 version of <code>filter</code> returns a list. OTOH, in Python 2 the <code>k in r</code> test is rather inefficient; it would be <em>much</em> better to do <code>lo <= k <= hi</code>.</p>
| 0 | 2016-08-16T10:39:19Z | [
"python",
"dictionary",
"exe"
] |
Change xaxis range values in a plot with matplotlib | 38,970,515 | <p>I am doing a plot of dataframe df_no_missing.</p>
<pre><code>df_no_missing.head()
TIMESTAMP datetime64[ns]
P_ACT_KW float64
PERIODE_TARIF object
P_SOUSCR float64
SITE object
TARIF object
depassement float64
dtype: object
Out[236]:
TIMESTAMP P_ACT_KW PERIODE_TARIF P_SOUSCR SITE TARIF depassement date time
2015-08-01 23:10:00 248.0 HC 425.0 ST GEREON TURPE_HTA5 0.0 2015-08-01 23:10:00
2015-08-01 23:20:00 244.0 HC 425.0 ST GEREON TURPE_HTA5 0.0 2015-08-01 23:20:00
2015-08-01 23:30:00 243.0 HC 425.0 ST GEREON TURPE_HTA5 0.0 2015-08-01 23:30:00
2015-08-01 23:40:00 238.0 HC 425.0 ST GEREON TURPE_HTA5 0.0 2015-08-01 23:40:00
2015-08-01 23:50:00 234.0 HC 425.0 ST GEREON TURPE_HTA5 0.0 2015-08-01 23:50:00
</code></pre>
<p>I did a plot representing the variation of P_ACT_KW and P_SOUSCR with TIMESTAMP.
The python code is below :</p>
<pre><code>fig = plt.figure(figsize=(11, 6), dpi=100)
ax = fig.add_subplot(111)
yearFmt = mdates.DateFormatter("%H:%M:%S")
ax.xaxis.set_major_formatter(yearFmt)
sns.set_style("darkgrid")
x = pd.to_datetime(df_no_missing.TIMESTAMP, format="%h:%m")
y = df_no_missing.P_ACT_KW
z = df_no_missing.P_SOUSCR
plt.plot(x, y, marker='o', label='P_SOUSCR')
plt.plot(x, z, marker='o', linestyle='--', color='g', label='P_ACT_KW')
plt.xlabel('temps')
plt.ylabel('puissance')
plt.title('variation de la puissance')
plt.legend()
plt.show()
</code></pre>
<p>I get a plot like this (the attached image) <a href="http://i.stack.imgur.com/OPJBv.png" rel="nofollow"><img src="http://i.stack.imgur.com/OPJBv.png" alt="enter image description here"></a></p>
<p>My question is: how can I show the timestamps on the x-axis? I mean I need to see the timestamps indicated in the dataframe, for example here: 23:10:00, 23:20:00, 23:30:00, 23:40:00, 23:50:00, and not 01:00, 04:00, 07:00 ...</p>
<p>Thank you if you can help me</p>
<p>Bests</p>
| 2 | 2016-08-16T08:54:28Z | 38,972,469 | <p>You can add <code>set_major_locator</code> before <code>set_major_formatter</code>:</p>
<pre><code>import matplotlib.dates as mdates
yearFmt = mdates.DateFormatter("%H:%M:%S")
minx = pd.to_datetime( '2015-08-01 23:10:00' )
maxx = pd.to_datetime( '2015-08-01 23:50:00' )
ax.set_xlim( [ minx, maxx ] )
ax.xaxis.set_major_locator( mdates.MinuteLocator(byminute=range(0,60,10)) )
ax.xaxis.set_major_formatter( yearFmt )
</code></pre>
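<p>A minimal, self-contained sketch of the locator/formatter combination (assuming <code>matplotlib</code> is available; the <code>Agg</code> backend avoids needing a display, and the sample times mirror the question's 10-minute data):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
from datetime import datetime, timedelta

times = [datetime(2015, 8, 1, 23, 10) + timedelta(minutes=10 * i) for i in range(5)]
values = [248.0, 244.0, 243.0, 238.0, 234.0]

fig, ax = plt.subplots()
ax.plot(times, values, marker='o')
ax.set_xlim(times[0], times[-1])
ax.xaxis.set_major_locator(mdates.MinuteLocator(byminute=range(0, 60, 10)))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
fig.canvas.draw()  # force tick computation so the labels exist

labels = [t.get_text() for t in ax.get_xticklabels()]
print(labels)
```

<p>With the locator restricted to every 10th minute and the limits clamped to the data, the ticks land on 23:10:00 through 23:50:00 instead of matplotlib's default hourly choices.</p>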
| 0 | 2016-08-16T10:26:34Z | [
"python",
"matplotlib",
"plot"
] |
PyInstaller, onefile, no console. missing bootloader error | 38,970,626 | <p>I'm trying to compile my Kivy applications to a single windows exe. </p>
<p>A sample of my .spec file:</p>
<pre><code>from kivy.deps import sdl2, glew
exe = EXE(pyz,Tree('C:\\Users\\me\\PycharmProjects\\test\\'),
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
*[Tree(p) for p in (sdl2.dep_bins + glew.dep_bins)],
name='test',
debug=False,
strip=False,
upx=True,
console= False)
</code></pre>
<p>It works with a console, i.e. when console=True. But when I try to make it windowed, PyInstaller complains about a missing "pre-compiled bootloader". I've checked my bootloader folder and I have both the Windows 64-bit and 32-bit bootloaders there.</p>
<p>Any suggestions?</p>
| 0 | 2016-08-16T08:59:19Z | 38,971,797 | <p>The <code>runw</code> file is missing from your Windows 32-bit bootloader directory. Windowed builds (<code>console=False</code>) use the <code>runw</code> bootloader instead of <code>run</code>, so PyInstaller reports a missing pre-compiled bootloader even though the console bootloaders are present. Reinstalling PyInstaller (or rebuilding the bootloaders) should restore it.</p>
| 1 | 2016-08-16T09:54:23Z | [
"python",
"kivy",
"pyinstaller"
] |
python merge multiple xmls to single CSV | 38,970,641 | <p>Here's a snippet from a script I'm trying to put together to make my life easier. I have a bunch of XML files from different API sources. They have different items in them and different numbers of fields. What they all have in common is a field like "clientid". </p>
<p>What I want to do is end up with a CSV that has the combined headers of all the XMLs with their corresponding data. So I need to be able to make sure that all the info from the "clientid" of 12345 adds onto the end of the row for the client id of the same name in the "itemid" pull.</p>
<p>item data:</p>
<pre><code><item>
<id>99899</id>
<client-id>12345</client-id>
</code></pre>
<p>part of script:</p>
<pre><code>def parseXML():
### Parse XML and convert to CSV ###
#Get XML Source #
tree = ET.fromstring(getdata)
# open a file for writing
xmlTest01 = open('xmlTest01.csv', 'w')
# create the csv writer object
csvwriter = csv.writer(xmlTest01)
item_head = []
count = 0
for member in tree.findall('item'):
item = []
if count == 0:
id = member.find('id').tag
item_head.append(id)
clientid = member.find('client-id').tag
item_head.append(clientid)
id = member.find('id').text
item.append(id)
clientid = member.find('client-id').text
item.append(clientid)
csvwriter.writerow(item)
xmlTest01.close()
</code></pre>
<p>The next set of data has this in it:</p>
<pre><code><client>
<id>12345</id>
<name>Clients name</name>
<current type="boolean">true</current>
<status>good</status>
</code></pre>
<p>So I want to check the row in the previous set of data for the clientid of the same and then add name, current and status to the end of that row.</p>
<p>Any ideas on the best way to do this? I have about 5-7 of these types of files to merge. Should I be trying to combine the files first before converting them to CSV? This might be OK if they all had similar content, but they don't.</p>
<p>Desired output which combines values of both xml files:</p>
<pre><code>id,clientid,name,current,status
99899,12345,Clients name,true,good
</code></pre>
| 0 | 2016-08-16T09:00:15Z | 38,982,496 | <p>Consider iterating across the three files and conditionally check for client ids. Parse xml values to a list that you write to csv file:</p>
<pre><code>import csv
import xml.etree.ElementTree as ET
def parseXML():
projecttree = ET.parse('projects.xml')
clienttree = ET.parse('clients.xml')
teamtasktree = ET.parse('teammembers.xml')
projectroot = projecttree.getroot()
clientroot = clienttree.getroot()
teamtaskroot = teamtasktree.getroot()
data = []
for i in projectroot.iter('project'):
for j in clientroot.iter('client'):
clientid = i.find('client-id').text
if clientid == j.find('id').text:
data.append(i.find('id').text)
data.append(j.find('id').text)
data.append(j.find('name').text)
data.append(j.find('active').text)
data.append(i.find('name').text)
data.append(i.find('active').text)
data.append(i.find('billable').text)
data.append(i.find('bill-by').text)
data.append(i.find('hourly-rate').text)
data.append(i.find('budget').text)
data.append(i.find('over-budget-notification-percentage').text)
data.append(i.find('created-at').text)
data.append(i.find('updated-at').text)
data.append(i.find('starts-on').text)
data.append(i.find('ends-on').text)
data.append(i.find('estimate').text)
data.append(i.find('estimate-by').text)
data.append(i.find('notes').text)
data.append(i.find('cost-budget').text)
cnt = 1
for tm in teamtaskroot.iter('team_members'):
for item in tm.iter('item'):
if item.find('cid').text == clientid and cnt <= 3:
data.append(item.find('full_name').text)
data.append(item.find('cost_rate').text)
cnt += 1
cnt = 1
for tk in teamtaskroot.iter('tasks'):
for item in tk.iter('item'):
if item.find('cid').text == clientid and cnt <= 2:
data.append(item.find('task_id').text)
data.append(item.find('total_hours').text)
cnt += 1
with open('Output.csv', 'w') as f:
csvwriter = csv.writer(f, lineterminator = '\n')
csvwriter.writerow(['Pid', 'Clientid', 'ClientName', 'ClientActive', 'ProjectName', 'ProjectActive',
'Billable', 'BillBy', 'HourlyRate', 'Budget', 'OverbudgetNotificationPercentage',
'CreatedAt', 'UpdatedAt', 'StartsOn', 'EndsOn', 'Estimate', 'EstimateBy',
'Notes', 'CostBudget', 'TeammemberName1', 'CostRate1', 'TeammemberName2', 'CostRate2',
'TeammemberName3', 'CostRate3', 'TaskId1', 'TotalHours1', 'TaskId2', 'TotalHours2'])
csvwriter.writerow(data)
if __name__ == "__main__":
parseXML()
</code></pre>
<p><strong>Output</strong></p>
<pre><code>Pid,Clientid,ClientName,ClientActive,ProjectName,ProjectActive,Billable,
BillBy,HourlyRate,Budget,OverbudgetNotificationPercentage,CreatedAt,
UpdatedAt,StartsOn,EndsOn,Estimate,EstimateBy,Notes,CostBudget,TeammemberName
1,CostRate1,TeammemberName2,CostRate2,TeammemberName3,CostRate3,
TaskId1,TotalHours1,TaskId2,TotalHours2
11493770,4708336,AFB,true,Services - Consulting - AH,true,true,Project,
421.28,16.0,80.0,2016-08-16T03:22:51Z,
2016-08-16T03:22:51Z,,,16.0,project,Random
notes,,BobR,76.0,BobR,76.0,BobR,76.0,6357137,0.0,6357138,0.0
</code></pre>
| 0 | 2016-08-16T18:56:45Z | [
"python",
"xml",
"csv",
"merge"
] |
python merge multiple xmls to single CSV | 38,970,641 | <p>Here's a snippet from a script I'm trying to put together to make my life easier. I have a bunch of XML files from different API sources. They have different items in them and different numbers of fields. What they all have in common is a field like "clientid". </p>
<p>What I want to do is end up with a CSV that has the combined headers of all the XMLs with their corresponding data. So I need to be able to make sure that all the info from the "clientid" of 12345 adds onto the end of the row for the client id of the same name in the "itemid" pull.</p>
<p>item data:</p>
<pre><code><item>
<id>99899</id>
<client-id>12345</client-id>
</code></pre>
<p>part of script:</p>
<pre><code>def parseXML():
### Parse XML and convert to CSV ###
#Get XML Source #
tree = ET.fromstring(getdata)
# open a file for writing
xmlTest01 = open('xmlTest01.csv', 'w')
# create the csv writer object
csvwriter = csv.writer(xmlTest01)
item_head = []
count = 0
for member in tree.findall('item'):
item = []
if count == 0:
id = member.find('id').tag
item_head.append(id)
clientid = member.find('client-id').tag
item_head.append(clientid)
id = member.find('id').text
item.append(id)
clientid = member.find('client-id').text
item.append(clientid)
csvwriter.writerow(item)
xmlTest01.close()
</code></pre>
<p>The next set of data has this in it:</p>
<pre><code><client>
<id>12345</id>
<name>Clients name</name>
<current type="boolean">true</current>
<status>good</status>
</code></pre>
<p>So I want to check the row in the previous set of data for the clientid of the same and then add name, current and status to the end of that row.</p>
<p>Any ideas on the best way to do this? I have about 5-7 of these types of files to merge. Should I be trying to combine the files first before converting them to CSV? This might be OK if they all had similar content, but they don't.</p>
<p>Desired output which combines values of both xml files:</p>
<pre><code>id,clientid,name,current,status
99899,12345,Clients name,true,good
</code></pre>
| 0 | 2016-08-16T09:00:15Z | 39,049,541 | <p>Additionally, consider <a href="https://www.w3.org/Style/XSL/" rel="nofollow">XSLT</a>, the special purpose transformation language that can directly transform XML to CSV even parsing from other XML files using its <code>document()</code> function. Python's <a href="http://lxml.de/" rel="nofollow">lxml</a> module can process XSLT 1.0 scripts. Be sure all three xmls reside in same directory.</p>
<p><strong>XSLT</strong> Script <em>(save as .xsl file --a special .xml file-- to be called below in Python)</em></p>
<pre><code><xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
<xsl:output version="1.0" encoding="UTF-8" method="text" indent="yes" omit-xml-declaration="yes"/>
<xsl:strip-space elements="*"/>
<xsl:template match="/projects">
<xsl:copy>
<xsl:text>Pid,Clientid,ClientName,ClientActive,ProjectName,ProjectActive,Billable,BillBy,HourlyRate,</xsl:text>
<xsl:text>Budget,OverbudgetNotificationPercentage,CreatedAt,UpdatedAt,StartsOn,EndsOn,Estimate,EstimateBy,</xsl:text>
<xsl:text>Notes,CostBudget,TeammemberName1,CostRate1,TeammemberName2,CostRate2,TeammemberName3,CostRate3,</xsl:text>
<xsl:text>TaskId1,TotalHours1,TaskId2,TotalHours2&#xa;</xsl:text>
<xsl:apply-templates select="project"/>
</xsl:copy>
</xsl:template>
<xsl:template match="project">
<xsl:variable name="clientid" select="client-id"/>
<xsl:value-of select="concat(id, ',')"/>
<xsl:variable name="delimiter"><xsl:text>&quot;,&quot;</xsl:text></xsl:variable>
<xsl:for-each select="document('clients.xml')/clients/client[id=$clientid]/*
[local-name()='id' or local-name()='name' or local-name()='active']">
<xsl:value-of select="." />
<xsl:if test="position() != last()">
<xsl:text>,</xsl:text>
</xsl:if>
</xsl:for-each>
<xsl:value-of select="concat(',',name,',',active,',',billable,',',bill-by,',',hourly-rate,',',budget,',',
over-budget-notification-percentage,',',created-at,',',updated-at,',',starts-on,',',ends-on,',',
estimate,',',estimate-by,',',notes,',',cost-budget,',')"/>
<xsl:for-each select="document('teammembers.xml')/root/team_members/item[cid=$clientid]/*
[local-name()='full_name' or local-name()='cost_rate']">
<xsl:if test="position() &lt; 5">
<xsl:value-of select="." />
<xsl:text>,</xsl:text>
</xsl:if>
</xsl:for-each>
<xsl:for-each select="document('ClientItems_teammembers.xml')/root/tasks/item[cid=$clientid]/*
[local-name()='task_id' or local-name()='total_hours']">
<xsl:if test="position() &lt; 5">
<xsl:value-of select="." />
<xsl:if test="position() != last()">
<xsl:text>,</xsl:text>
</xsl:if>
</xsl:if>
</xsl:for-each>
<xsl:text>&#xa;</xsl:text>
</xsl:template>
</xsl:transform>
</code></pre>
<p><strong>Python</strong> script <em>(transforming projects.xml and reading in other two in XSLT)</em></p>
<pre><code>import lxml.etree as ET
def transformXML():
dom = ET.parse('projects.xml')
xslt = ET.parse('XSLTscript.xsl')
transform = ET.XSLT(xslt)
newdom = transform(dom)
with open('Output.csv', 'w') as f:
f.write(str(newdom))
if __name__ == "__main__":
transformXML()
</code></pre>
<p><strong>Output</strong></p>
<pre><code>Pid,Clientid,ClientName,ClientActive,ProjectName,ProjectActive,Billable,
BillBy,HourlyRate,Budget,OverbudgetNotificationPercentage,CreatedAt,
UpdatedAt,StartsOn,EndsOn,Estimate,EstimateBy,Notes,CostBudget,TeammemberName
1,CostRate1,TeammemberName2,CostRate2,TeammemberName3,CostRate3,
TaskId1,TotalHours1,TaskId2,TotalHours2
11493770,4708336,AFB,true,Services - Consulting - AH,true,true,Project,
421.28,16.0,80.0,2016-08-16T03:22:51Z,
2016-08-16T03:22:51Z,,,16.0,project,Random
notes,,BobR,76.0,BobR,76.0,BobR,76.0,6357137,0.0,6357138,0.0,
</code></pre>
| 0 | 2016-08-20T01:05:35Z | [
"python",
"xml",
"csv",
"merge"
] |
I cant to run my script automatically (cron) Raspberry Pi | 38,970,659 | <p>I need to run my script <code>time.py</code> every day.
I execute <code>crontab -e</code> and add</p>
<pre><code>15 19 * * * pi /usr/bin/python /home/Desktop/miBBDD/time.py
</code></pre>
<p>So my script should run every day at 19:15, but it doesn't.
If I execute <code>service cron status</code> I can see <code>active running</code>, and my file has read and write permissions.</p>
<p>Does somebody know what the problem is? (My script works fine.)</p>
| 0 | 2016-08-16T09:00:58Z | 38,971,284 | <p>Have you edited the crontab with sudo? </p>
<p>Do a quick test: include a line in the crontab (sudo crontab -e) that calls a test line every minute, for example:</p>
<pre><code>* * * * * logger "Is it working?"
</code></pre>
<p>This line will execute every minute and you should see the result in <code>/var/log/syslog</code>. If every minute you see a new "Is it working?" line printed there, it means you are working in the right direction and your line in the crontab should work (remember <code>sudo</code>).</p>
| 0 | 2016-08-16T09:30:50Z | [
"python",
"linux",
"cron",
"raspberry-pi2"
] |
I cant to run my script automatically (cron) Raspberry Pi | 38,970,659 | <p>I need to run my script <code>time.py</code> every day.
I execute <code>crontab -e</code> and add</p>
<pre><code>15 19 * * * pi /usr/bin/python /home/Desktop/miBBDD/time.py
</code></pre>
<p>So my script should run every day at 19:15, but it doesn't.
If I execute <code>service cron status</code> I can see <code>active running</code>, and my file has read and write permissions.</p>
<p>Does somebody know what the problem is? (My script works fine.)</p>
| 0 | 2016-08-16T09:00:58Z | 38,971,515 | <p>I had a similar issue some time ago. Delete the user field from your crontab entry: per-user crontabs (the ones edited with <code>crontab -e</code>) do not take a username column; only <code>/etc/crontab</code> does. </p>
<pre><code>15 19 * * * /usr/bin/python /home/Desktop/miBBDD/time.py
</code></pre>
<p>Compare with <a href="http://unix.stackexchange.com/questions/278486/crontab-task-not-trigerred/278930#278930">this answer</a>.</p>
<p>EDIT: just one more thing: is the path valid? Shouldn't it be <code>/home/pi/Desktop...</code>?</p>
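<p>A quick Python illustration of why the extra field breaks a per-user crontab: in a user crontab the command starts at the sixth whitespace-separated field, so a stray <code>pi</code> there makes cron try to run a program called <code>pi</code>:</p>

```python
# Per-user crontab entry (crontab -e): five time fields, then the command.
user_line = '15 19 * * * /usr/bin/python /home/pi/Desktop/miBBDD/time.py'

# System crontab entry (/etc/crontab): a sixth field names the user.
system_line = '15 19 * * * pi /usr/bin/python /home/pi/Desktop/miBBDD/time.py'

print(user_line.split()[5])    # /usr/bin/python  <- what cron will execute
print(system_line.split()[5])  # pi               <- only valid in /etc/crontab
```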
| 0 | 2016-08-16T09:41:51Z | [
"python",
"linux",
"cron",
"raspberry-pi2"
] |
After creating NAN category, aggregation on groupby object went wrong | 38,970,712 | <p>I first create some data:</p>
<pre><code>df = pd.DataFrame(data = {"A":np.random.random_integers(1,10,10), "B":np.arange(1,11,1)})
df.ix[[3, 4], "A"] = np.nan
</code></pre>
<p>Then I got a pd dataframe with Nans</p>
<pre><code> A B
0 7 1
1 1 2
2 3 3
3 NaN 4
4 NaN 5
5 9 6
6 2 7
7 10 8
8 6 9
9 6 10
</code></pre>
<p>I try to group column A using the pd.cut function and use aggregation functions on each group</p>
<pre><code>bin_S = pd.cut(df.A, [-math.inf, 3,5,8,9, math.inf],right= False)
df.groupby(bin_S).agg("count")
</code></pre>
<p>But the Nan values are not grouped( no Nan category)</p>
<pre><code> A B
A
[-inf, 3) 2 2
[3, 5) 1 1
[5, 8) 3 3
[8, 9) 0 0
[9, inf) 2 2
</code></pre>
<p>Then I tried to add a new category called "Missing" by:</p>
<pre><code>bin_S.cat.add_categories("Missing", inplace = True)
bin_S.fillna(value = "Missing", inplace = True
</code></pre>
<p>The binning series looks fine. However, the groupby aggregation is not what I expected. </p>
<pre><code>df.groupby(bin_S).agg("count")
</code></pre>
<p>Result is,</p>
<pre><code> A B
A
[-inf, 3) 2 2
[3, 5) 1 1
[5, 8) 3 3
[8, 9) 0 0
[9, inf) 2 2
Missing 0 2
</code></pre>
<p>I am expecting column A and column B to be exactly the same. Why are they different in the "Missing" row? The real problem involves more complicated operations on each group, and this issue really bothers me since grouping NaN values might be unreliable. </p>
| 2 | 2016-08-16T09:04:05Z | 38,970,884 | <p><code>'count'</code> is going to skip <code>NaN</code>. You can use <code>'size'</code></p>
<pre><code>df.groupby(bin_S).agg(["size"])
</code></pre>
<p><a href="http://i.stack.imgur.com/9NeH6.png" rel="nofollow"><img src="http://i.stack.imgur.com/9NeH6.png" alt="enter image description here"></a></p>
| 3 | 2016-08-16T09:12:41Z | [
"python",
"pandas"
] |
Why i get wrong amount of documents? | 38,970,724 | <p>I need to find all users (unique ids) who have done 2 events, "Level 1" and "Purchase Hard":</p>
<pre><code>x = list(db.events.distinct("uid", {"eventName": "Level 1"}) and db.events.distinct("uid", {"eventName": "Purchase Hard"}))
</code></pre>
<p>This code gives around 1600 documents, so I should end up with that many documents. I then fetch the documents with this code:</p>
<pre><code>lel = list(db.events.aggregate(
[
{"$match": {"eventName" : "Level 1"}},
{"$group": {"_id": "$uid", "dateofstart": {"$addToSet": "$updated_at"}}}
]))
</code></pre>
<p>And then I try to get only documents I need like this:</p>
<pre><code>dateoflvl1pay = list()
for players in lel:
kost = players["_id"]
if kost in x:
dateoflvl1pay.append(players)
</code></pre>
<p>But as a result I get only 850 documents. Can you please help me find out what is wrong?</p>
| 0 | 2016-08-16T09:04:42Z | 38,970,906 | <p>You are getting fewer documents because your <strong><a href="https://docs.mongodb.com/manual/reference/operator/aggregation/match/#pipe._S_match" rel="nofollow"><code>$match</code></a></strong> filter is excluding the other event "Purchase Hard". You need to include it with the other event using the <strong><a href="https://docs.mongodb.com/manual/reference/operator/query/in/" rel="nofollow"><code>$in</code></a></strong> operator as follows:</p>
<pre><code>db.events.aggregate([
{ "$match": { "eventName": { "$in": ["Level 1", "Purchase Hard"] } } },
{ "$group": { "_id": "$uid", "dateofstart": { "$addToSet": "$updated_at" } } }
])
</code></pre>
<p>which you can also query for the distinct <code>uid</code> as</p>
<pre><code>db.events.distinct("uid", { "eventName": { "$in": ["Level 1", "Purchase Hard"] } })
</code></pre>
| 0 | 2016-08-16T09:13:34Z | [
"python",
"mongodb"
] |
How to decode a mime part of a message and get a **unicode** string in Python 2.7? | 38,970,760 | <p>Here is a method which tries to get the html part of an email message:</p>
<pre><code>from __future__ import absolute_import, division, unicode_literals, print_function
import email
html_mail_quoted_printable=b'''Subject: =?ISO-8859-1?Q?WG=3A_Wasenstra=DFe_84_in_32052_Hold_Stau?=
MIME-Version: 1.0
Content-type: multipart/mixed;
Boundary="0__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253"
--0__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253
Content-type: multipart/alternative;
Boundary="1__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253"
--1__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253
Content-type: text/plain; charset=ISO-8859-1
Content-transfer-encoding: quoted-printable
Freundliche Gr=FC=DFe
--1__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253
Content-type: text/html; charset=ISO-8859-1
Content-Disposition: inline
Content-transfer-encoding: quoted-printable
<html><body>
Freundliche Gr=FC=DFe
</body></html>
--1__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253--
--0__=4EBBF4C4DFD012538f9e8a93df938690918c4EBBF4C4DFD01253--
'''
def get_html_part(msg):
for part in msg.walk():
if part.get_content_type() == 'text/html':
return part.get_payload(decode=True)
msg=email.message_from_string(html_mail_quoted_printable)
html=get_html_part(msg)
print(type(html))
print(html)
</code></pre>
<p>Output:</p>
<pre><code><type 'str'>
<html><body>
Freundliche Gr��e
</body></html>
</code></pre>
<p>Unfortunately I get a byte string. I would like to have unicode string.</p>
<p>According to <a href="http://stackoverflow.com/questions/27550567/python-email-payload-decoding">this answer</a> <code>msg.get_payload(decode=True)</code> should do the magic. But it does not in this case.</p>
<p>How to decode a mime part of a message and get a <strong>unicode</strong> string in Python 2.7?</p>
| 3 | 2016-08-16T09:06:12Z | 39,063,834 | <blockquote>
<p>Unfortunately I get a byte string. I would like to have unicode string.</p>
</blockquote>
<p>The <code>decode=True</code> parameter to <code>get_payload</code> only decodes the <code>Content-Transfer-Encoding</code> wrapper, the <code>=</code>-encoding in this message. To get from there to characters is one of the many things the <code>email</code> package makes you do yourself:</p>
<pre><code>bytes = part.get_payload(decode=True)
charset = part.get_content_charset('iso-8859-1')
chars = bytes.decode(charset, 'replace')
</code></pre>
<p>(<code>iso-8859-1</code> being the fallback in case the message specifies no encoding.)</p>
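<p>A self-contained sketch of those three steps (written in Python 3 syntax, with a minimal single-part quoted-printable message modeled on the question's):</p>

```python
import email

raw = """\
Content-Type: text/html; charset=ISO-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

<html><body>Freundliche Gr=FC=DFe</body></html>
"""

part = email.message_from_string(raw)
data = part.get_payload(decode=True)              # undoes quoted-printable -> bytes
charset = part.get_content_charset('iso-8859-1')  # falls back if none declared
text = data.decode(charset, 'replace')            # bytes -> unicode str

print(text)  # <html><body>Freundliche Grüße</body></html>
```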
| 4 | 2016-08-21T11:46:33Z | [
"python",
"python-2.7",
"email",
"unicode"
] |
How can we append unbalanced row on Pandas dataframe in the fastest way? | 38,970,767 | <p>I am going to make a pandas DataFrame from an unbalanced CSV file.</p>
<p>But the speed is too slow when I make it in a brute force way.</p>
<p>Here, I have the list of columns which can make Schema of Dataframe</p>
<p>And a bunch of rows in a file.</p>
<p>How could I make it fast?</p>
<p>(Should I make empty list in a different way?)</p>
<pre><code>import pandas as pd
import numpy as np
for key in column_name:
newdf = pd.DataFrame(columns = column_name[key])
with open(str(key) +'.csv') as f:
reader1 = csv.reader(f)
index = 0
print key, sum(1 for row in csv.reader(open(str(key) +'.csv')))
for row in reader1:
if index % 10000 == 0:
print index
new_row = [np.nan]*len(column_name[key])
for i in range(len(row)):
new_row[i] = row[i]
newdf.loc[index] = new_row
index = index+1
newdf.to_csv(key+"_with_column_name"+".csv")
</code></pre>
| 2 | 2016-08-16T09:06:38Z | 38,971,477 | <p><code>pd.DataFrame</code> can build a DataFrame from a list of ragged rows:</p>
<pre><code>In [17]: pd.DataFrame([['a','b'],[1,2,3]])
Out[17]:
0 1 2
0 a b NaN
1 1 2 3.0
</code></pre>
<p>Moreover, it is faster to build the DataFrame with one call to <code>pd.DataFrame</code>
than many calls to <code>newdf.loc[index] = new_row</code> in a loop.</p>
<hr>
<pre><code>import numpy as np
import pandas as pd
# column_name = {'foo':['A','B']}
for key in column_name:
with open('{}.csv'.format(key), 'r') as f:
reader1 = csv.reader(f)
data = list(reader1)
nrows = len(data)
print('{}, {}'.format(key, nrows))
newdf = pd.DataFrame(data, columns=column_name[key])
# do stuff with newdf (1)
newdf.to_csv('{}_with_column_name.csv'.format(key))
</code></pre>
<hr>
<p><sup>(1)</sup> Note that if your sole purpose is to create the a new CSV with
column names, then it would be quicker to simply write the column names to the
new file and then copy the contents from the old CSV into the new CSV. Building a
DataFrame would not be necessary in this case and would slow down performance.</p>
<pre><code>for key in column_name:
newname = '{}_with_column_name.csv'.format(key)
with open('{}.csv'.format(key), 'r') as f, open(newname, 'w') as g:
g.write(','.join(column_name[key])+'\n') # assuming no quotation necessary
g.write(f.read())
</code></pre>
| 2 | 2016-08-16T09:39:52Z | [
"python",
"performance",
"pandas"
] |
Session data corrupted in django | 38,970,832 | <p>Every time I go to my signup page, I receive this error: </p>
<pre><code>Session data corrupted
</code></pre>
<p>When I try to sign up anyway, the POST request status is 302 and the User is still created, but no email is saved for the registered user.</p>
<p>Why am I getting that error and how can I fix it?</p>
<p>Thanks! </p>
| 1 | 2016-08-16T09:09:49Z | 38,972,036 | <p>You are getting this error because of this line: <a href="https://github.com/django/django/blob/master/django/contrib/sessions/backends/base.py#L109" rel="nofollow">https://github.com/django/django/blob/master/django/contrib/sessions/backends/base.py#L109</a></p>
<p>Apparently, there's something went terribly wrong with encryption of session data.</p>
<p>How to fix it? I'm not sure, I have a couple of ideas though:</p>
<ul>
<li>Do you use a custom session class?</li>
<li>Do you use your Django session in another project?</li>
</ul>
| 0 | 2016-08-16T10:06:19Z | [
"python",
"django",
"session"
] |
Session data corrupted in django | 38,970,832 | <p>Every time I go to my signup page, I receive this error</p>
<pre><code>Session data corrupted
</code></pre>
<p>When I try to sign up anyway, the POST request status is 302 and the User is created, but no email is saved for the registered user.</p>
<p>Why am I getting that error, and how can I fix it?</p>
<p>Thanks! </p>
| 1 | 2016-08-16T09:09:49Z | 39,729,372 | <p>Sorry for getting to this post late, but by any chance did you change the SECRET_KEY variable in your project? Sessions are signed with this key, so if you changed it, all existing sessions were invalidated. It's not a big deal: users whose sessions existed before the change just need to log in again.</p>
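The mechanics can be sketched with plain <code>hmac</code>. This mimics only the signing idea, not Django's exact cookie format, and the keys and payload below are made up: the stored signature was computed with the old key, so verification against a new key fails, which surfaces as "Session data corrupted".

```python
import hashlib
import hmac

def sign(data: bytes, secret: bytes) -> bytes:
    # Deterministic HMAC signature of the session payload.
    return hmac.new(secret, data, hashlib.sha256).hexdigest().encode()

session = b'{"user_id": 42}'
old_key = b'old-secret-key'
new_key = b'new-secret-key'

# Signature stored in the cookie alongside the data, made with the old key.
signature = sign(session, old_key)

print(hmac.compare_digest(signature, sign(session, old_key)))  # True
print(hmac.compare_digest(signature, sign(session, new_key)))  # False
```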
| 0 | 2016-09-27T16:03:01Z | [
"python",
"django",
"session"
] |
pass array of dictionaries to python | 38,970,840 | <p>I have a bash script that builds a dictionary-like structure over multiple iterations, as shown below:</p>
<pre><code>{ "a":"b", "c":"d", "e":"f"}
{ "a1":"b1", "c1":"d1", "e1":"f1", "g1":"h1" }
</code></pre>
<p>I have appended all of them to an array in the shell script; they are fed as input to a Python script, and I want the above data parsed as a list of dictionaries.</p>
<p>I tried something like this and it didn't work.</p>
<pre><code>var=({ "a":"b", "c":"d", "e":"f"} { "a1":"b1", "c1":"d1", "e1":"f1", "g1":"h1" })
function plot_graph {
RESULT="$1" python - <<END
from __future__ import print_function
import pygal
import os
import sys
def main():
result = os.getenv('RESULT')
print(result)
if __name__ == "__main__":
main()
END
}
plot_graph ${var[@]}
</code></pre>
<p>Arguments are being split and they are not being treated as a single variable.</p>
<pre><code>The output will be: [ {"a":"b", ]
</code></pre>
<p>whereas I want the entire var value to be read as one string, which I can then split into multiple dictionaries.</p>
<p>Please help me get over this.</p>
| -1 | 2016-08-16T09:10:08Z | 38,971,335 | <p>The problem seems to be the unquoted expansion in <code>plot_graph ${var[@]}</code>: word splitting breaks the value into multiple arguments.</p>
<p>The following code should work:</p>
<pre><code>var="({ 'a':'b', 'c':'d', 'e':'f'} { 'a1':'b1', 'c1':'d1', 'e1':'f1', 'g1':'h1' })"
echo $var
function plot_graph {
echo $1
RESULT="$1" python - <<END
from __future__ import print_function
import os
import sys
def main():
result = os.getenv('RESULT')
print(result)
if __name__ == "__main__":
main()
END
}
plot_graph "$var"
</code></pre>
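As an alternative sketch, if the shell emits valid JSON instead of Python-style literals, the whole RESULT value parses into a list of dicts in one call on the Python side. The sample string below is fabricated and stands in for <code>os.getenv('RESULT')</code>:

```python
import json

# e.g. result = os.getenv('RESULT') in the heredoc script
result = '[{"a": "b", "c": "d"}, {"a1": "b1", "c1": "d1"}]'
dicts = json.loads(result)  # one call, no manual splitting

print(len(dicts), dicts[0]["a"])  # 2 b
```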
| 0 | 2016-08-16T09:33:20Z | [
"python",
"arrays",
"dictionary"
] |