title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
|---|---|---|---|---|---|---|---|---|---|
rolling mean with increasing window
| 39,468,228
|
<p>I have a range</p>
<pre><code>np.arange(1,11) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
</code></pre>
<p>and for each element, <em>i</em>, in my range I want to compute the average from element <em>i=0</em> up to my current element. The result would be something like:</p>
<pre><code>array([ 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5])
# got this result via np.cumsum(np.arange(1,11,dtype=np.float32))/(np.arange(1, 11))
</code></pre>
<p>I was wondering if there isn't an out of the box function in numpy / pandas that gives me this result?</p>
| 2
|
2016-09-13T10:50:52Z
| 39,468,415
|
<p>This seems to be the simplest approach, although it may become inefficient if <em>x</em> is very large:</p>
<pre><code>import numpy as np

x = range(1, 11)
[np.mean(x[:i+1]) for i in xrange(len(x))]
</code></pre>
| 1
|
2016-09-13T11:02:06Z
|
[
"python",
"pandas",
"numpy"
] |
rolling mean with increasing window
| 39,468,228
|
<p>I have a range</p>
<pre><code>np.arange(1,11) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
</code></pre>
<p>and for each element, <em>i</em>, in my range I want to compute the average from element <em>i=0</em> up to my current element. The result would be something like:</p>
<pre><code>array([ 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5])
# got this result via np.cumsum(np.arange(1,11,dtype=np.float32))/(np.arange(1, 11))
</code></pre>
<p>I was wondering if there isn't an out of the box function in numpy / pandas that gives me this result?</p>
| 2
|
2016-09-13T10:50:52Z
| 39,469,185
|
<p>Here's a vectorized approach -</p>
<pre><code>a.cumsum()/(np.arange(a.size)+1)
</code></pre>
<p>Please note that to make sure the results are floating point numbers, we need to add this at the start:</p>
<pre><code>from __future__ import division
</code></pre>
<p>Alternatively, we can use <code>np.true_divide</code> for the division -</p>
<pre><code>np.true_divide(a.cumsum(),(np.arange(a.size)+1))
</code></pre>
<p>Sample runs -</p>
<pre><code>In [17]: a
Out[17]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
In [18]: a.cumsum()/(np.arange(a.size)+1)
Out[18]: array([ 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5])
In [20]: a
Out[20]: array([3, 3, 2, 4, 6, 6, 3, 5, 6, 4])
In [21]: a.cumsum()/(np.arange(a.size)+1)
Out[21]:
array([ 3. , 3. , 2.66666667, 3. , 3.6 ,
4. , 3.85714286, 4. , 4.22222222, 4.2 ])
</code></pre>
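<p>For completeness, pandas has an expanding-window mean built in, which gives the asked-for result directly (a minimal sketch, assuming pandas 0.18+ for the <code>expanding()</code> accessor):</p>
<pre><code>import pandas as pd

s = pd.Series(range(1, 11))
print(s.expanding().mean().values)
# [ 1.   1.5  2.   2.5  3.   3.5  4.   4.5  5.   5.5]
</code></pre>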
| 1
|
2016-09-13T11:43:05Z
|
[
"python",
"pandas",
"numpy"
] |
Program hang with asyncio
| 39,468,252
|
<p>Here is my code:</p>
<pre><code>import asyncio, socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 1234))
sock.setblocking(False)
queue = asyncio.Queue()
def sock_reader():
print(sock.recv(1024))
# x = yield from queue
def test_sock_reader():
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b'HELLO', ('127.0.0.1', 1234))
loop = asyncio.get_event_loop()
loop.add_reader(sock, sock_reader)
loop.call_later(0.5, test_sock_reader)
loop.run_forever()
loop.close()
</code></pre>
<p>This is the output:</p>
<pre><code>b'HELLO'
</code></pre>
<blockquote>
<p>When the line <code># x = yield from queue</code> is uncommented the program is not printing <code>b'Hello'</code> anymore.</p>
</blockquote>
<p>Why is the <code>yield from</code> affecting a command that should already be <strong>executed</strong>?</p>
| 2
|
2016-09-13T10:52:31Z
| 39,469,604
|
<p>The problem is a combination of syntax and API definition.</p>
<p>First off, refer to the <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.add_reader">documentation of <code>add_reader</code></a>, which states that it expects a <em>callback</em>. It is not obvious from the word itself, but by <em>callback</em> it means a regular function.</p>
<p>Now, when you uncomment the <code># x = yield from queue</code> line, your <code>sock_reader</code> function actually becomes a generator/coroutine due to <code>yield from</code>, in which case when called like a regular function (i.e. <code>sock_reader(...)</code>), it returns a generator object, and does not get executed.</p>
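<p>One way around it, sticking with the generator-based coroutines available in Python 3.5, is to keep <code>sock_reader</code> a plain callback that only hands the received data to the queue, and to consume the queue from a coroutine scheduled on the loop. A minimal sketch under those assumptions, reusing the names from the question:</p>
<pre><code>def sock_reader():
    # plain callback: no yield, never blocks
    queue.put_nowait(sock.recv(1024))

@asyncio.coroutine
def consumer():
    while True:
        data = yield from queue.get()
        print(data)

loop = asyncio.get_event_loop()
loop.add_reader(sock, sock_reader)
asyncio.ensure_future(consumer())
loop.call_later(0.5, test_sock_reader)
loop.run_forever()
</code></pre>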
| 5
|
2016-09-13T12:02:35Z
|
[
"python",
"python-3.5",
"python-asyncio"
] |
Assign group name to kivy graphic instruction
| 39,468,389
|
<p>This seems like it should be really simple, yet I have failed to find a way to do it.</p>
<p>In order to iterate over the instructions on my canvas, I am supposed to use <code>canvas.get_group()</code> method. In order to do that, I first need to assign a group name to whichever instructions I intend to iterate over. How does one do that?</p>
| 0
|
2016-09-13T11:00:12Z
| 39,468,603
|
<p>Use <code>InstructionGroup()</code>. This is an example from the Kivy documentation:</p>
<pre><code>blue = InstructionGroup()
blue.add(Color(0, 0, 1, 0.2))
blue.add(Rectangle(pos=self.pos, size=(100, 100)))
green = InstructionGroup()
green.add(Color(0, 1, 0, 0.4))
green.add(Rectangle(pos=(100, 100), size=(100, 100)))
# Here, self should be a Widget or subclass
[self.canvas.add(group) for group in [blue, green]]
</code></pre>
| 0
|
2016-09-13T11:12:26Z
|
[
"python",
"kivy",
"kivy-language"
] |
Assign group name to kivy graphic instruction
| 39,468,389
|
<p>This seems like it should be really simple, yet I have failed to find a way to do it.</p>
<p>In order to iterate over the instructions on my canvas, I am supposed to use <code>canvas.get_group()</code> method. In order to do that, I first need to assign a group name to whichever instructions I intend to iterate over. How does one do that?</p>
| 0
|
2016-09-13T11:00:12Z
| 39,476,851
|
<p>This is an answer to my own question which I was able to find.</p>
<p>So, I have found (<a href="http://stackoverflow.com/questions/36230958/how-do-i-reference-ids-of-children-within-a-canvas-in-kivy">from here</a>) that Instructions (and many other classes in the canvas scope) have a <code>group</code> property, not listed in the documentation, which can be set when creating the instruction.</p>
<p>Like so in Python:</p>
<pre><code>with self.canvas:  # self must be a Widget
    Rectangle(pos=self.pos, size=self.size, group='my_group')
</code></pre>
<p>and like so in kv:</p>
<pre><code><SomeWidget>:
canvas:
Rectangle:
group:'my_group'
</code></pre>
<p>When done like so, a later call to <code>self.canvas.get_group('my_group')</code> returns an iterable with the instructions in it.</p>
<p>However this has its own problems. Check my <a href="http://stackoverflow.com/questions/39476837/return-from-canvas-get-group-call-in-kivy">next question</a>.</p>
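<p>For illustration, a minimal sketch of using the group afterwards (assuming <code>self</code> is a Widget and the rectangle was tagged as above):</p>
<pre><code>from kivy.graphics import Rectangle

with self.canvas:
    Rectangle(pos=self.pos, size=self.size, group='my_group')

# later, e.g. from an on_size handler: update every tagged Rectangle
for instruction in self.canvas.get_group('my_group'):
    if isinstance(instruction, Rectangle):
        instruction.pos = self.pos
        instruction.size = self.size
</code></pre>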
| 0
|
2016-09-13T18:33:21Z
|
[
"python",
"kivy",
"kivy-language"
] |
Memory allocation in python
| 39,468,407
|
<p>I create a list named <code>l</code> with three elements, and then copy all of its content into the list <code>y</code>. But when I print their addresses in memory, I don't understand why the address is not the same. Why is <code>y</code> not a reference to <code>l</code>? And if I want <code>y</code> to be a reference to <code>l</code> so that they have the same address, how can I do this?</p>
<p>This is my code</p>
<pre><code>l = [8,12,3]
y = l[:]
print l
print y
print id(l)
print id(y)
</code></pre>
<p>Displayed on the screen:</p>
<pre><code>[8, 12, 3]
[8, 12, 3]
40894592
40837072
</code></pre>
| -2
|
2016-09-13T11:01:30Z
| 39,468,435
|
<p>Because a slice of a list creates a new list; <code>id()</code> returns the identity (in CPython, the memory address) of each list object.</p>
| 0
|
2016-09-13T11:03:00Z
|
[
"python",
"memory"
] |
Memory allocation in python
| 39,468,407
|
<p>I create a list named <code>l</code> with three elements, and then copy all of its content into the list <code>y</code>. But when I print their addresses in memory, I don't understand why the address is not the same. Why is <code>y</code> not a reference to <code>l</code>? And if I want <code>y</code> to be a reference to <code>l</code> so that they have the same address, how can I do this?</p>
<p>This is my code</p>
<pre><code>l = [8,12,3]
y = l[:]
print l
print y
print id(l)
print id(y)
</code></pre>
<p>Displayed on the screen:</p>
<pre><code>[8, 12, 3]
[8, 12, 3]
40894592
40837072
</code></pre>
| -2
|
2016-09-13T11:01:30Z
| 39,468,440
|
<p><code>[:]</code> copies the content of <code>l</code> into a new list <code>y</code>, so they are two distinct objects at different addresses. To make <code>y</code> a reference to <code>l</code>, simply write</p>
<pre><code>y = l
</code></pre>
| 2
|
2016-09-13T11:03:09Z
|
[
"python",
"memory"
] |
Memory allocation in python
| 39,468,407
|
<p>I create a list named <code>l</code> with three elements, and then copy all of its content into the list <code>y</code>. But when I print their addresses in memory, I don't understand why the address is not the same. Why is <code>y</code> not a reference to <code>l</code>? And if I want <code>y</code> to be a reference to <code>l</code> so that they have the same address, how can I do this?</p>
<p>This is my code</p>
<pre><code>l = [8,12,3]
y = l[:]
print l
print y
print id(l)
print id(y)
</code></pre>
<p>Displayed on the screen:</p>
<pre><code>[8, 12, 3]
[8, 12, 3]
40894592
40837072
</code></pre>
| -2
|
2016-09-13T11:01:30Z
| 39,468,652
|
<p>Note that when you write the statement</p>
<pre><code>l = [8, 12, 3]
</code></pre>
<p>Python makes the name <code>l</code> refer to the list object <code>[8, 12, 3]</code>. </p>
<p>If you then make this statement</p>
<pre><code>y = l
</code></pre>
<p>the name <code>y</code> refers to the same list as <code>l</code>, so if you change <code>l</code>, <code>y</code> will also see the change. </p>
<p>If instead you write</p>
<pre><code>y = list(l)
</code></pre>
<p>the name <code>y</code> refers to a new list, so if you modify <code>l</code>, the values in <code>y</code> remain unchanged.</p>
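<p>A quick demonstration of the difference (Python 2 syntax, to match the question):</p>
<pre><code>l = [8, 12, 3]
y = l        # alias: same object
z = list(l)  # copy: new object with the same contents

print id(l) == id(y)  # True
print id(l) == id(z)  # False

l.append(99)
print y  # [8, 12, 3, 99] -- the alias sees the change
print z  # [8, 12, 3]     -- the copy does not
</code></pre>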
| 0
|
2016-09-13T11:14:39Z
|
[
"python",
"memory"
] |
ttk label doesn't behave properly
| 39,468,535
|
<p>A ttk Label which contains a bitmap image doesn't behave properly when I change the image's foreground color. Only ttk Labels have this problem; Tkinter labels work properly.</p>
<p>Here is the code:</p>
<pre><code>import tkinter as tk
import tkinter.ttk as ttk
BITMAP0 = """
#define zero_width 24
#define zero_height 32
static char zero_bits[] = {
0x00,0x00,0x00, 0x00,0x00,0x00, 0xf0,0x3c,0x0f, 0xf0,0x3c,0x0f,
0xf0,0x3c,0x0f, 0xf0,0x3c,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00,
0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f,
0x00,0x00,0x00, 0x00,0x00,0x00, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f,
0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00,
0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f, 0xf0,0x00,0x0f,
0x00,0x00,0x00, 0x00,0x00,0x00, 0xf0,0x3c,0x0f, 0xf0,0x3c,0x0f,
0xf0,0x3c,0x0f, 0xf0,0x3c,0x0f, 0x00,0x00,0x00, 0x00,0x00,0x00
};
"""
root = tk.Tk()
img = tk.BitmapImage(data=BITMAP0, foreground='Lime', background='Black')
label = ttk.Label(root, image=img)
label.pack()
color = ['red', 'yellow', 'lime', 'white']
def change_color(n):
img.config(foreground=color[n])
if n == 3:
root.after(1000, change_color, 0)
else:
root.after(1000, change_color, n+1)
root.after(1000, change_color, 0)
root.mainloop()
</code></pre>
<p>The image's foreground color should change every second, but it doesn't unless you hover over the image with the mouse.
Just replace the line:</p>
<pre><code>label = ttk.Label(root, image=img)
</code></pre>
<p>with:</p>
<pre><code>label = tk.Label(root, image=img)
</code></pre>
<p>and the program works.
Any help would be appreciated.</p>
<p>I am using python 3.5 with windows Vista</p>
| 1
|
2016-09-13T11:08:12Z
| 39,488,578
|
<p>Try to reassign the changed image to label:</p>
<pre><code>def change_color(n):
img.config(foreground=color[n%4])
label.config(image=img) # reassign the changed image to label
root.after(1000, change_color, n+1)
</code></pre>
| 2
|
2016-09-14T10:50:26Z
|
[
"python",
"python-3.x",
"tkinter",
"ttk"
] |
TensorFlow freeze_graph.py: The name 'save/Const:0' refers to a Tensor which does not exist
| 39,468,640
|
<p>I am currently trying to export a trained TensorFlow model as a ProtoBuf file to use it with the TensorFlow C++ API on Android. Therefore, I'm using the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py" rel="nofollow"><code>freeze_graph.py</code></a> script.</p>
<p>I exported my model using <code>tf.train.write_graph</code>:</p>
<p><code>tf.train.write_graph(graph_def, FLAGS.save_path, out_name, as_text=True)
</code></p>
<p>and I'm using a checkpoint saved with <code>tf.train.Saver</code>.</p>
<p>I invoke <code>freeze_graph.py</code> as described at the top of the script. After compiling, I run</p>
<pre><code>bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=<path_to_protobuf_file> \
--input_checkpoint=<model_name>.ckpt-10000 \
--output_graph=<output_protobuf_file_path> \
--output_node_names=dropout/mul_1
</code></pre>
<p>This gives me the following error message:</p>
<pre><code>TypeError: Cannot interpret feed_dict key as Tensor: The name 'save/Const:0' refers to a Tensor which does not exist. The operation, 'save/Const', does not exist in the graph.
</code></pre>
<p>As the error states I do not have a tensor <code>save/Const:0</code> in my exported model. However, the code of <code>freeze_graph.py</code> says that one can specify this tensor name by the flag <code>filename_tensor_name</code>. Unfortunately I cannot find any information on what this tensor should be and how to set it correctly for my model.</p>
<p>Can somebody tell my either how to produce a <code>save/Const:0</code> tensor in my exported ProtoBuf model or how to set the flag <code>filename_tensor_name</code> correctly?</p>
| 1
|
2016-09-13T11:14:08Z
| 39,476,154
|
<p>The <code>--filename_tensor_name</code> flag is used to specify the name of a placeholder tensor created when you construct a <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/state_ops.html#Saver" rel="nofollow"><code>tf.train.Saver</code></a> for your model.*</p>
<p>In your original program, you can print out the value of <code>saver.saver_def.filename_tensor_name</code> to get the value that you should pass for this flag. You may also want to print the value of <code>saver.saver_def.restore_op_name</code> to get a value for the <code>--restore_op_name</code> flag (since I suspect the default won't be correct for your graph).</p>
<p>Alternatively, the <a href="https://github.com/tensorflow/tensorflow/blob/91a70cbf1c627117b70a3d2dd4c612779369e293/tensorflow/core/protobuf/saver.proto" rel="nofollow"><code>tf.train.SaverDef</code> protocol buffer</a> includes all of the information you need to reconstruct the relevant information for these flags. If you prefer, you can write <code>saver.saver_def</code> to a file, and pass the name of that file as the <code>--input_saver</code> flag to <code>freeze_graph.py</code>.</p>
<hr>
<p> * The default name scope for a <code>tf.train.Saver</code> is <code>"save/"</code> and the placeholder is <a href="https://github.com/tensorflow/tensorflow/blob/91a70cbf1c627117b70a3d2dd4c612779369e293/tensorflow/python/training/saver.py#L609" rel="nofollow">actually a <code>tf.constant()</code></a> whose name defaults to <code>"Const:0"</code>, which explains why the flag defaults to <code>"save/Const:0"</code>.</p>
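<p>For example, a minimal sketch of inspecting these names in the training script (the variable names are placeholders):</p>
<pre><code>saver = tf.train.Saver()

print(saver.saver_def.filename_tensor_name)  # value for --filename_tensor_name
print(saver.saver_def.restore_op_name)       # value for --restore_op_name

# optionally dump the SaverDef (text form shown here) to pass via --input_saver
with open('saver_def.pbtxt', 'w') as f:
    f.write(str(saver.saver_def))
</code></pre>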
| 1
|
2016-09-13T17:49:41Z
|
[
"python",
"tensorflow",
"protocol-buffers",
"google-protobuf"
] |
Figure out if called from function without main guard
| 39,468,658
|
<p>If a module is imported from a script without a main guard (<code>if __name__ == '__main__':</code>), doing any kind of parallelism in some function in the module will result in an infinite loop on Windows. Each new process loads all of the sources, now with <code>__name__</code> not equal to <code>'__main__'</code>, and then continues execution in parallel. If there's no main guard, we're going to do another call to the same function in each of our new processes, spawning even more processes, until we crash. It's only a problem on Windows, but the scripts are also executed on osx and linux.</p>
<p>I could check this by writing to a special file on disk, and read from it to see if we've already started, but that limits us to a single python script running at once. The simple solution of modifying all the calling code to add main guards is not feasible because they are spread out in many repositories, which I do not have access to. Thus, I would like to parallelize, when main guards are used, but fallback to single threaded execution when they're not.</p>
<p>How do I figure out if I'm being called in an import loop due to a missing main guard, so that I can fallback to single threaded execution?</p>
<p><strong>Here's some demo code:</strong></p>
<p>lib with parallel code:</p>
<pre><code>from multiprocessing import Pool
def _noop(x):
return x
def foo():
p = Pool(2)
print(p.map(_noop, [1, 2, 3]))
</code></pre>
<p>Good importer (with guard):</p>
<pre><code>from lib import foo
if __name__ == "__main__":
foo()
</code></pre>
<p>Bad importer (without guard):</p>
<pre><code>from lib import foo
foo()
</code></pre>
<p>where the bad importer fails with this RuntimeError, over and over again:</p>
<pre><code> p = Pool(2)
File "C:\Users\filip.haglund\AppData\Local\Programs\Python\Python35\lib\multiprocessing\context.py", line 118, in Pool
context=self.get_context())
File "C:\Users\filip.haglund\AppData\Local\Programs\Python\Python35\lib\multiprocessing\pool.py", line 168, in __init__
self._repopulate_pool()
File "C:\Users\filip.haglund\AppData\Local\Programs\Python\Python35\lib\multiprocessing\pool.py", line 233, in _repopulate_pool
w.start()
File "C:\Users\filip.haglund\AppData\Local\Programs\Python\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\filip.haglund\AppData\Local\Programs\Python\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "C:\Users\filip.haglund\AppData\Local\Programs\Python\Python35\lib\multiprocessing\popen_spawn_win32.py", line 34, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\filip.haglund\AppData\Local\Programs\Python\Python35\lib\multiprocessing\spawn.py", line 144, in get_preparation_data
_check_not_importing_main()
File "C:\Users\filip.haglund\AppData\Local\Programs\Python\Python35\lib\multiprocessing\spawn.py", line 137, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
</code></pre>
| 0
|
2016-09-13T11:15:00Z
| 39,471,519
|
<p>Since you're using <code>multiprocessing</code>, you can also use it to detect if you're the main process or a child process. However, these features are not documented and are therefore just implementation details that could change without warning between python versions. </p>
<p>Each process has a <code>name</code>, <code>_identity</code> and <code>_parent_pid</code>. You can check any of them to see if you're in the main process or not. In the main process <code>name</code> will be <code>'MainProcess'</code>, <code>_identity</code> will be <code>()</code>, and <code>_parent_pid</code> will be <code>None</code>.</p>
<p>My solution allows you to continue using <code>multiprocessing</code>, but just modifies child processes so they can't keep creating child processes forever. It uses a decorator to change <code>foo</code> to a no-op in child processes, but returns <code>foo</code> unchanged in the main process. This means when the spawned child process tries to execute <code>foo</code> nothing will happen (as if it had been executed inside an <code>if __name__ == '__main__'</code> guard).</p>
<pre><code>from multiprocessing import Pool
from multiprocessing.process import current_process
def run_in_main_only(func):
if current_process().name == "MainProcess":
return func
else:
def noop(*args, **kwargs):
pass
return noop
def _noop(_ignored):
p = current_process()
return p.name, p._identity, p._parent_pid
@run_in_main_only
def foo():
with Pool(2) as p:
for result in p.map(_noop, [1, 2, 3]):
print(result) # prints something like ('SpawnPoolWorker-2', (2,), 10720)
if __name__ == "__main__":
print(_noop(1)) # prints ('MainProcess', (), None)
</code></pre>
| 1
|
2016-09-13T13:37:32Z
|
[
"python",
"windows",
"multithreading"
] |
Match Current date with CSV file and print the matches
| 39,468,735
|
<p>Hi, I'm writing a Python script to send a birthday mail. But I'm stuck in the middle. I have a CSV file containing names and their birthdays, and I have already written code to get the current date:</p>
<pre><code>#Import Date
import datetime
CurrentDate = datetime.datetime.now().date()
CurrentDate = CurrentDate.strftime("%d-%B-%Y")
print(CurrentDate)
</code></pre>
<p>and my csv file is </p>
<pre><code>user1,13-September-2016
user2,19-October-2016
user3,13-September-2016
user4,25-August-2016
</code></pre>
<p>What I want is to match the current date with the second column of this CSV, get the corresponding first-column values, and export them as a string, like how I got <code>CurrentDate</code> from <code>datetime</code>. If more than one user has a birthday on that day, an "and" should be printed between their names. I hope I didn't make any mistake asking the question :)</p>
| 1
|
2016-09-13T11:19:17Z
| 39,468,887
|
<p>Just group them in a list and output whatever you want:</p>
<pre><code>import csv
from datetime import datetime
today = datetime.now().date().strftime("%d-%B-%Y")
with open("b.csv") as f:
has_birthday = [user for user, birthday in csv.reader(f) if birthday == today]
print(has_birthday)
</code></pre>
<p>Output:</p>
<pre><code>['user1', 'user3']
</code></pre>
<p>You can add a custom message based on the length of has_birthday:</p>
<pre><code>has_birthday = [user for user, birthday in csv.reader(f) if birthday == today]
frm = "{} have their birthdays today." if len(has_birthday) > 1 else "{} has their birthday today."
print(frm.format(" and ".join(has_birthday or ["Nobody"])))
</code></pre>
<p>So for one matching birthday:</p>
<pre><code>userx has their birthday today.
</code></pre>
<p>For more than one:</p>
<pre><code>userx and Usery have their birthdays today.
</code></pre>
<p>And for no user:</p>
<pre><code>Nobody has their birthday today.
</code></pre>
| 1
|
2016-09-13T11:27:47Z
|
[
"python",
"date",
"csv",
"datetime",
"smtp"
] |
Match Current date with CSV file and print the matches
| 39,468,735
|
<p>Hi, I'm writing a Python script to send a birthday mail. But I'm stuck in the middle. I have a CSV file containing names and their birthdays, and I have already written code to get the current date:</p>
<pre><code>#Import Date
import datetime
CurrentDate = datetime.datetime.now().date()
CurrentDate = CurrentDate.strftime("%d-%B-%Y")
print(CurrentDate)
</code></pre>
<p>and my csv file is </p>
<pre><code>user1,13-September-2016
user2,19-October-2016
user3,13-September-2016
user4,25-August-2016
</code></pre>
<p>What I want is to match the current date with the second column of this CSV, get the corresponding first-column values, and export them as a string, like how I got <code>CurrentDate</code> from <code>datetime</code>. If more than one user has a birthday on that day, an "and" should be printed between their names. I hope I didn't make any mistake asking the question :)</p>
| 1
|
2016-09-13T11:19:17Z
| 39,661,606
|
<p>Thank you everyone. After doing some more work, and with all your help, I was able to resolve this. The file is now read only once, and the rows for today are filtered out in a single pass:</p>
<pre><code>import csv
from datetime import datetime

today = datetime.now().date().strftime("%d-%B-%Y")

# read the file once and keep only the rows whose birthday is today
with open("InputFile.csv") as f:
    rows = [row for row in csv.reader(f, skipinitialspace=True) if row[1] == today]

has_birthday = [user for user, birthday, gender in rows]
male = [user for user, birthday, gender in rows if gender == 'M']
female = [user for user, birthday, gender in rows if gender == 'F']

if not has_birthday:
    exit()

if len(has_birthday) == 1 and male:
    frm = "{} has his birthday today."
elif len(has_birthday) == 1 and female:
    frm = "{} has her birthday today."
else:
    frm = "{} have their birthdays today."

if len(has_birthday) > 1:
    test = frm.format(" and ".join([", ".join(has_birthday[:-1]), has_birthday[-1]]))
else:
    test = frm.format(has_birthday[0])
print test
</code></pre>
| 0
|
2016-09-23T13:08:25Z
|
[
"python",
"date",
"csv",
"datetime",
"smtp"
] |
Easiest way to parallelise a call to map?
| 39,468,860
|
<p>Hey I have some code in Python which is basically a World Object with Player objects. At one point the Players all get the state of the world and need to return an action. The calculations the players do are independent and only use the instance variables of the respective player instance.</p>
<pre><code>while True:
#do stuff, calculate state with the actions array of last iteration
for i, player in enumerate(players):
actions[i] = player.get_action(state)
</code></pre>
<p>What is the easiest way to run the inner <code>for</code> loop in parallel? Or is this a bigger task than I am assuming?</p>
| 1
|
2016-09-13T11:26:02Z
| 39,469,046
|
<p>The most straightforward way is to use <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.map" rel="nofollow">multiprocessing.Pool.map</a> (which works just like <code>map</code>):</p>
<pre><code>import multiprocessing
pool = multiprocessing.Pool()
def do_stuff(player):
... # whatever you do here is executed in another process
while True:
pool.map(do_stuff, players)
</code></pre>
<p>Note however that this uses multiple processes: because of the <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow">GIL</a>, threads in CPython cannot run Python bytecode in parallel, so multithreading won't speed up CPU-bound work like this.</p>
<p>Usually parallelization is done with threads, which can access the same data inside your program (because they run in the same process). To share data between processes one needs to use IPC (inter-process communication) mechanisms like pipes, sockets, files etc., which cost more resources. Also, spawning processes is much slower than spawning threads.</p>
<p>Other solutions include:</p>
<ul>
<li>vectorization: rewrite your algorithm as computations on vectors and matrices and use hardware accelerated libraries to execute it</li>
<li>using another Python distribution that doesn't have a GIL</li>
<li>implementing your piece of parallel code in another language and calling it from Python</li>
</ul>
<p>A big issue comes when you have to share data between the processes/threads. For example in your code, each task will access <code>actions</code>. If you <em>have</em> to share state, welcome to <a href="https://en.wikipedia.org/wiki/Concurrent_computing" rel="nofollow">concurrent programming</a>, a much bigger task, and one of the hardest things to do right in software.</p>
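<p>A minimal sketch of the <code>pool.map</code> idea applied to the loop in the question (it assumes <code>players</code> and <code>state</code> are the objects from the question, that each player instance is picklable, and that <code>get_action</code> only reads the player instance and <code>state</code>):</p>
<pre><code>import multiprocessing

def get_action(args):
    player, state = args
    return player.get_action(state)

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    while True:
        # ... compute `state` from the previous `actions` ...
        actions = pool.map(get_action, [(p, state) for p in players])
</code></pre>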
| 2
|
2016-09-13T11:36:15Z
|
[
"python",
"python-3.x",
"parallel-processing"
] |
Accessing ReferenceProperty values in a one to many relation from a referenced key in Appengine?
| 39,468,952
|
<p>I am using AppEngine in Python and I have two Models using ndb :</p>
<pre><code># Post model
class WikiPost(ndb.Model) :
url = ndb.StringProperty(required = True)
content = ndb.TextProperty(required = True)
date = ndb.DateProperty(auto_now_add = True)
</code></pre>
<p>Second Model</p>
<pre><code>class WikiPostVersion(ndb.Model) :
r_post = ndb.KeyProperty(kind = WikiPost)
content = ndb.StringProperty()
date = ndb.DateProperty(auto_now_add = True)
</code></pre>
<p>How can I access the values of the referenced key <code>r_post</code> of the model <code>WikiPostVersion</code>?</p>
| 0
|
2016-09-13T11:31:20Z
| 39,469,095
|
<p><code>r_post</code> is a key, so you can just call <code>.get()</code> on it.</p>
<pre><code>referenced_wikipost = my_wikipostversion_instance.r_post.get()
</code></pre>
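<p>Once fetched, the referenced entity's properties are available as normal attributes, for example:</p>
<pre><code>post = my_wikipostversion_instance.r_post.get()
print post.url, post.content, post.date
</code></pre>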
| 0
|
2016-09-13T11:38:55Z
|
[
"python",
"google-app-engine",
"app-engine-ndb"
] |
list search obj with index number
| 39,469,002
|
<pre><code>stations = ['Schagen', 'Heerhugowaard', 'Alkmaar', 'Castricum', 'Zaandam', 'Amsterdam Sloterdijk', 'Amsterdam Centraal', 'Amsterdam Amstel', 'Utrecht Centraal', 'âs-Hertogenbosch', 'Eindhoven', 'Weert', 'Roermond', 'Sittard', 'Maastricht']
</code></pre>
<p>I tried this:</p>
<pre><code>print(sations.index[1])
</code></pre>
<p>I can't find a way to print a list item when I search for it by index.</p>
| -5
|
2016-09-13T11:33:38Z
| 39,469,329
|
<p>If you want to find an object, you can use <code>index()</code>. For example</p>
<pre><code>index = 0
if "Zaandam" in stations:
index = stations.index("Zaandam")
print(stations[index])
</code></pre>
<p>The index of <code>Zaandam</code> is stored in the variable <code>index</code>, and that index is then used to <code>print</code> the item itself. The <code>if</code> statement simply guards against the value not being in the list.</p>
| 0
|
2016-09-13T11:49:39Z
|
[
"python",
"list",
"python-3.x"
] |
Convert date from yyyymmddHHMMSS format
| 39,469,132
|
<p>I would like to read two columns (columns 0 and 4) of the following ASCII file and plot them. One contains the date in yyyymmddHHMMSS format, which I would like to convert to a date number.</p>
<pre><code># Date RMS.I RMS.Q Cal.I Cal.Q
20121220220000 1.45485 1.42051 1.26393 1.29448
20121220230000 1.43377 1.39987 1.26803 1.29874
20121221000000 1.44888 1.41472 1.24759 1.27771
</code></pre>
<p>I have been using numpy.loadtxt but it reads all columns as float.</p>
<pre><code>mydate, myvar = np.loadtxt('infile.txt', comments="#", skiprows=1, usecols=(0,4), unpack=True)
</code></pre>
| 2
|
2016-09-13T11:40:33Z
| 39,469,791
|
<p>You can import it as a <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="nofollow">structured array</a> as follows:</p>
<pre><code>mydate, myvar = np.loadtxt('infile.txt', comments="#", skiprows=1, usecols=(0,4), unpack=True, dtype=[('date', '|S14'), ('floatmio', float)])
</code></pre>
<p>It will import the date as <code>str</code> in the <code>mydate</code> array. You can then use the <a href="https://docs.python.org/2/library/datetime.html" rel="nofollow"><code>datetime</code></a> library on the single array entries to handle them as follows (the full yyyymmddHHMMSS format is 14 characters, hence <code>|S14</code> above):</p>
<pre><code>import datetime
dates = [datetime.datetime.strptime(x, '%Y%m%d%H%M%S') for x in mydate]
</code></pre>
| 1
|
2016-09-13T12:12:06Z
|
[
"python",
"numpy"
] |
Cannot set Allow quoted newlines property in bigquery using python?
| 39,469,308
|
<p>I am not able to enable the property "Allow quoted newlines" in a Google BigQuery load job.</p>
<pre><code>configuration = {
'load': {
'createDisposition': create_disposition,
'destinationTable': {
'projectId': destination_project,
'datasetId': destination_dataset,
'tableId': destination_table,
},
'schema': {
'fields': schema_fields
},
'sourceFormat': source_format,
'sourceUris': source_uris,
'writeDisposition': write_disposition,
'allowJaggedRows': True,
'allowQuotedNewlines': True,
'ignoreUnknownValues': True
}
}
if source_format == 'CSV':
configuration['load']['skipLeadingRows'] = skip_leading_rows
configuration['load']['fieldDelimiter'] = field_delimiter
configuration['load']['encoding'] = 'UTF-8'
configuration['load']['quote'] = ''
jobs = self.service.jobs()
job_data = {
'configuration': configuration
}
query_reply = jobs \
.insert(projectId=self.project_id, body=job_data) \
.execute()
job_id = query_reply['jobReference']['jobId']
job = jobs.get(projectId=self.project_id, jobId=job_id).execute()
</code></pre>
<p>But the property <em>'allowQuotedNewlines': True</em> is not working. When I inspected the job using the BigQuery UI (web view), this property is not checked. </p>
<p><a href="http://i.stack.imgur.com/JPH7M.png" rel="nofollow"><img src="http://i.stack.imgur.com/JPH7M.png" alt="enter image description here"></a> </p>
<p>Did I miss something? What is the issue?</p>
| 0
|
2016-09-13T11:48:30Z
| 39,488,929
|
<p>Try removing the line <code>configuration['load']['quote'] = ''</code>.</p>
<p>If you want to allow quoted newlines, you have to specify a non-empty quote character.</p>
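<p>For example, keeping the rest of the configuration from the question (the standard double-quote character is assumed here):</p>
<pre><code>if source_format == 'CSV':
    configuration['load']['skipLeadingRows'] = skip_leading_rows
    configuration['load']['fieldDelimiter'] = field_delimiter
    configuration['load']['encoding'] = 'UTF-8'
    configuration['load']['quote'] = '"'  # non-empty, so allowQuotedNewlines can take effect
</code></pre>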
| 0
|
2016-09-14T11:08:29Z
|
[
"python",
"python-2.7",
"google-bigquery",
"google-api-client",
"google-api-python-client"
] |
Adding the header to a csv file
| 39,469,325
|
<p>I have a CSV file with dimensions <code>100*512</code> that I want to process further in <code>spark</code>. The problem with the file is that it doesn't contain a header, i.e. <code>column names</code>. I need these column names for further ETL in <code>machine learning</code>. I have the column names in another file (a text file), and I have to put these column names as headers in the CSV file mentioned above.
For example:</p>
<p>CSV file :-</p>
<blockquote>
<p>ab 1 23 sf 23 hjh</p>
<p>hs 6 89 iu 98 adf</p>
<p>gh 7 78 pi 54 ngj</p>
<p>jh 5 22 kj 78 jdk</p>
</blockquote>
<p>Column headers file :-</p>
<blockquote>
<p>one,two,three,four,five, six</p>
</blockquote>
<p>I want the output like this :-</p>
<blockquote>
<p>one two three four five six</p>
<p>ab 1 23 sf 23 hjh</p>
<p>hs 6 89 iu 98 adf</p>
<p>gh 7 78 pi 54 ngj</p>
<p>jh 5 22 kj 78 jdk</p>
</blockquote>
<p>Please suggest some method to add the column headers to the CSV file (without replacing the first row of the CSV file). I tried converting it to a pandas dataframe but can't get the expected output.</p>
| -1
|
2016-09-13T11:49:22Z
| 39,469,456
|
<p>Unix:</p>
<pre><code>cat header_file.csv data_file.csv > combined.csv
</code></pre>
<p>Windows:</p>
<pre><code>type header_file.csv data_file.csv > combined.csv
</code></pre>
<p>Note: write to a new file; redirecting the output into one of the input files would truncate it before it is read.</p>
| 1
|
2016-09-13T11:56:04Z
|
[
"python",
"csv"
] |
Adding the header to a csv file
| 39,469,325
|
<p>I have a CSV file with dimensions <code>100*512</code> that I want to process further in <code>spark</code>. The problem with the file is that it doesn't contain a header, i.e. <code>column names</code>. I need these column names for further ETL in <code>machine learning</code>. I have the column names in another file (a text file), and I have to put these column names as headers in the CSV file mentioned above.
For example:</p>
<p>CSV file :-</p>
<blockquote>
<p>ab 1 23 sf 23 hjh</p>
<p>hs 6 89 iu 98 adf</p>
<p>gh 7 78 pi 54 ngj</p>
<p>jh 5 22 kj 78 jdk</p>
</blockquote>
<p>Column headers file :-</p>
<blockquote>
<p>one,two,three,four,five, six</p>
</blockquote>
<p>I want the output like this :-</p>
<blockquote>
<p>one two three four five six</p>
<p>ab 1 23 sf 23 hjh</p>
<p>hs 6 89 iu 98 adf</p>
<p>gh 7 78 pi 54 ngj</p>
<p>jh 5 22 kj 78 jdk</p>
</blockquote>
<p>Please suggest some method to add the column headers to the CSV file (without replacing the first row of the CSV file). I tried converting it to a pandas dataframe but can't get the expected output.</p>
| -1
|
2016-09-13T11:49:22Z
| 39,469,474
|
<p>First read your csv file (it has no header row, so tell pandas that):</p>
<pre><code>from pandas import read_csv
df = read_csv('test.csv', header=None)
</code></pre>
<p>If there are two columns in your dataset (column a and column b), use: </p>
<pre><code>df.columns = ['a', 'b']
</code></pre>
<p>Write the new dataframe back to csv: </p>
<pre><code>df.to_csv('test_2.csv', index=False)
</code></pre>
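<p>Since the question keeps the real column names in a separate text file, you could also read them from there instead of hard-coding them (a small sketch; <code>headers.txt</code> stands in for that file):</p>
<pre><code>from pandas import read_csv

df = read_csv('test.csv', header=None)
with open('headers.txt') as f:
    df.columns = [name.strip() for name in f.read().split(',')]
df.to_csv('test_2.csv', index=False)
</code></pre>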
| 1
|
2016-09-13T11:56:53Z
|
[
"python",
"csv"
] |
Adding the header to a csv file
| 39,469,325
|
<p>I have a CSV file with dimensions <code>100*512</code> that I want to process further in <code>spark</code>. The problem with the file is that it doesn't contain a header, i.e. <code>column names</code>. I need these column names for further ETL in <code>machine learning</code>. I have the column names in another file (a text file), and I have to put these column names as headers in the CSV file mentioned above.
For example:</p>
<p>CSV file :-</p>
<blockquote>
<p>ab 1 23 sf 23 hjh</p>
<p>hs 6 89 iu 98 adf</p>
<p>gh 7 78 pi 54 ngj</p>
<p>jh 5 22 kj 78 jdk</p>
</blockquote>
<p>Column headers file :-</p>
<blockquote>
<p>one,two,three,four,five, six</p>
</blockquote>
<p>I want the output like this :-</p>
<blockquote>
<p>one two three four five six</p>
<p>ab 1 23 sf 23 hjh</p>
<p>hs 6 89 iu 98 adf</p>
<p>gh 7 78 pi 54 ngj</p>
<p>jh 5 22 kj 78 jdk</p>
</blockquote>
<p>Please suggest some method to add the column headers to the CSV file (without replacing the first row of the CSV file). I tried converting it to a pandas dataframe but can't get the expected output.</p>
| -1
|
2016-09-13T11:49:22Z
| 39,469,491
|
<p>You can use the <code>csv</code> module's <code>DictWriter</code>:</p>
<pre><code> import csv
with open('names.csv', 'w') as csvfile:
fieldnames = ['first_name', 'last_name']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerow({'first_name': 'Baked', 'last_name': 'Beans'})
writer.writerow({'first_name': 'Lovely', 'last_name': 'Spam'})
writer.writerow({'first_name': 'Wonderful', 'last_name': 'Spam'})
</code></pre>
| 1
|
2016-09-13T11:57:41Z
|
[
"python",
"csv"
] |
Adding the header to a csv file
| 39,469,325
|
<p>I have a CSV file with dimensions <code>100*512</code> that I want to process further in <code>spark</code>. The problem with the file is that it doesn't contain a header, i.e. <code>column names</code>. I need these column names for further ETL in <code>machine learning</code>. I have the column names in another file (a text file), and I have to put these column names as headers in the CSV file mentioned above.
For example:</p>
<p>CSV file :-</p>
<blockquote>
<p>ab 1 23 sf 23 hjh</p>
<p>hs 6 89 iu 98 adf</p>
<p>gh 7 78 pi 54 ngj</p>
<p>jh 5 22 kj 78 jdk</p>
</blockquote>
<p>Column headers file :-</p>
<blockquote>
<p>one,two,three,four,five, six</p>
</blockquote>
<p>I want the output like this :-</p>
<blockquote>
<p>one two three four five six</p>
<p>ab 1 23 sf 23 hjh</p>
<p>hs 6 89 iu 98 adf</p>
<p>gh 7 78 pi 54 ngj</p>
<p>jh 5 22 kj 78 jdk</p>
</blockquote>
<p>Please suggest some method to add the column headers to the CSV file (without replacing the first row of the CSV file). I tried converting it to a pandas dataframe but can't get the expected output.</p>
| -1
|
2016-09-13T11:49:22Z
| 39,469,561
|
<p>A bit of an old-fashioned way ...</p>
<p><strong>Content of demo.csv before adding the header:</strong></p>
<pre><code>4444,Drowsy,bit drowsy
45888,Blurred see - hazy,little seeing vision
45933,Excessive upper pain,pain problems
112397013,air,agony
76948002,pain,agony
</code></pre>
<p><strong>Content of xyz.txt :</strong></p>
<pre><code>Col 1,Col 2,Col 3
</code></pre>
<p><strong>Code with comments inline</strong></p>
<pre><code>#Open CSV file
with open("demo.csv", "r+") as f:
#Open file which has header
with open("xyz.txt",'r') as fh:
#Read header
header = fh.read()
#Read complete data of CSV file
old = f.read()
#Get cursor to start of file
f.seek(0)
#Write header and old data to file.
f.write(header+ "\n" + old)
</code></pre>
<p><strong>Content of demo.csv:</strong></p>
<pre><code>Col 1,Col 2,Col 3
4444,Drowsy,bit drowsy
45888,Blurred see - hazy,little seeing vision
45933,Excessive upper pain,pain problems
112397013,air,agony
76948002,pain,agony
</code></pre>
| 0
|
2016-09-13T12:00:18Z
|
[
"python",
"csv"
] |
Tensorflow - shape error when reading data from file
| 39,469,395
|
<p>I am trying to train a single-layer perceptron (basing my code on <a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py" rel="nofollow">this</a>) on the following data file in TensorFlow:</p>
<pre><code>1,1,0.05,-1.05
1,1,0.1,-1.1
....
</code></pre>
<p>where the last column is the label (function of 3 parameters) and the first three columns are the function argument. The code that reads the data and trains the model (I simplify it for readability):</p>
<pre><code>import tensorflow as tf
... # some basics to read the data
example, label = read_file_format(filename_queue)
... # model construction and parameter setting
# Launch the graph
with tf.Session() as sess:
sess.run(init)
for epoch in range(training_epochs):
_, c = sess.run([optimizer, cost], feed_dict={x: example, y: label})
print("Optimization Finished!")
</code></pre>
<p>but when I run it, it gives the following error:</p>
<pre><code>Traceback (most recent call last):
File "nn.py", line 85, in <module>
_, c = sess.run([optimizer, cost], feed_dict={x: example, y: label})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 710, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 887, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (3,) for Tensor u'Placeholder:0', which has shape '(?, 3)'
</code></pre>
| 0
|
2016-09-13T11:52:40Z
| 39,469,609
|
<p>Your graph expects X to be a tensor of shape (?, 3). Your example data is of the shape (3,) i.e. a 1 dimensional vector of length 3. Either reshape example to (1, 3), or pass a batch of examples in one shot (e.g. 10, giving a shape of (10, 3))</p>
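<p>For example, a minimal sketch of the reshape option (it assumes <code>example</code> is a length-3 vector; the label may need an analogous batch dimension depending on how <code>y</code> is declared):</p>
<pre><code>import numpy as np

batch_example = np.reshape(example, (1, 3))  # shape (3,) -> (1, 3)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_example, y: label})
</code></pre>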
| 1
|
2016-09-13T12:02:41Z
|
[
"python",
"tensorflow"
] |
You have 3 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth
| 39,469,409
|
<p>I've just created a Django project and ran the server.
It works fine but showed me warnings like:</p>
<pre><code>You have 14 unapplied migration(s)...
</code></pre>
<p>Then I ran</p>
<pre><code>python manage.py migrate
</code></pre>
<p>in the terminal. It worked fine but showed me this</p>
<pre><code>?: (1_7.W001) MIDDLEWARE_CLASSES is not set.
HINT: Django 1.7 changed the global defaults for the MIDDLEWARE_CLASSES.
django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.messages.middleware.MessageMiddleware were removed from the defaults. If your project needs these middleware then you should configure this setting.
</code></pre>
<p>And now I have this warning after starting my server.</p>
<pre><code>You have 3 unapplied migration(s).
Your project may not work properly until you apply
the migrations for app(s): admin, auth.
</code></pre>
<p>So how do I migrate correctly to get rid of this warning?</p>
<p>I am using PyCharm and tried to create the project via PyCharm and terminal and have the same issue.</p>
<pre><code>~$ python3.5 --version
Python 3.5.2
>>> django.VERSION
(1, 10, 1, 'final', 1)
</code></pre>
| -1
|
2016-09-13T11:53:18Z
| 39,470,150
|
<p>So my problem was that I used the wrong Python version for the migration.</p>
<pre><code>python3.5 manage.py migrate
</code></pre>
<p>solves the problem.</p>
| 2
|
2016-09-13T12:29:46Z
|
[
"python",
"django",
"pycharm"
] |
Allow only one flask request for a particular route
| 39,469,431
|
<p>I have a Flask app which uses an AngularJS front end. I make the HTTP request through the $http service, as shown in the code below.</p>
<pre><code>$http.post('/updateGraph', $scope.graphingParameters).success(function(response) {
$scope.graphingParameters.graph = response.graph;
$scope.listUnits = JSON.parse(response.listUnits);
$scope.myHTML = $sce.trustAsHtml($scope.graphingParameters.graph);
$scope.showME = true;
})
</code></pre>
<p>and the updateGraph function in flask is as follows.</p>
<pre><code>@app.route('/updateGraph', methods = ['POST'])
def updateGraph():
selectValues = request.json['selectValues']
selectSelected = np.array(request.json['selectSelected']).tolist()
if len(selectSelected) == 0:
selectSelected = np.array([selectValues[1:3]]).tolist()
fig, listUnits = plot_Stock_vs_Sales(selectSelected)
graph = py_offline.plot(fig, include_plotlyjs=False, output_type='div', show_link=False)
return json.dumps({ 'graph': graph, 'listUnits':listUnits.reset_index().to_json(orient='records')})
</code></pre>
<p>The problem is that if I make the $http post from Angular twice, the Flask route runs twice. This is the log from the server:</p>
<blockquote>
<p>
Seconds:92
127.0.0.1 - - [12/Sep/2016 09:46:35] "POST /updateGraph HTTP/1.1" 200 -
Seconds:110
127.0.0.1 - - [12/Sep/2016 09:47:02] "POST /updateGraph HTTP/1.1" 200 -</p>
</blockquote>
<p>I want to either make the $http post request allow only one request at a time, or make Flask run only one instance of this route per user. Is this possible through Flask? If not, what would be the best approach through Angular?</p>
| 0
|
2016-09-13T11:54:31Z
| 39,469,624
|
<p>You could look at implementing a task or work queue using something like Redis and <a href="http://python-rq.org" rel="nofollow"><code>python-rq</code></a>.</p>
<p>Essentially, when the route is run, rather than perform your work immediately, you queue a task (in this case, to update your graph) to run asynchronously. This way you can ensure that the graph is updated atomically, or using some other criteria of your choosing (for example only once every ten minutes).</p>
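<p>A minimal sketch of deferring the work with <code>python-rq</code> (it assumes a local Redis server, and <code>update_graph_task</code> is a placeholder for a function holding the plotting code, importable by the worker):</p>
<pre><code>from redis import Redis
from rq import Queue

q = Queue(connection=Redis())

@app.route('/updateGraph', methods=['POST'])
def updateGraph():
    # enqueue the heavy work and return immediately
    job = q.enqueue(update_graph_task, request.json)
    return json.dumps({'job_id': job.id})
</code></pre>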
| 0
|
2016-09-13T12:03:48Z
|
[
"jquery",
"python",
"angularjs",
"ajax",
"flask"
] |
Allow only one flask request for a particular route
| 39,469,431
|
<p>I have a Flask app which uses an AngularJS front end. I make the HTTP request through the $http service, as shown in the code below.</p>
<pre><code>$http.post('/updateGraph', $scope.graphingParameters).success(function(response) {
$scope.graphingParameters.graph = response.graph;
$scope.listUnits = JSON.parse(response.listUnits);
$scope.myHTML = $sce.trustAsHtml($scope.graphingParameters.graph);
$scope.showME = true;
})
</code></pre>
<p>and the updateGraph function in flask is as follows.</p>
<pre><code>@app.route('/updateGraph', methods = ['POST'])
def updateGraph():
selectValues = request.json['selectValues']
selectSelected = np.array(request.json['selectSelected']).tolist()
if len(selectSelected) == 0:
selectSelected = np.array([selectValues[1:3]]).tolist()
fig, listUnits = plot_Stock_vs_Sales(selectSelected)
graph = py_offline.plot(fig, include_plotlyjs=False, output_type='div', show_link=False)
return json.dumps({ 'graph': graph, 'listUnits':listUnits.reset_index().to_json(orient='records')})
</code></pre>
<p>The problem is that if I make the $http post from Angular twice, the Flask route runs twice. This is the log from the server:</p>
<blockquote>
<p>
Seconds:92
127.0.0.1 - - [12/Sep/2016 09:46:35] "POST /updateGraph HTTP/1.1" 200 -
Seconds:110
127.0.0.1 - - [12/Sep/2016 09:47:02] "POST /updateGraph HTTP/1.1" 200 -</p>
</blockquote>
<p>I want to either make the $http post request allow only one request at a time, or make Flask run only one instance of this route per user. Is this possible through Flask? If not, what would be the best approach through Angular?</p>
| 0
|
2016-09-13T11:54:31Z
| 39,536,266
|
<p>From your description of the situation, this is something that would be better solved on the client side.</p>
<p>I would just set a flag (define it on a global or class level) if there's a request already ongoing for that task. For example:</p>
<pre><code>if (processing) {
return;
}
processing = true;
$http.post('/updateGraph', $scope.graphingParameters).success(function(response) {
$scope.graphingParameters.graph = response.graph;
$scope.listUnits = JSON.parse(response.listUnits);
$scope.myHTML = $sce.trustAsHtml($scope.graphingParameters.graph);
$scope.showME = true;
processing = false;
})
</code></pre>
<p>This implementation can also be used to hide/disable the button or whatever triggers the request from the user so that they cannot trigger it while there's an ongoing request.</p>
<p>Note that I'm not very familiar with angular 1, so you might want to trigger that <code>processing = false</code> after the request and not only on <code>success</code>.</p>
| 0
|
2016-09-16T16:31:22Z
|
[
"jquery",
"python",
"angularjs",
"ajax",
"flask"
] |
store complex dictionary in pandas dataframe
| 39,469,643
|
<p>This question follows my previous one; it's the parent dictionary of the one from before:
<strong><a href="http://stackoverflow.com/questions/39458806/store-dictionary-in-pandas-dataframe">store dictionary in pandas dataframe</a></strong></p>
<p>I have a dictionary </p>
<pre><code> dictionary_example={'New York':{1234:{'choice':0,'city':'New York','choice_set':{0:{'A':100,'B':200,'C':300},1:{'A':200,'B':300,'C':300},2:{'A':500,'B':300,'C':300}}},
234:{'choice':1,'city':'New York','choice_set':{0:{'A':100,'B':400},1:{'A':100,'B':300,'C':1000}}},
1876:{'choice':2,'city':'New York','choice_set':{0:{'A': 100,'B':400,'C':300},1:{'A':100,'B':300,'C':1000},2:{'A':600,'B':200,'C':100}}
}},
'London':{1534:{'choice':0,'city':'London','choice_set':{0:{'A':100,'B':400,'C':300},1:{'A':200,'B':300,'C':300},2:{'A':500,'B':300,'C':300}}},
2134:{'choice':1,'city':'London','choice_set':{0:{'A':100,'B':600},1:{'A':170,'B':300,'C':1000}}},
1776:{'choice':2,'city':'London','choice_set':{0:{'A':100,'B':400,'C':500},1:{'A':100,'B':300},2:{'A':600,'B':200,'C':100}}}},
'Paris':{1534:{'choice':0,'city':'Paris','choice_set':{0:{'A':100,'B':400,'C':300},1:{'A':200,'B':300,'C':300},2:{'A':500,'B':300,'C':300}}},
2134:{'choice':1,'city':'Paris','choice_set':{0:{'A':100,'B':600},1:{'A':170,'B':300,'C':1000}}},
1776:{'choice':1,'city':'Paris','choice_set':{0:{'A': 100,'B':400,'C':500},1:{'A':100,'B':300}}}
}}
</code></pre>
<p>I want it to become a pandas data frame like this (some specific values inside may not be exactly accurate):</p>
<pre><code>id choice A_0 B_0 C_0 A_1 B_1 C_1 A_2 B_2 C_2 New York London Paris
1234 0 100 200 300 200 300 300 500 300 300 1 0 0
234 1 100 400 - 100 300 1000 - - - 1 0 0
1876 2 100 400 300 100 300 1000 600 200 100 1 0 0
1534 0 100 200 300 200 300 300 500 300 300 0 1 0
2134 1 100 400 - 100 300 1000 - - - 0 1 0
2006 2 100 400 300 100 300 1000 600 200 100 0 1 0
1264 0 100 200 300 200 300 300 500 300 300 0 0 1
1454 1 100 400 - 100 300 1000 - - - 0 0 1
1776 1 100 400 300 100 300 - - - - 0 0 1
</code></pre>
<p>In the old question a helpful answer provided a way to handle the sub-dictionary:</p>
<pre><code>df = pd.read_json(json.dumps(dictionary_example)).T
def to_s(r):
return pd.read_json(json.dumps(r)).unstack()
flattened_choice_set = df["choice_set"].apply(to_s)
flattened_choice_set.columns = ['_'.join((str(col[0]), col[1])) for col in flattened_choice_set.columns]
result = pd.merge(df, flattened_choice_set,
left_index=True, right_index=True).drop("choice_set", axis=1)
</code></pre>
<p>Is there any way to do the same for the large dictionary?</p>
<p>All the best,
Kevin</p>
| 1
|
2016-09-13T12:04:58Z
| 39,471,338
|
<p>The previously provided solution, as you quote, is not a very neat one. This one is more readable and provides the solution for your current problem. If possible you should reconsider your data structure though...</p>
<pre><code>df = pd.DataFrame()
question_ids = [0,1,2]
</code></pre>
<p>Create a dataframe with a row for every city-choice combination, with dictionary in choice set column</p>
<pre><code>for _, city_value in dictionary_example.iteritems():
city_df = pd.DataFrame.from_dict(city_value).T
city_df = city_df.join(pd.DataFrame(city_df["choice_set"].to_dict()).T)
df = df.append(city_df)
</code></pre>
<p>Join the weird column names from choice set to your df</p>
<pre><code>for i in question_ids:
choice_df = pd.DataFrame(df[i].to_dict()).T
choice_df.columns = map(lambda x: "{}_{}".format(x,i), choice_df.columns)
df = df.join(choice_df)
</code></pre>
<p>Fix the city columns</p>
<pre><code>df = pd.get_dummies(df, prefix="", prefix_sep="", columns=['city'])
df.drop(question_ids + ['choice_set'], axis=1, inplace=True)
# Optional to remove NaN from questions:
# df = df.fillna(0)
df
</code></pre>
| 2
|
2016-09-13T13:28:25Z
|
[
"python",
"json",
"pandas",
"dictionary",
"dataframe"
] |
TypeError: expected string or bytes-like object pandas variable
| 39,469,711
|
<p>I have dataset like this</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'word': ['abs e learning ', 'abs e-learning', 'abs e&learning', 'abs elearning']})
</code></pre>
<p>I want to get </p>
<pre><code> word
0 abs elearning
1 abs elearning
2 abs elearning
3 abs elearning
</code></pre>
<p>I do as bellow</p>
<pre><code>re_map = {r'\be learning\b': 'elearning', r'\be-learning\b': 'elearning', r'\be&learning\b': 'elearning'}
import re
for r, map in re_map.items():
df['word'] = re.sub(r, map, df['word'])
</code></pre>
<p>and error</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-42-fbf00d9a0cba> in <module>()
3 s = df['word']
4 for r, map in re_map.items():
----> 5 df['word'] = re.sub(r, map, df['word'])
C:\Users\Edward\Anaconda3\lib\re.py in sub(pattern, repl, string, count, flags)
180 a callable, it's passed the match object and must return
181 a replacement string to be used."""
--> 182 return _compile(pattern, flags).sub(repl, string, count)
183
184 def subn(pattern, repl, string, count=0, flags=0):
TypeError: expected string or bytes-like object
</code></pre>
<p>I can apply str like this</p>
<pre><code>for r, map in re_map.items():
df['word'] = re.sub(r, map, str(df['word']))
</code></pre>
<p>There is no mistake but i cann't get pd.dataFrame as i wish</p>
<pre><code> word
0 0 0 0 abs elearning \n1 abs elearning\...\n1 0 0 abs elearning \n1 abs elearning\...\n2 0 0 abs elearning \n1 abs ele...
1 0 0 0 abs elearning \n1 abs elearning\...\n1 0 0 abs elearning \n1 abs elearning\...\n2 0 0 abs elearning \n1 abs ele...
2 0 0 0 abs elearning \n1 abs elearning\...\n1 0 0 abs elearning \n1 abs elearning\...\n2 0 0 abs elearning \n1 abs ele...
3 0 0 0 abs elearning \n1 abs elearning\...\n1 0 0 abs elearning \n1 abs elearning\...\n2 0 0 abs elearning \n1 abs ele...
</code></pre>
<p>how to improve it?</p>
| 1
|
2016-09-13T12:08:29Z
| 39,469,848
|
<p><code>df['word']</code> is a pandas Series (a sequence of strings), not a single string, and converting it with <code>str()</code> just mangles it into one big string.</p>
<p>You need to apply the regex to each member:</p>
<pre><code>for r, map in re_map.items():
    df['word'] = [re.sub(r, map, e) for e in df['word']]
</code></pre>
<p>classical alternate method without list comprehension:</p>
<pre><code> for r, map in re_map.items():
d = df['word']
for i,e in enumerate(d):
d[i] = re.sub(r, map, e)
</code></pre>
<p>BTW you could simplify your regex list drastically:</p>
<pre><code>re_map = {r'\be[\-& ]learning\b': 'elearning'}
</code></pre>
<p>By doing that you only have one regex and this becomes a one-liner:</p>
<pre><code> df['word'] = [re.sub(r'\be[\-& ]learning\b', 'elearning', e) for e in df['word']]:
</code></pre>
<p>could even be faster by pre-compiling the regex once for all substitutions:</p>
<pre><code> theregex = re.compile(r'\be[\-& ]learning\b')
df['word'] = [theregex.sub('elearning', e) for e in df['word']]
</code></pre>
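<p>As an aside, since <code>word</code> is a pandas column, the same substitution can also be done with the vectorised string methods, using the simplified single regex above (a small sketch):</p>
<pre><code>df['word'] = df['word'].str.replace(r'\be[\-& ]learning\b', 'elearning')
</code></pre>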
| 1
|
2016-09-13T12:15:13Z
|
[
"python",
"regex"
] |
I can print a local variable but not return it (python 2.7)
| 39,469,812
|
<p>EDIT:
Adding in </p>
<pre><code>upperline = []
lowerline = []
</code></pre>
<p>above the <code>for</code> loop seems to allow the function to be called once as expected, but not more than once. If called a second time the following error will be thrown:</p>
<pre><code>transitenergy = (float(upperline[1]) - float(lowerline[1]))
IndexError: list index out of range
</code></pre>
<p>If instead </p>
<pre><code>upperline = [1,2]
lowerline = [4,5]
</code></pre>
<p>is added above the <code>for</code> loop, the function returns the expected value the first time, and then -3 every other time.</p>
<hr>
<p>I am having a problem with a <code>for</code> loop seemingly being unable to retain variables when trying to return these variables, even though I can print the variables.
If I define the function as follows, when it is called , the <code>transitenergy</code> will be printed to the console, and then the following error will be thrown:</p>
<pre><code>transitenergy = (float(upperline[1]) - float(lowerline[1]))
UnboundLocalError: local variable 'upperline' referenced before assignment
</code></pre>
<p> </p>
<pre><code>def crossreference(datafile, lookuppointers):
pointers = [(int(lookuppointers[0]) - 1), (int(lookuppointers[1]) - 1)]
lowerpointer = min(pointers)
upperpointer = max(pointers)
for i, line in enumerate(datafile):
if i == lowerpointer:
lowerline = filter(lambda a: a!= '\t',filterstring(line))
elif i == upperpointer:
upperline = filter(lambda a: a!= '\t',filterstring(line))
break
    transitenergy = (float(upperline[1]) - float(lowerline[1]))
print transitenergy
return transitenergy
</code></pre>
<p>I have also tried moving the return statement inside the loop i.e.</p>
<pre><code>...
elif i == upperpointer:
upperline = filter(lambda a: a!= '\t',filterstring(line))
transitenergy = (float(upperline[1]) - float(lowerline[1]))
return transitenergy
</code></pre>
<p>or adding the return to a further <code>elif</code> branch i.e.</p>
<pre><code>...
elif i == upperpointer:
upperline = filter(lambda a: a!= '\t',filterstring(line))
elif i > upperpointer:
transitenergy = (float(upperline[1]) - float(lowerline[1]))
return transitenergy
</code></pre>
<p>but both of these just return a <code>NoneType</code> when the function is called and throws <code>TypeError: bad operand type for abs(): NoneType</code> when I try to call <code>abs()</code> on it (as expected of a <code>NoneType</code>). </p>
<p>The interesting part here is that if I add a print statement after defining the local <code>transitenergy</code> variable, in any of the trials I have described, calling the function prints <code>transitenergy</code> without a problem and then throws the error.</p>
<p>I should mention that the files used for the <code>datafile</code> argument are very large (on the order of 100+ MB), where each line has the structure:</p>
<pre><code>" [line number+1] [float] ...."
</code></pre>
<p>(there are more numbers after this in the string but they are not relevant to the task)</p>
<p>The <code>lookuppointers</code> argument are lists of the following structure:</p>
<pre><code>[int, int, ...]
</code></pre>
<p>The integers are not ordered (hence the <code>min</code> and <code>max</code>) and refer to a [line number +1] of the <code>datafile</code></p>
<p>The line:</p>
<pre><code>filter(lambda a: a!= '\t',filterstring(line))
</code></pre>
<p>is there because I am iterating over a list of many of these files, and although they are usually in the correct format, sometimes they have a <code>\t</code> at the beginning.</p>
<p>The <code>filterstring</code> function is defined as:</p>
<pre><code>def filterstring(string):
return filter(lambda a:a!='',string.split(" "))
</code></pre>
<p>to turn the line in the <code>datafile</code> into a list of strings.</p>
<p>The question is how can I return the <code>transitenergy</code> variable as it is printed.</p>
<p>If there is another way that I can perform this type of cross referencing without having the whole <code>datafile</code> in memory then that would work also.</p>
| 0
|
2016-09-13T12:13:08Z
| 39,472,654
|
<p>The solution lies in in the fact that the datafile was kept open. Adding the line <code>datafile.seek(0)</code> to the function i.e.</p>
<pre><code>def crossreference(datafile, lookuppointers):
pointers = [(int(lookuppointers[0]) - 1), (int(lookuppointers[1]) - 1)]
lowerpointer = min(pointers)
upperpointer = max(pointers)
datafile.seek(0)
for i, line in enumerate(datafile):
if i == lowerpointer:
lowerline = filter(lambda a: a!= '\t',filterstring(line))
elif i == upperpointer:
upperline = filter(lambda a: a!= '\t',filterstring(line))
transitenergy = (float(upperline[1]) - float(lowerline[1]))
return transitenergy
</code></pre>
<p>Caused the file to be read from the beginning each time the function was called, as opposed to what was happening before where the file was being read from the last place it was read from.</p>
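<p>A minimal sketch of the underlying behaviour (with a made-up file), showing why the rewind matters when the same open file object is scanned more than once:</p>
<pre><code>with open('data.txt', 'w') as f:
    f.write('a\nb\nc\n')

datafile = open('data.txt')
print len(list(datafile))  # 3 -- the first pass reads every line
print len(list(datafile))  # 0 -- the file position is already at the end
datafile.seek(0)
print len(list(datafile))  # 3 -- rewinding restores the full scan
datafile.close()
</code></pre>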
| 0
|
2016-09-13T14:35:12Z
|
[
"python",
"python-2.7",
"function",
"for-loop"
] |
Multiple Mixins and properties
| 39,469,841
|
<p>I am trying to create a mixin class that has its own properties, but the class has no <code>__init__</code> to initialize the "hidden" variable behind the property.</p>
<pre><code>class Software:
__metaclass__ = ABCMeta
@property
def volumes(self):
return self._volumes
@volumes.setter
def volumes(self, value):
pass
class Base(object):
def __init__(self):
self._volumes = None
class SoftwareUser(Base, Software):
def __init__(self):
super(Base, self).__init__()
</code></pre>
<p>So the above is the best that I have come up with to solve this, but the reality is that <code>_volumes</code> doesn't really belong in the base. I could add an <code>__init__</code> to the Software class, but then the super call won't work on both mixins.</p>
<p>The second issue is that I will need multiple mixins depending on the incoming call. They will always need the base, but the mixins will change, so I don't really want variables from mixins that aren't mixed in for that call. </p>
<p>Is there a way that I can have the mixin add its variables to the class if it is mixed in, perhaps by dynamically calling the <code>__init__</code> of the mixin class?</p>
<p>Any questions let me know.</p>
<p>Thanks</p>
| 0
|
2016-09-13T12:14:58Z
| 39,471,186
|
<p>Ok, so here is what I came up with. I am open to other answers if I have made this way overcomplicated.</p>
<pre><code>class Software:
@property
def volumes(self):
return self._volumes
@volumes.setter
def volumes(self, value):
pass
def __init__(self):
self._volumes = None
class Base(object):
def __init__(self):
other_vars = None
class SoftwareUser(Base, Software):
def _bases_init(self, *args, **kwargs):
for base in type(self).__bases__:
base.__init__(self, *args, **kwargs)
def __init__(self, *args, **kwargs):
self._bases_init(*args, **kwargs)
</code></pre>
| 0
|
2016-09-13T13:20:48Z
|
[
"python",
"mixins"
] |
Multiple Mixins and properties
| 39,469,841
|
<p>I am trying to create a mixin class that has its own properties, but the class has no <code>__init__</code> to initialize the "hidden" variable behind the property.</p>
<pre><code>class Software:
__metaclass__ = ABCMeta
@property
def volumes(self):
return self._volumes
@volumes.setter
def volumes(self, value):
pass
class Base(object):
def __init__(self):
self._volumes = None
class SoftwareUser(Base, Software):
def __init__(self):
super(Base, self).__init__()
</code></pre>
<p>So the above is the best that I have come up with to solve this, but the reality is that <code>_volumes</code> doesn't really belong in the base. I could add an <code>__init__</code> to the Software class, but then the super call won't work on both mixins.</p>
<p>The second issue is that I will need multiple mixins depending on the incoming call. They will always need the base, but the mixins will change, so I don't really want variables from mixins that aren't mixed in for that call. </p>
<p>Is there a way that I can have the mixin add its variables to the class if it is mixed in, perhaps by dynamically calling the <code>__init__</code> of the mixin class?</p>
<p>Any questions let me know.</p>
<p>Thanks</p>
| 0
|
2016-09-13T12:14:58Z
| 39,471,532
|
<p>Yes, that's wildly overcomplicated. A class (including mixins) should only be responsible for calling the <em>next</em> implementation in the MRO, not marshalling all of them. Try:</p>
<pre><code>class Software:
@property
def volumes(self):
return self._volumes
@volumes.setter
def volumes(self, value):
pass
def __init__(self):
self._volumes = None
super().__init__() # mixin calls super too
class Base(object):
def __init__(self):
other_vars = None
class SoftwareUser(Software, Base): # note order
def __init__(self):
super().__init__() # all you need here
</code></pre>
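<p>A quick usage check of the above (assuming Python 3, since the bare <code>super()</code> calls require it):</p>
<pre><code>user = SoftwareUser()  # runs Software.__init__, then Base.__init__, via the MRO
print(user.volumes)    # None -- the property is backed by _volumes set in the mixin
</code></pre>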
| 0
|
2016-09-13T13:38:15Z
|
[
"python",
"mixins"
] |
Numpy in1D multiple evaluation statements
| 39,469,936
|
<p>I'm trying to use numpy as opposed to nested for loops and trying to find if a value is within a particular tolerance.</p>
<p>The code in python using the nested loops works fine and I do get the results I'm looking for but unfortunately is not scalable and takes a couple of hours when the size of the list is 200k plus items.</p>
<p>What I have now as a second iteration of the process is:</p>
<pre><code>import numpy as np
import numpy.ma as ma
from numpy import newaxis
#some data provided as an example
a= np.array([['id1', 8988, 7997, 210.0, 240.0, 180, 300, 7000.0, 9038, 8938, 8047, 7947, 231.0, 189.0, 8400.0, 5600.0],
['id2', 7314, 5613, 210.0, 240.0, 180, 300, 7000.0, 7364, 7264, 5663, 5563, 231.0, 189.0, 8400.0, 5600.0],
['id3', 5520, 9888, 35.0, 55.0, -125, 235, 7000.0, 5570, 5470, 9938, 9838, 38.5, 31.5, 8400.0, 5600.0],
['id4', 6270, 4270, 0.0, 90.0, -90, 270, 7000.0, 6320, 6220, 4320, 4220, 0.0, 0.0, 8400.0, 5600.0]])
print(a)
validation = np.ma.MaskedArray(((a[:, 1:2] <= a[:, 8:9]) & (a[:, 1:2] >= a[:, 9:10])) \
& ((a[:, 2:3] <= a[:, 10:11]) & (a[:, 2:3] >= a[:, 11:12])) \
& ((a[:, 3:4] <= a[:, 12:13]) & (a[:, 3:4] >= a[:, 13:14])) \
& ((a[:, 7:8] <= a[:, 14:15]) & (a[:, 7:8] >= a[:, 15:])))
e = np.in1d(a[:, 1:2], a[validation]) <-- this is were I try to apply the check for tolerances
e1 = np.where(e[:, newaxis], a[:, :1], np.zeros(1, dtype=int))
ef = e1[~np.all(e1 == 0, axis=0)]
print('Final array', ef)
</code></pre>
<p>On my first attempt I used numpy's meshgrid to create all combinations (one for each comparison) and then ran a numpy.where on the results. It works, but when using 100k plus items the total amount of RAM required is more than 150GB.</p>
<p>Any help, advice, comment is appreciated.</p>
| 0
|
2016-09-13T12:19:09Z
| 39,482,725
|
<p>If I copy-n-paste your <code>a</code> I get a 4x16 array of strings</p>
<pre><code>In [37]: a
Out[37]:
array([['id1', '8988', '7997', '210.0', '240.0', '180', '300', '7000.0',
'9038', '8938', '8047', '7947', '231.0', '189.0', '8400.0',
'5600.0'],
....
dtype='<U6')
</code></pre>
<p>Applying your <code>validation</code> expression to that produces the following (forget about the MaskedArray bit). Of course it is doing a comparison of strings.</p>
<pre><code>array([[ True],
[ True],
[ True],
[ True]], dtype=bool)
</code></pre>
<p>If I remove the <code>id</code> column I get a 4x15 of floats</p>
<pre><code>In [39]: a
Out[39]:
array([[ 8988. , 7997. , 210. , 240. , 180. , 300. , 7000. ,
9038. , 8938. , 8047. , 7947. , 231. , 189. , 8400. ,
5600. ],
...]])
</code></pre>
<p>WIth that I think the <code>validation</code> test can be simplified to:</p>
<pre><code>In [41]: ((a[:, 0] <= a[:, 7]) & (a[:, 0] >= a[:, 8])) \
...: & ((a[:, 1] <= a[:, 9]) & (a[:, 1] >= a[:, 10])) \
...: & ((a[:, 2] <= a[:, 11]) & (a[:, 2] >= a[:, 12])) \
...: & ((a[:, 6] <= a[:, 13]) & (a[:, 6] >= a[:, 14]))
Out[41]: array([ True, True, True, True], dtype=bool)
</code></pre>
<p>What is this doing?</p>
<pre><code>e = np.in1d(a[:, 1:2], a[validation])
</code></pre>
<p><code>a[validation]</code> is all the <code>ok</code> rows of <code>a</code>; <code>a[:,0]</code> is the first value of each row. But <code>np.in1d</code> is meant to check the contents of one 1d array against the contents of another 1d array. As you wrote it, it is using two 2d arrays.</p>
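<p>For reference, this is what <code>np.in1d</code> does with two genuinely 1d arrays:</p>
<pre><code>np.in1d([1, 2, 3, 4], [2, 4, 6])
# array([False,  True, False,  True], dtype=bool)
</code></pre>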
<p>At this point, I'm going to give up. </p>
<p>Construct a simpler test case, and make sure it works at each step. Show the intermediate values. Then we can discuss the step(s) where it does not work.</p>
| 0
|
2016-09-14T04:55:11Z
|
[
"python",
"numpy",
"evaluation"
] |
Numpy in1D multiple evaluation statements
| 39,469,936
|
<p>I'm trying to use numpy as opposed to nested for loops and trying to find if a value is within a particular tolerance.</p>
<p>The code in python using the nested loops works fine and I do get the results I'm looking for but unfortunately is not scalable and takes a couple of hours when the size of the list is 200k plus items.</p>
<p>What I have now as a second iteration of the process is:</p>
<pre><code>import numpy as np
import numpy.ma as ma
from numpy import newaxis
#some data provided as an example
a= np.array([['id1', 8988, 7997, 210.0, 240.0, 180, 300, 7000.0, 9038, 8938, 8047, 7947, 231.0, 189.0, 8400.0, 5600.0],
['id2', 7314, 5613, 210.0, 240.0, 180, 300, 7000.0, 7364, 7264, 5663, 5563, 231.0, 189.0, 8400.0, 5600.0],
['id3', 5520, 9888, 35.0, 55.0, -125, 235, 7000.0, 5570, 5470, 9938, 9838, 38.5, 31.5, 8400.0, 5600.0],
['id4', 6270, 4270, 0.0, 90.0, -90, 270, 7000.0, 6320, 6220, 4320, 4220, 0.0, 0.0, 8400.0, 5600.0]])
print(a)
validation = np.ma.MaskedArray(((a[:, 1:2] <= a[:, 8:9]) & (a[:, 1:2] >= a[:, 9:10])) \
& ((a[:, 2:3] <= a[:, 10:11]) & (a[:, 2:3] >= a[:, 11:12])) \
& ((a[:, 3:4] <= a[:, 12:13]) & (a[:, 3:4] >= a[:, 13:14])) \
& ((a[:, 7:8] <= a[:, 14:15]) & (a[:, 7:8] >= a[:, 15:])))
e = np.in1d(a[:, 1:2], a[validation]) <-- this is were I try to apply the check for tolerances
e1 = np.where(e[:, newaxis], a[:, :1], np.zeros(1, dtype=int))
ef = e1[~np.all(e1 == 0, axis=0)]
print('Final array', ef)
</code></pre>
<p>On my first attempt I used numpy's meshgrid to create all combinations (one for each comparison) and then ran a numpy.where on the results. It works, but when using 100k plus items the total amount of RAM required is more than 150GB.</p>
<p>Any help, advice, comment is appreciated.</p>
| 0
|
2016-09-13T12:19:09Z
| 39,483,404
|
<p>As hpaulj says, first get rid of the <code>id</code>'s.</p>
<p>Second, why are your tolerances and your values in the same array? If you had <code>min_tol</code> and <code>max_tol</code> in separate arrays, you could do this much more easily.</p>
<p>You probably need something like (after removing the <code>id</code>'s):</p>
<pre><code>min_tol = a[:, 8:15:2]
max_tol = a[:, 7:14:2]
a_val = np.c_[a[:, :3], a[:, 6]]
validation = (a_val >= min_tol) & (a_val <= max_tol)
</code></pre>
<p>Although I'm really not sure at this stage...</p>
| 0
|
2016-09-14T05:58:17Z
|
[
"python",
"numpy",
"evaluation"
] |
Install Python 3.5.2, but pip for Python 2.6
| 39,469,953
|
<p>My VPS server came with Python 2.6, and I installed Python 3.5.2.
When I try to install some packages with <code>pip</code>, I get errors.</p>
<p>During installation packages:</p>
<pre><code>DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6
</code></pre>
<p>Versions:</p>
<pre><code># Python -V
# Python 3.5.2
# pip -V
# pip 8.1.2 from /usr/lib/python2.6/site-packages/pip-8.1.2-py2.6.egg (python 2.6)
# cat /etc/*-release
# CentOS release 6.8 (Final)
</code></pre>
<p>How to change path to pip from python 3.5 ?</p>
| 1
|
2016-09-13T12:20:10Z
| 39,470,070
|
<p>If pip is not present on the server, you can download the <a href="https://bootstrap.pypa.io/get-pip.py" rel="nofollow">get-pip</a> script and run it with your Python 3 interpreter.</p>
<p>After installing Python 3, pip is usually installed alongside it and can be run with the <code>pip3</code> command.</p>
<p>For example you can use:</p>
<pre><code>pip3 install netaddr
</code></pre>
| 1
|
2016-09-13T12:26:04Z
|
[
"python",
"centos",
"pip"
] |
Install Python 3.5.2, but pip for Python 2.6
| 39,469,953
|
<p>My VPS server came with Python 2.6, and I installed Python 3.5.2.
When I try to install some packages with <code>pip</code>, I get errors.</p>
<p>During installation packages:</p>
<pre><code>DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6
</code></pre>
<p>Versions:</p>
<pre><code># Python -V
# Python 3.5.2
# pip -V
# pip 8.1.2 from /usr/lib/python2.6/site-packages/pip-8.1.2-py2.6.egg (python 2.6)
# cat /etc/*-release
# CentOS release 6.8 (Final)
</code></pre>
<p>How to change path to pip from python 3.5 ?</p>
| 1
|
2016-09-13T12:20:10Z
| 39,470,072
|
<p>Upgrading pip manually solved this issue for me once. <a href="https://pip.pypa.io/en/stable/installing/#upgrading-pip" rel="nofollow" title="Upgrad pip">Update pip Documentation</a></p>
<blockquote>
<p>pip is already installed if you're using Python 2 >=2.7.9 or Python 3 >=3.4 downloaded from python.org, but you'll need to upgrade pip.</p>
</blockquote>
<p>On Linux or OS X:</p>
<pre><code>pip install -U pip
</code></pre>
<p>On Windows [5]:</p>
<pre><code>python -m pip install -U pip
</code></pre>
| 0
|
2016-09-13T12:26:08Z
|
[
"python",
"centos",
"pip"
] |
VMWare pyvmomi 6.0.0 : Operation not supported
| 39,469,985
|
<p>I am using VMWare's pyvmomi to manage my ESXI's virtual machines </p>
<p>I'm using :</p>
<p><strong>pyvmomi-6.0.0</strong> with <strong>python 2.7.5</strong> to clone a VM on</p>
<p><strong>VMWare ESXi 6.0.0 Update 1</strong> which is managed by</p>
<p><strong>vCenter Server 6</strong></p>
<p>Using pyvmomi, I can successfully retrieve VM objects, iterate through datacenters, datastores, VMs etc...
But I can't clone them. </p>
<p>I am connecting to my ESXi as root</p>
<p>I'm always getting the following error :
(I've tried cloning vms and creating folders on ESXI)</p>
<pre><code>./test.py
Source VM : TEST_A
Pool cible : Pool_2
Traceback (most recent call last):
File "./vmomiTest.py", line 111, in <module>
sys.exit(main())
File "./vmomiTest.py", line 106, in main
tasks.wait_for_tasks(esxi, [task])
File "/home/user/dev/tools/tasks.py", line 53, in wait_for_tasks
raise task.info.error
pyVmomi.VmomiSupport.NotSupported: (vmodl.fault.NotSupported) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'The operation is not supported on the object.',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) []
}
</code></pre>
<p>When I read the /var/log/hostd.log file on the ESXi host, I get the following:</p>
<blockquote>
<p>2016-09-13T10:15:17.775Z info hostd[51F85B70] [Originator@6876
sub=Vimsvc.TaskManager opID=467be296 user=root] Task Created :
haTask-2-vim.VirtualMachine.clone-416315 </p>
<p>2016-09-13T10:15:17.779Z info
hostd[51F03B70] [Originator@6876 sub=Default opID=467be296 user=root]
AdapterServer caught exception: vmodl.fault.NotSupported</p>
<p>2016-09-13T10:15:17.779Z info hostd[51F03B70] [Originator@6876
sub=Vimsvc.TaskManager opID=467be296 user=root] Task Completed :
haTask-2-vim.VirtualMachine.clone-416315 Status error</p>
</blockquote>
<p>Are there any other prerequisites that I don't meet?
Does anyone have any clue?</p>
<p><strong>Using the following test.py example code :</strong>
</p>
<pre class="lang-py prettyprint-override"><code>def get_obj_case_insensitive(content, vimtype, name, folder=None):
obj = None
if not folder:
folder = content.rootFolder
container = content.viewManager.CreateContainerView(folder, vimtype, True)
for item in container.view:
if item.name.lower() == name.lower():
obj = item
break
return obj
def get_obj(content, vimtype, name, folder=None):
obj = None
if not folder:
folder = content.rootFolder
container = content.viewManager.CreateContainerView(folder, vimtype, True)
for item in container.view:
if item.name == name:
obj = item
break
return obj
def main():
esxi = connect.SmartConnect(user=esxi_user,
pwd=esxi_password,
host=esxi_addr,
port=443)
atexit.register(connect.Disconnect, esxi)
content = esxi.RetrieveContent()
source_vm = get_obj(content, [vim.VirtualMachine], source_vm_name)
if source_vm == None:
print "Source VM %s doesn't exist, couldn't create VM" % source_vm_name
return None
print "Source VM Found : %s" % source_vm.config.name
wanted_pool = get_obj_case_insensitive(content, [vim.ResourcePool], wanted_pool_name)
if wanted_pool == None:
print "Resource Pool couldn't be found: Pool=%s" % wanted_pool_name
return None
else:
print "Pool Found : %s " % wanted_pool.name
new_location = vim.vm.RelocateSpec()
new_location.diskMoveType = 'createNewChildDiskBacking'
new_location.datastore = content.rootFolder.childEntity[0].datastore[0]
new_location.pool = wanted_pool
ESXI.ensure_snapshot_exists(source_vm)
clone_spec = vim.vm.CloneSpec(template=False, location=new_location, snapshot=source_vm.snapshot.rootSnapshotList[0].snapshot)
task = source_vm.Clone(name=dest_vm_name, folder=source_vm.parent, spec=clone_spec)
tasks.wait_for_tasks(esxi, [task])
print "Cloning %s into %s was successfull" % (source_vm.config.name, dest_vm_name)
if __name__ == '__main__':
sys.exit(main())
</code></pre>
| 1
|
2016-09-13T12:22:06Z
| 39,470,594
|
<p>It appears this is because VMware has disabled many operations when you are directly connected to the ESXi host.</p>
<p>Apparently, I should have connected to my vCenter Server in order to properly clone my VM.</p>
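<p>A minimal sketch of what the connection could look like against vCenter instead (the hostname and credentials below are placeholders):</p>
<pre><code>from pyVim import connect

# Connect to the vCenter Server that manages the ESXi host,
# so clone/relocate operations become available.
si = connect.SmartConnect(host='vcenter.example.com',
                          user='administrator@vsphere.local',
                          pwd='password',
                          port=443)
content = si.RetrieveContent()
</code></pre>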
| 1
|
2016-09-13T12:52:19Z
|
[
"python",
"vmware",
"vsphere",
"esxi",
"pyvmomi"
] |
Matplotlib is ignoring the given colourmap
| 39,470,003
|
<p>This is my plot:</p>
<p><a href="http://i.stack.imgur.com/7q3zW.png" rel="nofollow"><img src="http://i.stack.imgur.com/7q3zW.png" alt="enter image description here"></a></p>
<p>This is supposed to be a surface plot. As you can see, that has somewhat failed; in particular, it is ignoring the passed colour map.</p>
<p>It gets called like so:</p>
<pre><code>from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
# Set global options
plt.rc('text', usetex=True)
plt.rc('font', family='sans-serif')
from scipy.interpolate import griddata
from matplotlib import cm
import numpy as np
class testPlot(object):
def __init__(self,trajDict):
# Concat dictionary into (n_i x D) for all i in speeds.
D = np.vstack(trajDict.values())
# Grid the data: [time,angle,velocity]
# time
self.X = D[:,0]
# angle
self.Y = D[:,1]
# velocity
self.Z = D[:,2]
# All vels
self.vels = [1.42,1.11,0.81,0.50]
def surfacePlot(self,intMethod,wire=False,surface=False):
zi = np.linspace(self.Z.min(),self.Z.max(),250)
xi = np.linspace(self.X.min(),self.X.max(),250)
yi = griddata((self.X, self.Z),
self.Y,
(xi[None,:], zi[:,None]),
method=intMethod)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d') #fig.gca(projection='3d')
zig, xig = np.meshgrid(xi, zi)
if surface:
surf = ax.plot_surface(zig, xig, yi,
cmap='Blues',
alpha=0.5)
ax.grid(False)
ax.set_ylabel('Velocity $[m/s]$')
ax.set_ylim([min(self.vels)-0.2, max(self.vels)+0.2])
ax.set_axis_off
ax.set_zlabel('Angle $[\circ]$')
ax.set_zlim([min(self.Y)-5,max(self.Y)+5])
ax.set_xlabel('Gait Cycle $[\%]$')
ax.set_xlim([self.X[0]-10,self.X[-1]])
plt.gca().invert_yaxis()
plt.show()
# Close existing windows
plt.close(fig)
</code></pre>
<p>Passing this MWE data for test:</p>
<pre><code>OrderedDict([('a', array([[ 0. , 0. , 1.42],
[ 1. , 1. , 1.42],
[ 2. , 2. , 1.42],
[ 3. , 3. , 1.42],
[ 4. , 4. , 1.42]])),
('b', array([[ 0. , 1. , 1.11],
[ 1. , 2. , 1.11],
[ 2. , 3. , 1.11],
[ 3. , 4. , 1.11],
[ 4. , 5. , 1.11]])),
('c', array([[ 0. , 4. , 0.81],
[ 1. , 5. , 0.81],
[ 2. , 6. , 0.81],
[ 3. , 7. , 0.81],
[ 4. , 8. , 0.81]])),
('d', array([[ 0. , 9. , 0.5],
[ 1. , 10. , 0.5],
[ 2. , 11. , 0.5],
[ 3. , 12. , 0.5],
[ 4. , 13. , 0.5]]))])
</code></pre>
<p>to</p>
<pre><code>myTest = testPlot(data)
myTest.surfacePlot('linear',surface=True)
</code></pre>
<p>Should give a working MWE (NOTE: it will not reproduce the plot shown above). Note that the data needs to be in the above form to work.</p>
| 0
|
2016-09-13T12:22:57Z
| 39,472,348
|
<p>Your code is fine, and the plot shown is a surface plot. You can tell because lines in the background are behind blueish areas. Remove the alpha settings, consider using a different cmap or specify a custom range for your colorbar to fit your data range.</p>
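<p>A small sketch of those suggestions (the colour map and range values are illustrative only):</p>
<pre><code>from matplotlib import cm

# Drop the alpha, pick a higher-contrast cmap, and pin the colour
# normalisation to the range of the plotted values.
surf = ax.plot_surface(zig, xig, yi,
                       cmap=cm.viridis,
                       vmin=np.nanmin(yi),
                       vmax=np.nanmax(yi))
</code></pre>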
| 0
|
2016-09-13T14:19:23Z
|
[
"python",
"matplotlib"
] |
Initialize a Flask-WhooshAlchemy index when using the app factory pattern
| 39,470,007
|
<p>I am developing an app with Flask using the <a href="http://flask.pocoo.org/docs/0.11/patterns/appfactories/#factories-extensions" rel="nofollow">application factory pattern</a>. Initializing the Whoosh index doesn't work, because <code>current_app</code> cannot be used without setting up an app context explicitly. How do I do this?</p>
<p><code>__init__.py</code></p>
<pre><code>from flask import Flask
from .frontend import frontend
def create_app(configfile=None):
app = Flask(__name__)
from .models import db
db.init_app(app)
app.register_blueprint(frontend)
return app
</code></pre>
<p><code>models.py</code></p>
<pre><code>from flask_sqlalchemy import SQLAlchemy
import flask_whooshalchemy as wa
from flask import current_app
db = SQLAlchemy()
class Institution(db.Model):
__searchable__ = ['name', 'description']
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(40))
description = db.Column(db.Text)
# this does not work
wa.whoosh_index(current_app, Institution)
</code></pre>
| 1
|
2016-09-13T12:23:11Z
| 39,474,075
|
<p><a href="http://stackoverflow.com/questions/19437883/when-scattering-flask-models-runtimeerror-application-not-registered-on-db-w">You can't use <code>current_app</code> outside an application context.</a> An application context exists during a request, or when explicitly created with <code>app.app_context()</code>.</p>
<p>Import your models during <code>create_app</code> and index them with Whoosh there. Remember, the factory is where you're performing all setup for the application.</p>
<pre><code>def create_app():
app = Flask('myapp')
...
from myapp.models import Institution
wa.whoosh_index(app, Institution)
...
return app
</code></pre>
<p>If you want to keep the code local to the blueprint, you can use the blueprint's <code>record_once</code> function to perform the index when the blueprint is registered on the app.</p>
<pre><code>@bp.record_once
def record_once(state):
wa.whoosh_index(state.app, Institution)
</code></pre>
<p>This will be called at most once when registering the blueprint with an app. <code>state</code> contains the app, so you don't need the <code>current_app</code>.</p>
| 0
|
2016-09-13T15:42:51Z
|
[
"python",
"flask"
] |
Python Overflow Error: iter index too large
| 39,470,012
|
<p>I have a problem with a big iterator in a for loop in the code below. It generates floats by reading a string list containing numbers.</p>
<pre><code>def float_generator(tekstowe):
x = ''
for c in tekstowe:
if c != ' ':
x += c
else:
out = float(x)
x = ''
yield(out)
</code></pre>
<p>I'm getting an <strong>"OverflowError: iter index too large"</strong>. I am using really big iteration counts (like billions of values in the searched file). Is the iteration range somehow limited in for loops? </p>
<p>Using Python 2.7 64 bit. Thanks.</p>
| 3
|
2016-09-13T12:23:20Z
| 39,470,311
|
<p>Looks like <code>tekstowe</code> is a sequence type that only implements <code>__getitem__</code>, not <code>__iter__</code>, so it's using the Python iterator wrapper that calls <code>__getitem__</code> with 0, then 1, 2, 3, etc., until <code>__getitem__</code> raises <code>IndexError</code>.</p>
<p>As an implementation detail, <a href="https://hg.python.org/cpython/file/2.7/Objects/iterobject.c#l57" rel="nofollow">Python 2.7.11 and higher limits the value of the index passed by the iterator wrapper to <code>LONG_MAX</code></a> (before 2.7.11, it wasn't bounds checked but it still used a <code>long</code> for index storage, so it would wrap and start indexing with negative values). This doesn't matter on most non-Windows 64 bit builds, where <code>LONG_MAX</code> is <code>2**63 - 1</code> (larger than you'd likely encounter), but on Windows, C <code>long</code>s remain 32 bit quantities even on 64 bit builds, so <code>LONG_MAX</code> remains <code>2**31 - 1</code>, which is low enough to be reached in human timescales.</p>
<p>Your options are:</p>
<ol>
<li>Change the implementation of whatever class <code>tekstowe</code> is to give it a true <code>__iter__</code> method, so it doesn't get wrapped by the sequence iterator wrapper when you use it</li>
<li>Upgrade to Python 3.4+, ideally 3.5 (2.7.10/3.4.3 and below <a href="https://hg.python.org/cpython/file/v3.4.3/Objects/iterobject.c#l57" rel="nofollow">lacks the check for overflow entirely</a>, but this could mean wraparound causes infinite looping; <a href="https://hg.python.org/cpython/file/3.5/Objects/iterobject.c#l57" rel="nofollow">3.4.4/3.5.0 added the check, and they use a signed <code>size_t</code>, testing against <code>PY_SSIZE_T_MAX</code></a>, which means it will not error until the index reaches <code>2**63 - 1</code> on any 64 bit build, Windows or otherwise)</li>
</ol>
<p>The changes to add the overflow checks were made to resolve <a href="https://bugs.python.org/issue22939" rel="nofollow">Python bug #22939</a>; the type change (from <code>long</code> to <code>Py_ssize_t</code>) for the sequence iterator's index storage occurred in 3.4.0's release, resolving <a href="https://bugs.python.org/issue17932" rel="nofollow">Python bug #17932</a>.</p>
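<p>A minimal sketch of option 1 (the class and file layout are hypothetical stand-ins for whatever <code>tekstowe</code> really is):</p>
<pre><code>class Tekstowe(object):
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        # A real __iter__ bypasses the sequence-iterator wrapper entirely,
        # so there is no per-item index counter to overflow.
        with open(self.path) as f:
            for line in f:
                for ch in line:
                    yield ch
</code></pre>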
| 4
|
2016-09-13T12:36:59Z
|
[
"python",
"loops",
"iterator"
] |
Convert iteration to map
| 39,470,080
|
<p>I've got the following code:</p>
<pre><code>@classmethod
def load(self):
with open('yaml/instruments.yaml', 'r') as ymlfile:
return {v['name']: Instrument(**v) for (v) in list(yaml.load_all(ymlfile))}
</code></pre>
<p>I'd like to load these in parallel using something like:</p>
<pre><code>return ThreadPoolExecutor.map(Instrument, list(yaml.load_all(ymlfile))
</code></pre>
<p>But I'm not quite sure how to get the parameters to pass.</p>
<p>Here's an example of instruments.yaml:</p>
<pre><code>---
name: 'corn'
#Trade December corn only
multiplier: 5000
contract_prefix: 'C'
months_traded: [3, 5, 7, 9, 12]
quandl: 'CHRIS/CME_C2'
first_contract: 196003
backtest_from: 199312
trade_only: [12]
contract_name_prefix: 'C'
quandl_database: 'CME'
slippage: 0.125 #half the spread
ib_code: 'ZC'
</code></pre>
<p>How do I refactor my code as a map so I can use ThreadPoolExecutor?</p>
| 0
|
2016-09-13T12:26:40Z
| 39,472,176
|
<p>Simple solution is to define a top level simple worker function for use in the executor:</p>
<pre><code>def make_instrument_pair(d):
return d['name'], Instrument(**d)
</code></pre>
<p>Then change:</p>
<pre><code>@classmethod
def load(self):
with open('yaml/instruments.yaml', 'r') as ymlfile:
return {v['name']: Instrument(**v) for (v) in list(yaml.load_all(ymlfile))}
</code></pre>
<p>to:</p>
<pre><code>@classmethod
def load(self):
with open('yaml/instruments.yaml') as ymlfile,\
concurrent.futures.ThreadPoolExecutor(8) as executor:
return dict(executor.map(make_instrument_pair, yaml.load_all(ymlfile)))
</code></pre>
<p>As I noted in my comments, this probably won't gain you anything; <a href="https://wiki.python.org/moin/GIL" rel="nofollow">the GIL</a> means that threads don't improve performance unless:</p>
<ol>
<li>The work is done in third party C extensions that explicitly release the GIL before doing a lot of C level work</li>
<li>The work is mostly I/O bound (or otherwise spends most of its time blocking in some way, whether it's sleeping, waiting on locks, etc.)</li>
</ol>
<p>Unless <code>Instrument</code> is really expensive to construct, even using <code>ProcessPoolExecutor</code> likely won't help; you need to do a meaningful amount of work in the tasks dispatched, or you're wasting more time on task management (and for processes, serialization and interprocess communication) than you gain in parallelism.</p>
| 1
|
2016-09-13T14:09:53Z
|
[
"python",
"python-3.x"
] |
read file with variable number of columns in python
| 39,470,115
|
<p>I'm reading a file with a variable number of columns, say 3 fixed columns + unknown/variable number of columns:</p>
<pre><code>21 48 77
15 33 15 K12
78 91 17
64 58 24 R4 C16 R8
12 45 78 Y66
87 24 25
10 33 75
18 19 64 CF D93
</code></pre>
<p>I want to store the first three column entries in specific lists/arrays, because I need to work with them, while putting all the remaining part of the line (from column[2] to the end of line) in another single string, as I don't need to act on it, but just to copy it.</p>
<p>I wrote:</p>
<pre><code>import os, sys
import numpy as np
fi = open("input.dat", "r")
fo = open("output.dat", "w")
for line in fi:
line = line.strip()
columns = line.split()
A00 = str(columns[0])
A01 = str(columns[1])
A02 = str(columns[2])
    A03 = EVERYTHING ELSE UNTIL END OF LINE
</code></pre>
<p>Is there an easy way to do this? Thanks in advance.</p>
| 1
|
2016-09-13T12:28:20Z
| 39,470,189
|
<p>you can use this code :</p>
<pre><code>import os, sys
import numpy as np
fi = open("input.dat", "r")
fo = open("output.dat", "w")
column_3 = []
for line in fi:
line = line.strip()
columns = line.split()
A00 = str(columns[0])
A01 = str(columns[1])
A02 = str(columns[2])
    column_3.append(' '.join(columns[3:]))  # everything after the first three columns
print(column_3)
</code></pre>
| 1
|
2016-09-13T12:31:58Z
|
[
"python",
"file",
"multiple-columns"
] |
read file with variable number of columns in python
| 39,470,115
|
<p>I'm reading a file with a variable number of columns, say 3 fixed columns + unknown/variable number of columns:</p>
<pre><code>21 48 77
15 33 15 K12
78 91 17
64 58 24 R4 C16 R8
12 45 78 Y66
87 24 25
10 33 75
18 19 64 CF D93
</code></pre>
<p>I want to store the first three column entries in specific lists/arrays, because I need to work with them, while putting all the remaining part of the line (from column[2] to the end of line) in another single string, as I don't need to act on it, but just to copy it.</p>
<p>I wrote:</p>
<pre><code>import os, sys
import numpy as np
fi = open("input.dat", "r")
fo = open("output.dat", "w")
for line in fi:
line = line.strip()
columns = line.split()
A00 = str(columns[0])
A01 = str(columns[1])
A02 = str(columns[2])
    A03 = EVERYTHING ELSE UNTIL END OF LINE
</code></pre>
<p>Is there an easy way to do this? Thanks in advance.</p>
| 1
|
2016-09-13T12:28:20Z
| 39,470,258
|
<p>I think the following code snippet can help you. You can also edit this code for your project.</p>
<pre><code>f = open("input.dat")
line = f.readline().strip()            # get the first line
while line:                            # while a line exists in the file f
    columns = line.split('separator')  # get all the columns
    for column in columns:             # for every column in the line
        print column                   # print the column
    line = f.readline().strip()        # get the next line if it exists
f.close()
</code></pre>
<p>I hope it helps you.</p>
| 0
|
2016-09-13T12:34:25Z
|
[
"python",
"file",
"multiple-columns"
] |
read file with variable number of columns in python
| 39,470,115
|
<p>I'm reading a file with a variable number of columns, say 3 fixed columns + unknown/variable number of columns:</p>
<pre><code>21 48 77
15 33 15 K12
78 91 17
64 58 24 R4 C16 R8
12 45 78 Y66
87 24 25
10 33 75
18 19 64 CF D93
</code></pre>
<p>I want to store the first three column entries in specific lists/arrays, because I need to work with them, while putting all the remaining part of the line (from column[2] to the end of line) in another single string, as I don't need to act on it, but just to copy it.</p>
<p>I wrote:</p>
<pre><code>import os, sys
import numpy as np
fi = open("input.dat", "r")
fo = open("output.dat", "w")
for line in fi:
line = line.strip()
columns = line.split()
A00 = str(columns[0])
A01 = str(columns[1])
A02 = str(columns[2])
    A03 = EVERYTHING ELSE UNTIL END OF LINE
</code></pre>
<p>Is there an easy way to do this? Thanks in advance.</p>
| 1
|
2016-09-13T12:28:20Z
| 39,470,337
|
<p>String split allows you to limit the number of extracted parts, so you can do the following:</p>
<pre><code>A00, A01, A02, rest = line.split(" ", 3)
</code></pre>
<p>Example:</p>
<pre><code>print "1 2 3 4 5 6".split(" ", 3)
['1', '2', '3', '4 5 6']
</code></pre>
| 1
|
2016-09-13T12:38:24Z
|
[
"python",
"file",
"multiple-columns"
] |
Python Code Error
| 39,470,177
|
<p>What's wrong with this program in Python? </p>
<p>I need to take an integer input(N) - accordingly, I need to create an array of (N)integers, taking integers also as input. Finally, I need to print the sum of all the integers in the array.</p>
<p>The input is in this format: </p>
<pre><code>5
4 6 8 18 96
</code></pre>
<p>This is the code I wrote : </p>
<pre><code>N = int(input().split())
i=0
s = 0
V=[]
if N<=100 :
for i in range (0,N):
x = int(input().split())
V.append(x)
i+=1
s+=x
print (s)
</code></pre>
<p>It's showing the following error.</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 1, in <module>
N = int(input().split())
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
</code></pre>
| -3
|
2016-09-13T12:31:03Z
| 39,470,214
|
<p><code>split()</code> returns a list which you are trying to convert to an integer.
You probably wanted to convert everything in the list to an integer:</p>
<pre><code>N = [int(i) for i in input().split()]
</code></pre>
<p>You can also use <code>map</code>:</p>
<pre><code>N = list(map(int, input().split()))
</code></pre>
| 6
|
2016-09-13T12:32:42Z
|
[
"python"
] |
Python Code Error
| 39,470,177
|
<p>What's wrong with this program in Python? </p>
<p>I need to take an integer input(N) - accordingly, I need to create an array of (N)integers, taking integers also as input. Finally, I need to print the sum of all the integers in the array.</p>
<p>The input is in this format: </p>
<pre><code>5
4 6 8 18 96
</code></pre>
<p>This is the code I wrote : </p>
<pre><code>N = int(input().split())
i=0
s = 0
V=[]
if N<=100 :
for i in range (0,N):
x = int(input().split())
V.append(x)
i+=1
s+=x
print (s)
</code></pre>
<p>It's showing the following error.</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 1, in <module>
N = int(input().split())
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
</code></pre>
| -3
|
2016-09-13T12:31:03Z
| 39,470,315
|
<p>Your code fails because str.split() returns a list.
<a href="https://docs.python.org/2/library/stdtypes.html#str.split" rel="nofollow">From the Python documentation</a></p>
<blockquote>
<p>Return a list of the words in the string, using sep as the delimiter
string</p>
</blockquote>
<p>If your input is a series of numbers as strings:</p>
<pre><code>1 2 3 4
</code></pre>
<p>You'll want to iterate over the list returned by input.split() to do something with each integer.</p>
<pre><code>line = input()
x = []
for num in line.split():
    x.append(int(num))
</code></pre>
<p>The result of this will be:
x = [1,2,3,4]</p>
| 1
|
2016-09-13T12:37:26Z
|
[
"python"
] |
Python Code Error
| 39,470,177
|
<p>What's wrong with this program in Python? </p>
<p>I need to take an integer input(N) - accordingly, I need to create an array of (N)integers, taking integers also as input. Finally, I need to print the sum of all the integers in the array.</p>
<p>The input is in this format: </p>
<pre><code>5
4 6 8 18 96
</code></pre>
<p>This is the code I wrote : </p>
<pre><code>N = int(input().split())
i=0
s = 0
V=[]
if N<=100 :
for i in range (0,N):
x = int(input().split())
V.append(x)
i+=1
s+=x
print (s)
</code></pre>
<p>It's showing the following error.</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 1, in <module>
N = int(input().split())
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
</code></pre>
| -3
|
2016-09-13T12:31:03Z
| 39,470,385
|
<p>You could use the sys module to take the input when calling the program, and a lambda function to convert the string items in the list to integers. You could also make use of the built-in sum function. Something like this:</p>
<pre><code>#!/usr/bin/env python
import sys
s = sum(i for i in map(lambda x: int(x), sys.argv[1].split(',')))
print s
</code></pre>
<p>Example:</p>
<pre><code>python test.py 1,2,3,4
</code></pre>
<p>The output should be <strong>10</strong>.</p>
<blockquote>
<p>Modifying your code:</p>
</blockquote>
<p>Now, if you want to modify your code to do what it intends to do, you could modify your code like that:</p>
<pre><code>#!/usr/bin/env python
N = input()
s = 0
V=[]
if N<=100 :
for i in range (0,N):
x = input()
V.append(x)
s+=x
print (s)
</code></pre>
<p><strong>Note 1</strong>: In python when you use range you don't have to manually increase the counter in the loop, it happens by default.</p>
<p><strong>Note 2</strong>: the '<code>input()</code>' function will maintain the type of the variable you will enter, so if you enter an integer you don't have to convert it to integer. (Have in mind that <code>input()</code> is <strong>not recommended to use</strong> as it can be dangerous in more complicated projects).</p>
<p><strong>Note 3</strong>: You don't need to use '<code>.split()</code>' for your input.</p>
| 2
|
2016-09-13T12:40:43Z
|
[
"python"
] |
Sqlalchemy json column - how to preform a contains query
| 39,470,239
|
<p>I have the following table in mysql(5.7.12):</p>
<pre><code>class Story(db.Model):
sections_ids = Column(JSON, nullable=False, default=[])
</code></pre>
<p><em>sections_ids</em> is basically a list of integers [1, 2, ..., n].
I need to get all rows where sections_ids contains X.
I tried the following:</p>
<pre><code>stories = session.query(Story).filter(
X in Story.sections_ids
).all()
</code></pre>
<p>but it throws:</p>
<pre><code>NotImplementedError: Operator 'contains' is not supported on this expression
</code></pre>
| 0
|
2016-09-13T12:33:34Z
| 39,470,478
|
<p>Use <a href="https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-contains" rel="nofollow"><code>JSON_CONTAINS(json_doc, val[, path])</code></a>:</p>
<pre><code>from sqlalchemy import func
# JSON_CONTAINS returns 0 or 1, not found or found. Not sure if MySQL
# likes integer values in WHERE, added == 1 just to be safe
session.query(Story).filter(func.json_contains(Story.sections_ids, X) == 1).all()
</code></pre>
<p>As you're searching an array at the top level, you do not need to give <em>path</em>.</p>
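<p>One caveat (an assumption, not tested against your setup): both arguments to <code>JSON_CONTAINS</code> must themselves be valid JSON, so it may be safer to encode the candidate value first:</p>
<pre><code>import json
from sqlalchemy import func

stories = session.query(Story).filter(
    func.json_contains(Story.sections_ids, json.dumps(X)) == 1
).all()
</code></pre>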
| 1
|
2016-09-13T12:46:02Z
|
[
"python",
"mysql",
"json",
"sqlalchemy"
] |
Spark can't read an Orc table (returns empty table)
| 39,470,316
|
<p>Do I have to do anything special to be able to read Orc tables with Spark?</p>
<p>I have two table copies in txt and orc. When reading txt table everything is ok. When reading orc table I get no errors but spark returns an empty table. </p>
<p>Here is my code in python:</p>
<pre><code>import pyspark
CONF = (pyspark.SparkConf().setMaster("yarn-client"))
sc = pyspark.SparkContext(conf = CONF)
from pyspark.sql import HiveContext
sq = HiveContext(sc)
df = sq.sql(""" select * from sample_07 """)
print df.show(10)
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>If I specify the path to data stored in sample_07 and register it as temporary table, it works though:</p>
<pre><code>sq = HiveContext(sc)
orcFile = sq.read.format("orc").load("/user/some/sample_07/")
orcFile.registerTempTable("tempTableName");
df = sq.sql("SELECT * FROM tempTableName LIMIT 10 ")
</code></pre>
| 1
|
2016-09-13T12:37:26Z
| 39,470,577
|
<p>Can you try adding the database name before the table name, i.e. querying it as <code>database_name.table_name</code>?</p>
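<p>A minimal sketch of that, assuming the table lives in Hive's <code>default</code> database (substitute your own database name):</p>
<pre><code>df = sq.sql(""" select * from default.sample_07 """)
df.show(10)
</code></pre>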
| 0
|
2016-09-13T12:51:24Z
|
[
"python",
"apache-spark",
"orc"
] |
Spark can't read an Orc table (returns empty table)
| 39,470,316
|
<p>Do I have to do anything special to be able to read Orc tables with Spark?</p>
<p>I have two table copies in txt and orc. When reading txt table everything is ok. When reading orc table I get no errors but spark returns an empty table. </p>
<p>Here is my code in python:</p>
<pre><code>import pyspark
CONF = (pyspark.SparkConf().setMaster("yarn-client"))
sc = pyspark.SparkContext(conf = CONF)
from pyspark.sql import HiveContext
sq = HiveContext(sc)
df = sq.sql(""" select * from sample_07 """)
print df.show(10)
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>If I specify the path to data stored in sample_07 and register it as temporary table, it works though:</p>
<pre><code>sq = HiveContext(sc)
orcFile = sq.read.format("orc").load("/user/some/sample_07/")
orcFile.registerTempTable("tempTableName");
df = sq.sql("SELECT * FROM tempTableName LIMIT 10 ")
</code></pre>
| 1
|
2016-09-13T12:37:26Z
| 39,477,515
|
<p>I don't think there is anything specific to ORC. Can you run the query in Hive and make sure the data is read properly? The empty table might be because Hive cannot read the data the way you defined it.</p>
| 0
|
2016-09-13T19:21:31Z
|
[
"python",
"apache-spark",
"orc"
] |
Get reference of function inside function for usage in prototype/creation function
| 39,470,486
|
<p>I have two nested functions: The outer creates a creation method / prototype, the inner will create a concrete example of that prototype:</p>
<pre><code>class Example:
def __init__(self, str):
self.str = str
def make_prototype(proto_name):
def make_example(example_name):
return Example(proto_name + ' ' + example_name)
return make_example
proto = make_prototype('Prototype 1')
ex1 = proto('Example 1')
</code></pre>
<p>Now, I'd like to remember the creation function that was used in the <code>Example</code>. I did it the following way:</p>
<pre><code>class Example:
def __init__(self, str, proto):
self.str = str
self.proto = proto
def make_prototype(proto_name):
class make_example:
def __call__(self, example_name):
return Example(proto_name + ' ' + example_name, self)
return make_example()
proto = make_prototype('Prototype 1')
ex1 = proto('Example 1')
ex2 = ex1.proto('Example 2')
</code></pre>
<p>I think that's a relatively elegant and understandable solution. BUT would there be a way to do it without the nested <code>class make_example</code>? Would there be a way to do it like in the first version and get a reference to the function <code>make_example</code> directly inside <code>make_example</code>? Something like:</p>
<pre><code>class Example:
def __init__(self, str, proto):
self.str = str
self.proto = proto
def make_prototype(proto_name):
def make_example(example_name):
return Example(proto_name + ' ' + example_name, REFERENCE TO THIS FUNC)
return make_example
proto = make_prototype('Prototype 1')
ex1 = proto('Example 1')
ex2 = ex1.proto('Example 2')
</code></pre>
| 0
|
2016-09-13T12:46:23Z
| 39,470,759
|
<p>You can use the <code>__call__</code> special method. Your example would look like this:</p>
<pre><code>class Example:
def __init__(self, str, proto):
self.str = str
self.proto = proto
class MakePrototype():
def __init__(self, name):
self.name = name
def __call__(self, proto_name):
return Example(proto_name, self)
proto = MakePrototype('Prototype 1')
ex1 = proto('Example 1')
ex2 = ex1.proto('Example 2')
</code></pre>
| 1
|
2016-09-13T13:00:44Z
|
[
"python"
] |
How do I define a complex type in an Avro Schema
| 39,470,557
|
<p>I have reviewed avro documentation as well as several examples online (and similar StackOverflow questions). I then attempted to define an avro schema, and had to progressively back out fields to determine what my issue was (the error message from the avro library in python was not as helpful as one would hope). I have a JSON document that I would like to convert to Avro and I need a schema to be specified for that purpose (using avro-tools to generate the schema from the json did not work as expected and yielded an AvroTypeException when attempting to convert the json into avro). I am using Avro version 1.7.7. Here is the JSON document for which I would like to define the avro schema:</p>
<pre><code>{
"method": "Do_Thing",
"code": 200,
"reason": "OK",
"siteId": {
"string": "a1283632-121a-4a3f-9560-7b73830f94j8"
}
}
</code></pre>
<p>I was able to define the schema for the non-complex types but not for the complex "siteId" field:</p>
<pre><code>{
"namespace" : "com.example",
"name" : "methodEvent",
"type" : "record",
"fields" : [
{"name": "method", "type": "string"},
{"name": "code", "type": "int"},
{"name": "reason", "type": "string"}
{"name": "siteId", "type": [ "null", "string" ]}
]
}
</code></pre>
<p>Attempting to use the previous schema to convert the Json object to avro yields an avro.io.AvroTypeException: The datum [See JSON Object above] is not an example of the schema [See Avro Schema Object above]. I only see this error when attempting to define a field in the schema to represent the "siteId" field in the above json.</p>
| 0
|
2016-09-13T12:50:17Z
| 39,472,234
|
<p>I was able to resolve the issue with the following schema:</p>
<pre><code>{
"namespace" : "com.example",
"name" : "methodEvent",
"type" : "record",
"fields" : [
{"name": "method", "type": "string"},
{"name": "code", "type": "int"},
{"name": "reason", "type": "string"}
{
"name": "siteId",
"type": {
"name" : "siteId",
"type" : "record",
"fields" : [
"name" : "string",
"type" : [ "null", "string" ]
]
}
},
"default" : null
]
}
</code></pre>
| 0
|
2016-09-13T14:13:18Z
|
[
"python",
"json",
"avro"
] |
How do I define a complex type in an Avro Schema
| 39,470,557
|
<p>I have reviewed avro documentation as well as several examples online (and similar StackOverflow questions). I then attempted to define an avro schema, and had to progressively back out fields to determine what my issue was (the error message from the avro library in python was not as helpful as one would hope). I have a JSON document that I would like to convert to Avro and I need a schema to be specified for that purpose (using avro-tools to generate the schema from the json did not work as expected and yielded an AvroTypeException when attempting to convert the json into avro). I am using Avro version 1.7.7. Here is the JSON document for which I would like to define the avro schema:</p>
<pre><code>{
"method": "Do_Thing",
"code": 200,
"reason": "OK",
"siteId": {
"string": "a1283632-121a-4a3f-9560-7b73830f94j8"
}
}
</code></pre>
<p>I was able to define the schema for the non-complex types but not for the complex "siteId" field:</p>
<pre><code>{
"namespace" : "com.example",
"name" : "methodEvent",
"type" : "record",
"fields" : [
{"name": "method", "type": "string"},
{"name": "code", "type": "int"},
{"name": "reason", "type": "string"}
{"name": "siteId", "type": [ "null", "string" ]}
]
}
</code></pre>
<p>Attempting to use the previous schema to convert the Json object to avro yields an avro.io.AvroTypeException: The datum [See JSON Object above] is not an example of the schema [See Avro Schema Object above]. I only see this error when attempting to define a field in the schema to represent the "siteId" field in the above json.</p>
| 0
|
2016-09-13T12:50:17Z
| 39,482,927
|
<p>Avro's python implementation represents unions differently than their JSON encoding: it "unwraps" them, so the <code>siteId</code> field is expected to be just the string, without the wrapping object. See below for a few examples.</p>
<h3>Valid JSON encodings</h3>
<p>Non-null <code>siteid</code>:</p>
<pre><code>{
"method": "Do_Thing",
"code": 200,
"reason": "OK",
"siteId": {
"string": "a1283632-121a-4a3f-9560-7b73830f94j8"
}
}
</code></pre>
<p>Null <code>siteid</code>:</p>
<pre><code>{
"method": "Do_Thing",
"code": 200,
"reason": "OK",
"siteId": null
}
</code></pre>
<h3>Valid python objects (in-memory representation)</h3>
<p>Non-null <code>siteid</code>:</p>
<pre><code>{
"method": "Do_Thing",
"code": 200,
"reason": "OK",
"siteId": "a1283632-121a-4a3f-9560-7b73830f94j8"
}
</code></pre>
<p>Null <code>siteid</code>:</p>
<pre><code>{
"method": "Do_Thing",
"code": 200,
"reason": "OK",
"siteId": null
}
</code></pre>
<p>Note that <code>null</code>s are <a href="https://avro.apache.org/docs/1.8.1/spec.html#json_encoding" rel="nofollow">unwrapped</a> in both cases which is why <a href="http://stackoverflow.com/a/39472234/1062617">your solution</a> isn't working.</p>
<p>Unfortunately, the python implementation doesn't have a JSON decoder/encoder currently (AFAIK), so there is no easy way to translate between the two representations. Depending on the source of your JSON-encoded data, the simplest might be to edit it to not wrap union instances anymore.</p>
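<p>For illustration, a minimal sketch of writing the unwrapped in-memory form with the Avro 1.7 Python API, assuming the original union schema from the question is saved as <code>methodEvent.avsc</code> (a hypothetical file name):</p>
<pre><code>import avro.schema
from avro.datafile import DataFileWriter
from avro.io import DatumWriter

schema = avro.schema.parse(open("methodEvent.avsc").read())
writer = DataFileWriter(open("methodEvent.avro", "wb"), DatumWriter(), schema)
writer.append({
    "method": "Do_Thing",
    "code": 200,
    "reason": "OK",
    "siteId": "a1283632-121a-4a3f-9560-7b73830f94j8",  # plain string, no {"string": ...} wrapper
})
writer.close()
</code></pre>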
| 0
|
2016-09-14T05:16:51Z
|
[
"python",
"json",
"avro"
] |
how to select part.surface in abaqus using python script
| 39,470,645
|
<p>How do I select the surfaces on part in Abaqus? I have tried:</p>
<pre><code>tubePart.surface(faces = tubePart.faces[4:8],name = 'innerFaces')
</code></pre>
<p>but it keeps saying part object has no attribute surface.</p>
| 0
|
2016-09-13T12:54:35Z
| 39,472,795
|
<p>Ideally, you should create a new surface by calling <code>Surface()</code> function (not <code>surface()</code>), i.e.</p>
<pre><code>tubePart.Surface(...)
</code></pre>
<p>Secondly, there must be <code>side1Faces</code> instead of <code>faces</code> (thanks to agentp for the comment). Thus, the final piece of code should look like this:</p>
<pre><code>tubePart.Surface(side1Faces = tubePart.faces[4:8],name = 'innerFaces')
</code></pre>
| 1
|
2016-09-13T14:42:07Z
|
[
"python",
"abaqus"
] |
nan cost in tensorflow training perceptron
| 39,470,802
|
<p>I am trying to train a single layer perceptron (basing my code on <a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py" rel="nofollow">this</a>) on the following data file in tensor flow:</p>
<pre><code>1,1,0.05,-1.05
1,1,0.1,-1.1
....
</code></pre>
<p>where the last column is the label (function of 3 parameters) and the first three columns are the function argument. The code that reads the data and trains the model (I simplify it for readability):</p>
<pre><code>import tensorflow as tf
... # some basics to read the data
example, label = read_file_format(filename_queue)
... # model construction and parameter setting
n_hidden_1 = 4 # 1st layer number of features
n_input = 3
n_output = 1
...
# calls a function which produces a prediction
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
sess.run(init)
for epoch in range(training_epochs):
_, c = sess.run([optimizer, cost], feed_dict={x: example.reshape(1,3), y: label.reshape(-1,1)})
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "Cost:",c)
</code></pre>
<p>but when I run it, something seems to be very wrong:</p>
<pre><code>('Epoch:', '0001', 'Cost:', nan)
('Epoch:', '0002', 'Cost:', nan)
....
('Epoch:', '0015', 'Cost:', nan)
</code></pre>
<p>This is the complete code for the multilaye_perceptron function, etc:</p>
<pre><code># Parameters
learning_rate = 0.001
training_epochs = 15
display_step = 1
# Network Parameters
n_hidden_1 = 4 # 1st layer number of features
n_input = 3
n_output = 1
# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_output])
# Create model
def multilayer_perceptron(x, weights, biases):
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
layer_1 = tf.nn.relu(layer_1)
# Output layer with linear activation
out_layer = tf.matmul(layer_1, weights['out']) + biases['out']
return out_layer
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_hidden_1, n_output]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_output]))
}
</code></pre>
| 0
|
2016-09-13T13:02:36Z
| 39,472,292
|
<p>Is this one example at a time? I would go with batches and increase the batch size to 128 or similar, as long as you are getting nans.</p>
<p>When I am getting nans it is usually one of three things:</p>
<ul>
<li>batch size too small (in your case just 1)</li>
<li>log(0) somewhere</li>
<li>learning rate too high and uncapped gradients</li>
</ul>
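<p>For the last point, a minimal sketch of capping gradients with the TF 1.x API (a drop-in for the plain <code>minimize</code> call; the clipping range is illustrative):</p>
<pre><code>optimizer = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cost)
capped = [(tf.clip_by_value(g, -1.0, 1.0), v)
          for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(capped)
</code></pre>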
| 1
|
2016-09-13T14:16:21Z
|
[
"python",
"tensorflow"
] |
How to wait for coroutines to complete synchronously within method if event loop is already running?
| 39,470,824
|
<p>I'm trying to create a Python-based CLI that communicates with a web service via websockets. One issue that I'm encountering is that requests made by the CLI to the web service intermittently fail to get processed. Looking at the logs from the web service, I can see that the problem is caused by the fact that frequently these requests are being made at the same time (or even after) the socket has closed:</p>
<pre class="lang-none prettyprint-override"><code>2016-09-13 13:28:10,930 [22 ] INFO DeviceBridge - Device bridge has opened
2016-09-13 13:28:11,936 [21 ] DEBUG DeviceBridge - Device bridge has received message
2016-09-13 13:28:11,937 [21 ] DEBUG DeviceBridge - Device bridge has received valid message
2016-09-13 13:28:11,937 [21 ] WARN DeviceBridge - Unable to process request: {"value": false, "path": "testcube.pwms[0].enabled", "op": "replace"}
2016-09-13 13:28:11,936 [5 ] DEBUG DeviceBridge - Device bridge has closed
</code></pre>
<p>In my CLI I define a class <code>CommunicationService</code> that is responsible for handling all direct communication with the web service. Internally, it uses the <a href="https://websockets.readthedocs.io/en/stable/" rel="nofollow"><code>websockets</code></a> package to handle communication, which itself is built on top of <code>asyncio</code>.</p>
<p><code>CommunicationService</code> contains the following method for sending requests:</p>
<pre><code>def send_request(self, request: str) -> None:
logger.debug('Sending request: {}'.format(request))
asyncio.ensure_future(self._ws.send(request))
</code></pre>
<p>...where <code>ws</code> is a websocket opened earlier in another method:</p>
<pre><code>self._ws = await websockets.connect(websocket_address)
</code></pre>
<p>What I want is to be able to await the future returned by <code>asyncio.ensure_future</code> and, if necessary, sleep for a short while after in order to give the web service time to process the request before the websocket is closed.</p>
<p>However, since <code>send_request</code> is a synchronous method, it can't simply <code>await</code> these futures. Making it asynchronous would be pointless as there would be nothing to await the coroutine object it returned. I also can't use <code>loop.run_until_complete</code> as the loop is already running by the time it is invoked.</p>
<p>I found someone describing a problem very similar to the one I have at <a href="https://mail.python.org/pipermail/python-list/2016-April/707139.html" rel="nofollow">mail.python.org</a>. The solution that was posted in that thread was to make the function return the coroutine object in the case the loop was already running:</p>
<pre><code>def aio_map(coro, iterable, loop=None):
if loop is None:
loop = asyncio.get_event_loop()
coroutines = map(coro, iterable)
coros = asyncio.gather(*coroutines, return_exceptions=True, loop=loop)
if loop.is_running():
return coros
else:
return loop.run_until_complete(coros)
</code></pre>
<p>This is not possible for me, as I'm working with PyRx (Python implementation of the reactive framework) and <code>send_request</code> is only called as a subscriber of an Rx observable, which means the return value gets discarded and is not available to my code:</p>
<pre><code>class AnonymousObserver(ObserverBase):
...
def _on_next_core(self, value):
self._next(value)
</code></pre>
<p>On a side note, I'm not sure if this is some sort of problem with <code>asyncio</code> that's commonly come across or whether I'm just not getting it, but I'm finding it pretty frustrating to use. In C# (for instance), all I would need to do is probably something like the following:</p>
<pre class="lang-cs prettyprint-override"><code>void SendRequest(string request)
{
this.ws.Send(request).Wait();
// Task.Delay(500).Wait(); // Uncomment If necessary
}
</code></pre>
<p>Meanwhile, <code>asyncio</code>'s version of "wait" unhelpfully just returns another coroutine that I'm forced to discard.</p>
<p><strong>Update</strong></p>
<p>I've found a way around this issue that seems to work. I have an asynchronous callback that gets executed after the command has executed and before the CLI terminates, so I just changed it from this...</p>
<pre><code>async def after_command():
await comms.stop()
</code></pre>
<p>...to this:</p>
<pre><code>async def after_command():
await asyncio.sleep(0.25) # Allow time for communication
await comms.stop()
</code></pre>
<p>I'd still be happy to receive any answers to this problem for future reference, though. I might not be able to rely on workarounds like this in other situations, and I still think it would be better practice to have the delay executed inside <code>send_request</code> so that clients of <code>CommunicationService</code> do not have to concern themselves with timing issues.</p>
<p>In regards to Vincent's question:</p>
<blockquote>
<p>Does your loop run in a different thread, or is send_request called by some callback?</p>
</blockquote>
<p>Everything runs in the same thread - it's called by a callback. What happens is that I define all my commands to use asynchronous callbacks, and when executed some of them will try to send a request to the web service. Since they're asynchronous, they don't do this until they're executed via a call to <code>loop.run_until_complete</code> at the top level of the CLI - which means the loop is running by the time they're mid-way through execution and making this request (via an indirect call to <code>send_request</code>).</p>
<p><strong>Update 2</strong></p>
<p>Here's a solution based on Vincent's proposal of adding a "done" callback.</p>
<p>A new boolean field <code>_busy</code> is added to <code>CommunicationService</code> to represent if comms activity is occurring or not.</p>
<p><code>CommunicationService.send_request</code> is modified to set <code>_busy</code> true before sending the request, and then provides a callback to <code>_ws.send</code> to reset <code>_busy</code> once done:</p>
<pre><code>def send_request(self, request: str) -> None:
logger.debug('Sending request: {}'.format(request))
def callback(_):
self._busy = False
self._busy = True
asyncio.ensure_future(self._ws.send(request)).add_done_callback(callback)
</code></pre>
<p><code>CommunicationService.stop</code> is now implemented to wait for this flag to be set false before progressing:</p>
<pre><code>async def stop(self) -> None:
"""
Terminate communications with TestCube Web Service.
"""
if self._listen_task is None or self._ws is None:
return
# Wait for comms activity to stop.
while self._busy:
await asyncio.sleep(0.1)
# Allow short delay after final request is processed.
await asyncio.sleep(0.1)
self._listen_task.cancel()
await asyncio.wait([self._listen_task, self._ws.close()])
self._listen_task = None
self._ws = None
logger.info('Terminated connection to TestCube Web Service')
</code></pre>
<p>This seems to work too, and at least this way all communication timing logic is encapsulated within the <code>CommunicationService</code> class as it should be.</p>
<p><strong>Update 3</strong></p>
<p>Nicer solution based on Vincent's proposal.</p>
<p>Instead of <code>self._busy</code> we have <code>self._send_request_tasks = []</code>.</p>
<p>New <code>send_request</code> implementation:</p>
<pre><code>def send_request(self, request: str) -> None:
logger.debug('Sending request: {}'.format(request))
task = asyncio.ensure_future(self._ws.send(request))
self._send_request_tasks.append(task)
</code></pre>
<p>New <code>stop</code> implementation:</p>
<pre><code>async def stop(self) -> None:
if self._listen_task is None or self._ws is None:
return
# Wait for comms activity to stop.
if self._send_request_tasks:
await asyncio.wait(self._send_request_tasks)
...
</code></pre>
| 0
|
2016-09-13T13:03:12Z
| 39,472,811
|
<p>You could use a <code>set</code> of tasks:</p>
<pre><code>self._send_request_tasks = set()
</code></pre>
<p>Schedule the tasks using <code>ensure_future</code> and clean up using <code>add_done_callback</code>:</p>
<pre><code>def send_request(self, request: str) -> None:
task = asyncio.ensure_future(self._ws.send(request))
self._send_request_tasks.add(task)
task.add_done_callback(self._send_request_tasks.remove)
</code></pre>
<p>And wait for the <code>set</code> of tasks to complete:</p>
<pre><code>async def stop(self):
if self._send_request_tasks:
await asyncio.wait(self._send_request_tasks)
</code></pre>
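<p>For illustration, a minimal, self-contained sketch of the same pattern, with a dummy coroutine standing in for <code>self._ws.send()</code> (all names here are illustrative, not your real API):</p>
<pre><code>import asyncio

class FakeSender:
    def __init__(self):
        self._tasks = set()

    async def _send(self, request):
        await asyncio.sleep(0.1)   # stand-in for the real websocket send
        print('sent', request)

    def send_request(self, request):
        task = asyncio.ensure_future(self._send(request))
        self._tasks.add(task)
        task.add_done_callback(self._tasks.remove)

    async def stop(self):
        if self._tasks:
            await asyncio.wait(self._tasks)

async def main():
    sender = FakeSender()
    sender.send_request('{"op": "replace"}')
    await sender.stop()            # returns only once the send has finished

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
</code></pre>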
| 1
|
2016-09-13T14:42:54Z
|
[
"python",
"python-3.x",
"python-asyncio"
] |
Python merge list concating unique values as comma seperated
| 39,470,941
|
<p>I am trying to get this to work. </p>
<p>Here is my data:</p>
<p>data.csv</p>
<pre><code>id,fname,lname,education,gradyear,attributes
1,john,smith,mit,2003,qa
1,john,smith,harvard,207,admin
1,john,smith,ft,212,master
2,john,doe,htw,2000,dev
</code></pre>
<p>I am trying to use the code below, which I found on the Internet and don't fully understand.</p>
<pre><code>from itertools import groupby
import csv
import pprint
t = csv.reader(open('data.csv'))
t = list(t)
def join_rows(rows):
def join_tuple(tup):
for x in tup:
if x:
return x
else:
return x
return [join_tuple(x) for x in zip(*rows)]
for name, rows in groupby(sorted(t), lambda x:x[0]):
print join_rows(rows)
</code></pre>
<p>However, it does not merge unique values as comma separated.</p>
<p>The output is:</p>
<pre><code>['1', 'john', 'smith', 'ft', '212', 'master']
['2', 'john', 'doe', 'htw', '2000', 'dev']
['id', 'fname', 'lname', 'education', 'gradyear', 'attributes']
</code></pre>
<p>How can I make it like:</p>
<pre><code>['1', 'john', 'smith', 'mit,harvard,ft', '2003,207,212', 'qa,admin,master']
['2', 'john', 'doe', 'htw', '2000', 'dev']
['id', 'fname', 'lname', 'education', 'gradyear', 'attributes']
</code></pre>
<p>If there are more entries for the same id, it should also work; it should not be limited to 3 rows.</p>
<p>Grrrrr .... anybody have tips or ideas?</p>
<p>Thanks in advance!</p>
| 0
|
2016-09-13T13:08:34Z
| 39,471,208
|
<p>You can change the definition of <code>join_rows</code> to</p>
<pre><code>import itertools
def join_rows(rows):
return [(e[0] if i < 3 else ','.join(e)) for (i, e) in enumerate(zip(*rows))]
</code></pre>
<p>What this does is zip all entries belonging to the same id into tuples. For the first 3 tuples (id, fname, lname), the first item is returned; the remaining ones are joined by commas.</p>
<pre><code>['1', 'john', 'smith', 'ft,harvard,mit', '212,207,2003', 'master,admin,qa']
['2', 'john', 'doe', 'htw', '2000', 'dev']
['id', 'fname', 'lname', 'education', 'gradyear', 'attributes']
</code></pre>
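<p>Put together with the rest of the script, a hedged sketch of the full flow (keeping the header row out of the grouping, which also avoids the stray header line in the output) could look like this:</p>
<pre><code>import csv
from itertools import groupby

with open('data.csv') as f:
    rows = list(csv.reader(f))

header, data = rows[0], rows[1:]   # keep the header out of the grouping

def join_rows(rows):
    return [(e[0] if i < 3 else ','.join(e)) for (i, e) in enumerate(zip(*rows))]

print(header)
for key, grp in groupby(sorted(data, key=lambda r: r[0]), key=lambda r: r[0]):
    print(join_rows(list(grp)))
</code></pre>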
| 3
|
2016-09-13T13:21:42Z
|
[
"python"
] |
Get intersection of list elements with different sublist datatypes
| 39,470,994
|
<p>I have two lists, which contains list elements, e.g:</p>
<pre><code>list1 = [['placeholder1', {'data': 'data1'}], ['placeholder2', {'data': 'data2'}], ['placeholder2', {'data': 'data1'}]]
list2 = [['placeholder2', {'data': 'data2'}], ['placeholder3', {'data': 'data5'}]]
intersection_result = [['placeholder2', {'data': 'data2'}]]
</code></pre>
<p>The structure of the sub-list elements is just an example. It can also happen that all the sub-list elements contains strings <code>['asdf', 'qwert']</code> or a mixture of string and numbers <code>['sdfs', 232]</code>. However, the sub-list structure is always the same (in both lists).</p>
<p>How can I get the intersection of lists elements which are identical in both lists?</p>
| 1
|
2016-09-13T13:11:09Z
| 39,471,364
|
<p>If my understanding is correct, you can get the intersection by checking and selecting <a href="https://docs.python.org/3/library/functions.html#any" rel="nofollow"><code>any()</code></a> of the elements in the smallest list which are equal to ones in the larger one.</p>
<p>With a comprehension, this would look like this:</p>
<pre><code>intersection_res = [l for l in min(list2, list1, key=len) if any(l == l2 for l2 in max(list1, list2, key=len))]
</code></pre>
<p>This uses <a href="https://docs.python.org/3/library/functions.html#min" rel="nofollow"><code>min</code></a> and <a href="https://docs.python.org/3/library/functions.html#max" rel="nofollow"><code>max</code></a> with <a href="https://docs.python.org/3/library/functions.html#len" rel="nofollow"><code>len</code></a> as the key to always select from the smaller list and check against the larger one.</p>
<p>This yields:</p>
<pre><code>print(intersection_res)
[['placeholder2', {'data': 'data2'}]]
</code></pre>
<p>This comprehension can be trimmed down if you pre-assign the min-max lists or, of course, if you are always certain which list is larger than the other:</p>
<pre><code>sm, la = (list1, list2) if len(list1) < len(list2) else (list2, list1)
intersection_res = [l for l in sm if any(l == l2 for l2 in la)]
</code></pre>
| 3
|
2016-09-13T13:29:51Z
|
[
"python",
"list",
"python-3.x",
"intersection",
"python-3.5"
] |
Get intersection of list elements with different sublist datatypes
| 39,470,994
|
<p>I have two lists, which contains list elements, e.g:</p>
<pre><code>list1 = [['placeholder1', {'data': 'data1'}], ['placeholder2', {'data': 'data2'}], ['placeholder2', {'data': 'data1'}]]
list2 = [['placeholder2', {'data': 'data2'}], ['placeholder3', {'data': 'data5'}]]
intersection_result = [['placeholder2', {'data': 'data2'}]]
</code></pre>
<p>The structure of the sub-list elements is just an example. It can also happen that all the sub-list elements contains strings <code>['asdf', 'qwert']</code> or a mixture of string and numbers <code>['sdfs', 232]</code>. However, the sub-list structure is always the same (in both lists).</p>
<p>How can I get the intersection of lists elements which are identical in both lists?</p>
| 1
|
2016-09-13T13:11:09Z
| 39,471,590
|
<p>A simple solution, which would be <strong>independent of the structure of your data</strong>:
you can generate <a href="http://stackoverflow.com/questions/16735786/how-to-generate-unique-equal-hash-for-equal-dictionaries">signature hashes</a> (using json or pformat) for your data, and find the common hashes in both list1 and list2.</p>
<p><strong>Demo</strong> : <a href="http://ideone.com/5i9cs8" rel="nofollow">http://ideone.com/5i9cs8</a> </p>
<pre><code>import json
list1 = [['placeholder1', {'data': 'data1'}], ['placeholder2', {'data': 'data2'}], ['placeholder2', {'data': 'data1'}]]
list2 = [['placeholder2', {'data': 'data2'}], ['placeholder3', {'data': 'data5'}]]
sig1 = { hash(json.dumps(x, sort_keys=True)):x for x in list1 }
sig2 = { hash(json.dumps(x, sort_keys=True)):x for x in list2 }
result = {x:sig1[x] for x in sig1 if x in sig2}
print(result)
#prints {-7754841686355067234: ['placeholder2', {'data': 'data2'}]}
</code></pre>
<ul>
<li>If your dictionaries contain data that json cannot serialize (e.g. <code>datetime</code>), <code>pformat</code> or <code>cPickle</code> will work well, and plain <code>str</code> is enough for simple cases (see the sketch below). Make the choice based on your dataset and the efficiency required.</li>
</ul>
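<p>A hedged sketch of the <code>pformat</code> variant for data that json cannot serialize (the sample data here is made up):</p>
<pre><code>from pprint import pformat
import datetime

list1 = [['a', {'when': datetime.datetime(2016, 9, 13)}]]
list2 = [['a', {'when': datetime.datetime(2016, 9, 13)}], ['b', {'when': None}]]

sig1 = {pformat(x): x for x in list1}
sig2 = {pformat(x): x for x in list2}
common = [sig1[k] for k in sig1 if k in sig2]
print(common)  # [['a', {'when': datetime.datetime(2016, 9, 13, 0, 0)}]]
</code></pre>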
| 1
|
2016-09-13T13:40:43Z
|
[
"python",
"list",
"python-3.x",
"intersection",
"python-3.5"
] |
Ansible installation using cygwin - error [Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-lJqIyR/cffi/]
| 39,471,023
|
<p>I was trying to install Ansible in Cygwin, but I am getting the following error.</p>
<p>I have even reinstalled Python setuptools, yet I am still getting this error:</p>
<pre><code>$ pip install ansible
Requirement already satisfied (use --upgrade to upgrade): ansible in /usr/lib/python2.7/site-packages/ansible-2.1.1.0-py2.7.egg
Requirement already satisfied (use --upgrade to upgrade): paramiko in /usr/lib/python2.7/site-packages/paramiko-2.0.2-py2.7.egg (from ansible)
Requirement already satisfied (use --upgrade to upgrade): jinja2 in /usr/lib/python2.7/site-packages/Jinja2-2.8-py2.7.egg (from ansible)
Requirement already satisfied (use --upgrade to upgrade): PyYAML in /usr/lib/python2.7/site-packages/PyYAML-3.12-py2.7-cygwin-2.6.0-x86_64.egg (from ansible)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/site-packages (from ansible)
Requirement already satisfied (use --upgrade to upgrade): pycrypto>=2.6 in /usr/lib/python2.7/site-packages (from ansible)
Collecting cryptography>=1.1 (from paramiko->ansible)
Using cached cryptography-1.5.tar.gz
Requirement already satisfied (use --upgrade to upgrade): pyasn1>=0.1.7 in /usr/lib/python2.7/site-packages/pyasn1-0.1.9-py2.7.egg (from paramiko->ansible)
Requirement already satisfied (use --upgrade to upgrade): MarkupSafe in /usr/lib/python2.7/site-packages/MarkupSafe-0.23-py2.7.egg (from jinja2->ansible)
Collecting idna>=2.0 (from cryptography>=1.1->paramiko->ansible)
Using cached idna-2.1-py2.py3-none-any.whl
Collecting six>=1.4.1 (from cryptography>=1.1->paramiko->ansible)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting enum34 (from cryptography>=1.1->paramiko->ansible)
Using cached enum34-1.1.6-py2-none-any.whl
Collecting ipaddress (from cryptography>=1.1->paramiko->ansible)
Using cached ipaddress-1.0.17-py2-none-any.whl
Collecting cffi>=1.4.1 (from cryptography>=1.1->paramiko->ansible)
Using cached cffi-1.8.2.tar.gz
Complete output from command python setup.py egg_info:
unable to execute 'gcc': No such file or directory
unable to execute 'gcc': No such file or directory
No working compiler found, or bogus compiler options
passed to the compiler from Python's distutils module.
See the error messages above.
(If they are about -mno-fused-madd and you are on OS/X 10.8,
see http://stackoverflow.com/questions/22313407/ .)
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-lJqIyR/cffi/
</code></pre>
| 0
|
2016-09-13T13:12:42Z
| 39,471,336
|
<p>Run Cygwin Setup and check the <code>Devel</code> category for installation.</p>
<hr>
<p>It should be obvious from the error message that you need to install a compiler. In cases like this, run Cygwin Setup and type the missing component name (<code>gcc</code>) to locate the package. For the <code>gcc</code> requirement of pip/Ansible, the whole <code>Devel</code> category might not be necessary, but installing it shouldn't be a problem.</p>
| 0
|
2016-09-13T13:28:03Z
|
[
"python",
"cygwin",
"ansible"
] |
Selecting a combobox option with a <div> tag using Selenium and Python
| 39,471,097
|
<p>I have been trying to automate some really boring stuff (because of how tedious I have been making mistakes and I want to reduce them as close as zero as I can), in essence I got assets that have to be entered into our system one by one through a horrible process. This is my problem right now:</p>
<p>My objective is to select the option 'CELL PHONES' on the drop down list (ctl00_CPH1_cmbClasses_DropDown). Also for security reasons and the fact that it is a protected Corporate Page (I already handled login and navigation till this point) I can only show snippets of the code as to not compromise it. </p>
<p>EDIT 1 (Modified this to add more of the HTML code)</p>
<pre><code><td class="rcbInputCell rcbInputCellLeft" style="width:100%;"><input name="ctl00$CPH1$cmbClasses" type="text" class="rcbInput radPreventDecorate" id="ctl00_CPH1_cmbClasses_Input" value="" /></td><td class="rcbArrowCell rcbArrowCellRight"><a id="ctl00_CPH1_cmbClasses_Arrow" style="overflow: hidden;display: block;position: relative;outline: none;">select</a></td>
</tr>
</table><div class="rcbSlide" style="z-index:6000;"><div id="ctl00_CPH1_cmbClasses_DropDown" class="RadComboBoxDropDown RadComboBoxDropDown_WebBlue " style="display:none;width:140px;"><div class="rcbScroll rcbWidth" style="width:100%;"><ul class="rcbList" style="list-style:none;margin:0;padding:0;zoom:1;"><li class="rcbItem"></li><li class="rcbItem">CELL PHONES</li><li class="rcbItem">CELLULAR PHONE SCRAP (WITHOUT BATTERIES)</li><li class="rcbItem">COMPUTER - DESKTOP</li><li class="rcbItem">COMPUTER -TOWER</li><li class="rcbItem">COMPUTERS</li><li class="rcbItem">COMPUTERS - SFF</li><li class="rcbItem">COPPER BEARING - LOW GRADE</li><li class="rcbItem">Desktop</li><li class="rcbItem">FLOPPY DISK DRIVES</li><li class="rcbItem">GARBAGE - NON HAZARDOUS</li><li class="rcbItem">LAPTOPS</li><li class="rcbItem">LCD Monitor</li><li class="rcbItem">MISC. ELECTRONICS</li><li class="rcbItem">MISCELLANEOUS</li><li class="rcbItem">MODEMS</li><li class="rcbItem">NETWORK EQUIPMENT</li><li class="rcbItem">OCC</li><li class="rcbItem">PHONES - DIGITAL</li><li class="rcbItem">PRINTERS</li><li class="rcbItem">SERVERS</li><li class="rcbItem">SERVERS - TOWER</li><li class="rcbItem">Telecom Equipment</li><li class="rcbItem">Telephone</li><li class="rcbItem">Telephone Accessory</li><li class="rcbItem">TEST EQUIPMENT</li><li class="rcbItem">WIRE &amp; CABLE - MISC. </li></ul></div></div></div><input id="ctl00_CPH1_cmbClasses_ClientState" name="ctl00_CPH1_cmbClasses_ClientState" type="hidden" />
</div>
</code></pre>
<p>This is the combobox code from the page (corporate web form), I am trying to select it but my current codes and attempts (some from other post here in Stack Overflow) have failed so far, this is what I have attempted so far:</p>
<pre><code>def fast_multiselect(driver, element_id, labels):
select = browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')
for option in select.find_element_by_name('CELL PHONE'):
if option.text in labels:
option.click()
</code></pre>
<p>This was my first attempt (several iterations of the same code) and the result was Python not listing any errors but not selecting the option I wanted so following advice from here I went for this:</p>
<pre><code>selectDropDownList = browser.find_element_by_id("ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']").click()
</code></pre>
<p>And this was the result:</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\Scripts\Add Asset.py", line 77, in
selectDropDownList = browser.find_element_by_id("ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']").click()
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 269, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 752, in find_element
'value': value})['value']
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"id","selector":"ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']"}
(Session info: chrome=52.0.2743.116)
(Driver info: chromedriver=2.23.409699 (49b0fa931cda1caad0ae15b7d1b68004acd05129),platform=Windows NT 10.0.10586 x86_64)</p>
</blockquote>
<p>I continued marching forward and this was my last attempt:</p>
<pre><code>Select(browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')).select_by_value('CELL PHONES')
</code></pre>
<p>And the result is:</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\Scripts\Add Asset.py", line 78, in
Select(browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')).select_by_value('CELL PHONES')
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\support\select.py", line 39, in <strong>init</strong>
webelement.tag_name)
selenium.common.exceptions.UnexpectedTagNameException: Message: Select only works on elements, not on </p>
</blockquote>
| 0
|
2016-09-13T13:16:25Z
| 39,471,316
|
<p>Your problem is that this element is not <code>Select</code> but <code><div></code>, so you cannot use Selenium's Select class.</p>
<p>I can't see the page you are working on, but I suppose that the <code><div></code> with id = ctl00_CPH1_cmbClasses_DropDown is the element you have to click on to show the dropdown list?</p>
<p>If it is, then you have to find that element by id, click on it, and then find the <code><li></code> element that contains the text 'CELL PHONES' (or whatever you want) - for example using XPath.</p>
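<p>A hedged sketch of that sequence in Python (the element id and class names are taken from the HTML you posted, so treat them as assumptions about your page):</p>
<pre><code># open the Rad combobox, then pick the list item by its text
arrow = browser.find_element_by_id('ctl00_CPH1_cmbClasses_Arrow')
arrow.click()
item = browser.find_element_by_xpath(
    "//div[@id='ctl00_CPH1_cmbClasses_DropDown']//li[@class='rcbItem' and .='CELL PHONES']")
item.click()
</code></pre>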
<p><a href="http://docs.seleniumhq.org/docs/03_webdriver.jsp#locating-ui-elements-webelements" rel="nofollow">this WebDriver docs page will help you</a></p>
| 0
|
2016-09-13T13:26:57Z
|
[
"python",
"selenium"
] |
Selecting a combobox option with a <div> tag using Selenium and Python
| 39,471,097
|
<p>I have been trying to automate some really boring stuff (because of how tedious I have been making mistakes and I want to reduce them as close as zero as I can), in essence I got assets that have to be entered into our system one by one through a horrible process. This is my problem right now:</p>
<p>My objective is to select the option 'CELL PHONES' on the drop down list (ctl00_CPH1_cmbClasses_DropDown). Also for security reasons and the fact that it is a protected Corporate Page (I already handled login and navigation till this point) I can only show snippets of the code as to not compromise it. </p>
<p>EDIT 1 (Modified this to add more of the HTML code)</p>
<pre><code><td class="rcbInputCell rcbInputCellLeft" style="width:100%;"><input name="ctl00$CPH1$cmbClasses" type="text" class="rcbInput radPreventDecorate" id="ctl00_CPH1_cmbClasses_Input" value="" /></td><td class="rcbArrowCell rcbArrowCellRight"><a id="ctl00_CPH1_cmbClasses_Arrow" style="overflow: hidden;display: block;position: relative;outline: none;">select</a></td>
</tr>
</table><div class="rcbSlide" style="z-index:6000;"><div id="ctl00_CPH1_cmbClasses_DropDown" class="RadComboBoxDropDown RadComboBoxDropDown_WebBlue " style="display:none;width:140px;"><div class="rcbScroll rcbWidth" style="width:100%;"><ul class="rcbList" style="list-style:none;margin:0;padding:0;zoom:1;"><li class="rcbItem"></li><li class="rcbItem">CELL PHONES</li><li class="rcbItem">CELLULAR PHONE SCRAP (WITHOUT BATTERIES)</li><li class="rcbItem">COMPUTER - DESKTOP</li><li class="rcbItem">COMPUTER -TOWER</li><li class="rcbItem">COMPUTERS</li><li class="rcbItem">COMPUTERS - SFF</li><li class="rcbItem">COPPER BEARING - LOW GRADE</li><li class="rcbItem">Desktop</li><li class="rcbItem">FLOPPY DISK DRIVES</li><li class="rcbItem">GARBAGE - NON HAZARDOUS</li><li class="rcbItem">LAPTOPS</li><li class="rcbItem">LCD Monitor</li><li class="rcbItem">MISC. ELECTRONICS</li><li class="rcbItem">MISCELLANEOUS</li><li class="rcbItem">MODEMS</li><li class="rcbItem">NETWORK EQUIPMENT</li><li class="rcbItem">OCC</li><li class="rcbItem">PHONES - DIGITAL</li><li class="rcbItem">PRINTERS</li><li class="rcbItem">SERVERS</li><li class="rcbItem">SERVERS - TOWER</li><li class="rcbItem">Telecom Equipment</li><li class="rcbItem">Telephone</li><li class="rcbItem">Telephone Accessory</li><li class="rcbItem">TEST EQUIPMENT</li><li class="rcbItem">WIRE &amp; CABLE - MISC. </li></ul></div></div></div><input id="ctl00_CPH1_cmbClasses_ClientState" name="ctl00_CPH1_cmbClasses_ClientState" type="hidden" />
</div>
</code></pre>
<p>This is the combobox code from the page (corporate web form), I am trying to select it but my current codes and attempts (some from other post here in Stack Overflow) have failed so far, this is what I have attempted so far:</p>
<pre><code>def fast_multiselect(driver, element_id, labels):
select = browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')
for option in select.find_element_by_name('CELL PHONE'):
if option.text in labels:
option.click()
</code></pre>
<p>This was my first attempt (several iterations of the same code) and the result was Python not listing any errors but not selecting the option I wanted so following advice from here I went for this:</p>
<pre><code>selectDropDownList = browser.find_element_by_id("ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']").click()
</code></pre>
<p>And this was the result:</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\Scripts\Add Asset.py", line 77, in
selectDropDownList = browser.find_element_by_id("ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']").click()
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 269, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 752, in find_element
'value': value})['value']
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"id","selector":"ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']"}
(Session info: chrome=52.0.2743.116)
(Driver info: chromedriver=2.23.409699 (49b0fa931cda1caad0ae15b7d1b68004acd05129),platform=Windows NT 10.0.10586 x86_64)</p>
</blockquote>
<p>I continued marching forward and this was my last attempt:</p>
<pre><code>Select(browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')).select_by_value('CELL PHONES')
</code></pre>
<p>And the result is:</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\Scripts\Add Asset.py", line 78, in
Select(browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')).select_by_value('CELL PHONES')
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\support\select.py", line 39, in <strong>init</strong>
webelement.tag_name)
selenium.common.exceptions.UnexpectedTagNameException: Message: Select only works on elements, not on </p>
</blockquote>
| 0
|
2016-09-13T13:16:25Z
| 39,471,567
|
<p>Before this, make sure your drop-down is visible, because the second div has <code>display:none</code>.</p>
<p>Assuming the dropdown is visible, use the following XPath to match 'CELL PHONES':</p>
<pre><code>browser.find_element_by_xpath("//div/ul[@class='rcbList']/li[@class='rcbItem'][.='CELL PHONES']")
</code></pre>
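<p>If visibility is the issue, a hedged sketch using an explicit wait (assuming <code>browser</code> is your driver instance):</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait until the list item is actually visible before clicking it
xpath = "//div/ul[@class='rcbList']/li[@class='rcbItem'][.='CELL PHONES']"
item = WebDriverWait(browser, 10).until(
    EC.visibility_of_element_located((By.XPATH, xpath)))
item.click()
</code></pre>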
| 0
|
2016-09-13T13:39:21Z
|
[
"python",
"selenium"
] |
Selecting a combobox option with a <div> tag using Selenium and Python
| 39,471,097
|
<p>I have been trying to automate some really boring stuff (because of how tedious I have been making mistakes and I want to reduce them as close as zero as I can), in essence I got assets that have to be entered into our system one by one through a horrible process. This is my problem right now:</p>
<p>My objective is to select the option 'CELL PHONES' on the drop down list (ctl00_CPH1_cmbClasses_DropDown). Also for security reasons and the fact that it is a protected Corporate Page (I already handled login and navigation till this point) I can only show snippets of the code as to not compromise it. </p>
<p>EDIT 1 (Modified this to add more of the HTML code)</p>
<pre><code><td class="rcbInputCell rcbInputCellLeft" style="width:100%;"><input name="ctl00$CPH1$cmbClasses" type="text" class="rcbInput radPreventDecorate" id="ctl00_CPH1_cmbClasses_Input" value="" /></td><td class="rcbArrowCell rcbArrowCellRight"><a id="ctl00_CPH1_cmbClasses_Arrow" style="overflow: hidden;display: block;position: relative;outline: none;">select</a></td>
</tr>
</table><div class="rcbSlide" style="z-index:6000;"><div id="ctl00_CPH1_cmbClasses_DropDown" class="RadComboBoxDropDown RadComboBoxDropDown_WebBlue " style="display:none;width:140px;"><div class="rcbScroll rcbWidth" style="width:100%;"><ul class="rcbList" style="list-style:none;margin:0;padding:0;zoom:1;"><li class="rcbItem"></li><li class="rcbItem">CELL PHONES</li><li class="rcbItem">CELLULAR PHONE SCRAP (WITHOUT BATTERIES)</li><li class="rcbItem">COMPUTER - DESKTOP</li><li class="rcbItem">COMPUTER -TOWER</li><li class="rcbItem">COMPUTERS</li><li class="rcbItem">COMPUTERS - SFF</li><li class="rcbItem">COPPER BEARING - LOW GRADE</li><li class="rcbItem">Desktop</li><li class="rcbItem">FLOPPY DISK DRIVES</li><li class="rcbItem">GARBAGE - NON HAZARDOUS</li><li class="rcbItem">LAPTOPS</li><li class="rcbItem">LCD Monitor</li><li class="rcbItem">MISC. ELECTRONICS</li><li class="rcbItem">MISCELLANEOUS</li><li class="rcbItem">MODEMS</li><li class="rcbItem">NETWORK EQUIPMENT</li><li class="rcbItem">OCC</li><li class="rcbItem">PHONES - DIGITAL</li><li class="rcbItem">PRINTERS</li><li class="rcbItem">SERVERS</li><li class="rcbItem">SERVERS - TOWER</li><li class="rcbItem">Telecom Equipment</li><li class="rcbItem">Telephone</li><li class="rcbItem">Telephone Accessory</li><li class="rcbItem">TEST EQUIPMENT</li><li class="rcbItem">WIRE &amp; CABLE - MISC. </li></ul></div></div></div><input id="ctl00_CPH1_cmbClasses_ClientState" name="ctl00_CPH1_cmbClasses_ClientState" type="hidden" />
</div>
</code></pre>
<p>This is the combobox code from the page (corporate web form), I am trying to select it but my current codes and attempts (some from other post here in Stack Overflow) have failed so far, this is what I have attempted so far:</p>
<pre><code>def fast_multiselect(driver, element_id, labels):
select = browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')
for option in select.find_element_by_name('CELL PHONE'):
if option.text in labels:
option.click()
</code></pre>
<p>This was my first attempt (several iterations of the same code) and the result was Python not listing any errors but not selecting the option I wanted so following advice from here I went for this:</p>
<pre><code>selectDropDownList = browser.find_element_by_id("ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']").click()
</code></pre>
<p>And this was the result:</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\Scripts\Add Asset.py", line 77, in
selectDropDownList = browser.find_element_by_id("ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']").click()
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 269, in find_element_by_id
return self.find_element(by=By.ID, value=id_)
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 752, in find_element
'value': value})['value']
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 236, in execute
self.error_handler.check_response(response)
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 192, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"id","selector":"ctl00_CPH1_cmbClasses_DropDown > option[value='CELL PHONE']"}
(Session info: chrome=52.0.2743.116)
(Driver info: chromedriver=2.23.409699 (49b0fa931cda1caad0ae15b7d1b68004acd05129),platform=Windows NT 10.0.10586 x86_64)</p>
</blockquote>
<p>I continued marching forward and this was my last attempt:</p>
<pre><code>Select(browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')).select_by_value('CELL PHONES')
</code></pre>
<p>And the result is:</p>
<blockquote>
<p>Traceback (most recent call last):
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\Scripts\Add Asset.py", line 78, in
Select(browser.find_element_by_id('ctl00_CPH1_cmbClasses_DropDown')).select_by_value('CELL PHONES')
File "C:\Users\AMSUser\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\support\select.py", line 39, in <strong>init</strong>
webelement.tag_name)
selenium.common.exceptions.UnexpectedTagNameException: Message: Select only works on elements, not on </p>
</blockquote>
| 0
|
2016-09-13T13:16:25Z
| 39,475,038
|
<p>After some tinkering and great advice from everyone that posted here, this is the solution that worked for me:</p>
<pre><code>dropArrow = browser.find_element_by_id('ctl00_CPH1_cmbClasses_Arrow')
dropArrow.click()
time.sleep(1)
dropdown1 = browser.find_element_by_xpath('//*[@id="ctl00_CPH1_cmbClasses_DropDown"]/div/ul/li[.="CELL PHONES"]')
dropdown1.click()
</code></pre>
<p>So brief explanation on what was happening, my first problem was that the list wasn't visible, that was solved by clicking on the drop down with:</p>
<pre><code>dropArrow = browser.find_element_by_id('ctl00_CPH1_cmbClasses_Arrow')
dropArrow.click()
</code></pre>
<p>After that, I had the problem that even though I had the drop down list down it still said it was not visible; this I solved by making the script wait a second so he could register the options with:</p>
<pre><code>time.sleep(1)
</code></pre>
<p>And lastly I selected the item with:</p>
<pre><code>dropdown1 = browser.find_element_by_xpath('//*[@id="ctl00_CPH1_cmbClasses_DropDown"]/div/ul/li[.="CELL PHONES"]')
dropdown1.click()
</code></pre>
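<p>For future readers, a hedged variant of the same steps that replaces the fixed <code>time.sleep(1)</code> with an explicit wait (same locators as above):</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser.find_element_by_id('ctl00_CPH1_cmbClasses_Arrow').click()
# wait only as long as needed for the item to become clickable
dropdown1 = WebDriverWait(browser, 10).until(
    EC.element_to_be_clickable(
        (By.XPATH, '//*[@id="ctl00_CPH1_cmbClasses_DropDown"]/div/ul/li[.="CELL PHONES"]')))
dropdown1.click()
</code></pre>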
| 1
|
2016-09-13T16:38:46Z
|
[
"python",
"selenium"
] |
How to find a particular row of a pascals triangle in python?
| 39,471,161
|
<p>Here is a snapshot of my code to write a pascal's triangle up to n rows into a file named "<strong>pascalrow.txt</strong>" after which it takes a row number as an input, and if that row is found, it reopens the file, finds the row number and displays the whole row.</p>
<p>It is kind of working, but as soon as I <strong>move above 9 rows I am returned None</strong>.</p>
<p>For example : I <a href="http://i.stack.imgur.com/wfMHk.jpg" rel="nofollow">tried this</a> and it worked perfectly, as expected.
But then <a href="http://i.stack.imgur.com/aFjkw.jpg" rel="nofollow">I tried doing this</a> and I got freaked out. In the second picture, you can see that for any row number above 9 that I give as input, it returns None. By the way, I used files because I didn't want to think too much about how to return the row and was feeling lazy. Anyway, can anyone help me understand why this is happening and what the possible fix is? :3</p>
<pre><code>def pascal(n):
if n==0:
return [1]
else:
N = pascal(n-1)
return [1] + [N[i] + N[i+1] for i in range(n-1)] + [1]
def pascal_triangle(n,fw):
for i in range(n):
fw.write(str(pascal(i))+"\n")
fw.close()
def findRow(fr,row):
for x in fr:
for y in x:
if y==str(row):
return (x)
n=int(input("Enter the number of rows to print : "))
fw = open('pascalrow.txt', 'w')
fr = open('pascalrow.txt', 'r')
row = int(input("Enter the row to search : "))
pascal_triangle(n, fw)
a = findRow(fr,row)
print("The",row," th row is : ",a)
</code></pre>
| 0
|
2016-09-13T13:19:09Z
| 39,471,466
|
<p>The fact that your code works at all is amazing. </p>
<p>You write a stringified list to a line in a file, then read the file and look at the line character by character. So, by a miracle, you find row 8 because the first '8' you find is a character in '[1, 8, 28...]'. Of course, it fails for some rows, e.g. 6, and for anything above 9 (because '10' will never match a single character).</p>
<p>So, dump all the file nonsense, and write:</p>
<pre><code>a = pascal(row)
</code></pre>
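<p>In other words, a hedged sketch of the whole program without the file round-trip, keeping your <code>pascal()</code> as-is:</p>
<pre><code>def pascal(n):
    if n == 0:
        return [1]
    N = pascal(n - 1)
    return [1] + [N[i] + N[i + 1] for i in range(n - 1)] + [1]

row = int(input("Enter the row to print : "))
print("The", row, "th row is : ", pascal(row))
</code></pre>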
| 1
|
2016-09-13T13:34:35Z
|
[
"python",
"python-3.x",
"pascals-triangle"
] |
Selenium Page down by ActionChains
| 39,471,163
|
<p>I have a problem using a function to scroll down with the PageDown key via Selenium's ActionChains in Python 3.5 on Ubuntu 16.04 x64.</p>
<p>What I want is for my program to scroll down by PageDown twice, so that it reaches the bottom at the end and the selected element is always visible.
I tried making another function using Keys.END, but it did not work, so I assume it has something to do with ActionChains not closing or something.</p>
<p><strong>The function looks like this:</strong></p>
<pre><code>from selenium.webdriver.common.action_chains import ActionChains
</code></pre>
<p>...</p>
<pre><code>def scrollDown(self):
body = browser.find_element_by_xpath('/html/body')
body.click()
ActionChains(browser).send_keys(Keys.PAGE_DOWN).perform()
</code></pre>
<p><strong>and I use it in another file like this:</strong></p>
<pre><code>mod.scrollDown()
</code></pre>
<p>The first time I use it, it scrolls down as it would if the PageDown key were pressed, but after that nothing happens.
It does not matter where I call it, the second (or third...) time it does not execute.
I tried doing it manually and pressed the PageDown button twice; it works as expected.
The console does not return any error, nor does the IDE.</p>
| 0
|
2016-09-13T13:19:22Z
| 39,472,743
|
<p>Maybe, if you think it has to do with the action chains, you can just do it like this:</p>
<pre><code>body = browser.find_element_by_css_selector('body')
body.send_keys(Keys.PAGE_DOWN)
</code></pre>
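<p>If the key press keeps getting swallowed, a hedged alternative that does not depend on keyboard focus at all:</p>
<pre><code># scroll with JavaScript instead of a key press
def scrollDown(self):
    browser.execute_script("window.scrollBy(0, window.innerHeight);")
</code></pre>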
<p>Hope it works!</p>
| 0
|
2016-09-13T14:39:17Z
|
[
"python",
"selenium",
"ubuntu"
] |
Sorting Angularjs ng-repeat by date
| 39,471,200
|
<p>I am relatively new to AngularJS. Could use some help</p>
<p>I have a table with the following info</p>
<pre><code><table>
<tr>
<th><span ng-click="sortType = 'first_name'; sortReverse = !sortReverse">Referral Name</span></th>
<th><span ng-click="sortType = 'date'; sortReverse = !sortReverse">Referral Name</span></th>
</tr>
<tr ng-repeat="x in referral | orderBy:sortType:sortReverse">
<td>name</td>
<td>date</td>
</tr>
</table>
</code></pre>
<p>And the js code is as follows (after the controller connections)</p>
<pre><code>$scope.sortType = '';
$scope.sortReverse = false;
</code></pre>
<p>This works perfectly for ascending and descending when sorting the name.</p>
<p>Unfortunately it works similarly in the case of date too (it is sorting alphabetically, rather than by date).</p>
<p>The date format I am getting from the backend(python) is in this format:</p>
<pre><code>i["date"] = i["date"].strftime("%B %d, %Y")
September 13, 2016 <-- this format
</code></pre>
<p>I understand the mistake I made, but I am not able to find the solution for it.</p>
<p>How can I sort by date? </p>
<p>Thanks in advance guys.</p>
| 0
|
2016-09-13T13:21:18Z
| 39,471,490
|
<p>As you have noticed, the value you receive is type of string and therefore it is sorted alphabetically. You need to convert it into Date() beforehand. So basically what you need is to loop over the array of data you got and add a new property (or replace existing one) with a new Date object:</p>
<pre><code>referral.forEach((ref) => {
ref.date_obj = new Date(ref.date)
});
</code></pre>
<p>I just checked, JavaScript seems to be parsing format "September 13, 2016" pretty well.</p>
| 0
|
2016-09-13T13:35:54Z
|
[
"python",
"html",
"angularjs",
"sorting"
] |
Sorting Angularjs ng-repeat by date
| 39,471,200
|
<p>I am relatively new to AngularJS. Could use some help</p>
<p>I have a table with the following info</p>
<pre><code><table>
<tr>
<th><span ng-click="sortType = 'first_name'; sortReverse = !sortReverse">Referral Name</span></th>
<th><span ng-click="sortType = 'date'; sortReverse = !sortReverse">Referral Name</span></th>
</tr>
<tr ng-repeat="x in referral | orderBy:sortType:sortReverse">
<td>name</td>
<td>date</td>
</tr>
</table>
</code></pre>
<p>And the js code is as follows (after the controller connections)</p>
<pre><code>$scope.sortType = '';
$scope.sortReverse = false;
</code></pre>
<p>This works perfectly for ascending and descending when sorting the name.</p>
<p>Unfortunately it works similarly in the case of date too (it is sorting alphabetically, rather than by date).</p>
<p>The date format I am getting from the backend(python) is in this format:</p>
<pre><code>i["date"] = i["date"].strftime("%B %d, %Y")
September 13, 2016 <-- this format
</code></pre>
<p>I understand the mistake I made, but I am not able to find the solution for it.</p>
<p>How can I sort by date? </p>
<p>Thanks in advance guys.</p>
| 0
|
2016-09-13T13:21:18Z
| 39,471,561
|
<p>Ideally you'd have a sortable object for date. One candidate is an isoformatted date:</p>
<pre><code>i["date"] = i["date"].isoformat()
</code></pre>
<p>Now sorting should work just fine but it'll display wonky. So you'll need to use a date filter to format it on the UI:</p>
<pre><code><table>
<tr>
<th><span ng-click="sortType = 'first_name'; sortReverse = !sortReverse">Referral Name</span></th>
<th><span ng-click="sortType = 'date'; sortReverse = !sortReverse">Referral Name</span></th>
</tr>
<tr ng-repeat="x in referral | orderBy:sortType:sortReverse">
<td>name</td>
<td>{{x.date | date : 'MMMM d, yyyy'}}</td>
</tr>
</table>
</code></pre>
| 1
|
2016-09-13T13:39:14Z
|
[
"python",
"html",
"angularjs",
"sorting"
] |
Parameter passed in save() difficult to understand
| 39,471,280
|
<p>I am learning Django from a most recommended and beneficial book named 'Django By Example'. There is a project called Bookmark. I am now stuck in the forms part which is for downloading the image and saving image object to the database. I could understand validation part(clean_url) and also downloading part. I could not get into the parameter passed in to save()</p>
<blockquote>
<p>save(self, force_insert=False, force_update=False, commit=True)</p>
</blockquote>
<p>and saving image object </p>
<blockquote>
<p>image.image.save(image_name, ContentFile(response.read()), save=False)</p>
</blockquote>
<p>Where is force_insert and force_update been used in this function?</p>
<p>Also i did not understand the parameter part in image.image.save() because image has field like title, url, description, image etc. What image_name is refering to? I think response.read() part is for image field.</p>
<p>Could anyone please make me clear?</p>
<p><strong>Here is the code</strong> </p>
<pre><code>class Image(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL,
related_name='images_created')
title = models.CharField(max_length=200)
slug = models.SlugField(max_length=200, blank=True)
url = models.URLField()
image = models.ImageField(upload_to='images/%Y/%m/%d')
description = models.TextField(blank=True)
created = models.DateTimeField(auto_now_add=True,
db_index=True)
users_like = models.ManyToManyField(settings.AUTH_USER_MODEL,
related_name='images_liked',
blank=True)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def image_create(request):
"""
View for creating an Image using the JavaScript Bookmarklet.
"""
if request.method == 'POST':
# form is sent
form = ImageCreateForm(data=request.POST)
if form.is_valid():
# form data is valid
cd = form.cleaned_data
new_item = form.save(commit=False)
# assign current user to the item
new_item.user = request.user
new_item.save()
messages.success(request, 'Image added successfully')
# redirect to new created item detail view
return redirect(new_item.get_absolute_url())
else:
# build form with data provided by the bookmarklet via GET
form = ImageCreateForm(data=request.GET)
return render(request, 'images/image/create.html', {'section': 'images',
'form': form})
</code></pre>
<p><strong>forms.py</strong></p>
<pre><code>class ImageCreateForm(forms.ModelForm):
class Meta:
model = Image
fields = ('title', 'url', 'description')
widgets = {
'url': forms.HiddenInput,
}
def clean_url(self):
url = self.cleaned_data['url']
valid_extensions = ['jpg', 'jpeg']
extension = url.rsplit('.', 1)[1].lower()
if extension not in valid_extensions:
raise forms.ValidationError('The given URL does not match valid image extensions.')
return url
def save(self, force_insert=False, force_update=False, commit=True):
image = super(ImageCreateForm, self).save(commit=False)
image_url = self.cleaned_data['url']
image_name = '{}.{}'.format(slugify(image.title),
image_url.rsplit('.', 1)[1].lower())
# download image from the given URL
response = request.urlopen(image_url)
image.image.save(image_name,
ContentFile(response.read()),
save=False)
print('image',image)
if commit:
image.save()
return image
</code></pre>
<p><strong>Update</strong></p>
<p>To create an image i have to use this url localhost:8000/images/create/?title=title for image&url=<a href="http://www.demo.com/image/image.jpg" rel="nofollow">http://www.demo.com/image/image.jpg</a> </p>
| 0
|
2016-09-13T13:25:29Z
| 39,701,714
|
<pre><code>image = super(ImageCreateForm, self).save(commit=False)
</code></pre>
<p>In this statement we are assigning the result of the parent form's <code>save()</code> call to the variable <code>image</code>.</p>
<p><code>super()</code> is called with <code>ImageCreateForm</code>, giving us access to the parent class <code>forms.ModelForm</code>; we then call its <code>save()</code> method with <code>commit=False</code>, which builds the model instance without writing it to the database.</p>
<p>Now if you look in your model <code>Image</code> you will find an <code>image</code> attribute, which is of type <code>models.ImageField</code>.</p>
<p>For the purpose of understanding, let me rename the model's <code>image</code> attribute to <code>image_of_model</code>.</p>
<p>The <code>ImageField</code> has a <code>save</code> method of the form:</p>
<pre><code>save(name,content, save)
image.image.save(image_name,
ContentFile(response.read()),
save=False)
</code></pre>
<p>As per my given terminology the above statement becomes.</p>
<pre><code>image.image_of_model.save(image_name,
ContentFile(response.read()),
save=False)
</code></pre>
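<p>A hedged, minimal illustration of that call, where <code>image</code> is the unsaved model instance and the file name and bytes are placeholders:</p>
<pre><code>from django.core.files.base import ContentFile

data = b'...raw jpeg bytes...'            # e.g. what response.read() returned
image.image.save('my-title.jpg',          # name the file gets under upload_to
                 ContentFile(data),       # wrap the raw bytes as a Django File
                 save=False)              # write the file, but defer the DB write
image.save()                              # one explicit database write at the end
</code></pre>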
<p>I hope this sorts out your confusion.</p>
| 1
|
2016-09-26T11:32:44Z
|
[
"python",
"django",
"python-3.x",
"django-models",
"django-forms"
] |
How to make all possible combinations of a word in Python
| 39,471,284
|
<p>I have a txt file with all of the letters in the alphabet that looks like this:</p>
<p>a</p>
<p>b</p>
<p>c</p>
<p>etc..</p>
<p>I also have a word list of words that is only 3 letters long:</p>
<p>ago</p>
<p>age</p>
<p>bat</p>
<p>bag</p>
<p>etc...</p>
<p>I want to create a list that prints out all of the combinations possible starting with the first word ago:</p>
<p>My test program looks like this:</p>
<pre><code>allcombi=[]
s= list("ago")
the.list=[]
with open("alfabeth.txt", "r", encoding = "utf-8") as letters:
for line in letters:
letter = line.strip()
s[0]=letter
print(s)
</code></pre>
<p>Now I only change the first letter, but I have a really hard time trying to join the letters because it only looks like this:</p>
<p>['a', 'g', 'o']
['b', 'g', 'o']
....</p>
<p>HELP WITH:</p>
<ol>
<li><p>Print it out as ['ago','bgo'] instead</p></li>
<li><p>Instead of just changing the first letter, change it one letter at a time in index 0,1 and 2 one letter at a time in the word. The output should be 27*3 rows long with ['ago','bgo',........,'agx',agy,'agz'] </p></li>
</ol>
<p>I will later search for all of the items in my new list in a dictionary but that I can figure out myself it's just this part that really gotten me stuck.</p>
| 0
|
2016-09-13T13:25:36Z
| 39,471,476
|
<p>As @Farhan.K put in the comments, what you are looking for is a string method that creates a new string from an iterable: <code>join</code>.</p>
<p><code>join</code> is a method of a string that joins an iterable of strings with that original string in between them. For example, if you have a list of words that are to become a sentence, you can join them with a space separating each one by calling <code>' '.join(listOfWords)</code>. In your case, you have a list of chars that need to be joined without any delimiter, so you pass an empty string as the separator: <code>''.join(listOfChars)</code>.</p>
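<p>For example, applied to the lists your loop prints:</p>
<pre><code>s = ['a', 'g', 'o']
word = ''.join(s)    # join with no separator between the characters
print(word)          # ago
</code></pre>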
| 0
|
2016-09-13T13:35:18Z
|
[
"python",
"string",
"list",
"replace",
"letter"
] |
How to make all possible combinations of a word in Python
| 39,471,284
|
<p>I have a txt file with all of the letters in the alphabet that looks like this:</p>
<p>a</p>
<p>b</p>
<p>c</p>
<p>etc..</p>
<p>I also have a word list of words that is only 3 letters long:</p>
<p>ago</p>
<p>age</p>
<p>bat</p>
<p>bag</p>
<p>etc...</p>
<p>I want to create a list that prints out all of the combinations possible starting with the first word ago:</p>
<p>My test program looks like this:</p>
<pre><code>allcombi=[]
s= list("ago")
the.list=[]
with open("alfabeth.txt", "r", encoding = "utf-8") as letters:
for line in letters:
letter = line.strip()
s[0]=letter
print(s)
</code></pre>
<p>Now I only change the first letter, but I have a really hard time trying to join the letters because it only looks like this:</p>
<p>['a', 'g', 'o']
['b', 'g', 'o']
....</p>
<p>HELP WITH:</p>
<ol>
<li><p>Print it out as ['ago','bgo'] instead</p></li>
<li><p>Instead of just changing the first letter, change it one letter at a time in index 0,1 and 2 one letter at a time in the word. The output should be 27*3 rows long with ['ago','bgo',........,'agx',agy,'agz'] </p></li>
</ol>
<p>I will later search for all of the items in my new list in a dictionary but that I can figure out myself it's just this part that really gotten me stuck.</p>
| 0
|
2016-09-13T13:25:36Z
| 39,471,480
|
<p>This will generate a list of all combinations for a given word:</p>
<pre><code>from string import ascii_lowercase
word = "ago"
combos = []
for i in range(len(word)):
for l in ascii_lowercase:
combos.append( word[:i]+l+word[i+1:] )
</code></pre>
| 1
|
2016-09-13T13:35:31Z
|
[
"python",
"string",
"list",
"replace",
"letter"
] |
How to make all possible combinations of a word in Python
| 39,471,284
|
<p>I have a txt file with all of the letters in the alphabet that looks like this:</p>
<p>a</p>
<p>b</p>
<p>c</p>
<p>etc..</p>
<p>I also have a word list of words that is only 3 letters long:</p>
<p>ago</p>
<p>age</p>
<p>bat</p>
<p>bag</p>
<p>etc...</p>
<p>I want to create a list that prints out all of the combinations possible starting with the first word ago:</p>
<p>My test program looks like this:</p>
<pre><code>allcombi=[]
s= list("ago")
the.list=[]
with open("alfabeth.txt", "r", encoding = "utf-8") as letters:
for line in letters:
letter = line.strip()
s[0]=letter
print(s)
</code></pre>
<p>Now I only change the first letter, but I have a really hard time trying to join the letters because it only looks like this:</p>
<p>['a', 'g', 'o']
['b', 'g', 'o']
....</p>
<p>HELP WITH:</p>
<ol>
<li><p>Print it out as ['ago','bgo'] instead</p></li>
<li><p>Instead of just changing the first letter, change it one letter at a time in index 0,1 and 2 one letter at a time in the word. The output should be 27*3 rows long with ['ago','bgo',........,'agx',agy,'agz'] </p></li>
</ol>
<p>I will later search for all of the items in my new list in a dictionary but that I can figure out myself it's just this part that really gotten me stuck.</p>
| 0
|
2016-09-13T13:25:36Z
| 39,471,524
|
<p>here with list comprehension</p>
<pre><code>[b+'g'+e for b in alphabet for e in alphabet]
</code></pre>
<p>and you can define alphabet with another list comprehension</p>
<pre><code>alphabet=[chr(c) for c in range(ord('a'),ord('z')+1)]
</code></pre>
<p>perhaps not much shorter than writing char by char...</p>
| 0
|
2016-09-13T13:37:50Z
|
[
"python",
"string",
"list",
"replace",
"letter"
] |
How to make all possible combinations of a word in Python
| 39,471,284
|
<p>I have a txt file with all of the letters in the alphabet that looks like this:</p>
<p>a</p>
<p>b</p>
<p>c</p>
<p>etc..</p>
<p>I also have a word list of words that is only 3 letters long:</p>
<p>ago</p>
<p>age</p>
<p>bat</p>
<p>bag</p>
<p>etc...</p>
<p>I want to create a list that prints out all of the combinations possible starting with the first word ago:</p>
<p>My test program looks like this:</p>
<pre><code>allcombi=[]
s= list("ago")
the.list=[]
with open("alfabeth.txt", "r", encoding = "utf-8") as letters:
for line in letters:
letter = line.strip()
s[0]=letter
print(s)
</code></pre>
<p>Now I only change the first letter, but I have a really hard time trying to join the letters because it only looks like this:</p>
<p>['a', 'g', 'o']
['b', 'g', 'o']
....</p>
<p>HELP WITH:</p>
<ol>
<li><p>Print it out as ['ago','bgo'] instead</p></li>
<li><p>Instead of just changing the first letter, change it one letter at a time in index 0,1 and 2 one letter at a time in the word. The output should be 27*3 rows long with ['ago','bgo',........,'agx',agy,'agz'] </p></li>
</ol>
<p>I will later search for all of the items in my new list in a dictionary but that I can figure out myself it's just this part that really gotten me stuck.</p>
| 0
|
2016-09-13T13:25:36Z
| 39,471,542
|
<p>You need nested loops for starters. When you get the grip of what you actually trying to do, then you can see the itertools package. </p>
<p>With the code that you have provided, you should need something like:</p>
<pre><code>s = list('ago')
the_list=[]
with open("alfabeth.txt", "r", encoding = "utf-8") as letters:
    lines = [line.strip() for line in letters]
for i in range(len(s)):
for ii in lines:
tmp_s = list(s)
tmp_s[i] = ii
print(''.join(tmp_s))
</code></pre>
<p>And with itertools this becomes:</p>
<pre><code>from itertools import product
s = list('ago')
with open("alfabeth.txt", "r", encoding = "utf-8") as letters:
lines = [line.strip() for line in letters]
for i in product(range(len(s)), lines):
print(''.join(s[:i[0]] + [i[-1]] + s[i[0] + 1:]))
</code></pre>
| 0
|
2016-09-13T13:38:31Z
|
[
"python",
"string",
"list",
"replace",
"letter"
] |
How to make all possible combinations of a word in Python
| 39,471,284
|
<p>I have a txt file with all of the letters in the alphabet that looks like this:</p>
<p>a</p>
<p>b</p>
<p>c</p>
<p>etc..</p>
<p>I also have a word list of words that is only 3 letters long:</p>
<p>ago</p>
<p>age</p>
<p>bat</p>
<p>bag</p>
<p>etc...</p>
<p>I want to create a list that prints out all of the combinations possible starting with the first word ago:</p>
<p>My test program looks like this:</p>
<pre><code>allcombi=[]
s= list("ago")
the.list=[]
with open("alfabeth.txt", "r", encoding = "utf-8") as letters:
for line in letters:
letter = line.strip()
s[0]=letter
print(s)
</code></pre>
<p>Now I only change the first letter, but I have a really hard time trying to join the letters because it only looks like this:</p>
<p>['a', 'g', 'o']
['b', 'g', 'o']
....</p>
<p>HELP WITH:</p>
<ol>
<li><p>Print it out as ['ago','bgo'] instead</p></li>
<li><p>Instead of just changing the first letter, change it one letter at a time in index 0,1 and 2 one letter at a time in the word. The output should be 27*3 rows long with ['ago','bgo',........,'agx',agy,'agz'] </p></li>
</ol>
<p>I will later search for all of the items in my new list in a dictionary but that I can figure out myself it's just this part that really gotten me stuck.</p>
| 0
|
2016-09-13T13:25:36Z
| 39,472,337
|
<p>I figured it out! With a neat while loop too. So proud. Posting answer here anyway.</p>
<pre><code>allakombi=[]
s= list("söt")#startord
characters=[]
with open("alfabetet.txt", "r", encoding = "utf-8") as bokstäver:
for rad in bokstäver:
bokstav = rad.strip()
characters.append(bokstav)
k=0
while k<3:
i=0
while i <len(characters):
s= list("söt")
s[k]=characters[i]
i=i+1
s="".join(s)
allakombi.append(s)
k=k+1
print(allakombi)
</code></pre>
| 0
|
2016-09-13T14:18:39Z
|
[
"python",
"string",
"list",
"replace",
"letter"
] |
How to make all possible combinations of a word in Python
| 39,471,284
|
<p>I have a txt file with all of the letters in the alphabet that looks like this:</p>
<p>a</p>
<p>b</p>
<p>c</p>
<p>etc..</p>
<p>I also have a word list of words that is only 3 letters long:</p>
<p>ago</p>
<p>age</p>
<p>bat</p>
<p>bag</p>
<p>etc...</p>
<p>I want to create a list that prints out all of the combinations possible starting with the first word ago:</p>
<p>My test program looks like this:</p>
<pre><code>allcombi=[]
s= list("ago")
the.list=[]
with open("alfabeth.txt", "r", encoding = "utf-8") as letters:
for line in letters:
letter = line.strip()
s[0]=letter
print(s)
</code></pre>
<p>Now I only change the first letter, but I have a really hard time trying to join the letters because it only looks like this:</p>
<p>['a', 'g', 'o']
['b', 'g', 'o']
....</p>
<p>HELP WITH:</p>
<ol>
<li><p>Print it out as ['ago','bgo'] instead</p></li>
<li><p>Instead of just changing the first letter, change one letter at a time at index 0, 1 and 2 of the word. The output should be 27*3 rows long, like ['ago','bgo',........,'agx','agy','agz'] </p></li>
</ol>
<p>I will later search for all of the items in my new list in a dictionary, but that part I can figure out myself; it's just this part that has really gotten me stuck.</p>
| 0
|
2016-09-13T13:25:36Z
| 39,472,405
|
<p>You need a few nested loops to get the combinations, as an example:</p>
<pre class="lang-python prettyprint-override"><code>from string import ascii_lowercase
words = ["ago"]
combs = []
for word in words:
for i, letter in enumerate(word):
for l in ascii_lowercase:
tmp = list(word)
tmp[i] = l
combs.append("".join(tmp))
print(combs)
>>> ['ago', 'bgo', 'cgo', 'dgo', 'ego', 'fgo', 'ggo', 'hgo', 'igo', 'jgo', 'kgo', 'lgo', 'mgo', 'ngo', 'ogo', 'pgo', 'qgo', 'rgo', 'sgo', 'tgo', 'ugo', 'vgo', 'wgo', 'xgo', 'ygo', 'zgo', 'aao', 'abo', 'aco', 'ado', 'aeo', 'afo', 'ago', 'aho', 'aio', 'ajo', 'ako', 'alo', 'amo', 'ano', 'aoo', 'apo', 'aqo', 'aro', 'aso', 'ato', 'auo', 'avo', 'awo', 'axo', 'ayo', 'azo', 'aga', 'agb', 'agc', 'agd', 'age', 'agf', 'agg', 'agh', 'agi', 'agj', 'agk', 'agl', 'agm', 'agn', 'ago', 'agp', 'agq', 'agr', 'ags', 'agt', 'agu', 'agv', 'agw', 'agx', 'agy', 'agz']
</code></pre>
| 0
|
2016-09-13T14:22:33Z
|
[
"python",
"string",
"list",
"replace",
"letter"
] |
Python array[0:1] not the same as array[0]
| 39,471,344
|
<p>I'm using Python to split a string of 2 bytes <code>b'\x01\x00'</code>. The string of bytes is stored in a variable called <code>flags</code>.</p>
<p>Why when I say <code>flags[0]</code> do I get <code>b'\x00'</code> but when I say <code>flags[0:1]</code> I get the expected answer of <code>b'\x01'</code>.</p>
<p>Should both of these operations not be exactly the same?</p>
<p>What I did:</p>
<pre><code>>>> flags = b'\x01\x00'
>>> flags[0:1]
b'\x01'
>>> bytes(flags[0])
b'\x00'
</code></pre>
| -1
|
2016-09-13T13:28:46Z
| 39,471,400
|
<p>In Python 2 you would get the same thing in both cases, <code>'\x01'</code>, so <code>flags</code> is probably not what you think it is. (In Python 3, however, <code>flags[0]</code> returns the integer <code>1</code> rather than a one-byte string.)</p>
<pre><code>>>> flags = b'\x01\x00'
>>> flags[0]
'\x01'
>>> flags[0:1]
'\x01'
</code></pre>
| -1
|
2016-09-13T13:31:44Z
|
[
"python",
"python-3.x",
"indexing",
"bytestring"
] |
Python array[0:1] not the same as array[0]
| 39,471,344
|
<p>I'm using Python to split a string of 2 bytes <code>b'\x01\x00'</code>. The string of bytes is stored in a variable called <code>flags</code>.</p>
<p>Why when I say <code>flags[0]</code> do I get <code>b'\x00'</code> but when I say <code>flags[0:1]</code> I get the expected answer of <code>b'\x01'</code>.</p>
<p>Should both of these operations not be exactly the same?</p>
<p>What I did:</p>
<pre><code>>>> flags = b'\x01\x00'
>>> flags[0:1]
b'\x01'
>>> bytes(flags[0])
b'\x00'
</code></pre>
| -1
|
2016-09-13T13:28:46Z
| 39,471,402
|
<p>In Python 3, <code>bytes</code> is a sequence type containing <em>integers</em> (each in the range 0 - 255) so indexing to a specific index gives you an integer.</p>
<p>And just like slicing a list produces a new list object for the slice, so does slicing a <code>bytes</code> object produce a new <code>bytes</code> instance. And the representation of a <code>bytes</code> instance tries to show you a <code>b'...'</code> literal syntax with the integers represented as either printable ASCII characters or an applicable escape sequence when the byte isn't printable. All this is great for developing but may hide the fact that bytes are really a sequence of integers.</p>
<p>However, you will still get the <em>same piece of information</em>; <code>flags[0:1]</code> is a one-byte long <code>bytes</code> value with the <code>\x01</code> byte in it, and <code>flags[0]</code> will give you the integer <code>1</code>:</p>
<pre><code>>>> flags = b'\x01\x00'
>>> flags[0]
1
>>> flags[0:1]
b'\x01'
</code></pre>
<p>What you <strong>really</strong> did was not use <code>flags[0]</code>, you used <code>bytes(flags[0])</code> instead. Passing in a single integer to the <code>bytes()</code> type creates a new <code>bytes</code> object of the specified length, pre-filled with <code>\x00</code> bytes:</p>
<pre><code>>>> flags[0]
1
>>> bytes(1)
b'\x00'
</code></pre>
<p>Since <code>flags[0]</code> produces the integer 1, you told <code>bytes()</code> to return a new bytes value of length 1, filled with <code>\x00</code> bytes.</p>
<p>From the <a href="https://docs.python.org/3/library/stdtypes.html#bytes" rel="nofollow"><code>bytes</code> documentation</a>:</p>
<blockquote>
<p>Bytes objects are immutable sequences of single bytes.</p>
<p>[...]</p>
<p>While bytes literals and representations are based on ASCII text, bytes objects actually behave like immutable sequences of integers, with each value in the sequence restricted such that <code>0 <= x < 256</code>.</p>
<p>[...]</p>
<p>In addition to the literal forms, bytes objects can be created in a number of other ways:</p>
<ul>
<li><strong>A zero-filled bytes object of a specified length: <code>bytes(10)</code></strong></li>
</ul>
</blockquote>
<p>Bold emphasis mine.</p>
<p>If you wanted to create a new <code>bytes</code> object with that one byte in it, you'll need to put the integer value in a list first:</p>
<pre><code>>>> bytes([flags[0]])
b'\x01'
</code></pre>
| 5
|
2016-09-13T13:31:44Z
|
[
"python",
"python-3.x",
"indexing",
"bytestring"
] |
unable to execute Celery beat the second time
| 39,471,380
|
<p>I am using Celery beat to fetch the site data every 10 seconds, so I updated the settings in my Django project accordingly. I am using RabbitMQ with Celery.</p>
<p><strong>settings.py</strong></p>
<pre><code># This is the settings file
# Rabbitmq configuration
BROKER_URL = "amqp://abcd:abcd@localhost:5672/abcd"
# Celery configuration
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Asia/Kolkata'
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERYBEAT_SCHEDULE = {
# Executes every Monday morning at 7:30 A.M
'update-app-data': {
'task': 'myapp.tasks.fetch_data_task',
'schedule': timedelta(seconds=10),
},
</code></pre>
<p><strong>celery.py</strong></p>
<pre><code>from __future__ import absolute_import
import os
from celery import Celery
from django.conf import settings
# Indicate Celery to use the default Django settings module
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
app = Celery('myapp')
app.config_from_object('django.conf:settings')
# This line will tell Celery to autodiscover all your tasks.py that are in
# playstore folders
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app_keywords = Celery('keywords')
app_keywords.config_from_object('django.conf:settings')
# This line will tell Celery to autodiscover all your tasks.py that are in
# keywords folders
app_keywords.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
app1 = Celery('myapp1')
app1.config_from_object('django.conf:settings')
# This line will tell Celery to autodiscover all your tasks.py that are in
# your app folders
app1.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
</code></pre>
<p><strong>tasks.py</strong></p>
<pre><code>@task(bind=True)
def fetch_data_task(self, data):
logger.info("Start task")
import pdb;pdb.set_trace()
# post the data to view
headers, cookies = utils.get_csrf_token()
requests.post(settings.SITE_VARIABLES['site_url'] + "/site/general_data/",
data=json.dumps(data), headers=headers, cookies=cookies
)
if data['reviews']:
reviews_data = {'app_id': data['app_data'][
'app_id'], 'reviews': data['reviews'][0]}
requests.post(settings.SITE_VARIABLES['site_url'] + "/site/blog/reviews/",
data=json.dumps(reviews_data), headers=headers, cookies=cookies
)
logger.info("Task fetch data finished")
</code></pre>
<p>Now, once I call <code>fetch_data_task</code> in my API after logging in to the site, the task is queued in RabbitMQ and it should then call the function along with the arguments.</p>
<p>Here is the line where I am calling the task for the very first time</p>
<p><code>tasks.fetch_data_task.apply_async((data,))</code></p>
<p>This queues the task and the task executes each time but it gives me the following error</p>
<blockquote>
<p>[2016-09-13 18:57:43,044: ERROR/MainProcess] Task playstore.tasks.fetch_data_task[3b88c6d0-48db-49c1-b7d1-0b8469775d53] </p>
<p>raised unexpected: TypeError("fetch_data_task() missing 1 required positional argument: 'data'",)</p>
<p>Traceback (most recent call last):</p>
<p>File "/Users/chitrankdixit/.virtualenvs/hashgrowth-> >dev/lib/python3.5/site-packages/celery/app/trace.py", line 240, in >trace_task
R = retval = fun(*args, **kwargs)
File "/Users/chitrankdixit/.virtualenvs/hashgrowth->dev/lib/python3.5/site-packages/celery/app/trace.py", line 438, in ><strong>protected_call</strong>
return self.run(*args, **kwargs)
TypeError: fetch_data_task() missing 1 required positional argument: 'data'</p>
</blockquote>
<p>If anyone has worked with Celery and RabbitMQ, and also with periodic tasks using Celery, please suggest how to execute the tasks properly.</p>
| 1
|
2016-09-13T13:30:39Z
| 39,471,965
|
<p>The exception tells you what the error is: your task expects a positional argument, but you do not provide any arguments in your schedule definition.</p>
<pre><code>CELERYBEAT_SCHEDULE = {
# Executes every Monday morning at 7:30 A.M
'update-app-data': {
'task': 'myapp.tasks.fetch_data_task',
'schedule': timedelta(seconds=10),
'args': ({
# whatever goes into 'data'
},) # tuple with one entry, don't omit the comma
},
</code></pre>
<p>Calling the task from any other place in your code does not have any effect on the schedule.</p>
| 1
|
2016-09-13T13:59:23Z
|
[
"python",
"django",
"celery",
"django-celery",
"celerybeat"
] |
Calling a function from a script, re-import all packages?
| 39,471,558
|
<p>Say I have some script, with a function <code>my_function</code>. Now, this function needs several packages. So, let's say the file looks like this:</p>
<pre><code>import package_A
import package_B
def my_function():
do_something
</code></pre>
<p>Now, if I want to use this function somewhere else, I may say </p>
<pre><code>from my_file import my_function
my_function()
</code></pre>
<p>However, at this point, the call will halt with the error that package_A and package_B are not known. </p>
<p>How do I solve this? Do I have to make all the imports I do for <code>my_function</code> again in the script calling <code>my_function</code>? And if so, is there a way to automatically check and import all the imports in that file?</p>
| 0
|
2016-09-13T13:39:08Z
| 39,471,667
|
<p>You can have several scripts calling each other, and each script can import several packages; this won't throw an error as long as the packages required by the functions in that script can be imported there.</p>
<p><a href="http://stackoverflow.com/questions/15696461/import-python-script-into-another">Found this link which will answer your question better</a></p>
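<p>For what it's worth, here is a minimal, hypothetical two-file sketch (using <code>math</code> and <code>json</code> as stand-ins for package_A and package_B) showing that a function's imports only need to live in the module that defines it; the caller does not have to repeat them:</p>
<pre><code># my_file.py
import math   # stands in for package_A
import json   # stands in for package_B

def my_function():
    # math and json are looked up in my_file's own namespace at call time
    return math.sqrt(2), json.dumps({"ok": True})

# caller.py
from my_file import my_function
print(my_function())   # works without importing math or json here
</code></pre>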
| -1
|
2016-09-13T13:44:09Z
|
[
"python",
"import"
] |
PyQt - Draw rectangle behind a widget
| 39,471,592
|
<p>I have a QStackedWidget with a QLineEdit and a couple other widgets inside of it. This QStackedWidget is fairly dynamic - you can move it within its layout by clicking/dragging, change its current widget by right clicking it, etc.</p>
<p>I'd like to draw a simple, gray rectangle or a gray rounded rectangle around the QStackedWidget to let people know that the QLineEdit they're looking at is important. This drawn rectangle has to be able to follow the QStackedWidget so that it follows properly with the widget when I move it to other locations on-screen.</p>
<p>I've tried several approaches so far but they've all fallen short in some regard or another or it just wouldn't move with the widget. Can anyone show me how?</p>
| 0
|
2016-09-13T13:40:52Z
| 39,477,210
|
<p>Depending on how your clicking/dragging is implemented, you should just be able to place the <code>QStackedWidget</code> inside another <code>QFrame</code>, or put a <code>QFrame</code> inside your <code>QStackedWidget</code> and put all the other controls inside the <code>QFrame</code>. <code>QFrame</code>'s support drawing borders around them.</p>
<pre><code>frame = QFrame()
frame.setFrameStyle(QFrame.StyledPanel)
frame.setLineWidth(2)
</code></pre>
| 1
|
2016-09-13T19:00:43Z
|
[
"python",
"pyqt",
"draw"
] |
Python interpolation data list of lists
| 39,471,625
|
<p>I have got this data:</p>
<pre><code>X = [[10, 6, 0], [8, 6, 0], [4, 3, 0]]
Y = [[29, 28, 27], [26, 25, 24], [23, 22, 21]]
</code></pre>
<p>I need to interpolate between values in X. For instance:</p>
<pre><code>D = np.linspace(10,0,num = 6)
Out:[ 0. 2. 4. 6. 8. 10.]
</code></pre>
<p>So result should be like:</p>
<pre><code> "D" "Y[0]" "Y[1]" "Y[2]"
0 27 24 21
2 ? ? ?
4 ? ? 23
6 28 25 ?
8 ? 26 ?
10 29 ? ?
</code></pre>
<p>I know there is <code>np.interp()</code>; I tried it and it works only for a one-dimensional list:</p>
<pre><code>z =[0,5,10]
v= [29,28,27]
x = np.linspace(10,0,num = 4)
d=np.interp(x, z, v)
print (d)
</code></pre>
<p>But if I have a list of lists it doesn't work.</p>
| 2
|
2016-09-13T13:42:12Z
| 39,474,665
|
<p>Use <code>zip</code>. Also, it looks like you want to reverse the sublists. Maybe something like:</p>
<pre><code>points = np.linspace(0,10,num = 6)
cols = (points,) + tuple(np.interp(points,x[::-1],y[::-1]) for x,y in zip(X,Y))
np.stack(cols,axis=1)
</code></pre>
<p>which has output:</p>
<pre><code>array([[ 0. , 27. , 24. , 21. ],
[ 2. , 27.33333333, 24.33333333, 21.66666667],
[ 4. , 27.66666667, 24.66666667, 23. ],
[ 6. , 28. , 25. , 23. ],
[ 8. , 28.5 , 26. , 23. ],
[ 10. , 29. , 26. , 23. ]])
</code></pre>
<p>This shows interpolation. It seems like you might want to use extrapolation in some entries in some columns.</p>
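<p>For the extrapolation case, a minimal sketch (assuming SciPy >= 0.17 is available, which added <code>fill_value='extrapolate'</code>) could look like this:</p>
<pre><code>import numpy as np
from scipy.interpolate import interp1d

X = [[10, 6, 0], [8, 6, 0], [4, 3, 0]]
Y = [[29, 28, 27], [26, 25, 24], [23, 22, 21]]

points = np.linspace(0, 10, num=6)
cols = [points]
for x, y in zip(X, Y):
    f = interp1d(x[::-1], y[::-1], fill_value='extrapolate')  # reverse so x is increasing
    cols.append(f(points))
np.stack(cols, axis=1)
</code></pre>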
| 2
|
2016-09-13T16:15:40Z
|
[
"python",
"interpolation"
] |
SoftLayer API: How to specify OS Reload with RAID options?
| 39,471,640
|
<p>Using the Python SoftLayer library, I've been attempting to submit an OS reload via the SoftLayer API to get consistent disk setup for provisioned servers. These servers have either RAID10 or RAID1 setup, using all available disks in the array. Upon initial provisioning, the servers are setup properly. </p>
<p>When I submit the OS reload using the Python library using a method like the following: </p>
<pre><code>def reload_server(server_id):
conf = {
"upgradeHardDriveFirmware": "1",
"upgradeBios": "1",
"hardDrives": [
{
"complexType": "SoftLayer_Hardware_Component_HardDrive",
"partitions": [
{ "name": "/boot", "minimumSize": "1"},
{ "name": "/swap0", "minimumSize": "20"},
{ "name": "/", "minimumSize": "15"},
{ "name": "/disk", "minimumSize": "1", "grow": "1"}
]
}
]
}
return client['Hardware_Server'].reloadOperatingSystem('FORCE', conf, id=server_id)
</code></pre>
<p>The reload is initiated, but the partitions setup only use the first disk rather than a RAID block device. Consequently, they do not have the RAID setup. In other words, for a 6 disk server intending to have RAID10, which should have a single block device visible in the OS (<code>/dev/sda</code>), <code>/dev/sda</code> is setup to with those partitions and the other disks - <code>/dev/sdb</code>, <code>/dev/sdc</code>, <code>/dev/sdc</code> et al - are block devices: </p>
<pre><code>root@server ~ $blkid
/dev/sda1: UUID="6c80b9ef-0228-4f6d-8ff9-7ed851f383f9" TYPE="ext2"
/dev/sda5: UUID="58e05f19-aa62-42cd-858b-568f415a0201" TYPE="swap"
/dev/sda6: UUID="8d7c0396-a3d3-4e72-847e-f8b3bbbda120" TYPE="ext4"
/dev/sda7: UUID="TmNPZO-V1Dq-xSRU-hHM2-02A8-9mJi-mRPLjo" TYPE="ext4"
/dev/sdb1: LABEL="/disk1" UUID="a19883ec-1fd0-472d-a2ef-188f943a0ab3" TYPE="ext4"
/dev/sdc1: LABEL="/disk2" UUID="c6bd0fc6-3d5c-4c29-9b33-a61b15793d5d" TYPE="ext4"
/dev/sdd1: LABEL="/disk3" UUID="5bda0575-1bfa-473b-83bc-519f705f2213" TYPE="ext4"
/dev/sde1: LABEL="/disk4" UUID="43fe460d-8ad4-41f9-b840-11f3d36d8788" TYPE="ext4"
/dev/sdf1: LABEL="/disk5" UUID="9b34ca0f-bc54-41fe-934a-daabdaa8521b" TYPE="ext4"
</code></pre>
<p>How do I submit an OS reload to ensure that the RAID is setup appropriately and is not lost upon reload? And how do I do this consistently, because we've submitted OS reloads via cURL using a similar payload and the reload resulted in a correct RAID setup.</p>
<p><strong>Edit:</strong> To clarify, I'm not trying to change the RAID configuration. I want to keep the existing RAID configuration. I am only attempting to change the partitions. Namely, resize swap, decrease root partition, and specify the grow partition for our automated tools. When I submit the OS reload and change the partition structure, the RAID arrays are gone. </p>
| 0
|
2016-09-13T13:42:42Z
| 39,472,073
|
<p>That is the expected behavior for a reload: the partitions are applied only to the first disk, and it is not possible to specify a RAID configuration via the API for a reload.</p>
<p>You have two options to maintain your RAID configuration:</p>
<p>1.- Do not specify any partition configuration for the reload; the OS of your server will be reloaded, but it will keep the same RAID configuration.</p>
<p>2.- You can specify a script which will be executed after the reload (customProvisioningScripUri is the name of the property you need to add; see more <a href="http://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Hardware_Server_Configuration" rel="nofollow">here</a>). In that script you can automate the creation of the RAID setup you want.</p>
<p>Regards</p>
| 0
|
2016-09-13T14:05:09Z
|
[
"python",
"softlayer"
] |
How to randomly remove a percentage of items from a list
| 39,471,676
|
<p>I have two lists of equal length, one is a data series the other is simply a time series. They represent simulated values measured over time.</p>
<p>I want to create a function that removes a set percentage or fraction from both lists but at random. I.e. if my fraction is 0.2, I want to randomly remove 20% of the items from both lists, but they have to be the same items (same index in each list) removed.</p>
<p>For example, let n = 0.2 (20% to be deleted)</p>
<pre><code>a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
</code></pre>
<p>After randomly removing 20%, they become</p>
<pre><code>a_new = [0,1,3,4,5,6,8,9]
b_new = [0,1,9,16,25,36,64,81]
</code></pre>
<p>The relationship isn't as straightforward as the example, so I can't just perform this action on one list and then work out the second; they already exist as two lists. And they have to remain in the original order.</p>
<p>Thanks!</p>
| 0
|
2016-09-13T13:44:39Z
| 39,471,736
|
<pre><code>import random
a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
frac = 0.2 # how much of a/b do you want to exclude
# generate a list of indices to exclude. Turn in into a set for O(1) lookup time
inds = set(random.sample(list(range(len(a))), int(frac*len(a))))
# use `enumerate` to get list indices as well as elements.
# Filter by index, but take only the elements
new_a = [n for i,n in enumerate(a) if i not in inds]
new_b = [n for i,n in enumerate(b) if i not in inds]
</code></pre>
| 6
|
2016-09-13T13:47:39Z
|
[
"python",
"list"
] |
How to randomly remove a percentage of items from a list
| 39,471,676
|
<p>I have two lists of equal length, one is a data series the other is simply a time series. They represent simulated values measured over time.</p>
<p>I want to create a function that removes a set percentage or fraction from both lists but at random. I.e. if my fraction is 0.2, I want to randomly remove 20% of the items from both lists, but they have to be the same items (same index in each list) removed.</p>
<p>For example, let n = 0.2 (20% to be deleted)</p>
<pre><code>a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
</code></pre>
<p>After randomly removing 20%, they become</p>
<pre><code>a_new = [0,1,3,4,5,6,8,9]
b_new = [0,1,9,16,25,36,64,81]
</code></pre>
<p>The relationship isn't as straightforward as the example, so I can't just perform this action on one list and then work out the second; they already exist as two lists. And they have to remain in the original order.</p>
<p>Thanks!</p>
| 0
|
2016-09-13T13:44:39Z
| 39,471,869
|
<p>If <code>a</code> and <code>b</code> are not very large, you could get away with using <code>zip</code>:</p>
<pre><code>import random
a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
frac = 0.2 # how much of a/b do you want to exclude
ab = list(zip(a,b)) # a list of tuples where the first element is from `a` and the second is from `b`
new_ab = random.sample(ab, int(len(a)*(1-frac))) # sample those tuples
new_a, new_b = zip(*new_ab) # unzip the tuples to get `a` and `b` back
</code></pre>
<p>Note that this won't preserve the original order of <code>a</code> and <code>b</code></p>
| 0
|
2016-09-13T13:54:09Z
|
[
"python",
"list"
] |
How to randomly remove a percentage of items from a list
| 39,471,676
|
<p>I have two lists of equal length, one is a data series the other is simply a time series. They represent simulated values measured over time.</p>
<p>I want to create a function that removes a set percentage or fraction from both lists but at random. I.e. if my fraction is 0.2, I want to randomly remove 20% of the items from both lists, but they have to be the same items (same index in each list) removed.</p>
<p>For example, let n = 0.2 (20% to be deleted)</p>
<pre><code>a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
</code></pre>
<p>After randomly removing 20%, they become</p>
<pre><code>a_new = [0,1,3,4,5,6,8,9]
b_new = [0,1,9,16,25,36,64,81]
</code></pre>
<p>The relationship isn't as straightforward as the example, so I can't just perform this action on one list and then work out the second; they already exist as two lists. And they have to remain in the original order.</p>
<p>Thanks!</p>
| 0
|
2016-09-13T13:44:39Z
| 39,471,907
|
<p>You can also operate on the <em>zipped</em> a and b sequence, take a random sample of the indexes (to maintain the original order of items) and <em>unzip</em> into <code>a_new</code> and <code>b_new</code> again:</p>
<pre><code>import random
a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
frac = 0.2
c = zip(a, b) # c = list(zip(a, b)) on Python 3
indices = random.sample(range(len(c)), int(frac * len(c)))
a_new, b_new = zip(*sorted(c[i] for i in sorted(indices)))
print(a_new)
print(b_new)
</code></pre>
<p>It can print:</p>
<pre><code>(0, 2, 3, 5, 6, 7, 8, 9)
(0, 4, 9, 25, 36, 49, 64, 81)
</code></pre>
| 0
|
2016-09-13T13:56:08Z
|
[
"python",
"list"
] |
How to randomly remove a percentage of items from a list
| 39,471,676
|
<p>I have two lists of equal length, one is a data series the other is simply a time series. They represent simulated values measured over time.</p>
<p>I want to create a function that removes a set percentage or fraction from both lists but at random. I.e. if my fraction is 0.2, I want to randomly remove 20% of the items from both lists, but they have to be the same items (same index in each list) removed.</p>
<p>For example, let n = 0.2 (20% to be deleted)</p>
<pre><code>a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
</code></pre>
<p>After randomly removing 20%, they become</p>
<pre><code>a_new = [0,1,3,4,5,6,8,9]
b_new = [0,1,9,16,25,36,64,81]
</code></pre>
<p>The relationship isn't as straightforward as the example, so I can't just perform this action on one list and then work out the second; they already exist as two lists. And they have to remain in the original order.</p>
<p>Thanks!</p>
| 0
|
2016-09-13T13:44:39Z
| 39,471,911
|
<pre><code>import random
a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
frac = 0.2 # how much of a/b do you want to exclude
new_a, new_b = [], []
for i in range(len(a)):
if random.random()>frac: # with probability, add an element from `a` and `b` to the output
new_a.append(a[i])
new_b.append(b[i])
</code></pre>
| 0
|
2016-09-13T13:56:27Z
|
[
"python",
"list"
] |
How to randomly remove a percentage of items from a list
| 39,471,676
|
<p>I have two lists of equal length, one is a data series the other is simply a time series. They represent simulated values measured over time.</p>
<p>I want to create a function that removes a set percentage or fraction from both lists but at random. I.e. if my fraction is 0.2, I want to randomly remove 20% of the items from both lists, but they have to be the same items (same index in each list) removed.</p>
<p>For example, let n = 0.2 (20% to be deleted)</p>
<pre><code>a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
</code></pre>
<p>After randomly removing 20%, they become</p>
<pre><code>a_new = [0,1,3,4,5,6,8,9]
b_new = [0,1,9,16,25,36,64,81]
</code></pre>
<p>The relationship isn't as straightforward as the example, so I can't just perform this action on one list and then work out the second; they already exist as two lists. And they have to remain in the original order.</p>
<p>Thanks!</p>
| 0
|
2016-09-13T13:44:39Z
| 39,471,950
|
<pre><code>l = len(a)
n_drop = int(l * n)                  # n is the fraction to remove (e.g. 0.2)
n_keep = l - n_drop
ind = [1] * n_keep + [0] * n_drop    # 1 = keep, 0 = drop
random.shuffle(ind)                  # randomise which positions are dropped
new_a = [ e for e, i in zip(a, ind) if i ]
new_b = [ e for e, i in zip(b, ind) if i ]
</code></pre>
| 0
|
2016-09-13T13:58:28Z
|
[
"python",
"list"
] |
How to randomly remove a percentage of items from a list
| 39,471,676
|
<p>I have two lists of equal length, one is a data series the other is simply a time series. They represent simulated values measured over time.</p>
<p>I want to create a function that removes a set percentage or fraction from both lists but at random. I.e. if my fraction is 0.2, I want to randomly remove 20% of the items from both lists, but they have to be the same items (same index in each list) removed.</p>
<p>For example, let n = 0.2 (20% to be deleted)</p>
<pre><code>a = [0,1,2,3,4,5,6,7,8,9]
b = [0,1,4,9,16,25,36,49,64,81]
</code></pre>
<p>After randomly removing 20%, they become</p>
<pre><code>a_new = [0,1,3,4,5,6,8,9]
b_new = [0,1,9,16,25,36,64,81]
</code></pre>
<p>The relationship isn't as straightforward as the example, so I can't just perform this action on one list and then work out the second; they already exist as two lists. And they have to remain in the original order.</p>
<p>Thanks!</p>
| 0
|
2016-09-13T13:44:39Z
| 39,471,997
|
<pre><code>from random import randint as r
a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
b = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
percentage = 0.3
g = (r(0, len(a)-1) for _ in xrange(int(len(a) * (1-percentage))))
c, d = [], []
for i in g:
c.append(a[i])
d.append(b[i])
a, b = c, d
print a
print b
</code></pre>
| 1
|
2016-09-13T14:01:03Z
|
[
"python",
"list"
] |
How to compare two columns of two different dataframes whose columns are not unique?
| 39,471,817
|
<p>I have two different dataframes: df1 and df2</p>
<pre><code>df1 :
Id lkey
0 foo foo
1 bar bar
2 baz baz
3 foo foo
4 bar bar
5 foo foo
6 bar bar
7 bar bar
8 bar bar
df2 :
e rkey value y
0 aaa foo aaa foo
1 NaN bar bbb bar
2 ccc baz ccc baz
3 NaN mac ddd fff
4 NaN xyz eee mmm
5 NaN mnb fff NaN
6 NaN foo aaa NaN
</code></pre>
<p>Edit1 : added 6th row as a duplicate.</p>
<p>I want to perform one task on these dataframes: compare the lkey and rkey columns. </p>
<p>Edit2 :</p>
<p><strong>Note :</strong> <em>lkey</em> column contains all duplicate values and <em>rkey</em> column contains some duplicate values.</p>
<p>Pick up the first value of the lkey column, i.e. foo, and compare it with the values of the rkey column of df2. If a match is found, I want that row's value from the value column added to the df1 dataframe in a new column named match. (A match will be found in every case, i.e. every lkey value present in df1 is available in the rkey column of df2.) </p>
<p>I'm already tried with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">merge</a>.</p>
<pre><code>result = df1.merge(df2, left_on='lkey', right_on='rkey', how='outer')
</code></pre>
<p>output : </p>
<pre><code> Id lkey e rkey value y
0 foo foo aaa foo aaa foo
1 foo foo aaa foo aaa foo
2 foo foo aaa foo aaa foo
3 bar bar NaN bar bbb bar
4 bar bar NaN bar bbb bar
5 bar bar NaN bar bbb bar
6 bar bar NaN bar bbb bar
7 bar bar NaN bar bbb bar
8 baz baz ccc baz ccc baz
9 NaN NaN NaN mac ddd fff
10 NaN NaN NaN xyz eee mmm
11 NaN NaN NaN mnb fff NaN
</code></pre>
<p>I don't want these extra rows. In my df1 only 9 rows are available, with columns Id and lkey. I just want to add a match column with the specific mapping.</p>
<p><strong>Expected Output :</strong></p>
<pre><code>Id lkey match
0 foo foo aaa
1 bar bar bbb
2 baz baz ccc
3 foo foo aaa
4 bar bar bbb
5 foo foo aaa
6 bar bar bbb
7 bar bar bbb
8 bar bar bbb
</code></pre>
<p>How I can achieve what I want to do?</p>
<p>Edit: previously I said the rkey column contains unique values, but that is exactly where my issue comes from; the rkey column contains duplicate values.</p>
| 0
|
2016-09-13T13:51:32Z
| 39,472,100
|
<pre><code>>>> (df1
.merge(df2[['rkey', 'value']].drop_duplicates(), left_on='lkey', right_on='rkey', how='left')
.drop('rkey', axis='columns')
.rename(columns={'value': 'match'})
)
Id lkey match
0 foo foo aaa
1 bar bar bbb
2 baz baz ccc
3 foo foo aaa
4 bar bar bbb
5 foo foo aaa
6 bar bar bbb
7 bar bar bbb
8 bar bar bbb
</code></pre>
<p>If the key column in both dataframes had the same name, you can just use <code>on='key'</code> and wouldn't need to drop the right key.</p>
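<p>As a side note (not part of the merge approach above), the same result can be obtained with <code>Series.map</code> by first turning df2 into a rkey-to-value mapping; a sketch assuming the same df1/df2 as above:</p>
<pre><code>mapping = df2.drop_duplicates('rkey').set_index('rkey')['value']
df1['match'] = df1['lkey'].map(mapping)
</code></pre>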
| 2
|
2016-09-13T14:06:42Z
|
[
"python",
"pandas",
"dataframe"
] |
Data not saving SQLite3 python3.4
| 39,471,877
|
<p>I am currently trying to create a SQLite database of people's names and IPs.
While my code seems to run without errors, the data doesn't show up when I run <code>SELECT * from ips;</code> in the terminal after running <code>sqlite3 ips</code>.
Below is my code. Both it and the <code>SELECT * from ips;</code> are run in ~/Desktop/SQL.</p>
<pre><code>import sqlite3 as sql
import socket
import struct
def iptoint(ip):
return str(struct.unpack("i",socket.inet_aton(ip))[0])
database = sql.connect("ips")
createTable = True
if createTable:
database.execute('''CREATE TABLE main.ips
(FIRST_NAME TEXT PRIMARY KEY NOT NULL,
SECOND_NAME TEXT NOT NULL,
IP INT32 NOT NULL);''')
sampleIps = [("Edward","E","60.222.168.44")]
for first,second,ip in sampleIps:
string = "INSERT INTO ips VALUES ('%s','%s','%s');"%(first,second,iptoint(ip))
print(string)
#Printing the string gives me INSERT INTO ips VALUES ('Edward','E','749264444');
database.execute("INSERT INTO ips VALUES ('%s','%s','%s');"%(first,second,iptoint(ip)))
database.close()
</code></pre>
<p>My computer is running OSX 10.11.4, python 3.4 and SQLite 3.14.1</p>
<p>I have tried changing ips to main.ips and back</p>
| 1
|
2016-09-13T13:54:26Z
| 39,471,952
|
<p>It doesn't look like you are committing to the database. You need to commit before closing the connection in order to actually save your changes to the database.</p>
<p><code>database.commit()</code></p>
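<p>For example, the tail of the script could look like this (a sketch; switching to parameterized queries is optional but avoids quoting problems):</p>
<pre><code>for first, second, ip in sampleIps:
    database.execute("INSERT INTO ips VALUES (?, ?, ?)", (first, second, iptoint(ip)))
database.commit()   # without this, the INSERTs are discarded when the connection closes
database.close()
</code></pre>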
| 1
|
2016-09-13T13:58:42Z
|
[
"python",
"sql",
"sqlite",
"sqlite3",
"python-3.4"
] |
Is it possible to limit mocked function calls count?
| 39,471,928
|
<p>I have encountered a problem while writing a unit test. This is a chunk from a unit test file:</p>
<pre><code>obj = MainObj.objects.create(short_url="a1b2c3")
with unittest.mock.patch('prj.apps.app.models.base.generate_url_string', return_value="a1b2c3") as mocked_generate_url_string:
obj.generate_short_url()
</code></pre>
<p>This is a chunk of code from the file 'prj.apps.app.models.base' (file which imports function 'generate_url_string' which is being mocked):</p>
<pre><code>from ..utils import generate_url_string
.....................
def generate_short_url(self):
short_url = generate_url_string()
while MainObj.objects.filter(short_url=short_url).count():
short_url = generate_url_string()
return short_url
</code></pre>
<p>I want to show in the unit test that the function 'generate_short_url' doesn't return repeated values if some objects in the system already have the same short_urls. I mocked 'generate_url_string' with a predefined return value for this purpose.
The problem is that I couldn't limit the number of calls of the mocked function that return this value, and as a result the code goes into an infinite loop.
I would like my function to return the predefined result ('a1b2c3') only once. After that I want the function to work as usual. Something like this:</p>
<pre><code>with unittest.mock.patch('prj.apps.app.models.base.generate_url_string', return_value="a1b2c3", times_to_call=1) as mocked_generate_url_string:
obj.generate_short_url()
</code></pre>
<p>But I don't see any attribute like 'times_to_call' in the mock library.
Is there any way to handle that?</p>
| 0
|
2016-09-13T13:57:22Z
| 39,472,541
|
<p>Define a generator that first yields the fixed value, then yields the return value of the real function (which is passed as an argument to avoid calling the patched value).</p>
<pre><code>def mocked(x):
yield "a1b2c3"
while True:
yield x()
</code></pre>
<p>Then, use the generator as the side effect of the patched function.</p>
<pre><code>with unittest.mock.patch(
'prj.apps.app.models.base.generate_url_string',
side_effect=mocked(prj.apps.app.models.base.generate_url_string)) as mocked_generate_url_string:
obj.generate_short_url()
</code></pre>
| 3
|
2016-09-13T14:29:07Z
|
[
"python",
"django",
"unit-testing",
"django-models",
"mocking"
] |
Tkinter grid method
| 39,471,932
|
<p>I'm using Tkinter to create a GUI for my computer science coursework based on steganography. I'm using the <code>.grid()</code> function on the widgets in my window to lay them out, however I can't get this particular part to look how I want it to.</p>
<p>Here's what my GUI currently looks like: <a href="http://imgur.com/LNEZtEL" rel="nofollow">http://imgur.com/LNEZtEL</a>
(or just the part with the error).</p>
<p>I want the remaining characters label to sit directly underneath the text entry box, but for some reason row 4 starts a large way down underneath the box. If I label the GUI with columns and rows anchored north west it looks like this: <a href="http://imgur.com/a/V7dTW" rel="nofollow">http://imgur.com/a/V7dTW</a>.</p>
<p>If I shrink the image box on the left, it looks how I want, however I don't want the image this small: <a href="http://imgur.com/a/0Dudu" rel="nofollow">http://imgur.com/a/0Dudu</a>.</p>
<p>The image box has a rowspan of 2, so what is causing the 4th row to start so low down from the text entry box? Here's roughly what I want the GUI to look like: <a href="http://imgur.com/a/ck04A" rel="nofollow">http://imgur.com/a/ck04A</a>.</p>
<p>Full code:</p>
<pre><code>imageButton = Button(root, text="Add Image", command = add_image)
imageButton.grid(row = 2, columnspan = 2, sticky = W, padx = 30, pady = 20)
steg_widgets.append(imageButton)
image = Image.open("square.jpg")
image = image.resize((250,250))
photo = ImageTk.PhotoImage(image)
pictureLabel = Label(root, image = photo)
pictureLabel.image = photo
pictureLabel.grid(column = 0, row = 3, columnspan = 2, rowspan = 2, padx = 20, pady = (0, 20), sticky = NW)
steg_widgets.append(pictureLabel)
nameLabel = Label(root, text = "Brandon Edwards - OCR Computer Science Coursework 2016/2017")
nameLabel.grid(row = 0, column = 2, columnspan = 2, padx = (0, 20), pady = 10)
steg_widgets.append(nameLabel)
inputTextLabel = Label(root, text = "Enter text:")
inputTextLabel.grid(row = 2, column = 2, sticky = W)
steg_widgets.append(inputTextLabel)
startButton = Button(root, text="Go!", command = start_stega)
startButton.grid(row = 2, column = 2, sticky = E)
steg_widgets.append(startButton)
inputTextBox = Text(root, height = 10, width = 30)
inputTextBox.grid(row = 3, column = 2, sticky = NW)
steg_widgets.append(inputTextBox)
maxCharLabel = Label(root, text = "Remaining characters:")
maxCharLabel.grid(row = 4, column = 2, sticky = NW)
steg_widgets.append(maxCharLabel)
saveButton = Button(root, text="Save Image", command = save_image)
saveButton.grid(row = 2, column = 3, sticky = W)
steg_widgets.append(saveButton)
</code></pre>
| -1
|
2016-09-13T13:57:31Z
| 39,472,278
|
<p>I recommend breaking your UI down into logical sections, and laying out each section separately. </p>
<p>For example, you clearly have two distinct sections: the image and button on the left, and the other widgets on the right. Start by creating containers for those two groups:</p>
<pre><code>import Tkinter as tk
...
left_side = tk.Frame(root)
right_side = tk.Frame(root)
</code></pre>
<p>Since they are side-by-side, <code>pack</code> is the simplest way to lay them out:</p>
<pre><code>left_side.pack(side="left", fill="y", expand=False)
right_side.pack(side="right", fill="both", expand=True)
</code></pre>
<p>Next, you can focus on just one side. You can use <code>pack</code> or <code>grid</code>. This uses <code>grid</code> for illustrative purposes:</p>
<pre><code>image = tk.Canvas(left_side, ...)
button = tk.Button(left_side, ...)
left_side.grid_rowconfigure(0, weight=1)
left_side.grid_columnconfigure(0, weight=1)
image.grid(row=0, column=0, sticky="nw")
button.grid(row=1, column=0, sticky="n")
</code></pre>
<p>Finally, work on the right side. Since widgets are stacked top-to-bottom, <code>pack</code> is the natural choice:</p>
<pre><code>l1 = tk.Label(right_side, text="Enter text:")
l2 = tk.Label(right_side, text="Remaining characters")
text = tk.Text(right_side)
l1.pack(side="top", fill="x")
text.pack(side="top", fill="both", expand=True)
l2.pack(side="top", fill="x")
</code></pre>
| 0
|
2016-09-13T14:15:38Z
|
[
"python",
"tkinter"
] |
Plotting projectile motion of 1 y-position values vs. 2 x-position values using matplotlib and numpy
| 39,471,954
|
<p>Hi, I'm trying to get a plot of the trajectory of a mass under projectile motion: one with a force acting on the horizontal axis and one without (basically 2 sets of x-values plotted against 1 set of y-values). Here's what I have so far. I'm new to programming and I can't seem to figure out where this went wrong. Hope you guys can help me. Thank you!</p>
<pre><code>import numpy as np
import matplotlib.pyplot as pl
def position(y0, v0, theta, g, t):
y= y0 + v0*np.sin(theta)*t + (g*t**2)/2
return y
def position2(x0, v0, theta, c, e, alpha, t):
x1 = x0 + v0*(np.cos(theta))*t + c*(t*(e-1)+(2-2*e)/alpha)
return x1
def position3(x0, v0, theta, t):
x2 = x0 + v0*(np.cos(theta))*t
return x2
t = np.linspace(0,10,1000)
#part1
m = 1
theta = 45
y0 = 2
x0 = 0
v0 = 3
k = 1
alpha = 0.5
g = -9.8
c = (-k/m)*(1/alpha**2)
e = -(np.e**(-alpha*t))
x1 = []
x2 = []
y = []
for a in t:
x1_data = position2(x0, v0, theta, c, e, alpha, t)
x1.append(x1_data)
x2_data = position3(x0, v0, theta, t)
x2.append(x2_data)
y_data = position(y0, v0, theta, g, t)
y.append(y_data)
print x1_data
print x2_data
print y_data
pl.title('Constant and Time-Dependent Forces')
pl.xlabel(b'x-position')
pl.ylabel(b'y-position')
x1label = 'projectile 1'
x2label = "'normal' projectile"
plot1 = pl.plot(x1_data, y, 'r')
plot2 = pl.plot(x2_data, y, 'b')
pl.legend()
pl.show()
</code></pre>
| 0
|
2016-09-13T13:58:48Z
| 39,472,745
|
<p>I went through your code since I am new to <code>matplotlib</code> and wanted to play a bit with it. The only mistake I found is in the for loop, where you do <code>for a in t:</code> but end up passing <code>t</code> to the functions instead of <code>a</code>.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as pl
sin = np.sin
cos = np.cos
pi = np.pi
def y_position(y0, v0, phi, g, t):
y_t = y0 + v0 * sin(phi) * t + (g * t**2) / 2
return y_t
def x_position_force(x0, v0, phi, k, m, alpha, t):
term1 = (-k / m) * (1 / alpha ** 2)
term2 = -np.e ** (-alpha * t)
x_t = x0 + v0 * cos(phi) * t + term1 * (t * (term2 - 1) + (2 - 2 * term2) / alpha)
return x_t
def x_position_no_force(x0, v0, phi, t):
x_t = x0 + v0 * cos(phi) * t
return x_t
time = np.linspace(0, 10, 100)
#------------- I N P U T -------------#
x_init = 0
y_init = 2
v_init = 3
theta = 45
gravity = -9.8
m = 1
k = 1
alpha = 0.5
#------------- I N P U T -------------#
x_with_force = []
x_with_no_force = []
y = []
for time_i in time:
x_with_force.append(x_position_force(x_init, v_init, theta, k, m, alpha, time_i))
x_with_no_force.append(x_position_no_force(x_init, v_init, theta, time_i))
y.append(y_position(y_init, v_init, theta, gravity, time_i))
# print(x1_data)
# print(x2_data)
# print(y_data)
pl.subplot(211)
pl.title('Constant and Time-Dependent Forces')
pl.xlabel('time')
plot1 = pl.plot(time, x_with_force, 'r', label='x_coord_dynamicF')
plot2 = pl.plot(time, x_with_no_force, 'g', label='x_coord_staticF')
plot3 = pl.plot(time, y, 'b', label='y_coord')
pl.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3, ncol=2, mode="expand", borderaxespad=0.)
pl.subplot(212)
pl.title('Trajectory (x,y)')
pl.xlabel('X')
pl.ylabel('Y')
plot4 = pl.plot(x_with_force, y, 'r^')
plot5 = pl.plot(x_with_no_force, y, 'b*')
pl.show()
</code></pre>
<p>I changed a number of things though to make the code inline with <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow">PEP8</a>. In my opinion it is the use of bad variable names that lead you to the mistake you did. So i would recommend taking the time to type those few extra characters that ultimately help you and the people reading your code.</p>
| 0
|
2016-09-13T14:39:29Z
|
[
"python",
"numpy",
"matplotlib",
"plot",
"projectile"
] |
Reproducible builds in python
| 39,471,960
|
<p>I need to ship a compiled version of a python script and be able to prove (using a hash) that the compiled file is indeed the same as the original one.</p>
<p>What we use so far is a simple:</p>
<pre><code>find . -name "*.py" -print0 | xargs -0 python2 -m py_compile
</code></pre>
<p>The issue is that this is not reproducible (I'm not sure what the fluctuating factors are, but 2 executions will not give us the same .pyc for the same python file) and forces us to always ship the same compiled version instead of being able to just give the build script to anyone to produce a new compiled version.</p>
<p>Is there a way to achieve that?</p>
<p>Thanks</p>
| 4
|
2016-09-13T13:59:11Z
| 39,472,343
|
<p>Compiled Python files include a four-byte magic number and the four-byte datetime of compilation. This probably accounts for the discrepancies you are seeing.</p>
<p>If you omit bytes 5-8 from the checksumming process then you should see constant checksums for a given version of Python.</p>
<p>The format of the <code>.pyc</code> file is given in <a href="http://nedbatchelder.com/blog/200804/the_structure_of_pyc_files.html">this blog post</a> by Ned Batchelder.</p>
| 6
|
2016-09-13T14:19:07Z
|
[
"python",
"binary-reproducibility"
] |
How to animate the colorbar in matplotlib
| 39,472,017
|
<p>I have an animation where the range of the data varies a lot. I would like to have a <code>colorbar</code> which tracks the max and the min of the data (i.e. I would like it not to be fixed). The question is how to do this.</p>
<p>Ideally I would like the <code>colorbar</code> to be on its own axis.</p>
<p>I have tried the following four things</p>
<h2>1. Naive approach</h2>
<p>The problem: A new colorbar is plottet for each frame</p>
<pre><code>#!/usr/bin/env python
"""
An animated image
"""
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig = plt.figure()
ax = fig.add_subplot(111)
def f(x, y):
return np.exp(x) + np.sin(y)
x = np.linspace(0, 1, 120)
y = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
frames = []
for i in range(10):
x += 1
curVals = f(x, y)
vmax = np.max(curVals)
vmin = np.min(curVals)
levels = np.linspace(vmin, vmax, 200, endpoint = True)
frame = ax.contourf(curVals, vmax=vmax, vmin=vmin, levels=levels)
cbar = fig.colorbar(frame)
frames.append(frame.collections)
ani = animation.ArtistAnimation(fig, frames, blit=False)
plt.show()
</code></pre>
<h2>2. Adding to the images</h2>
<p>Changing the for loop above to</p>
<pre><code>initFrame = ax.contourf(f(x,y))
cbar = fig.colorbar(initFrame)
for i in range(10):
x += 1
curVals = f(x, y)
vmax = np.max(curVals)
vmin = np.min(curVals)
levels = np.linspace(vmin, vmax, 200, endpoint = True)
frame = ax.contourf(curVals, vmax=vmax, vmin=vmin, levels=levels)
cbar.set_clim(vmin = vmin, vmax = vmax)
cbar.draw_all()
frames.append(frame.collections + [cbar])
</code></pre>
<p>The problem: This raises</p>
<pre><code>AttributeError: 'Colorbar' object has no attribute 'set_visible'
</code></pre>
<h2>3. Plotting on its own axis</h2>
<p>The problem: The <code>colorbar</code> is not updated.</p>
<pre><code> #!/usr/bin/env python
"""
An animated image
"""
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
def f(x, y):
return np.exp(x) + np.sin(y)
x = np.linspace(0, 1, 120)
y = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
frames = []
for i in range(10):
x += 1
curVals = f(x, y)
vmax = np.max(curVals)
vmin = np.min(curVals)
levels = np.linspace(vmin, vmax, 200, endpoint = True)
frame = ax1.contourf(curVals, vmax=vmax, vmin=vmin, levels=levels)
cbar = fig.colorbar(frame, cax=ax2) # Colorbar does not update
frames.append(frame.collections)
ani = animation.ArtistAnimation(fig, frames, blit=False)
plt.show()
</code></pre>
<h2>4. A combination of 2. and 3.</h2>
<p>The problem: The <code>colorbar</code> is constant.</p>
<p>A similar question is posted <a href="http://stackoverflow.com/questions/31562176/matplotlib-animation-with-changing-colorbar-and-title-for-each-frame">here</a>, but it looks like the OP is satisfied with a fixed <code>colorbar</code>.</p>
| 5
|
2016-09-13T14:02:24Z
| 39,596,853
|
<p>While I'm not sure how to do this specifically using an <code>ArtistAnimation</code>, using a <code>FuncAnimation</code> is fairly straightforward. If I make the following modifications to your "naive" version 1 it works.</p>
<p><strong>Modified Version 1</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig = plt.figure()
ax = fig.add_subplot(111)
# I like to position my colorbars this way, but you don't have to
div = make_axes_locatable(ax)
cax = div.append_axes('right', '5%', '5%')
def f(x, y):
return np.exp(x) + np.sin(y)
x = np.linspace(0, 1, 120)
y = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
frames = []
for i in range(10):
x += 1
curVals = f(x, y)
frames.append(curVals)
cv0 = frames[0]
cf = ax.contourf(cv0, 200)
cb = fig.colorbar(cf, cax=cax)
tx = ax.set_title('Frame 0')
def animate(i):
arr = frames[i]
vmax = np.max(arr)
vmin = np.min(arr)
levels = np.linspace(vmin, vmax, 200, endpoint = True)
cf = ax.contourf(arr, vmax=vmax, vmin=vmin, levels=levels)
cax.cla()
fig.colorbar(cf, cax=cax)
tx.set_text('Frame {0}'.format(i))
ani = animation.FuncAnimation(fig, animate, frames=10)
plt.show()
</code></pre>
<p>The main difference is that I do the levels calculations and contouring in a function instead of creating a list of artists. The colorbar works because you can clear the axes from the previous frame and redo it every frame.</p>
<p>Doing this redo is necessary when using <code>contour</code> or <code>contourf</code>, because you can't just dynamically change the data. However, as you have plotted so many contour levels and the result looks smooth, I think you may be better off using <code>imshow</code> instead - it means you can actually just use the same artist and change the data, and the colorbar updates itself automatically. It's also much faster!</p>
<p><strong>Better Version</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig = plt.figure()
ax = fig.add_subplot(111)
# I like to position my colorbars this way, but you don't have to
div = make_axes_locatable(ax)
cax = div.append_axes('right', '5%', '5%')
def f(x, y):
return np.exp(x) + np.sin(y)
x = np.linspace(0, 1, 120)
y = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
# This is now a list of arrays rather than a list of artists
frames = []
for i in range(10):
x += 1
curVals = f(x, y)
frames.append(curVals)
cv0 = frames[0]
im = ax.imshow(cv0, origin='lower') # Here make an AxesImage rather than contour
cb = fig.colorbar(im, cax=cax)
tx = ax.set_title('Frame 0')
def animate(i):
arr = frames[i]
vmax = np.max(arr)
vmin = np.min(arr)
im.set_data(arr)
im.set_clim(vmin, vmax)
tx.set_text('Frame {0}'.format(i))
# In this version you don't have to do anything to the colorbar,
# it updates itself when the mappable it watches (im) changes
ani = animation.FuncAnimation(fig, animate, frames=10)
plt.show()
</code></pre>
| 1
|
2016-09-20T14:31:36Z
|
[
"python",
"animation",
"matplotlib",
"colorbar"
] |
Each row sharey individually?
| 39,472,115
|
<p>I have a two-by-two plot that I am creating dynamically. In the first row I want to plot density functions, in the second row CDFs. I want </p>
<ul>
<li>each of the columns to share x</li>
<li>each of the rows to share y</li>
</ul>
<p>That is, two objects aligned vertically have the same x-axis, and two plots aligned horizontally have the same y-axis.</p>
<p>However, <code>sharex</code> and <code>sharey</code> force them to be the same for all of the subplots. How can I fix this sort of axes sharing? I understand that I could be manually giving each axes a share partner, but that wouldn't work with the generic structure that follows: </p>
<pre><code>fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True)
for i, lam in enumerate(lams):
axesNow = [axs[i] for axs in axes] # pick the ith column from axes
for i, Param.p in enumerate(pp):
axesNow[0].plot(somethingWithPDF)
axesNow[1].plot(somethingWithCDF)
for ax in axes.flatten(): ax.legend()
</code></pre>
<p><a href="http://i.stack.imgur.com/7SxTV.png" rel="nofollow"><img src="http://i.stack.imgur.com/7SxTV.png" alt="enter image description here"></a></p>
| 0
|
2016-09-13T14:07:25Z
| 39,472,960
|
<p>What about something like this, where all axes are built individually:</p>
<pre><code>x1 = np.arange(5)
y1 = np.arange(3, 8)
ax1 = plt.subplot(223)
ax1.plot(x1, y1)
ax1.set_title("ax1")
x2 = np.arange(5, 10)
y2 = np.arange(3, 8)
ax2 = plt.subplot(224, sharey=ax1)
ax2.plot(x2, y2)
ax2.set_title("ax2")
#plt.setp(ax2.get_yticklabels(), visible=False) # Use this to hide axes labels
x3 = x1
y3 = np.arange(13, 8, -1)
ax3 = plt.subplot(221, sharex=ax1)
ax3.plot(x3, y3)
ax3.set_title("ax3")
#plt.setp(ax3.get_xticklabels(), visible=False)
x4 = x2
y4 = y3
ax4 = plt.subplot(222, sharex=ax2, sharey=ax3)
ax4.plot(x4, y4)
ax4.set_title("ax4")
#plt.setp(ax4.get_xticklabels(), visible=False)
#plt.setp(ax4.get_yticklabels(), visible=False)
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/evera.png" rel="nofollow"><img src="http://i.stack.imgur.com/evera.png" alt="enter image description here"></a></p>
| 0
|
2016-09-13T14:49:42Z
|
[
"python",
"matplotlib"
] |
Each row sharey individually?
| 39,472,115
|
<p>I have a two-by-two plot that I am creating dynamically. In the first row I want to plot density functions, in the second row CDFs. I want </p>
<ul>
<li>each of the columns to share x</li>
<li>each of the rows to share y</li>
</ul>
<p>That is, two objects aligned vertically have the same x-axis, and two plots aligned horizontally have the same y-axis.</p>
<p>However, <code>sharex</code> and <code>sharey</code> force them to be the same for all of the subplots. How can I fix this sort of axes sharing? I understand that I could be manually giving each axes a share partner, but that wouldn't work with the generic structure that follows: </p>
<pre><code>fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True)
for i, lam in enumerate(lams):
axesNow = [axs[i] for axs in axes] # pick the ith column from axes
for i, Param.p in enumerate(pp):
axesNow[0].plot(somethingWithPDF)
axesNow[1].plot(somethingWithCDF)
for ax in axes.flatten(): ax.legend()
</code></pre>
<p><a href="http://i.stack.imgur.com/7SxTV.png" rel="nofollow"><img src="http://i.stack.imgur.com/7SxTV.png" alt="enter image description here"></a></p>
| 0
|
2016-09-13T14:07:25Z
| 39,474,426
|
<p>The <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplots" rel="nofollow">pyplot.subplots documentation</a> describes the <code>'col'</code> and <code>'row'</code> options for the <code>sharex</code> and <code>sharey</code> kwargs. In particular, I think you want:</p>
<pre><code>fig, axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row')
</code></pre>
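<p>A minimal sketch (with made-up curves) of what that looks like for the 2x2 PDF/CDF layout:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 5, 100)
fig, axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row')
axes[0, 0].plot(x, np.exp(-x))          # row 0: the PDFs share a y-axis
axes[0, 1].plot(2 * x, np.exp(-x))      # column 1 keeps its own x-axis
axes[1, 0].plot(x, 1 - np.exp(-x))      # row 1: the CDFs share a y-axis
axes[1, 1].plot(2 * x, 1 - np.exp(-x))
plt.show()
</code></pre>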
| 1
|
2016-09-13T16:01:16Z
|
[
"python",
"matplotlib"
] |
Scrapy: Exclude content inside script tags in the HTML body
| 39,472,207
|
<p>I am currently extracting the entire text inside the body tag (excluding spacing like \r\n) using the following code:</p>
<pre><code>full_text = response.xpath('normalize-space(/html/body)').extract()
</code></pre>
<p>The problem is this is picking up javascript inside script tags within body.</p>
<p>Do you know how I can exclude the content within any script tags?</p>
<p>I've tried doing this but it isn't working:</p>
<pre><code>full_text = response.xpath('normalize-space(/html/body/*[not(self::script)])').extract()
</code></pre>
<p>Any help appreciated.</p>
| -1
|
2016-09-13T14:11:50Z
| 39,476,990
|
<p>You can follow the answer to this question: <a href="http://stackoverflow.com/a/19780110/2204978">Scraping text without javascript code using scrapy</a></p>
<pre><code>from w3lib.html import remove_tags, remove_tags_with_content
input = hxs.select('//div[@id="content"]').extract()
output = remove_tags(remove_tags_with_content(input, ('script', )))
</code></pre>
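<p>Adapted to the body extraction in the question, that would be something along these lines (a sketch, untested):</p>
<pre><code>from w3lib.html import remove_tags, remove_tags_with_content

body_html = response.xpath('/html/body').extract_first()
full_text = remove_tags(remove_tags_with_content(body_html, which_ones=('script',)))
</code></pre>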
| 1
|
2016-09-13T18:44:22Z
|
[
"python",
"xpath",
"scrapy"
] |
Pandas: Add Series to DataFrame ordered by column
| 39,472,287
|
<p>I am sure this was asked before, but I couldn't find it. I want to add a Series as a new column to the DataFrame. All the Series Index names are contained in one column of the DataFrame, but the Dataframe has more rows than the Series.</p>
<pre><code>DataFrame:
0 London 231
1 Beijing 328
12 New York 920
3 Singapore 1003
Series:
London AB
New York AC
Singapore B
</code></pre>
<p>and the result should look like</p>
<pre><code>0 London 231 AB
1 Beijing 328 NaN
12 New York 920 AC
3 Singapore 1003 B
</code></pre>
<p>How can I do this without loops? Thanks!</p>
| 0
|
2016-09-13T14:16:07Z
| 39,472,789
|
<ol>
<li>set <code>index</code> to city names for both <code>df</code> and <code>series</code></li>
<li>combine via pandas <code>merge</code></li>
</ol>
<hr>
<pre><code>import pandas as pd
cities = ['London', 'Beijing', 'New York', 'Singapore']
df_data = {
'col_1': [0,1,12,3],
'col_2': [231, 328, 920, 1003],
}
df = pd.DataFrame(df_data, index=cities)
cities2 = ['London','New York','Singapore']
series = pd.Series(['AB', 'AC', 'B'], index=cities2)
combined = pd.merge(
left=df,
right=pd.DataFrame(series),
how='left',
left_index=True,
right_index=True
)
print combined
</code></pre>
<hr>
<p>OUTPUT:</p>
<pre><code> col_1 col_2 0
London 0 231 AB
Beijing 1 328 NaN
New York 12 920 AC
Singapore 3 1003 B
</code></pre>
| 0
|
2016-09-13T14:41:58Z
|
[
"python",
"pandas",
"dataframe",
"merge",
"series"
] |
Pandas: Add Series to DataFrame ordered by column
| 39,472,287
|
<p>I am sure this was asked before, but I couldn't find it. I want to add a Series as a new column to the DataFrame. All the Series Index names are contained in one column of the DataFrame, but the Dataframe has more rows than the Series.</p>
<pre><code>DataFrame:
0 London 231
1 Beijing 328
12 New York 920
3 Singapore 1003
Series:
London AB
New York AC
Singapore B
</code></pre>
<p>and the result should look like</p>
<pre><code>0 London 231 AB
1 Beijing 328 NaN
12 New York 920 AC
3 Singapore 1003 B
</code></pre>
<p>How can I do this without loops? Thanks!</p>
| 0
|
2016-09-13T14:16:07Z
| 39,473,929
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow">pandas.DataFrame.merge()</a></p>
<pre><code>df = pd.DataFrame({'A': [0,1,12,3], 'B': ['London', 'Beijing', 'New York', 'Singapore'], 'C': [231, 328, 920, 1003] })
A B C
0 0 London 231
1 1 Beijing 328
2 12 New York 920
3 3 Singapore 1003
s = pd.Series(['AB', 'AC', 'B'], index=['London', 'New York', 'Singapore'])
London AB
New York AC
Singapore B
dtype: object
df2 = pd.DataFrame({'D': s.index, 'E': s.values })
D E
0 London AB
1 New York AC
2 Singapore B
</code></pre>
<p>Then, you can merge the two data frames:</p>
<pre><code>merged = df.merge(df2, how='left', left_on='B', right_on='D')
A B C D E
0 0 London 231 London AB
1 1 Beijing 328 NaN NaN
2 12 New York 920 New York AC
3 3 Singapore 1003 Singapore B
</code></pre>
<p>Finally, you can drop column <code>D</code>:</p>
<pre><code>merged = merged.drop('D', axis=1)
A B C E
0 0 London 231 AB
1 1 Beijing 328 NaN
2 12 New York 920 AC
3 3 Singapore 1003 B
</code></pre>
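<p>If you only need the one extra column, <code>Series.map</code> gives the same result without building the intermediate frame. A short sketch reusing the names from above:</p>
<pre><code># map the city names in column B through the Series; unmatched cities become NaN
df['E'] = df['B'].map(s)
</code></pre>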
| 0
|
2016-09-13T15:35:20Z
|
[
"python",
"pandas",
"dataframe",
"merge",
"series"
] |
Pandas: Add Series to DataFrame ordered by column
| 39,472,287
|
<p>I am sure this was asked before, but I couldn't find it. I want to add a Series as a new column to the DataFrame. All the Series Index names are contained in one column of the DataFrame, but the Dataframe has more rows than the Series.</p>
<pre><code>DataFrame:
0 London 231
1 Beijing 328
12 New York 920
3 Singapore 1003
Series:
London AB
New York AC
Singapore B
</code></pre>
<p>and the result should look like</p>
<pre><code>0 London 231 AB
1 Beijing 328 NaN
12 New York 920 AC
3 Singapore 1003 B
</code></pre>
<p>How can I do this without loops? Thanks!</p>
| 0
|
2016-09-13T14:16:07Z
| 39,496,867
|
<p>Based on @Joe R's solution, with a small modification. Say <code>df</code> is your DataFrame and <code>s</code> is your Series:</p>
<pre><code># turn the Series into a two-column frame: its former index plus its values
s = s.to_frame().reset_index()
# left-merge on the city column, then keep only the wanted columns by position
df = df.merge(s, how='left', left_on=df['B'], right_on=s['index']).ix[:, [0, 1, 3]]
</code></pre>
| 0
|
2016-09-14T18:00:19Z
|
[
"python",
"pandas",
"dataframe",
"merge",
"series"
] |
Reusing the same layers for training and testing, but creating different nodes
| 39,472,359
|
<p>I'm trying to (re)train AlexNet (based on the code <a href="http://www.cs.toronto.edu/~guerzhoy/tf_alexnet/" rel="nofollow">found here</a>) for a particular binary classification problem. Since my GPU is not very powerful, I settled on a batch size of 8 for training. This size determines the shape of the input tensor (8,227,227,3). However, one can use a larger batch size for the testing process, since there is no backprop involved.</p>
<p>My question is, how could I reuse the already trained hidden layers to create a different network on the same graph specifically for testing?</p>
<p>Here's a snippet of what I have tried to do:</p>
<pre><code>NUM_TRAINING_STEPS = 200
BATCH_SIZE = 1
LEARNING_RATE = 1e-1
IMAGE_SIZE = 227
NUM_CHANNELS = 3
NUM_CLASSES = 2

def main():
    graph = tf.Graph()
    trace = Tracer()
    train_data = readImage(filename1)
    test_data = readImage(filename2)
    train_labels = np.array([[0.0, 1.0]])

    with graph.as_default():
        batch_data = tf.placeholder(tf.float32, shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
        batch_labels = tf.placeholder(tf.float32, shape=(BATCH_SIZE, NUM_CLASSES))

        logits_training = createNetwork(batch_data)
        loss = lossLayer(logits_training, batch_labels)
        train_prediction = tf.nn.softmax(logits_training)
        print 'Prediction shape: ' + str(train_prediction.get_shape())
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=LEARNING_RATE).minimize(loss)

        test_placeholder = tf.placeholder(tf.float32, shape=(1, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
        logits_test = createNetwork(test_placeholder)
        test_prediction = tf.nn.softmax(logits_test)

    with tf.Session(graph=graph) as session:
        tf.initialize_all_variables().run()
        for step in range(NUM_TRAINING_STEPS):
            print 'Step #: ' + str(step+1)
            feed_dict = {batch_data: train_data, batch_labels: train_labels}
            _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)

            feed_dict = {batch_data: test_data, test_placeholder: test_data}
            logits1, logits2 = session.run([logits_training, logits_test], feed_dict=feed_dict)
            print (logits1 - logits2)

    return
</code></pre>
<p>I'm only training with a single image, just to evaluate whether the network is actually being trained and whether the values of logits1 and logits2 are the same. They are not; they differ by several orders of magnitude.</p>
<p>createNetwork is a function which loads the weights for AlexNet and builds the model, based on the code for the myalexnet.py script found on the page to which I linked.</p>
<p>I've tried to replicate the examples from the Udacity course on Deep Learning, in particular, assignments 3 and 4.</p>
<p>If anyone could figure out how I could use the same layers for training and testing, I would be very grateful.</p>
| 0
|
2016-09-13T14:19:53Z
| 39,495,101
|
<p>Use <code>shape=None</code> for your placeholders: <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/io_ops.html#placeholder" rel="nofollow">placeholder doc</a></p>
<p>This way you can feed data of any shape through the same graph. Another (worse) option is to recreate the graph for testing with the shapes you need and load the checkpoint that was created during training.</p>
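<p>A minimal sketch of what that could look like for the question's setup, using the same 2016-era API as the question. Here only the batch dimension is left undefined, which is enough to feed batches of different sizes through a single set of nodes (this assumes <code>createNetwork</code> can handle an unspecified batch size):</p>
<pre><code>image_input = tf.placeholder(tf.float32, shape=(None, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
label_input = tf.placeholder(tf.float32, shape=(None, NUM_CLASSES))

logits = createNetwork(image_input)   # built once, shared by training and testing
prediction = tf.nn.softmax(logits)

# both phases feed the same placeholder, only with different batch sizes, e.g.
#   session.run(prediction, feed_dict={image_input: train_batch})  # batch of 8
#   session.run(prediction, feed_dict={image_input: test_batch})   # larger test batch
</code></pre>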
| 0
|
2016-09-14T16:08:31Z
|
[
"python",
"tensorflow"
] |