Numpy: advanced slices
39,064,607
<p>I need a fast way to find the smallest element among three neighboring elements in a row, and add it to the element under the central element. For border elements only the two available upper elements are checked.</p> <p>For example I have a numpy array:</p> <pre><code> [1, 2, 3, 4, 5], [0, 0, 0, 0, 0] </code></pre> <p>I should get this:</p> <pre><code> [1, 2, 3, 4, 5], [1, 1, 2, 3, 4] </code></pre> <p>I have this code:</p> <pre><code>for h in range(1, matrix.shape[0]): matrix[h][0] = min(matrix[h - 1][0], matrix[h - 1][1]) matrix[h][1:-1] = ...(DONT KNOW, WHAT SHOULD BE HERE!!) matrix[h][-1] = min(matrix[h - 1][-2], matrix[h - 1][-1]) </code></pre> <p>How can I compute this without more <code>for</code> loops? I have a lot of data and I need it to be fast. <code>Edit:</code> David-z, here is my project) <a href="http://i.stack.imgur.com/NSJHo.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/NSJHo.jpg" alt="Seam carving"></a></p>
1
2016-08-21T13:15:11Z
39,067,003
<p>Here's a slight variation on <code>Daniel's</code> solution:</p> <p>Start with the row or list of values:</p> <pre><code>In [439]: z=[10,40,90,13,21,58,64,56,34,69] </code></pre> <p>Replicate the 1st and last values; I could do that with concatenate, indexing, or the simple <code>pad</code> (internally <code>pad</code> is rather complicated because it is so general):</p> <pre><code>In [440]: z1=np.pad(z,(1,1),'edge') In [441]: z1 Out[441]: array([10, 10, 40, 90, 13, 21, 58, 64, 56, 34, 69, 69]) </code></pre> <p>Now make the 3-row matrix (the core of Daniel's solution):</p> <pre><code>In [443]: [z1[0:-2], z1[1:-1], z1[2:]] Out[443]: [array([10, 10, 40, 90, 13, 21, 58, 64, 56, 34]), array([10, 40, 90, 13, 21, 58, 64, 56, 34, 69]), array([40, 90, 13, 21, 58, 64, 56, 34, 69, 69])] </code></pre> <p><code>np.min</code> on <code>axis=0</code> is equivalent to <code>minimum.reduce</code>:</p> <pre><code>In [444]: np.min([z1[0:-2], z1[1:-1], z1[2:]],axis=0) Out[444]: array([10, 10, 13, 13, 13, 21, 56, 34, 34, 34]) </code></pre> <p>=========</p> <p>Extending this to a 2d array:</p> <pre><code>In [454]: y=np.array(z).reshape(2,5) # same values, reshape In [455]: y1=np.pad(y,((0,0),(1,1)),'edge') # 2d pad In [456]: y1 Out[456]: array([[10, 10, 40, 90, 13, 21, 21], [58, 58, 64, 56, 34, 69, 69]]) In [457]: Y=np.array([y1[:,0:-2], y1[:,1:-1], y1[:,2:]]) In [458]: Y # 3d array Out[458]: array([[[10, 10, 40, 90, 13], [58, 58, 64, 56, 34]], [[10, 40, 90, 13, 21], [58, 64, 56, 34, 69]], [[40, 90, 13, 21, 21], [64, 56, 34, 69, 69]]]) In [459]: np.min(Y,axis=0) Out[459]: array([[10, 10, 13, 13, 13], [58, 56, 34, 34, 34]]) </code></pre> <p>===============</p> <p>An <code>as_strided</code> alternative (for advanced numpy users only :))</p> <pre><code>In [462]: np.lib.stride_tricks.as_strided(z1,shape=(10,3),strides=(4,4)).T Out[462]: array([[10, 10, 40, 90, 13, 21, 58, 64, 56, 34], [10, 40, 90, 13, 21, 58, 64, 56, 34, 69], [40, 90, 13, 21, 58, 64, 56, 34, 69, 69]]) 
</code></pre>
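Pulling this back to the asker's original loop, the missing middle line can be written with the same padded-slices trick. A minimal runnable sketch (the helper name advance_row is hypothetical; the 2-row matrix is the one from the question):

```python
import numpy as np

def advance_row(matrix, h):
    # Replicate the edge values of the row above, then take the
    # elementwise minimum of the three shifted views (upper-left,
    # upper, upper-right) and add it to row h.
    prev = np.pad(matrix[h - 1], (1, 1), 'edge')
    matrix[h] += np.min([prev[:-2], prev[1:-1], prev[2:]], axis=0)

m = np.array([[1, 2, 3, 4, 5],
              [0, 0, 0, 0, 0]])
advance_row(m, 1)
# m[1] is now [1, 1, 2, 3, 4], the expected output from the question
```

Because padding replicates the border values, this one statement replaces all three lines of the asker's loop body, border columns included.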
1
2016-08-21T17:35:16Z
[ "python", "arrays", "performance", "numpy", "slice" ]
Mean Squared error in Python
39,064,684
<p>I'm trying to make a function that will calculate the mean squared error from y (true values) and y_pred (predicted ones), without using sklearn or other implementations.</p> <p>I tried this:</p> <pre><code>def mserror(y, y_pred): i=0 for i in range (len(y)): i+=1 mse = ((y - y_pred) ** 2).mean(y) return mse </code></pre> <p>Can you please point out what I'm doing wrong with the calculation and how it can be fixed?</p>
0
2016-08-21T13:23:56Z
39,065,217
<p>You are modifying the index for no reason; a <code>for</code> loop increments it anyway. You are also not using the index at all (there is no <code>y[i] - y_pred[i]</code> anywhere), so you don't need the loop in the first place.</p> <p>Use the arrays directly:</p> <pre><code>mse = np.mean((y-y_pred)**2) </code></pre>
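Put together as a runnable sketch, the corrected function needs no loop and no index bookkeeping at all:

```python
import numpy as np

def mserror(y, y_pred):
    # elementwise difference, squared, then averaged over all elements
    return np.mean((np.asarray(y) - np.asarray(y_pred)) ** 2)

mse = mserror([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # (0 + 0 + 4) / 3
```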
0
2016-08-21T14:20:08Z
[ "python", "numpy", "scikit-learn" ]
Regular expression for anything except ']]' characters
39,064,725
<p>I posted a question earlier but it wasn't very clear, so here it goes again:</p> <p>I have a string which looks like this:</p> <pre><code>{ "1000":[ [some whitespace and nonwhitespace characters], [some whitespace and nonwhitespace characters], .... [some whitespace and nonwhitespace characters]], "1001":[ [some whitespace and nonwhitespace characters], [some whitespace and nonwhitespace characters], .... [some whitespace and nonwhitespace characters]], ... } </code></pre> <p>and I want to extract a record like the one shown below using regex:</p> <pre><code>"1000":[ [some whitespace and nonwhitespace characters], [some whitespace and nonwhitespace characters], .... [some whitespace and nonwhitespace characters]] </code></pre> <p>I'm doing this in <strong>python</strong> using the <strong>re</strong> module.</p> <p>For this I have the following pattern in mind:</p> <pre><code>' "[0-9]{4}":(anything except ]] ) ' </code></pre> <p>but I can't figure out what the pattern for <strong>anything except ']]'</strong> should be.</p> <p>Could anyone help?</p>
-1
2016-08-21T13:28:52Z
39,064,800
<p>A regex solution can be achieved using something like:</p> <pre><code>\d{4}":(.*?)]] </code></pre> <p>But you really <em>don't</em> want to use regex here if your string is a valid JSON. It's very natural for Python to work with JSONs. Assuming your data is:</p> <pre><code>data = {key1: [[str1], [str2], ...], ...} </code></pre> <p>You can simply grab the value of <code>key1</code> by accessing the corresponding key:</p> <pre><code>data[key1] </code></pre> <p>this will give you:</p>
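To make the JSON suggestion concrete, a minimal sketch (the string below is a made-up miniature of the asker's data, not their actual input):

```python
import json

raw = '{"1000": [["a b"], ["c d"]], "1001": [["e"]]}'
data = json.loads(raw)    # parse the whole string once
record = data["1000"]     # grab a record by its 4-digit key; no regex needed
```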
0
2016-08-21T13:36:23Z
[ "python", "regex", "regex-negation" ]
Giving input to terminal in python
39,064,796
<p>I'm writing code to read serial input. Once the serial input has been read, I have to add a timestamp below it, and then the output from a certain program. To get that output, I want Python to write a command to the terminal and then read the output that appears there. Could you suggest how to do the last step: writing to the terminal, then reading the output? I'm a beginner in Python, so please excuse me if this sounds trivial. </p>
0
2016-08-21T13:36:05Z
39,064,972
<p>You would need to have python implemented into the software. </p> <p>Also, I believe this is a task for GCSE Computing this year as I was privileged enough to choose what test we are doing and there was a question about serial numbers.</p>
0
2016-08-21T13:54:02Z
[ "python", "python-2.7", "terminal" ]
Giving input to terminal in python
39,064,796
<p>I'm writing code to read serial input. Once the serial input has been read, I have to add a timestamp below it, and then the output from a certain program. To get that output, I want Python to write a command to the terminal and then read the output that appears there. Could you suggest how to do the last step: writing to the terminal, then reading the output? I'm a beginner in Python, so please excuse me if this sounds trivial. </p>
0
2016-08-21T13:36:05Z
39,067,408
<p>To run a command and get the returned output you can use the subprocess module's check_output function.</p> <pre><code>import subprocess output = subprocess.check_output("ls -a", shell=True) </code></pre> <p>That will return the current directory contents on macOS/Linux and store the output for you to read later in your program. <code>shell=True</code> allows you to pass the command as a single string, <code>"ls -a"</code>. Without <code>shell=True</code>, you pass the command as a list of its parts, e.g. <code>subprocess.check_output(["ls", "-a"])</code>. Subprocess is a great module included with Python that covers most command-line execution needs.</p> <p>So with subprocess you should be able to call another program, script, or command by running a shell command.</p>
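A sketch of the list form described above. The Python interpreter itself stands in for the command here so the example runs the same everywhere; any real program path could be substituted:

```python
import subprocess
import sys

# list form: each part of the command is a separate element, no shell involved
output = subprocess.check_output([sys.executable, "-c", "print('hello')"])
text = output.decode().strip()   # check_output returns bytes
```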
0
2016-08-21T18:14:54Z
[ "python", "python-2.7", "terminal" ]
Numpy arrays means
39,064,851
<p>I have an array in numpy with values like these:</p> <pre><code>array([ 22.1, 10.4, 9.3, 18.5, 12.9, 7.2, 11.8, 13.2, 4.8, 10.6, 8.6, 17.4, 9.2, 9.7, 19. , 22.4, 12.5, 24.4, 11.3, 14.6, 18. , 12.5, 5.6, 15.5, 9.7, 12. , 15. ]) </code></pre> <p>How can I replace all of the values in the array with the mean of the array (the same mean for every element)?</p>
0
2016-08-21T13:41:28Z
39,064,997
<p>If I understand your question correctly, you can use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.place.html" rel="nofollow">np.place()</a></p> <pre><code>arr = np.array([ 22.1, 10.4, 9.3, 18.5, 12.9, 7.2, 11.8, 13.2, 4.8, 10.6, 8.6, 17.4, 9.2, 9.7, 19. , 22.4, 12.5, 24.4, 11.3, 14.6, 18. , 12.5, 5.6, 15.5, 9.7, 12. , 15. ]) np.place(arr, arr, np.mean(arr)) print(arr) [ 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667] </code></pre>
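One caveat worth noting with this approach: the second argument of np.place is a boolean mask, and passing arr itself treats zero entries as False, so they would be left unreplaced. If zeros can occur in the data, an explicit all-True mask is safer. A sketch with a hypothetical array containing a zero:

```python
import numpy as np

arr = np.array([0.0, 2.0, 4.0])
# an all-True mask guarantees every element is replaced, zeros included
np.place(arr, np.ones_like(arr, dtype=bool), np.mean(arr))
# arr is now [2.0, 2.0, 2.0]; with mask=arr the leading 0.0 would survive
```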
-1
2016-08-21T13:56:42Z
[ "python", "arrays", "numpy" ]
Numpy arrays means
39,064,851
<p>I have an array in numpy with values like these:</p> <pre><code>array([ 22.1, 10.4, 9.3, 18.5, 12.9, 7.2, 11.8, 13.2, 4.8, 10.6, 8.6, 17.4, 9.2, 9.7, 19. , 22.4, 12.5, 24.4, 11.3, 14.6, 18. , 12.5, 5.6, 15.5, 9.7, 12. , 15. ]) </code></pre> <p>How can I replace all of the values in the array with the mean of the array (the same mean for every element)?</p>
0
2016-08-21T13:41:28Z
39,065,001
<p>Do you need something like this?:</p> <pre><code>import numpy as np a = np.array([ 22.1, 10.4, 9.3, 18.5, 12.9, 7.2, 11.8, 13.2, 4.8, 10.6, 8.6, 17.4, 9.2, 9.7, 19. , 22.4, 12.5, 24.4, 11.3, 14.6, 18. , 12.5, 5.6, 15.5, 9.7, 12. , 15. ]) a[:] = np.mean(a) print a </code></pre> <p>This gives:</p> <pre><code>[ 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667 13.26666667] </code></pre>
1
2016-08-21T13:57:03Z
[ "python", "arrays", "numpy" ]
Custom argumental TypeError messages
39,064,894
<p>I would like to return a custom set of messages based on the specific cause of the <code>TypeError</code>.</p> <pre><code>def f(x, y): value = x + y return "Success! ({})".format(value) def safe(function, *args): try: result = function(*args) except TypeError: if "not_enough_args": # What actual condition can go here? return "Not enough arguments." elif "too_many_args": # What actual condition can go here? return "Too many arguments." else: return "A TypeError occurred!" else: return result safe(f, 2) # "Not enough arguments." safe(f, 2, 2) # "Success!" safe(f, 2, 2, 2) # "Too many arguments." safe(f, '2', 2) # "A TypeError occurred!" </code></pre> <p>Use of the actual <code>TypeError</code> object would be preferable.</p>
1
2016-08-21T13:46:21Z
39,065,046
<p>Here's a possible solution to your question:</p> <pre><code>def f(x, y): value = x + y return "Success! ({})".format(value) def safe(function, *args): try: result = function(*args) except TypeError as e: return str(e) else: return result def safe2(function, *args): try: result = function(*args) except TypeError as e: if "required positional argument" in str(e): return "Not enough arguments." elif "positional arguments but" in str(e): return "Too many arguments." else: return "A TypeError occurred!" else: return result print(safe(f, 2)) print(safe(f, 2, 2)) print(safe(f, 2, 2, 2)) print(safe(f, '2', 2)) print('-' * 80) print(safe2(f, 2)) print(safe2(f, 2, 2)) print(safe2(f, 2, 2, 2)) print(safe2(f, '2', 2)) </code></pre> <p>If you don't need to identify which <code>TypeError</code> was raised (<code>safe</code>), returning the exception can do the job. If you do (<code>safe2</code>), you can parse the exception message as a string.</p> <p>If you don't want to parse strings and would rather have custom <code>TypeError</code> exceptions, you'd need to subclass <code>TypeError</code>, because the <a href="https://docs.python.org/2/library/exceptions.html#exception-hierarchy" rel="nofollow">existing hierarchy</a> doesn't provide specializations of it.</p>
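An alternative that avoids matching on the message text (which changes between Python versions) is to check the call signature up front with inspect.signature. This is a hypothetical sketch, not part of the original answer:

```python
import inspect

def f(x, y):
    return "Success! ({})".format(x + y)

def safe(function, *args):
    # bind() raises TypeError on an argument-count mismatch
    # without actually calling the function
    try:
        inspect.signature(function).bind(*args)
    except TypeError as e:
        return "Wrong number of arguments: {}".format(e)
    try:
        return function(*args)
    except TypeError:
        return "A TypeError occurred!"
```

With this, safe(f, 2, 2) returns "Success! (4)"; safe(f, 2) and safe(f, 2, 2, 2) report the argument-count problem; and safe(f, '2', 2) still falls through to the generic message, since the signature binds fine but the body raises.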
0
2016-08-21T14:01:04Z
[ "python", "python-3.x", "exception", "response" ]
Custom argumental TypeError messages
39,064,894
<p>I would like to return a custom set of messages based on the specific cause of the <code>TypeError</code>.</p> <pre><code>def f(x, y): value = x + y return "Success! ({})".format(value) def safe(function, *args): try: result = function(*args) except TypeError: if "not_enough_args": # What actual condition can go here? return "Not enough arguments." elif "too_many_args": # What actual condition can go here? return "Too many arguments." else: return "A TypeError occurred!" else: return result safe(f, 2) # "Not enough arguments." safe(f, 2, 2) # "Success!" safe(f, 2, 2, 2) # "Too many arguments." safe(f, '2', 2) # "A TypeError occurred!" </code></pre> <p>Use of the actual <code>TypeError</code> object would be preferable.</p>
1
2016-08-21T13:46:21Z
39,065,053
<p>I wouldn't mess around with changing the nature of the exception object, or keep a long list of conditional statements to print out your own custom messages. Instead, just return the exception object. You can do this in your <code>except</code> line as: </p> <pre><code>except TypeError as e </code></pre> <p>So now you have an object <code>e</code> for your <code>TypeError</code> exception. From there, with the object in hand, you can do whatever you want.</p> <p>Observe the example below. You get back whatever exception happened, without modifying its nature; you use the object as is and can print out the exact message of the failure encountered.</p> <pre><code>def f(x, y): value = x + y return "Success! ({})".format(value) def safe(function, *args): try: result = function(*args) except TypeError as e: return e else: return result print(safe(f, 5, 6, 7)) # f() takes 2 positional arguments but 3 were given print(safe(6)) # 'int' object is not callable </code></pre> <p>Furthermore, outside of the <code>safe</code> method, you can simply check the type coming back from it and act accordingly. </p> <p>Example: </p> <pre><code>result = safe(5) if type(result) is TypeError: print(result) </code></pre>
0
2016-08-21T14:01:25Z
[ "python", "python-3.x", "exception", "response" ]
Using save method to extend M2M relationship
39,064,980
<p>I have a model like this </p> <pre><code>class Authority(models.Model): name=models.CharField(max_length=100) country=models.ForeignKey(Country) category=models.ForeignKey(Category) competitors=models.ManyToManyField("self",related_name="competitors") </code></pre> <p>I want authorities with the same country and category to automatically be given an M2M relationship, so I did this:</p> <pre><code>def save(self,*args,**kwargs): z=Authority.objects.filter(country=self.country).filter(category=self.category) this_authority=Authority.objects.get(id=self.id) for a in z: this_authority.competitors.add(a) super(Authority,self).save(*args,**kwargs) </code></pre> <p>It wasn't working and wasn't raising any error. I also tried this:</p> <pre><code>def save(self,*args,**kwargs): z=Authority.objects.filter(country=self.country).filter(category=self.category) this_authority=Authority.objects.get(id=self.id) self.competitors=z super(Authority,self).save(*args,**kwargs) </code></pre> <p>What might be wrong with my code? Thanks in advance.</p>
0
2016-08-21T13:55:12Z
39,067,346
<p>The reason this isn't working the way you expect is because of how Django handles creating m2m relationships in the database. Long story very short, when you save something like a new <code>Authority</code> to the database, Django first writes the new object then goes back in and writes the m2m relationships for that new object. As a result, it's tough to do anything useful to m2m relationships in a custom <code>save</code> method.</p> <p>A <a href="https://docs.djangoproject.com/en/1.10/ref/signals/#django.db.models.signals.post_save" rel="nofollow">post-save signal</a> may do the trick here. <code>kwargs['created'] = True</code> if we're creating a new object and <code>kwargs['instance']</code> is the instance whose save fired off the signal receiver.</p> <pre><code>from django.db.models.signals import post_save from django.dispatch import receiver @receiver(post_save, sender = Authority) def update_m2m_relationships(sender, **kwargs): if kwargs['created']: #only fire when creating new objects competitors_to_add = Authority.objects.filter( country = kwargs['instance'].country, category = kwargs['instance'].category ) for c in competitors_to_add: c.competitors.add(kwargs['instance']) c.save() #not creating a new object; this receiver does not fire here kwargs['instance'].competitors.add(c) #all competitors have been added to the instance's m2m field kwargs['instance'].save() </code></pre> <p>It's important to only fire this when creating new objects. If you don't include that restriction, then the receiver will trigger itself as you update other objects in your for loop.</p> <p>I haven't tested this out but I think it'll work. Let me know if it doesn't and I'll do my best to help. </p>
0
2016-08-21T18:09:13Z
[ "python", "django", "python-2.7", "django-models" ]
aggregating hourly time series by Day via pd.TimeGrouper('D'); issue @ timestamp 00:00:00 (hour 24)
39,065,034
<p><strong>df:</strong></p> <pre><code> hour rev datetime 2016-05-01 01:00:00 1 -0.02 2016-05-01 02:00:00 2 -0.01 2016-05-01 03:00:00 3 -0.02 2016-05-01 04:00:00 4 -0.02 2016-05-01 05:00:00 5 -0.01 2016-05-01 06:00:00 6 -0.03 2016-05-01 07:00:00 7 -0.10 2016-05-01 08:00:00 8 -0.09 2016-05-01 09:00:00 9 -0.08 2016-05-01 10:00:00 10 -0.10 2016-05-01 11:00:00 11 -0.12 2016-05-01 12:00:00 12 -0.14 2016-05-01 13:00:00 13 -0.17 2016-05-01 14:00:00 14 -0.16 2016-05-01 15:00:00 15 -0.15 2016-05-01 16:00:00 16 -0.15 2016-05-01 17:00:00 17 -0.17 2016-05-01 18:00:00 18 -0.16 2016-05-01 19:00:00 19 -0.18 2016-05-01 20:00:00 20 -0.17 2016-05-01 21:00:00 21 -0.14 2016-05-01 22:00:00 22 -0.16 2016-05-01 23:00:00 23 -0.08 2016-05-02 00:00:00 24 -0.06 </code></pre> <p><strong>df.reset_index().to_dict('rec'):</strong></p> <pre><code>[{'datetime': Timestamp('2016-05-01 01:00:00'), 'hour': 1L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 02:00:00'), 'hour': 2L, 'rev': -0.01}, {'datetime': Timestamp('2016-05-01 03:00:00'), 'hour': 3L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 04:00:00'), 'hour': 4L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 05:00:00'), 'hour': 5L, 'rev': -0.01}, {'datetime': Timestamp('2016-05-01 06:00:00'), 'hour': 6L, 'rev': -0.03}, {'datetime': Timestamp('2016-05-01 07:00:00'), 'hour': 7L, 'rev': -0.1}, {'datetime': Timestamp('2016-05-01 08:00:00'), 'hour': 8L, 'rev': -0.09}, {'datetime': Timestamp('2016-05-01 09:00:00'), 'hour': 9L, 'rev': -0.08}, {'datetime': Timestamp('2016-05-01 10:00:00'), 'hour': 10L, 'rev': -0.1}, {'datetime': Timestamp('2016-05-01 11:00:00'), 'hour': 11L, 'rev': -0.12}, {'datetime': Timestamp('2016-05-01 12:00:00'), 'hour': 12L, 'rev': -0.14}, {'datetime': Timestamp('2016-05-01 13:00:00'), 'hour': 13L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 14:00:00'), 'hour': 14L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 15:00:00'), 'hour': 15L, 'rev': -0.15}, {'datetime': Timestamp('2016-05-01 16:00:00'), 'hour': 
16L, 'rev': -0.15}, {'datetime': Timestamp('2016-05-01 17:00:00'), 'hour': 17L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 18:00:00'), 'hour': 18L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 19:00:00'), 'hour': 19L, 'rev': -0.18}, {'datetime': Timestamp('2016-05-01 20:00:00'), 'hour': 20L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 21:00:00'), 'hour': 21L, 'rev': -0.14}, {'datetime': Timestamp('2016-05-01 22:00:00'), 'hour': 22L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 23:00:00'), 'hour': 23L, 'rev': -0.08}, {'datetime': Timestamp('2016-05-02 00:00:00'), 'hour': 24L, 'rev': -0.06}] df.set_index('datetime', inplace=True) </code></pre> <p>I want to aggregate the data by <strong>DAY</strong>. So I do:</p> <pre><code>dfgrped = df.groupby([pd.TimeGrouper('D')]) </code></pre> <p>I want to compute stats like the <strong>sum</strong>:</p> <pre><code>dfgrped.agg(sum) hour rev datetime 2016-05-01 276 -2.43 2016-05-02 24 -0.06 </code></pre> <p>As you can see the aggregation occurs for <code>2016-05-01</code> and <code>2016-05-02</code>. </p> <p>Notice, that the last hourly data entry in df occurs at 2016-05-02 00:00:00, which is meant to be the data for the last hour of the previous day i.e. 24 hourly data points for each day.</p> <p>However, given the datetime stamp, things don't work out the way I intended. I want all <code>24</code> hours to be aggregated for <code>2016-05-01</code>.</p> <p>I imagine this sort of issue must arise often in various applications when a measurement is taken at the end of the hour. This isn't a problem until the last hour, which occurs at the <code>00:00:00</code> timestamp of the following day.</p> <p>How to address this issue in pandas? </p>
1
2016-08-21T13:59:44Z
39,065,432
<p>A slightly hacky solution: if every timestamp for a day falls at least one second after midnight, you can subtract one second from the datetime column and then group by date, which seems to work for your case:</p> <pre><code>from datetime import timedelta import pandas as pd df.groupby((df.datetime - timedelta(seconds = 1)).dt.date).sum() # hour rev # datetime # 2016-05-01 300 -2.49 </code></pre>
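The idea behind the one-second shift is easy to verify with the standard library alone: subtracting a second moves a midnight stamp onto the previous calendar day while leaving every other hour of that day where it was.

```python
from datetime import datetime, timedelta

midnight = datetime(2016, 5, 2, 0, 0, 0)   # the "hour 24" stamp
noon = datetime(2016, 5, 1, 12, 0, 0)      # an ordinary in-day stamp

def day_of(ts):
    # shift back one second before taking the calendar date
    return (ts - timedelta(seconds=1)).date()
# both stamps now group under 2016-05-01
```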
1
2016-08-21T14:43:44Z
[ "python", "pandas", "group-by", "aggregation" ]
aggregating hourly time series by Day via pd.TimeGrouper('D'); issue @ timestamp 00:00:00 (hour 24)
39,065,034
<p><strong>df:</strong></p> <pre><code> hour rev datetime 2016-05-01 01:00:00 1 -0.02 2016-05-01 02:00:00 2 -0.01 2016-05-01 03:00:00 3 -0.02 2016-05-01 04:00:00 4 -0.02 2016-05-01 05:00:00 5 -0.01 2016-05-01 06:00:00 6 -0.03 2016-05-01 07:00:00 7 -0.10 2016-05-01 08:00:00 8 -0.09 2016-05-01 09:00:00 9 -0.08 2016-05-01 10:00:00 10 -0.10 2016-05-01 11:00:00 11 -0.12 2016-05-01 12:00:00 12 -0.14 2016-05-01 13:00:00 13 -0.17 2016-05-01 14:00:00 14 -0.16 2016-05-01 15:00:00 15 -0.15 2016-05-01 16:00:00 16 -0.15 2016-05-01 17:00:00 17 -0.17 2016-05-01 18:00:00 18 -0.16 2016-05-01 19:00:00 19 -0.18 2016-05-01 20:00:00 20 -0.17 2016-05-01 21:00:00 21 -0.14 2016-05-01 22:00:00 22 -0.16 2016-05-01 23:00:00 23 -0.08 2016-05-02 00:00:00 24 -0.06 </code></pre> <p><strong>df.reset_index().to_dict('rec'):</strong></p> <pre><code>[{'datetime': Timestamp('2016-05-01 01:00:00'), 'hour': 1L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 02:00:00'), 'hour': 2L, 'rev': -0.01}, {'datetime': Timestamp('2016-05-01 03:00:00'), 'hour': 3L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 04:00:00'), 'hour': 4L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 05:00:00'), 'hour': 5L, 'rev': -0.01}, {'datetime': Timestamp('2016-05-01 06:00:00'), 'hour': 6L, 'rev': -0.03}, {'datetime': Timestamp('2016-05-01 07:00:00'), 'hour': 7L, 'rev': -0.1}, {'datetime': Timestamp('2016-05-01 08:00:00'), 'hour': 8L, 'rev': -0.09}, {'datetime': Timestamp('2016-05-01 09:00:00'), 'hour': 9L, 'rev': -0.08}, {'datetime': Timestamp('2016-05-01 10:00:00'), 'hour': 10L, 'rev': -0.1}, {'datetime': Timestamp('2016-05-01 11:00:00'), 'hour': 11L, 'rev': -0.12}, {'datetime': Timestamp('2016-05-01 12:00:00'), 'hour': 12L, 'rev': -0.14}, {'datetime': Timestamp('2016-05-01 13:00:00'), 'hour': 13L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 14:00:00'), 'hour': 14L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 15:00:00'), 'hour': 15L, 'rev': -0.15}, {'datetime': Timestamp('2016-05-01 16:00:00'), 'hour': 
16L, 'rev': -0.15}, {'datetime': Timestamp('2016-05-01 17:00:00'), 'hour': 17L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 18:00:00'), 'hour': 18L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 19:00:00'), 'hour': 19L, 'rev': -0.18}, {'datetime': Timestamp('2016-05-01 20:00:00'), 'hour': 20L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 21:00:00'), 'hour': 21L, 'rev': -0.14}, {'datetime': Timestamp('2016-05-01 22:00:00'), 'hour': 22L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 23:00:00'), 'hour': 23L, 'rev': -0.08}, {'datetime': Timestamp('2016-05-02 00:00:00'), 'hour': 24L, 'rev': -0.06}] df.set_index('datetime', inplace=True) </code></pre> <p>I want to aggregate the data by <strong>DAY</strong>. So I do:</p> <pre><code>dfgrped = df.groupby([pd.TimeGrouper('D')]) </code></pre> <p>I want to compute stats like the <strong>sum</strong>:</p> <pre><code>dfgrped.agg(sum) hour rev datetime 2016-05-01 276 -2.43 2016-05-02 24 -0.06 </code></pre> <p>As you can see the aggregation occurs for <code>2016-05-01</code> and <code>2016-05-02</code>. </p> <p>Notice, that the last hourly data entry in df occurs at 2016-05-02 00:00:00, which is meant to be the data for the last hour of the previous day i.e. 24 hourly data points for each day.</p> <p>However, given the datetime stamp, things don't work out the way I intended. I want all <code>24</code> hours to be aggregated for <code>2016-05-01</code>.</p> <p>I imagine this sort of issue must arise often in various applications when a measurement is taken at the end of the hour. This isn't a problem until the last hour, which occurs at the <code>00:00:00</code> timestamp of the following day.</p> <p>How to address this issue in pandas? </p>
1
2016-08-21T13:59:44Z
39,069,525
<p>Simply shift the <code>rev</code> column backward one position with <code>.shift(-1)</code> (or <code>np.roll</code>), so that each timestamp marks the period start instead of the period end. You would need to add one timestamp. </p>
0
2016-08-21T22:51:04Z
[ "python", "pandas", "group-by", "aggregation" ]
aggregating hourly time series by Day via pd.TimeGrouper('D'); issue @ timestamp 00:00:00 (hour 24)
39,065,034
<p><strong>df:</strong></p> <pre><code> hour rev datetime 2016-05-01 01:00:00 1 -0.02 2016-05-01 02:00:00 2 -0.01 2016-05-01 03:00:00 3 -0.02 2016-05-01 04:00:00 4 -0.02 2016-05-01 05:00:00 5 -0.01 2016-05-01 06:00:00 6 -0.03 2016-05-01 07:00:00 7 -0.10 2016-05-01 08:00:00 8 -0.09 2016-05-01 09:00:00 9 -0.08 2016-05-01 10:00:00 10 -0.10 2016-05-01 11:00:00 11 -0.12 2016-05-01 12:00:00 12 -0.14 2016-05-01 13:00:00 13 -0.17 2016-05-01 14:00:00 14 -0.16 2016-05-01 15:00:00 15 -0.15 2016-05-01 16:00:00 16 -0.15 2016-05-01 17:00:00 17 -0.17 2016-05-01 18:00:00 18 -0.16 2016-05-01 19:00:00 19 -0.18 2016-05-01 20:00:00 20 -0.17 2016-05-01 21:00:00 21 -0.14 2016-05-01 22:00:00 22 -0.16 2016-05-01 23:00:00 23 -0.08 2016-05-02 00:00:00 24 -0.06 </code></pre> <p><strong>df.reset_index().to_dict('rec'):</strong></p> <pre><code>[{'datetime': Timestamp('2016-05-01 01:00:00'), 'hour': 1L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 02:00:00'), 'hour': 2L, 'rev': -0.01}, {'datetime': Timestamp('2016-05-01 03:00:00'), 'hour': 3L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 04:00:00'), 'hour': 4L, 'rev': -0.02}, {'datetime': Timestamp('2016-05-01 05:00:00'), 'hour': 5L, 'rev': -0.01}, {'datetime': Timestamp('2016-05-01 06:00:00'), 'hour': 6L, 'rev': -0.03}, {'datetime': Timestamp('2016-05-01 07:00:00'), 'hour': 7L, 'rev': -0.1}, {'datetime': Timestamp('2016-05-01 08:00:00'), 'hour': 8L, 'rev': -0.09}, {'datetime': Timestamp('2016-05-01 09:00:00'), 'hour': 9L, 'rev': -0.08}, {'datetime': Timestamp('2016-05-01 10:00:00'), 'hour': 10L, 'rev': -0.1}, {'datetime': Timestamp('2016-05-01 11:00:00'), 'hour': 11L, 'rev': -0.12}, {'datetime': Timestamp('2016-05-01 12:00:00'), 'hour': 12L, 'rev': -0.14}, {'datetime': Timestamp('2016-05-01 13:00:00'), 'hour': 13L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 14:00:00'), 'hour': 14L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 15:00:00'), 'hour': 15L, 'rev': -0.15}, {'datetime': Timestamp('2016-05-01 16:00:00'), 'hour': 
16L, 'rev': -0.15}, {'datetime': Timestamp('2016-05-01 17:00:00'), 'hour': 17L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 18:00:00'), 'hour': 18L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 19:00:00'), 'hour': 19L, 'rev': -0.18}, {'datetime': Timestamp('2016-05-01 20:00:00'), 'hour': 20L, 'rev': -0.17}, {'datetime': Timestamp('2016-05-01 21:00:00'), 'hour': 21L, 'rev': -0.14}, {'datetime': Timestamp('2016-05-01 22:00:00'), 'hour': 22L, 'rev': -0.16}, {'datetime': Timestamp('2016-05-01 23:00:00'), 'hour': 23L, 'rev': -0.08}, {'datetime': Timestamp('2016-05-02 00:00:00'), 'hour': 24L, 'rev': -0.06}] df.set_index('datetime', inplace=True) </code></pre> <p>I want to aggregate the data by <strong>DAY</strong>. So I do:</p> <pre><code>dfgrped = df.groupby([pd.TimeGrouper('D')]) </code></pre> <p>I want to compute stats like the <strong>sum</strong>:</p> <pre><code>dfgrped.agg(sum) hour rev datetime 2016-05-01 276 -2.43 2016-05-02 24 -0.06 </code></pre> <p>As you can see the aggregation occurs for <code>2016-05-01</code> and <code>2016-05-02</code>. </p> <p>Notice, that the last hourly data entry in df occurs at 2016-05-02 00:00:00, which is meant to be the data for the last hour of the previous day i.e. 24 hourly data points for each day.</p> <p>However, given the datetime stamp, things don't work out the way I intended. I want all <code>24</code> hours to be aggregated for <code>2016-05-01</code>.</p> <p>I imagine this sort of issue must arise often in various applications when a measurement is taken at the end of the hour. This isn't a problem until the last hour, which occurs at the <code>00:00:00</code> timestamp of the following day.</p> <p>How to address this issue in pandas? </p>
1
2016-08-21T13:59:44Z
39,109,185
<p>It looks like another hack, but it should do the job:</p> <pre><code>In [79]: df.assign(t=df.datetime - pd.Timedelta(hours=1)).drop('datetime',1).groupby(pd.TimeGrouper('D', key='t')).sum() Out[79]: hour rev t 2016-05-01 300 -2.49 </code></pre>
1
2016-08-23T19:23:01Z
[ "python", "pandas", "group-by", "aggregation" ]
Obtaining values from ipywidgets.widgets.SelectMultiple
39,065,074
<p>I am not able to get the values from a MultipleSelect widget after changing the initial selection. The selection looks fine, but the values do not show. The code to create the SelectMultiple widget:</p> <pre><code>from ipywidgets import widgets from IPython.display import display w = widgets.SelectMultiple(description="Fruits", options=['Apples', 'Oranges', 'Pears']) display(w) </code></pre> <p>The selection widget appears as expected, and if nothing is done with it, a subsequent <code>w.value</code> correctly returns the visual selection (for me, this is the last option, 'Pears'). However, after making a selection by mouse (say, selecting 'Apples' or 'Apples' and 'Oranges'), <code>w.value</code> returns an empty tuple. </p> <p>The exact same code with <code>widgets.SelectMultiple</code> replaced with <code>widgets.Dropdown</code> works as expected (<code>w.value</code> showing the selected value, also after changing the selection). </p> <p>What am I doing wrong?</p>
0
2016-08-21T14:04:01Z
39,066,628
<p>I believe this to be a browser issue. I usually use Chrome, where it works just fine. Today I used IE11, and whereas all other <code>ipywidgets.widgets</code> worked as expected, <code>widgets.SelectMultiple</code> did not. </p>
0
2016-08-21T16:55:40Z
[ "python", "jupyter-notebook", "ipywidgets" ]
Making script content 'safe' for HTTPS display (Bokeh)
39,065,089
<p>I was searching for a solution and came across a removed question from <a href="http://stackoverflow.com/users/5082720/maria-saz">maria saz</a>. Fortunately, I was able to see it <a href="http://webcache.googleusercontent.com/search?q=cache:Cwo3LBTyviIJ:stackoverflow.com/questions/39005081/making-script-content-safe-for-https-display-bokeh+&amp;cd=3&amp;hl=en&amp;ct=clnk" rel="nofollow">cached</a> by google. Since I have the exact same question, I borrowed the original text:</p> <p>""" I'm building a website with Bokeh plots, using inline embedding. However, using an https connection to the site blocks the plots from being rendered, as the source is deemed 'unauthenticated'. Is there a way to solve this?</p> <p>In the 'Security Overview', it says:</p> <blockquote> <p>Blocked mixed content Your page requested insecure resources that were blocked.</p> </blockquote> <p>Where the two blocked requests were:</p> <p>bokeh-0.12.1.min.css</p> <p>bokeh-0.12.1.min.js</p> <p>... """</p> <p>In my case, I can add a bit of further information. The site is being built using django and google app engine, and if I allow the unsafe content, the bokeh plots work as expected.</p> <p>How can I serve my content so the scripts will load without the warning?</p>
0
2016-08-21T14:05:36Z
39,070,704
<p>If you are proxying a Bokeh server behind a proxy that is terminating SSL, then you need to configure the proxy to pass the originating protocol on to the Bokeh server, and also use the <code>--use-xheaders</code> command line option to invoke the Bokeh server:</p> <pre><code>bokeh serve app_script.py --use-xheaders </code></pre> <p>Can't tell you how to forward the protocol since you haven't specified what your setup is, but there is an example for Nginx and SSL in the User's Guide: </p> <p><a href="http://bokeh.pydata.org/en/latest/docs/user_guide/server.html#reverse-proxying-with-nginx-and-ssl" rel="nofollow">http://bokeh.pydata.org/en/latest/docs/user_guide/server.html#reverse-proxying-with-nginx-and-ssl</a></p>
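As a rough illustration of the protocol-forwarding part, an Nginx location block along these lines (server name, port, and certificate setup are placeholders; see the linked guide for the authoritative version) would pass the original scheme through to the Bokeh server:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                 # placeholder

    location / {
        proxy_pass http://127.0.0.1:5006;    # Bokeh server's default port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;      # keep websockets working
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;  # the originating protocol
    }
}
```

With <code>--use-xheaders</code> set, the Bokeh server trusts these X- headers when generating its URLs.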
0
2016-08-22T02:43:34Z
[ "python", "django", "google-app-engine", "https", "bokeh" ]
Making script content 'safe' for HTTPS display (Bokeh)
39,065,089
<p>I was searching for a solution and came across a removed question from <a href="http://stackoverflow.com/users/5082720/maria-saz">maria saz</a>. Fortunately, I was able to see it <a href="http://webcache.googleusercontent.com/search?q=cache:Cwo3LBTyviIJ:stackoverflow.com/questions/39005081/making-script-content-safe-for-https-display-bokeh+&amp;cd=3&amp;hl=en&amp;ct=clnk" rel="nofollow">cached</a> by google. Since I have the exact same question, I borrowed the original text:</p> <p>""" I'm building a website with Bokeh plots, using inline embedding. However, using an https connection to the site blocks the plots from being rendered, as the source is deemed 'unauthenticated'. Is there a way to solve this?</p> <p>In the 'Security Overview', it says:</p> <blockquote> <p>Blocked mixed content Your page requested insecure resources that were blocked.</p> </blockquote> <p>Where the two blocked requests were:</p> <p>bokeh-0.12.1.min.css</p> <p>bokeh-0.12.1.min.js</p> <p>... """</p> <p>In my case, I can add a bit of further information. The site is being built using django and google app engine, and if I allow the unsafe content, the bokeh plots work as expected.</p> <p>How can I serve my content so the scripts will load without the warning?</p>
0
2016-08-21T14:05:36Z
40,094,283
<p>This issue was addressed simply by referring to the CDN address with "https://"</p> <p>See the <a href="http://bokeh.pydata.org/en/latest/docs/user_guide/embed.html" rel="nofollow">bokeh embed notes</a></p>
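To make the fix concrete: if you emit the resource tags yourself (in a Django template, say), the difference is just the scheme of the URL. A minimal, hypothetical illustration - the CDN path mirrors the blocked files mentioned above:

```python
# The tag that triggers the mixed-content block on an https page:
insecure = '<script src="http://cdn.pydata.org/bokeh/release/bokeh-0.12.1.min.js"></script>'

# Referring to the same asset over https resolves the warning:
secure = insecure.replace("http://", "https://", 1)
print(secure)
```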
0
2016-10-17T19:35:01Z
[ "python", "django", "google-app-engine", "https", "bokeh" ]
I am new to boto.Help me i am getting this error
39,065,124
<pre><code>&gt;&gt;&gt; import boto3
&gt;&gt;&gt; client = boto3.client('ec2')
&gt;&gt;&gt; response = client.create_tags(DryRun = True | False, Resources = ['ABC', ], Tags = [{ 'Key' : 'vennkata', 'Value' : 'ratnam' }, ])
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
  File "/usr/lib/python2.7/dist-packages/botocore/client.py", line 159, in _api_call
    return self._make_api_call(operation_name, kwargs)
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-west.amazonaws.com/"
</code></pre> <p>Can anyone provide suggestions to avoid this error at the time of creating a snapshot using a volume id?</p>
-2
2016-08-21T14:09:26Z
39,065,236
<p>Could not connect to the endpoint URL : <a href="https://ec2" rel="nofollow">https://ec2</a>.<strong>us-west</strong>.amazonaws.com</p> <p><code>us-west</code> is not a valid region. Currently supported regions are <code>us-west-1</code> and <code>us-west-2</code>. See <a href="http://docs.aws.amazon.com/general/latest/gr/rande.html#ec2_region" rel="nofollow"> AWS Regions and Endpoints - Amazon Web Services</a> You must have misconfigured the region. Check <strong>~/.aws/config</strong> and fix it or set the correct value in the shell.</p> <pre><code>export AWS_DEFAULT_REGION=us-west-1 </code></pre>
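For example, a fixed <code>~/.aws/config</code> could look like this (choosing <code>us-west-1</code>; use whichever valid region you actually want):

```ini
[default]
region = us-west-1
```

Alternatively, the region can be set per client in code with <code>boto3.client('ec2', region_name='us-west-1')</code>.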
4
2016-08-21T14:21:59Z
[ "python", "amazon-web-services", "amazon-ec2", "boto3" ]
How can I do context depended substitution (updating of fields) in the vcf (variant call format) files?
39,065,129
<p>I have a vcf file that looks like this:</p> <p><strong>CHROM POS ID REF ALT QUAL FILTER INFO FORMAT 2ms01e 2ms02g 2ms03g 2ms04h</strong></p> <p>2 15882505 . T A 12134.90 PASS AC=2;AF=0.250;AN=8;BaseQRankSum=-0.021;ClippingRankSum=0.000;DP=695;ExcessHet=3.6798;FS=0.523;MLEAC=2;MLEAF=0.250;MQ=60.00;MQRankSum=0.000;QD=25.18;ReadPosRankSum=1.280;SOR=0.630 GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC 0/1:59,89:148:99:3620,0,2177:1|0:.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.:1452:|:0.5 0/1:125,209:334:99:8549,0,4529:.:.:.:.:. 0/0:130,0:130:99:0,400,5809:.:.:.:.:. 0/0:82,0:82:99:0,250,3702:.:.:.:.:.</p> <p>2 15882583 . G T 1221.33 PASS AC=1;AF=0.125;AN=8;BaseQRankSum=-2.475;ClippingRankSum=0.000;DP=929;ExcessHet=3.0103;FS=0.000;MLEAC=1;MLEAF=0.125;MQ=60.00;MQRankSum=0.000;QD=9.25;ReadPosRankSum=0.299;SOR=0.686 GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC 0/0:178,0:178:99:0,539,7601:0/0:.:.:0/0:. 0/0:446,0:446:99:0,1343,16290:.:.:.:.:. 0/0:172,0:172:99:0,517,6205:.:.:.:.:. 0/1:75,57:132:99:1253,0,2863:.:.:.:.:.</p> <p>The very first line is the header (which has other header information infront of it, its removed in here) and the columns are tab separated.</p> <p><em>For convenience with understanding the data structure I am sharing a sub-sample of the file in this link (on dropbox which can be downloaded):</em> <a href="https://www.dropbox.com/sh/coihujii38t5prd/AABDXv8ACGIYczeMtzKBo0eea?dl=0" rel="nofollow">https://www.dropbox.com/sh/coihujii38t5prd/AABDXv8ACGIYczeMtzKBo0eea?dl=0</a></p> <p>Please download the file which is only about 300 Kb and can be open via text editors. This will help to understand the problem better.</p> <p><strong>Problem:</strong></p> <p><strong>I need to do a context dependent substitution (value updates).</strong> - All the header information (tagged with # infront of the line) needs to stay the same.</p> <ul> <li><p>The values for different lines corresponds to the very last header (i.e. 
CHROM POS ID ....)</p></li> <li><p>First of all we need to look into the PG (phased genotype) field in the FORMAT column. Different tags for the fields are separated by ":". And there is a corresponding value for that particular field in the SAMPLE column (which is 2ms01e, for now). So, for the first line the PG value for sample (2ms01e) is 1|0.</p></li> <li>Now, we will need to look into the GT field in the FORMAT column in the same line and update its corresponding value to the same value as PG, i.e. change 0/1 to 1|0. It's important to keep the order as it is in PG (if it's 1|0 or 0|1, it needs to be exact).</li> </ul> <p><strong><em>But, if the PG field has its values 0/1, 0/0, 1/0 or any other (i.e. has a slash in it) the GT field need not be changed (or updated).</em></strong></p> <p><strong>Final output:</strong> So, the GT value from the first line of data should change from: GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC</p> <p>0/1:59,89:148:99:3620,0,2177:1|0:.....</p> <p>to </p> <p>GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC</p> <p>1|0:59,89:148:99:3620,0,2177:1|0:.....</p> <p>You can see here that only the value for the GT field has changed while all the other field values stay the same.</p> <p>For the second line the GT value stays the same - i.e. 0/0 to 0/0, since the PG value for this line is 0/0 (same as the GT value, so no change).</p> <p><strong>Easy method:</strong> I think it's best if the value from the PG field can be copy-pasted to the GT field values in the SAMPLE (2ms01e) column. The GT field value is in the 1st position and the PG field is in the 6th position, with different fields separated by ":". So, all we need to do is update the value in the first field with the values from the 6th field.</p> <p>This easy method might work, since when PG has a slash "/" GT will have a slash too and the order doesn't matter. But I am not sure if it will work for every line.
But, this would be an easy solution, and I can at least check and make sure if it worked.</p> <p><strong>Hard method:</strong> If the easy method does not work as expected, I think considering every context becomes important.</p> <p><strong><em>Context:</em></strong></p> <p>Does the PG field value have a pipe (|)? If yes, it needs to be changed.</p> <p>If there is no PG field in the FORMAT column - then skip it.</p> <p>The separator of the fields in the FORMAT column is ":". So it is in the SAMPLE column too. So, counting the distance between fields is important. The GT and PG fields are in the 1st and 6th positions.</p> <p><strong><em>Any kind of solution is appreciated, but if it's Python it's better, so I can read and manipulate it if my context changes. Also, an explanation of the given solution would help a great deal.</em></strong></p> <p>Thanks in advance and sorry for being so picky. I have very moderate computer skills but still not with programming.</p> <p>Cheers ! :))</p>
0
2016-08-21T14:09:57Z
39,065,328
<p>The below answer won't contain any details about any logic, but it will give you one possible starting point so you can play around by yourself:</p> <pre><code>class VCFProcessor():
    def __init__(self):
        pass

    def load(self, filename):
        with open(filename, "rb") as f:
            self.load_string(f.read())

    def load_string(self, data):
        index = 0
        for line in data.split("\n"):
            # Skip empty rows
            if line.strip() == "":
                continue
            # Assuming there is only header and valid rows
            if self.is_valid_row(line):
                self.process_row(index, line)
            else:
                self.process_header(line)
            index += 1

    def is_valid_row(self, line):
        columns = line.split(":")
        if len(columns) == 46:
            return True

    def process_row(self, index, line):
        print "Processing line {0} with {1} columns".format(index, len(line.split(":")))

    def process_header(self, line):
        print "Header has {0} columns".format(len(line.split(":")))


if __name__ == "__main__":
    data = """CHROM POS ID REF ALT QUAL FILTER INFO FORMAT 2ms01e 2ms02g 2ms03g 2ms04h
2 15882505 . T A 12134.90 PASS AC=2;AF=0.250;AN=8;BaseQRankSum=-0.021;ClippingRankSum=0.000;DP=695;ExcessHet=3.6798;FS=0.523;MLEAC=2;MLEAF=0.250;MQ=60.00;MQRankSum=0.000;QD=25.18;ReadPosRankSum=1.280;SOR=0.630 GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC 0/1:59,89:148:99:3620,0,2177:1|0:.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.:1452:|:0.5 0/1:125,209:334:99:8549,0,4529:.:.:.:.:. 0/0:130,0:130:99:0,400,5809:.:.:.:.:. 0/0:82,0:82:99:0,250,3702:.:.:.:.:.
2 15882583 . G T 1221.33 PASS AC=1;AF=0.125;AN=8;BaseQRankSum=-2.475;ClippingRankSum=0.000;DP=929;ExcessHet=3.0103;FS=0.000;MLEAC=1;MLEAF=0.125;MQ=60.00;MQRankSum=0.000;QD=9.25;ReadPosRankSum=0.299;SOR=0.686 GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC 0/0:178,0:178:99:0,539,7601:0/0:.:.:0/0:. 0/0:446,0:446:99:0,1343,16290:.:.:.:.:. 0/0:172,0:172:99:0,517,6205:.:.:.:.:. 0/1:75,57:132:99:1253,0,2863:.:.:.:.:.
"""
    v = VCFProcessor()
    v.load_string(data)
</code></pre> <p>Hope it helps.</p>
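For the actual GT-from-PG substitution the question asks about (the "easy method"), a rough Python sketch could look like the following. It assumes tab-separated columns with FORMAT in the 9th column and sample columns after it, as in the question; header lines (starting with <code>#</code>) should simply be written out unchanged. Treat it as a starting point, not validated VCF handling:

```python
def update_gt_from_pg(line):
    """Copy the PG value over GT in every sample column when PG is phased (contains '|')."""
    cols = line.rstrip("\n").split("\t")
    fmt = cols[8].split(":")                 # FORMAT column, e.g. GT:AD:...:PG:...
    if "GT" not in fmt or "PG" not in fmt:
        return line                          # no PG field: skip the line
    gt_i, pg_i = fmt.index("GT"), fmt.index("PG")
    for s in range(9, len(cols)):            # sample columns start at index 9
        fields = cols[s].split(":")
        if len(fields) > pg_i and "|" in fields[pg_i]:
            fields[gt_i] = fields[pg_i]      # phased genotype: overwrite GT
        cols[s] = ":".join(fields)
    return "\t".join(cols)

# made-up, abbreviated row just to show the effect (not a full VCF record)
row = "2\t100\t.\tT\tA\t50\tPASS\t.\tGT:AD:PG\t0/1:59,89:1|0\t0/0:10,0:0/0"
print(update_gt_from_pg(row))
```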
0
2016-08-21T14:31:09Z
[ "python", "awk", "sed", "string-substitution" ]
How can I do context depended substitution (updating of fields) in the vcf (variant call format) files?
39,065,129
<p>I have a vcf file that looks like this:</p> <p><strong>CHROM POS ID REF ALT QUAL FILTER INFO FORMAT 2ms01e 2ms02g 2ms03g 2ms04h</strong></p> <p>2 15882505 . T A 12134.90 PASS AC=2;AF=0.250;AN=8;BaseQRankSum=-0.021;ClippingRankSum=0.000;DP=695;ExcessHet=3.6798;FS=0.523;MLEAC=2;MLEAF=0.250;MQ=60.00;MQRankSum=0.000;QD=25.18;ReadPosRankSum=1.280;SOR=0.630 GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC 0/1:59,89:148:99:3620,0,2177:1|0:.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.:1452:|:0.5 0/1:125,209:334:99:8549,0,4529:.:.:.:.:. 0/0:130,0:130:99:0,400,5809:.:.:.:.:. 0/0:82,0:82:99:0,250,3702:.:.:.:.:.</p> <p>2 15882583 . G T 1221.33 PASS AC=1;AF=0.125;AN=8;BaseQRankSum=-2.475;ClippingRankSum=0.000;DP=929;ExcessHet=3.0103;FS=0.000;MLEAC=1;MLEAF=0.125;MQ=60.00;MQRankSum=0.000;QD=9.25;ReadPosRankSum=0.299;SOR=0.686 GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC 0/0:178,0:178:99:0,539,7601:0/0:.:.:0/0:. 0/0:446,0:446:99:0,1343,16290:.:.:.:.:. 0/0:172,0:172:99:0,517,6205:.:.:.:.:. 0/1:75,57:132:99:1253,0,2863:.:.:.:.:.</p> <p>The very first line is the header (which has other header information infront of it, its removed in here) and the columns are tab separated.</p> <p><em>For convenience with understanding the data structure I am sharing a sub-sample of the file in this link (on dropbox which can be downloaded):</em> <a href="https://www.dropbox.com/sh/coihujii38t5prd/AABDXv8ACGIYczeMtzKBo0eea?dl=0" rel="nofollow">https://www.dropbox.com/sh/coihujii38t5prd/AABDXv8ACGIYczeMtzKBo0eea?dl=0</a></p> <p>Please download the file which is only about 300 Kb and can be open via text editors. This will help to understand the problem better.</p> <p><strong>Problem:</strong></p> <p><strong>I need to do a context dependent substitution (value updates).</strong> - All the header information (tagged with # infront of the line) needs to stay the same.</p> <ul> <li><p>The values for different lines corresponds to the very last header (i.e. 
CHROM POS ID ....)</p></li> <li><p>First of all we need to look into the PG (phased genotype) field in the FORMAT column. Different tags for the fields are separated by ":". And there is a corresponding value for that particular field in the SAMPLE column (which is 2ms01e, for now). So, for the first line the PG value for sample (2ms01e) is 1|0.</p></li> <li>Now, we will need to look into the GT field in the FORMAT column in the same line and update its corresponding value to the same value as PG, i.e. change 0/1 to 1|0. It's important to keep the order as it is in PG (if it's 1|0 or 0|1, it needs to be exact).</li> </ul> <p><strong><em>But, if the PG field has its values 0/1, 0/0, 1/0 or any other (i.e. has a slash in it) the GT field need not be changed (or updated).</em></strong></p> <p><strong>Final output:</strong> So, the GT value from the first line of data should change from: GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC</p> <p>0/1:59,89:148:99:3620,0,2177:1|0:.....</p> <p>to </p> <p>GT:AD:DP:GQ:PL:PG:PB:PI:PW:PC</p> <p>1|0:59,89:148:99:3620,0,2177:1|0:.....</p> <p>You can see here that only the value for the GT field has changed while all the other field values stay the same.</p> <p>For the second line the GT value stays the same - i.e. 0/0 to 0/0, since the PG value for this line is 0/0 (same as the GT value, so no change).</p> <p><strong>Easy method:</strong> I think it's best if the value from the PG field can be copy-pasted to the GT field values in the SAMPLE (2ms01e) column. The GT field value is in the 1st position and the PG field is in the 6th position, with different fields separated by ":". So, all we need to do is update the value in the first field with the values from the 6th field.</p> <p>This easy method might work, since when PG has a slash "/" GT will have a slash too and the order doesn't matter. But I am not sure if it will work for every line.
But, this would be an easy solution, and I can at least check and make sure if it worked.</p> <p><strong>Hard method:</strong> If the easy method does not work as expected, I think considering every context becomes important.</p> <p><strong><em>Context:</em></strong></p> <p>Does the PG field value have a pipe (|)? If yes, it needs to be changed.</p> <p>If there is no PG field in the FORMAT column - then skip it.</p> <p>The separator of the fields in the FORMAT column is ":". So it is in the SAMPLE column too. So, counting the distance between fields is important. The GT and PG fields are in the 1st and 6th positions.</p> <p><strong><em>Any kind of solution is appreciated, but if it's Python it's better, so I can read and manipulate it if my context changes. Also, an explanation of the given solution would help a great deal.</em></strong></p> <p>Thanks in advance and sorry for being so picky. I have very moderate computer skills but still not with programming.</p> <p>Cheers ! :))</p>
0
2016-08-21T14:09:57Z
39,068,360
<pre><code>$ cat &gt; another_mess.awk
$0!="" {
    n=split($10,a,":")                # split $10 at ":" to a array
    if(substr(a[6],2,1)=="|") {       # if "|" in PG
        a[1]=a[6]                     # copy PG to GT
        $10=""                        # empty $10
        for(i=1;i&lt;=n;i++)             # rebuild $10
            $10=$10a[i] (i&lt;n?":":"")  # use ":" as delimiter
    }
    print $10                         # PRINT $10 TO TEST, CHANGE TO $0
}
$ awk -f another_mess.awk mess.in
1|0:59,89:148:99:3620,0,2177:1|0:.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.,.:1452:|:0.5
0/0:178,0:178:99:0,539,7601:0/0:.:.:0/0:.
</code></pre>
1
2016-08-21T20:00:02Z
[ "python", "awk", "sed", "string-substitution" ]
What Should the Structure of virtualenv Environment Look Like
39,065,136
<p>This is one of my first times really using <code>virtualenv</code> and when I first activated it I was (and am) a bit confused about where my actual project (like the code) should go. Currently (after making and activating the <code>virtualenv</code>) this is what my project looks like in <code>PyCharm</code>:</p> <pre><code>Project Name |-project-name &lt;= I called my virtualenv project-name |-bin |-Lots of stuff here |-include |-Lots of stuff here |-lib |-Lots of stuff here |-.Python |-pip-selfcheck.json </code></pre> <p><strong>In this environment, where should I put my actual code?</strong></p>
0
2016-08-21T14:10:44Z
39,065,222
<p>I don't recommend putting your project in the virtualenv folder. I think you should do it this way:</p> <p>Do it in a terminal if you're using Linux:</p> <ol> <li><code>mkdir project-name</code>.</li> <li><code>cd project-name</code>.</li> <li><code>virtualenv env</code>.</li> <li><code>source env/bin/activate</code>.</li> </ol> <p>So you will have a <code>project-name</code> folder containing all the files belonging to your project, plus a virtualenv folder called <code>env</code>.</p> <p>If you don't have <code>virtualenv</code>, then just install it using <code>apt-get</code>:</p> <p><code>sudo apt-get install python-virtualenv</code></p>
1
2016-08-21T14:20:54Z
[ "python", "python-3.x", "pycharm", "virtualenv", "development-environment" ]
What Should the Structure of virtualenv Environment Look Like
39,065,136
<p>This is one of my first times really using <code>virtualenv</code> and when I first activated it I was (and am) a bit confused about where my actual project (like the code) should go. Currently (after making and activating the <code>virtualenv</code>) this is what my project looks like in <code>PyCharm</code>:</p> <pre><code>Project Name |-project-name &lt;= I called my virtualenv project-name |-bin |-Lots of stuff here |-include |-Lots of stuff here |-lib |-Lots of stuff here |-.Python |-pip-selfcheck.json </code></pre> <p><strong>In this environment, where should I put my actual code?</strong></p>
0
2016-08-21T14:10:44Z
39,065,253
<p>When you create a virtual env using <code>virtualenv env</code>, <code>env</code> (where all of your dependencies will be installed) sits at the top of your root directory. Let's say you use <a href="https://www.djangoproject.com/" rel="nofollow">Django</a> to create a project; you would then follow these steps:</p> <ol> <li>Type <code>source env/bin/activate</code> to activate the virtual environment</li> <li>Type <code>pip install django</code> to install Django</li> <li>Type <code>django-admin startproject my-example-proj</code>, which will create your project in the root directory</li> </ol> <p>You should now have two directories: <strong>env</strong> and <strong>my-example-proj</strong>. Your project never goes inside the <strong>env</strong> directory. That's where you install dependencies using <strong><a href="https://pip.pypa.io/en/stable/installing/" rel="nofollow">pip</a></strong>. </p>
1
2016-08-21T14:23:19Z
[ "python", "python-3.x", "pycharm", "virtualenv", "development-environment" ]
Internal server error code 500 when uploading a file to a directory with Flask
39,065,221
<p>Here is the backend Flask code for a basic file uploading form</p> <pre><code>@app.route('/gallery', methods=['GET', 'POST'])
def gallery():
    error = None
    if request.method == "POST":
        if 'file' not in request.files:
            flash("No file part")
            return redirect(request.url)
        file = request.files['file']
        print file
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], file.filename))
        flash('file uploaded successfully')
    return render_template('gallery.html')
</code></pre> <p>And the HTML frontend code for the form:</p> <pre><code>&lt;div class="page"&gt;
  &lt;h2&gt;Gallery&lt;/h2&gt;
  &lt;p&gt;Upload image&lt;/p&gt;
  &lt;form action="{{ url_for('gallery') }}" method="post" enctype="multipart/form-data"&gt;
    &lt;input type="file" name="file"&gt;&lt;input type="submit"&gt;
  &lt;/form&gt;
&lt;/div&gt;
</code></pre> <p>I have the UPLOAD_FOLDER variable set to an /uploads directory on the root of my project in which I would like Flask to store images, however every time I submit an image to upload I get a 500 error.</p> <p>Strangely enough, if I upload the file to the root of my project directory instead I do not get an error. 
The error only occurs if I upload the file to my upload directory.</p> <p>[Edit] Added traceback</p> <pre><code>Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/Users/fred/personalsite/cool_site.py", line 70, in gallery
    file.save(os.path.join(app.config['UPLOAD_FOLDER'], file.filename))
  File "/Library/Python/2.7/site-packages/werkzeug/datastructures.py", line 2653, in save
    dst = open(dst, 'wb')
IOError: [Errno 2] No such file or directory: u'/uploads/Lecture 1.pages'
</code></pre>
0
2016-08-21T14:20:46Z
39,065,394
<p>What your error says is that the folder does not exist. Create the folder <code>/uploads/</code> and make it writable for the program. If the folder should be relative to your Flask directory, use <code>uploads/</code> instead. </p>
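A small sketch of the "create the folder" step in code, so the app can never hit a missing directory at save time. Here <code>tempfile</code> stands in for your project root just to keep the example self-contained:

```python
import os
import tempfile

base = tempfile.mkdtemp()                     # stand-in for your project root
UPLOAD_FOLDER = os.path.join(base, "uploads")

# create the target directory once, before any file.save() call
if not os.path.isdir(UPLOAD_FOLDER):
    os.makedirs(UPLOAD_FOLDER)

print(os.path.isdir(UPLOAD_FOLDER))
```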
1
2016-08-21T14:39:14Z
[ "python", "html", "file-upload", "flask", "server" ]
Internal server error code 500 when uploading a file to a directory with Flask
39,065,221
<p>Here is the backend Flask code for a basic file uploading form</p> <pre><code>@app.route('/gallery', methods=['GET', 'POST'])
def gallery():
    error = None
    if request.method == "POST":
        if 'file' not in request.files:
            flash("No file part")
            return redirect(request.url)
        file = request.files['file']
        print file
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], file.filename))
        flash('file uploaded successfully')
    return render_template('gallery.html')
</code></pre> <p>And the HTML frontend code for the form:</p> <pre><code>&lt;div class="page"&gt;
  &lt;h2&gt;Gallery&lt;/h2&gt;
  &lt;p&gt;Upload image&lt;/p&gt;
  &lt;form action="{{ url_for('gallery') }}" method="post" enctype="multipart/form-data"&gt;
    &lt;input type="file" name="file"&gt;&lt;input type="submit"&gt;
  &lt;/form&gt;
&lt;/div&gt;
</code></pre> <p>I have the UPLOAD_FOLDER variable set to an /uploads directory on the root of my project in which I would like Flask to store images, however every time I submit an image to upload I get a 500 error.</p> <p>Strangely enough, if I upload the file to the root of my project directory instead I do not get an error. 
The error only occurs if I upload the file to my upload directory.</p> <p>[Edit] Added traceback</p> <pre><code>Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Library/Python/2.7/site-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/Users/fred/personalsite/cool_site.py", line 70, in gallery
    file.save(os.path.join(app.config['UPLOAD_FOLDER'], file.filename))
  File "/Library/Python/2.7/site-packages/werkzeug/datastructures.py", line 2653, in save
    dst = open(dst, 'wb')
IOError: [Errno 2] No such file or directory: u'/uploads/Lecture 1.pages'
</code></pre>
0
2016-08-21T14:20:46Z
39,065,508
<p>I fixed the issue. It was happening because my UPLOAD_FOLDER variable was set to /uploads when it should have been set relative to the Flask directory (uploads/).</p>
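One way to avoid depending on the working directory altogether is to build the path from the script's own location; this is a generic sketch, not the asker's exact code:

```python
import os

# __file__ is the running module; its directory is the Flask project folder
APP_ROOT = os.path.dirname(os.path.abspath(__file__))
UPLOAD_FOLDER = os.path.join(APP_ROOT, "uploads")

print(UPLOAD_FOLDER)
```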
0
2016-08-21T14:52:55Z
[ "python", "html", "file-upload", "flask", "server" ]
Python script taking too much time and memory to execute
39,065,405
<p>I have a python script which uses quite a few libraries.</p> <pre><code>import time
import cgitb
cgitb.enable()
import numpy as np
import MySQLdb as mysql
import cv2
import sys
import rpy2.robjects as robj
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
from rpy2.robjects.packages import importr
R = robj.r
DTW = importr('dtw')
</code></pre> <p>I am using the below lines of code to check the memory usage (I took it from SO; can't find the link right now. It gives the usage in MB).</p> <pre><code>process = psutil.Process(os.getpid())
print process.memory_info()[0]/float(2**20)
</code></pre> <p>Also, I am using the m3.large plan on Amazon. Attaching an image for the specification part.</p> <p><a href="http://i.stack.imgur.com/xIYd0.png" rel="nofollow"><img src="http://i.stack.imgur.com/xIYd0.png" alt="enter image description here"></a></p> <p><strong>Now the question:</strong></p> <p>The standard execution takes around 8-9 seconds. But when I execute it in parallel around 7-8 times, its execution time shoots up to 55-60 seconds. When I try to run it in parallel for more than 10 users, the time taken goes up to 120 seconds.</p> <p>I tried checking the memory consumption: for a single run it takes up to 70MB for loading the libraries, then the function in the script takes 90MB. (I am also not sure whether, to calculate the memory consumption of the function, I should deduct the two figures, i.e. 90-70=20MB.)</p> <p>Anyway, when I run it in parallel, the memory consumption increases to ~200MB for the function part. For the ditto same parameters.</p> <p>Later I tried to execute the same function twice and thrice in the same script, i.e. called the main function 3 times in the same script; now the memory consumption is 80MB till the point the libraries are imported, then for the 1st time the memory consumption for the function is 80MB, the 2nd time it is 550MB and for the third time it's 700MB. 
(This is totally weird for me.)</p> <p>As far as I understand it, the core fundamentals of parallel computing are simply not being followed here.</p> <p>Can anyone please shed some light on this?</p> <p>How do I reduce the memory consumption of the script? (I am calling it via a php file; it's one of the api calls.)</p> <p>Why does the import statement consume that much memory each time? How do I keep the execution time at 8-9 seconds for each call, for whatever number of times it executes in parallel?</p> <p><strong>EDIT</strong></p> <p>Adding the sample code:</p> <pre><code>import psutil
import os
import time

start_time = time.time()

import cgitb
cgitb.enable()
import numpy as np
import MySQLdb as mysql
import cv2
import sys
import rpy2.robjects as robj
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()
from rpy2.robjects.packages import importr
R = robj.r
DTW = importr('dtw')

process = psutil.Process(os.getpid())
print " Memory Consumed after libraries load: "
print process.memory_info()[0]/float(2**20)

st_pt = 4

# Generate our data (numpy arrays)
template = np.array([range(700),range(700),range(700)]).transpose()
query = np.array([range(10000),range(10000),range(10000)]).transpose()

# dtw algo as a function
def dtw(template, query):
    alignment = R.dtw(R.matrix(template, nrow=template.shape[0], ncol=template.shape[1]),
                      R.matrix(query, nrow=query.shape[0], ncol=query.shape[1]),
                      keep=True, step_pattern=R.rabinerJuangStepPattern(st_pt, "c"),
                      open_begin=True, open_end=True)
    dist = alignment.rx('distance')[0]
    return dist

# running the dtw function with parameters = template, query and calculating memory consumption
# run 1
dtw(template, query)
process = psutil.Process(os.getpid())
print " Memory Consumed at dtw1: "
print process.memory_info()[0]/float(2**20)

# run 2
dtw(template, query)
process = psutil.Process(os.getpid())
print " Memory Consumed at dtw2: "
print process.memory_info()[0]/float(2**20)

# run 3
dtw(template, query)
process = psutil.Process(os.getpid())
print " Memory Consumed at dtw3: "
print process.memory_info()[0]/float(2**20)

# time taken
print(" --- %s seconds ---" % (time.time() - start_time))
</code></pre> <p>Output for the 1st run is:</p> <p>Memory Consumed after libraries load: 74.234375 </p> <p>Memory Consumed at dtw1: 350.53125 </p> <p>Memory Consumed at dtw2: 377.3125 </p> <p>Memory Consumed at dtw3: 537.9140625 --- 8.82202100754 seconds ---</p> <p>And when I run it in parallel 5 times, the output is as follows:</p> <p>Memory Consumed after libraries load: 74.87109375 </p> <p>Memory Consumed at dtw1: 351.16796875 </p> <p>Memory Consumed at dtw2: 377.94921875 </p> <p>Memory Consumed at dtw3: 538.55078125 --- 25.3154160976 seconds ---</p>
-1
2016-08-21T14:40:29Z
39,065,540
<p>You showed only imports, not the app or the particular function, so my help is limited.</p> <ol> <li>As I understand it, you are embedding the R language in Python (the rpy2 lib). You are probably working with videos (the cv2 lib). There could be a lot of other libraries Python needs to load into memory because of that. You are simply using a lot of libs. Try to make your code as simple as possible.</li> <li>I think you've stumbled upon Python's GIL (Global Interpreter Lock) when you tried to execute the script in parallel. But it's my guess; I really can't know from your sample what the app is really doing.</li> </ol>
1
2016-08-21T14:56:18Z
[ "python", "memory" ]
Best way to set Entry Background Color in Python GTK3 and set back to default
39,065,408
<p>What is the best way to set background color for one entry and set it back to the default color?</p> <p>My script is now working but I am very sure this is not the best way.<br> Also I still have two problems:</p> <ol> <li>If I insert a text, not containing string "red" or "green" and select this text I cant see my selection because It is all white.</li> <li>I think there are better ways then the way I insert <code>self.entry_default_background_color_str</code> into the CSS text. </li> </ol> <hr> <pre><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
from gi.repository import Gdk


class Window(Gtk.Window):
    def __init__(self):
        self.screen = Gdk.Screen.get_default()
        self.gtk_provider = Gtk.CssProvider()
        self.gtk_context = Gtk.StyleContext()
        self.gtk_context.add_provider_for_screen(self.screen, self.gtk_provider, Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)

        Gtk.Window.__init__(self, title="Check Input")
        self.set_size_request(300, 80)

        self.mainbox = Gtk.VBox()
        self.add(self.mainbox)

        # entry
        self.name_entry = Gtk.Entry()
        self.name_entry.set_name("name_entry")
        self.mainbox.pack_start(self.name_entry, True, True, 0)
        self.name_entry.connect("changed", self.check_input)

        entry_context = self.name_entry.get_style_context()
        self.entry_default_background_color = entry_context.get_background_color(Gtk.StateType.NORMAL)
        self.entry_default_background_color_str = self.entry_default_background_color.to_string()

        self.show_all()

    def check_input(self, _widget=None):
        if "red" in self.name_entry.get_text():
            self.gtk_provider.load_from_data('#name_entry { background: red; }')
        elif "green" in self.name_entry.get_text():
            self.gtk_provider.load_from_data('#name_entry { background: green; }')
        else:
            self.gtk_provider.load_from_data('#name_entry { background: ' + self.entry_default_background_color_str + '; }')


def main():
    window = Window()
    Gtk.main()

if __name__ == "__main__":
    main()
</code></pre>
2
2016-08-21T14:41:18Z
39,066,419
<p>I will first address the issues you mention, as they give insight into what is going on in GTK and OP's code. <strong>The answer to the main question (and proper code for doing this) is all the way at the bottom of the answer.</strong></p> <hr> <blockquote> <ol> <li>If I insert a text, not containing string "red" or "green" and select this text I can't see my selection because it is all white.</li> </ol> </blockquote> <p>This happens because the <code>background</code> property is used: it sets all background-related properties of the Entry to that color, both the selection's background and the "real" background.</p> <p>The next question is which property to use. This part of GTK is poorly documented, but we can find out using the <a href="https://wiki.gnome.org/Projects/GTK+/Inspector" rel="nofollow">GtkInspector</a>, which lets us see which style properties are changing. This shows that we should use <code>background-image</code> instead, and that <code>background-color</code> is used for the background of the selection.</p> <p>Just setting the <code>background-image</code> to the color doesn't work; that gives an error because an image is expected. So now we have to figure out a way to turn our <code>color</code> into something we can set as a <code>background-image</code>. Luckily, the inspector shows us how GTK does it internally, namely by wrapping the color like this: <code>linear-gradient(red)</code>. 
Doing so creates a uniform red image which can be used as the background.</p> <p>Applying this knowledge to your code gives us:</p> <pre><code>if "red" in self.name_entry.get_text(): self.gtk_provider.load_from_data('#name_entry { background-image: linear-gradient(red); }') elif "green" in self.name_entry.get_text(): self.gtk_provider.load_from_data('#name_entry { background-image: linear-gradient(green); }') </code></pre> <hr> <blockquote> <ol start="2"> <li>I think there are better ways than the way I insert <code>self.entry_default_background_color_str</code> into the CSS text.</li> </ol> </blockquote> <p>There is indeed a better way, namely: don't do it. We can easily return to the default by just feeding the <code>CssProvider</code> an empty version of the CSS; this will overwrite the old one and thus remove any old style properties, such as the color.</p> <p>Combining this with the previous section gives us:</p> <pre><code>if "red" in self.name_entry.get_text(): self.gtk_provider.load_from_data('#name_entry { background-image: linear-gradient(red); }') elif "green" in self.name_entry.get_text(): self.gtk_provider.load_from_data('#name_entry { background-image: linear-gradient(green); }') else: self.gtk_provider.load_from_data('#name_entry {}') </code></pre> <hr> <blockquote> <p>What is the best way to set background color for one entry and set it back to the default color?</p> </blockquote> <p>Now that I have addressed the issues with your code, on to the all-important question. The way you are doing it now is replacing the CSS file, which works fine but in the long run is really inefficient. Normally you would load the CSS once and use classes and IDs to tell it which styling to apply. </p> <p>Below I adapted your code to do it this way; check the comments for the explanation. 
</p> <pre><code>def __init__(self): screen = Gdk.Screen.get_default() gtk_provider = Gtk.CssProvider() gtk_context = Gtk.StyleContext() gtk_context.add_provider_for_screen(screen, gtk_provider, Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION) # Create the window Gtk.Window.__init__(self, title="Check Input") self.set_size_request(300, 80) self.mainbox = Gtk.VBox() self.add(self.mainbox) # Load the CSS gtk_provider.load_from_data(""" #name_entry.red { background-image: linear-gradient(red); } #name_entry.green { background-image: linear-gradient(green); } """) # Create the entry and give it a name which will be the ID name_entry = Gtk.Entry() name_entry.set_name("name_entry") self.mainbox.pack_start(name_entry, True, True, 0) # Add the listener name_entry.connect("changed", self.check_input) self.show_all() def check_input(self, entry): # Get the style context for this widget entry_style_context = entry.get_style_context() # Check if our text contains red if "red" in entry.get_text(): # Add the red class, so now the styling with .red is applied entry_style_context.add_class("red") # Check if our text contains green elif "green" in entry.get_text(): # Add the green class, so now the styling with .green is applied entry_style_context.add_class("green") else: # When the text doesn't contain it remove the color classes to show the default behaviour entry_style_context.remove_class("red") entry_style_context.remove_class("green") </code></pre>
1
2016-08-21T16:30:38Z
[ "python", "css", "gtk3" ]
dryscrape click "load more button"
39,065,492
<p>I've been trying to scrape www.ratemyprofessors.com and I need to click the "load more" button to scrape all the data that I need. However, the code that I'm using right now isn't working:</p> <pre><code>loadButton = session.at_xpath(path) loadButton.click() </code></pre> <p>The path is definitely correct since <code>loadButton.text()</code> equals "load more"; however, it gives me an error saying basically "failed to click because of overlapping element".</p> <p>Does anyone know how to fix this, or a workaround? From what I've been reading, we can also simulate the function that JavaScript is running in the network tab. However, I have some trouble finding the function since onclick doesn't directly call a function, but instead</p> <pre><code>onclick="javascript:mtvn.btg.Controller.sendLinkEvent({ linkName:\'PROFMIDPANE:LoadMore\', linkType:\'o\' } ); </code></pre> <p>By the way, I am using Python, and the "load more" button is located on the left side under the list of professors after you perform a search for a school.</p> <p>I've been reading some relevant posts but haven't come across anything useful.</p> <p>Any help would be appreciated!</p> <p><a href="http://i.stack.imgur.com/LeE6Q.png" rel="nofollow">my network/params tab</a></p>
1
2016-08-21T14:51:16Z
39,067,140
<p>You can do it all using <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests</a> and <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow">bs4</a>, when you click the load more button a request is made:</p> <p><a href="http://i.stack.imgur.com/OfSuV.png" rel="nofollow"><img src="http://i.stack.imgur.com/OfSuV.png" alt="enter image description here"></a></p> <p>So once you have a page, you can get all the teachers and ratings in <em>json</em> format using the url <a href="http://www.ratemyprofessors.com/ShowRatings.jsp?tid=881718" rel="nofollow">http://www.ratemyprofessors.com/ShowRatings.jsp?tid=881718</a> below:</p> <pre><code>import requests from bs4 import BeautifulSoup params = {"solrformat": "true", "rows": "1000", # set it high number to always get all rows. "q": "", "defType": "edismax", "qf": "teacherfullname_t^1000 autosuggest", "bf": "pow(total_number_of_ratings_i,2.1)", "sort": "total_number_of_ratings_i desc", "siteName": "rmp", "fl": "pk_id teacherfirstname_t teacherlastname_t total_number_of_ratings_i averageratingscore_rf schoolid_s"} url = "http://search.mtvnservices.com/typeahead/suggest/" query = '*:* AND schoolid_s:{id} AND teacherdepartment_s:"{subject}"' with requests.Session() as s: s.headers.update({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"}) soup = BeautifulSoup(s.get("http://www.ratemyprofessors.com/ShowRatings.jsp?tid=881718").content) # pass the school id which we can parse from the page. 
params["q"] = query.format(id=soup.select_one("[data-schoolid]")["data-schoolid"], subject="History") res = s.get(url, params=params) json_data = res.json() from pprint import pprint as pp pp(json_data["response"]["docs"]) </code></pre> <p>Gives us:</p> <pre><code>[{u'averageratingscore_rf': 4.6, u'pk_id': 1347824, u'schoolid_s': u'4873', u'teacherfirstname_t': u'JP', u'teacherlastname_t': u'Godwin', u'total_number_of_ratings_i': 88}, {u'averageratingscore_rf': 3.38, u'pk_id': 692471, u'schoolid_s': u'4873', u'teacherfirstname_t': u'James', u'teacherlastname_t': u'Page', u'total_number_of_ratings_i': 49}, {u'averageratingscore_rf': 3.5, u'pk_id': 555487, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Kevin', u'teacherlastname_t': u'Davis', u'total_number_of_ratings_i': 44}, {u'averageratingscore_rf': 4.4, u'pk_id': 1289399, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Jane', u'teacherlastname_t': u'England', u'total_number_of_ratings_i': 33}, {u'averageratingscore_rf': 3.46, u'pk_id': 1230841, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Simone', u'teacherlastname_t': u'De Santiago Ramos', u'total_number_of_ratings_i': 24}, {u'averageratingscore_rf': 3.15, u'pk_id': 701257, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Jack', u'teacherlastname_t': u'Pyle', u'total_number_of_ratings_i': 23}, {u'averageratingscore_rf': 4.13, u'pk_id': 1466455, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Chris', u'teacherlastname_t': u'Politz', u'total_number_of_ratings_i': 20}, {u'averageratingscore_rf': 4.67, u'pk_id': 1218949, u'schoolid_s': u'4873', u'teacherfirstname_t': u'James', u'teacherlastname_t': u'Hathcock', u'total_number_of_ratings_i': 18}, {u'averageratingscore_rf': 3.93, u'pk_id': 1648329, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Joshua', u'teacherlastname_t': u'Montandon', u'total_number_of_ratings_i': 15}, {u'averageratingscore_rf': 2.79, u'pk_id': 1543864, u'schoolid_s': u'4873', u'teacherfirstname_t': u'M', u'teacherlastname_t': u'Antle', 
u'total_number_of_ratings_i': 14}, {u'averageratingscore_rf': 3.83, u'pk_id': 1096585, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Scotty', u'teacherlastname_t': u'Edler', u'total_number_of_ratings_i': 12}, {u'averageratingscore_rf': 3.92, u'pk_id': 1260089, u'schoolid_s': u'4873', u'teacherfirstname_t': u'James', u'teacherlastname_t': u'Reynolds', u'total_number_of_ratings_i': 12}, {u'averageratingscore_rf': 4.42, u'pk_id': 1418409, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Steve', u'teacherlastname_t': u'Wolfrum', u'total_number_of_ratings_i': 12}, {u'averageratingscore_rf': 4.45, u'pk_id': 899881, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Karen', u'teacherlastname_t': u'Stewart', u'total_number_of_ratings_i': 11}, {u'averageratingscore_rf': 3.2, u'pk_id': 592508, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Crystal', u'teacherlastname_t': u'Wright', u'total_number_of_ratings_i': 10}, {u'averageratingscore_rf': 4.5, u'pk_id': 891457, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Lisa', u'teacherlastname_t': u'Morales', u'total_number_of_ratings_i': 10}, {u'averageratingscore_rf': 2.9, u'pk_id': 1329058, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Mark', u'teacherlastname_t': u'Thompson', u'total_number_of_ratings_i': 10}, {u'averageratingscore_rf': 4.0, u'pk_id': 1339373, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Charles', u'teacherlastname_t': u'Williams', u'total_number_of_ratings_i': 10}, {u'averageratingscore_rf': 4.5, u'pk_id': 1587880, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Noelle', u'teacherlastname_t': u'Depperschmidt', u'total_number_of_ratings_i': 10}, {u'averageratingscore_rf': 4.39, u'pk_id': 1426470, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Adrien', u'teacherlastname_t': u'Ivan', u'total_number_of_ratings_i': 9}, {u'averageratingscore_rf': 5.0, u'pk_id': 1871677, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Kevin', u'teacherlastname_t': u'Eades', u'total_number_of_ratings_i': 9}, 
{u'averageratingscore_rf': 4.81, u'pk_id': 393151, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Sharon', u'teacherlastname_t': u'Romero', u'total_number_of_ratings_i': 8}, {u'averageratingscore_rf': 3.69, u'pk_id': 1377603, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Joseph', u'teacherlastname_t': u'Ialenti', u'total_number_of_ratings_i': 8}, {u'averageratingscore_rf': 3.43, u'pk_id': 1752608, u'schoolid_s': u'4873', u'teacherfirstname_t': u'James', u'teacherlastname_t': u'Jones', u'total_number_of_ratings_i': 7}, {u'averageratingscore_rf': 3.43, u'pk_id': 1782369, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Sara', u'teacherlastname_t': u'Ruppel', u'total_number_of_ratings_i': 7}, {u'averageratingscore_rf': 3.33, u'pk_id': 1096000, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Scott', u'teacherlastname_t': u'Harp', u'total_number_of_ratings_i': 6}, {u'averageratingscore_rf': 2.17, u'pk_id': 2061535, u'schoolid_s': u'4873', u'teacherfirstname_t': u'David', u'teacherlastname_t': u'Powell', u'total_number_of_ratings_i': 6}, {u'averageratingscore_rf': 4.1, u'pk_id': 556560, u'schoolid_s': u'4873', u'teacherfirstname_t': u'', u'teacherlastname_t': u'English', u'total_number_of_ratings_i': 5}, {u'averageratingscore_rf': 3.9, u'pk_id': 2032232, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Robin', u'teacherlastname_t': u'Jett', u'total_number_of_ratings_i': 5}, {u'averageratingscore_rf': 3.3, u'pk_id': 1242893, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Dennis', u'teacherlastname_t': u'Spillman', u'total_number_of_ratings_i': 5}, {u'averageratingscore_rf': 5.0, u'pk_id': 1209837, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Jared', u'teacherlastname_t': u'Sutton', u'total_number_of_ratings_i': 4}, {u'averageratingscore_rf': 3.38, u'pk_id': 1587886, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Arianna', u'teacherlastname_t': u'Warren', u'total_number_of_ratings_i': 4}, {u'averageratingscore_rf': 4.4, u'pk_id': 1643053, u'schoolid_s': 
u'4873', u'teacherfirstname_t': u'Kimberly', u'teacherlastname_t': u'Lacoco', u'total_number_of_ratings_i': 4}, {u'averageratingscore_rf': 2.5, u'pk_id': 1857299, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Kevin', u'teacherlastname_t': u'Pyle', u'total_number_of_ratings_i': 4}, {u'averageratingscore_rf': 2.33, u'pk_id': 892723, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Keith', u'teacherlastname_t': u'Mitchener', u'total_number_of_ratings_i': 3}, {u'averageratingscore_rf': 3.5, u'pk_id': 1448008, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Sally', u'teacherlastname_t': u'Stratso', u'total_number_of_ratings_i': 3}, {u'averageratingscore_rf': 3.25, u'pk_id': 680381, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Todd', u'teacherlastname_t': u'Venable', u'total_number_of_ratings_i': 2}, {u'averageratingscore_rf': 5.0, u'pk_id': 1256069, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Amanda', u'teacherlastname_t': u'Campbell-Wyatt', u'total_number_of_ratings_i': 2}, {u'averageratingscore_rf': 5.0, u'pk_id': 2142326, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Jeremy', u'teacherlastname_t': u'Godwin', u'total_number_of_ratings_i': 2}, {u'averageratingscore_rf': 1.5, u'pk_id': 697421, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Woody', u'teacherlastname_t': u'Paige', u'total_number_of_ratings_i': 1}, {u'averageratingscore_rf': 1.0, u'pk_id': 881718, u'schoolid_s': u'4873', u'teacherfirstname_t': u'M', u'teacherlastname_t': u'Sullivan', u'total_number_of_ratings_i': 1}, {u'averageratingscore_rf': 1.5, u'pk_id': 1607181, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Nancy', u'teacherlastname_t': u'Coffelt', u'total_number_of_ratings_i': 1}, {u'averageratingscore_rf': 5.0, u'pk_id': 1710114, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Jason', u'teacherlastname_t': u'Scheller', u'total_number_of_ratings_i': 1}, {u'averageratingscore_rf': 4.0, u'pk_id': 2164391, u'schoolid_s': u'4873', u'teacherfirstname_t': u'James', 
u'teacherlastname_t': u'Paige', u'total_number_of_ratings_i': 1}, {u'pk_id': 2083511, u'schoolid_s': u'4873', u'teacherfirstname_t': u'Stephen ', u'teacherlastname_t': u'Wolfrum', u'total_number_of_ratings_i': 0}] </code></pre> <p>All you need to do is pass the school id and the subject to the query string and you can get whatever you like.</p>
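The docs shown above can be flattened into plain (name, rating) pairs with standard Python. A minimal sketch, assuming the Solr field names stay as shown (the function name is just illustrative):

```python
def docs_to_ratings(json_data):
    """Flatten the Solr response into (full name, average rating) tuples.

    Docs without an 'averageratingscore_rf' key (unrated teachers)
    are skipped.
    """
    ratings = []
    for doc in json_data["response"]["docs"]:
        if "averageratingscore_rf" not in doc:
            continue
        name = "{} {}".format(doc["teacherfirstname_t"],
                              doc["teacherlastname_t"]).strip()
        ratings.append((name, doc["averageratingscore_rf"]))
    return ratings

# Tiny sample shaped like the real response:
sample = {"response": {"docs": [
    {"teacherfirstname_t": "JP", "teacherlastname_t": "Godwin",
     "averageratingscore_rf": 4.6, "total_number_of_ratings_i": 88},
    {"teacherfirstname_t": "Stephen", "teacherlastname_t": "Wolfrum",
     "total_number_of_ratings_i": 0},
]}}
print(docs_to_ratings(sample))  # [('JP Godwin', 4.6)]
```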
0
2016-08-21T17:48:24Z
[ "javascript", "python", "web-scraping" ]
Why is apply not being applied
39,065,524
<p>I have this initial dataframe:</p> <pre><code>df = pd.DataFrame(data = {'colX': ['TQ95','SM90','SJ07','SH97','TF28']}) </code></pre> <p>So df is as follows:</p> <pre><code> colX 0 TQ95 1 SM90 2 SJ07 3 SH97 4 TF28 </code></pre> <p>Now I create a very very simple function and apply it to df:</p> <pre><code>def foo(x): return x + 'bar' df.apply(foo) </code></pre> <p>Returns:</p> <pre><code> colX 0 TQ95bar 1 SM90bar 2 SJ07bar 3 SH97bar 4 TF28bar </code></pre> <p>So why does the following:</p> <pre><code>def bar(x): if len(x) == 4: return 'x' elif len(x) == 6: return 'y' else: return 'z' df.apply(bar) </code></pre> <p>Return this?:</p> <pre><code>colX z dtype: object </code></pre> <p>Rather than this?:</p> <pre><code> colX 0 x 1 x 2 x 3 x 4 x </code></pre>
1
2016-08-21T14:54:21Z
39,065,598
<p><code>df.apply(foo)</code> is executed on every column of the DataFrame (for your case, only on <code>colX</code>). It is the same as <code>df['colX'] + 'bar'</code>. You are appending 'bar' to every string in that Series (<code>x</code> in that function is a Series so <code>x + 'bar'</code> is also a Series).</p> <p><code>df.apply(bar)</code> is also executed on that single column. It returns a scalar rather than a Series though. If you want it to be applied to every row, you need to pass <code>axis=1</code>:</p> <pre><code>df.apply(bar, axis=1) Out: 0 z 1 z 2 z 3 z 4 z dtype: object </code></pre>
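A minimal sketch (toy data, not the asker's exact frame) makes the difference concrete: the same function sees a whole Series when applied column-wise, but a single string when applied to the column itself:

```python
import pandas as pd

df = pd.DataFrame({'colX': ['TQ95', 'SM90']})

def bar(x):
    # x is a whole Series when applied column-wise,
    # but a single 4-character string when applied to df['colX']
    return 'x' if len(x) == 4 else 'z'

column_wise = df.apply(bar)        # bar sees the Series (length 2) -> 'z'
per_value = df['colX'].apply(bar)  # bar sees each string (length 4) -> 'x'

print(column_wise.tolist())  # ['z']
print(per_value.tolist())    # ['x', 'x']
```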
2
2016-08-21T15:01:29Z
[ "python", "pandas" ]
Why is apply not being applied
39,065,524
<p>I have this initial dataframe:</p> <pre><code>df = pd.DataFrame(data = {'colX': ['TQ95','SM90','SJ07','SH97','TF28']}) </code></pre> <p>So df is as follows:</p> <pre><code> colX 0 TQ95 1 SM90 2 SJ07 3 SH97 4 TF28 </code></pre> <p>Now I create a very very simple function and apply it to df:</p> <pre><code>def foo(x): return x + 'bar' df.apply(foo) </code></pre> <p>Returns:</p> <pre><code> colX 0 TQ95bar 1 SM90bar 2 SJ07bar 3 SH97bar 4 TF28bar </code></pre> <p>So why does the following:</p> <pre><code>def bar(x): if len(x) == 4: return 'x' elif len(x) == 6: return 'y' else: return 'z' df.apply(bar) </code></pre> <p>Return this?:</p> <pre><code>colX z dtype: object </code></pre> <p>Rather than this?:</p> <pre><code> colX 0 x 1 x 2 x 3 x 4 x </code></pre>
1
2016-08-21T14:54:21Z
39,065,614
<p>Did you mean:</p> <pre><code>df['colX'].apply(bar) </code></pre> <p>Thus, only the length of the <strong>column value</strong> is checked.<br> The complete code:</p> <pre><code>import pandas as pd def bar(x): if len(x) == 4: return 'x' elif len(x) == 6: return 'y' else: return 'z' df = pd.DataFrame(data = {'colX': ['TQ95','SM90','SJ07','SH97','TF28']}) df['colX'] = df['colX'].apply(bar) </code></pre>
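For completeness, the same mapping can also be done without `apply` at all, vectorized over the string lengths. A sketch using `numpy.select` (toy data, not the asker's exact frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'colX': ['TQ95', 'SM90', 'ABCDEF']})

# Vectorized equivalent of bar(): pick 'x' for length 4,
# 'y' for length 6, and 'z' otherwise.
lengths = df['colX'].str.len()
df['colX'] = np.select([lengths == 4, lengths == 6], ['x', 'y'], default='z')
print(df['colX'].tolist())  # ['x', 'x', 'y']
```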
3
2016-08-21T15:03:00Z
[ "python", "pandas" ]
Django Save creating two records instead of one
39,065,568
<p>Django is creating two records in MySQL instead of one.</p> <p>I call a function via a link </p> <pre><code>&lt;a href="{% url 'markpresent' id=c.id %}"&gt;&lt;button class="btn btn-primary"&gt;Thats Me!&lt;/button&gt;&lt;/a&gt; </code></pre> <p>The function itself is very straight forward. I take the variable via a request.get, create a new object, and finally save it. However, when I check the DB there are two records, not just one.</p> <pre><code>def markpresent(request, id): new_attendance = attendance(clientid_id = id, date = datetime.datetime.now(), camp = 3) new_attendance.save() return render(request, 'clienttracker/markpresent.html', { 'client': id, }) </code></pre> <p>Model</p> <pre><code>class attendance(models.Model): clientid = models.ForeignKey(newclients, on_delete=models.CASCADE) date = models.DateField() camp = models.CharField(max_length = 3, default=0) </code></pre> <p>Any help and direction would be appreciated.</p> <p><strong>SOLUTION BASED ON ANSWERS</strong></p> <pre><code>&lt;form action="{% url 'markpresent' %}" method="post"&gt; {% csrf_token %} &lt;button type="submit" name="client" value="{{ c.id }}" class="btn btn-primary"&gt;Thats Me!&lt;/button&gt; &lt;/form&gt; def markpresent(request): id = request.POST.get('client') new_attendance = attendance(clientid_id = id, date = datetime.datetime.now(), camp = 3) new_attendance.save() return render(request, 'clienttracker/markpresent.html', { 'client': id, }) </code></pre> <p>Thanks</p>
0
2016-08-21T14:58:46Z
39,065,663
<p>You should avoid modifying your database on a GET request. Various things could cause a duplicate request - for instance, a request for an asset or favicon being caught by the same URL pattern and routed to the same view - so you should always require a POST before adding an entry in your database.</p>
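A framework-free sketch of the principle (the names below are illustrative, not Django API; in Django itself you would reach for `django.views.decorators.http.require_POST` and read the id from POST data):

```python
def mark_present(request_method, client_id, save):
    """Toy handler: `save` stands in for attendance.save().
    Only a POST mutates state; GETs (browser prefetch, favicon
    requests, back/forward navigation) must be side-effect free."""
    if request_method != "POST":
        return "ignored"
    save(client_id)
    return "saved"

created = []
mark_present("GET", 7, created.append)   # stray prefetch -> no write
mark_present("POST", 7, created.append)  # real button press -> one write
print(created)  # [7] -- exactly one record
```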
1
2016-08-21T15:07:31Z
[ "python", "mysql", "django" ]
Django Save creating two records instead of one
39,065,568
<p>Django is creating two records in MySQL instead of one.</p> <p>I call a function via a link </p> <pre><code>&lt;a href="{% url 'markpresent' id=c.id %}"&gt;&lt;button class="btn btn-primary"&gt;Thats Me!&lt;/button&gt;&lt;/a&gt; </code></pre> <p>The function itself is very straight forward. I take the variable via a request.get, create a new object, and finally save it. However, when I check the DB there are two records, not just one.</p> <pre><code>def markpresent(request, id): new_attendance = attendance(clientid_id = id, date = datetime.datetime.now(), camp = 3) new_attendance.save() return render(request, 'clienttracker/markpresent.html', { 'client': id, }) </code></pre> <p>Model</p> <pre><code>class attendance(models.Model): clientid = models.ForeignKey(newclients, on_delete=models.CASCADE) date = models.DateField() camp = models.CharField(max_length = 3, default=0) </code></pre> <p>Any help and direction would be appreciated.</p> <p><strong>SOLUTION BASED ON ANSWERS</strong></p> <pre><code>&lt;form action="{% url 'markpresent' %}" method="post"&gt; {% csrf_token %} &lt;button type="submit" name="client" value="{{ c.id }}" class="btn btn-primary"&gt;Thats Me!&lt;/button&gt; &lt;/form&gt; def markpresent(request): id = request.POST.get('client') new_attendance = attendance(clientid_id = id, date = datetime.datetime.now(), camp = 3) new_attendance.save() return render(request, 'clienttracker/markpresent.html', { 'client': id, }) </code></pre> <p>Thanks</p>
0
2016-08-21T14:58:46Z
39,065,701
<p>Are you using Google Chrome? If so, Chrome has a prefetching/prerendering feature: when you type a URL, it may load the page behind the scenes, and when you then press Enter the same URL is requested again, so your view runs twice. The same can happen when you hover over a link. It's an edge case, but it happens. Try Firefox, or disable that feature, to confirm.</p>
1
2016-08-21T15:11:13Z
[ "python", "mysql", "django" ]
Invalid Literal Issue when trying to convert RGB to HEX
39,065,585
<p>While writing a few unit tests, I had to convert RGB colors to HEX. My function for the conversion is </p> <pre><code> def rgb_to_hex(rgb): return '#%02x%02x%02x' % rgb </code></pre> <p>The output that I am getting using the unit test function (Selenium using Python ) is in the format <code>rgba(255, 255, 255, 1)</code>. </p> <p>Passing this in the <code>rgb_to_hex()</code> [ without the rgba] gives me this error : </p> <pre><code>ValueError: invalid literal for int() with base 10: '(255, 255, 255, 1)' </code></pre> <p>I read <a href="http://stackoverflow.com/questions/1841565/valueerror-invalid-literal-for-int-with-base-10">this</a> link, which makes me think the space between the values is the reason for this. However, I'm not able to resolve this. How to get past this? </p>
0
2016-08-21T15:00:16Z
39,087,273
<p>There may be several reasons:</p> <ol> <li><code>rgb</code> should be a tuple with exactly 3 values, so <code>(255, 255, 255)</code> needs to be passed instead of <code>(255, 255, 255, 1)</code>.</li> <li><code>rgb</code> has to be a tuple; if it is a string this will not work.</li> </ol> <p>Try the following commands in a Python interpreter:</p> <pre><code>"#%02x%02x%02x" % (255, 255, 255) </code></pre> <p>This gives the expected result <code>"#ffffff"</code>. If we run the following:</p> <pre><code>"#%02x%02x%02x" % (255, 255, 255, 1) </code></pre> <p>it says "not all arguments converted during string formatting".</p> <p>But from the stack trace shown in the question, it looks like you are passing <code>'(255, 255, 255, 1)'</code> as a single string, which obviously cannot be parsed to an int. So make sure you convert the string <code>"(255, 255, 255, 1)"</code> into the tuple <code>(255, 255, 255)</code> before passing it to the formatter. You can split the string and then create a tuple from the split values, remembering to clip off the brackets and the trailing alpha component. For example:</p> <pre><code>def rgb_to_hex(rgb): # for example, rgb = "(255, 255, 255, 1)" new_string = rgb[1:-4] # strip the opening "(" and the trailing ", 1)", leaving "255, 255, 255" string_fractions = new_string.split(",") # ['255', ' 255', ' 255'] -- strings, not ints int_tuple = tuple(map(int, string_fractions)) # (255, 255, 255); int() ignores the leading spaces return '#%02x%02x%02x' % int_tuple </code></pre>
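Alternatively, since the value arrives from Selenium as a CSS color string, a more tolerant sketch (the function name is just illustrative) pulls the integers out with a regex, so it handles both `rgb(...)` and `rgba(...)` forms without relying on fixed string positions:

```python
import re

def rgba_string_to_hex(value):
    """Convert a CSS color string like 'rgba(255, 255, 255, 1)' or
    'rgb(255, 255, 255)' to '#ffffff'.  Only the first three channels
    are used; the alpha component, if present, is ignored."""
    channels = [int(n) for n in re.findall(r'\d+', value)[:3]]
    if len(channels) != 3:
        raise ValueError('not an rgb/rgba color: %r' % value)
    return '#%02x%02x%02x' % tuple(channels)

print(rgba_string_to_hex('rgba(255, 255, 255, 1)'))  # #ffffff
print(rgba_string_to_hex('rgb(0, 128, 255)'))        # #0080ff
```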
0
2016-08-22T19:25:31Z
[ "python", "selenium", "type-conversion" ]
python - best approach when analysing scraped data
39,065,615
<p>Newbie here. I have managed to put together a script which scrapes some information from a website. This happens daily, and the data is saved on a csv file. content of each file is similar to this:</p> <pre><code>date, ticker, company name, momentum indicator, other ratios.... 2016-08-19, GSK, GlaxoSmithKline, 42, .... 2016-08-19, RDSB, Royal Dutch Shell, 98, ..... .... </code></pre> <p>I have accumulated 3 months worth of daily data, so around 80 files. (Every row in the file has the same date and then the different shares). What I would like to do now is to check, on a share by share basis, the evolution of the momentum indicator and other ratios.</p> <p>for example, I think I should end up with a series of lists such as </p> <pre><code>GSK_momentum_indicator = (42, 43, 38, 47,...) RDSB_momentum_indicator = (98, 91, 77, 79,...) </code></pre> <p>Now, as a newbie, I have 2 questions: 1) what do you think is the best approach for this? Is it using lists, dictionaries, anything else? 2) <strong>how</strong> did you decide the above? are there guidelines for which strategy to use? is there a good resource I can read as a newbie to learn more about this subject?</p> <p>thanks!</p> <p>PS. in case it makes a difference, I'm using python 3.5.2.</p>
0
2016-08-21T15:03:05Z
39,078,390
<p>In order to process the data you've collected, you could use one of the Python modules <code>csv</code> or <code>pandas</code>. The <code>csv</code> module is used to read/write data from/to csv files; you can then convert the data into Python lists and dictionaries and use it accordingly. For detailed docs go <a href="https://docs.python.org/2/library/csv.html" rel="nofollow">here</a>.</p> <p>But if you have a large dataset then you should go for <code>pandas</code>, which is a specialized tool for data analysis. The <code>pandas.read_csv</code> function takes the name of the csv file as an argument and returns a DataFrame object on which you can perform various operations. For detailed docs go <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow">here</a>.</p>
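A concrete pandas sketch of the per-share series the question asks for (the file and column names here are assumptions, not taken from the actual scraper): concatenate the daily files, then pivot so each ticker becomes a column of momentum values.

```python
import glob
import os
import tempfile

import pandas as pd

# Two toy daily files standing in for the ~80 scraped CSVs.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'daily_2016-08-19.csv'), 'w') as f:
    f.write('date,ticker,momentum\n2016-08-19,GSK,42\n2016-08-19,RDSB,98\n')
with open(os.path.join(tmp, 'daily_2016-08-20.csv'), 'w') as f:
    f.write('date,ticker,momentum\n2016-08-20,GSK,43\n2016-08-20,RDSB,91\n')

# Concatenate all daily files, then pivot: one column per ticker,
# one row per date -> the evolution of the indicator per share.
frames = [pd.read_csv(name)
          for name in sorted(glob.glob(os.path.join(tmp, 'daily_*.csv')))]
history = pd.concat(frames, ignore_index=True)
momentum = history.pivot(index='date', columns='ticker', values='momentum')

print(momentum['GSK'].tolist())   # [42, 43]
print(momentum['RDSB'].tolist())  # [98, 91]
```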
1
2016-08-22T11:29:46Z
[ "python" ]
Pygame can't open song on Raspberry Pi
39,065,639
<p>I have a program that plays a random wav file when it is run. However, when I run the program I get this error:</p> <pre><code>Traceback (most recent call last): File "pi/home/python_music/music.py", line 19, in &lt;module&gt; pygame.mixer.music.load(song) error: Couldn't open 'Song1.wav ' </code></pre> <p>This is the code:</p> <pre><code>import pygame import random f = open('List.txt', 'r+') songs = [] while True: x = f.readline() if x == '': break songs.append(x) f.close() while True: y = randint(0, len(songs)) song = songs[y] pygame.mixer.init() pygame.mixer.music.load(song) pygame.mixer.music.play() while True: if pygame.mixer.music.get_busy()== False: pygame.mixer.quit() break </code></pre> <p>and List.txt looks like this:</p> <pre><code>Song1.wav Song2.wav Song3.wav . . . Song12.wav </code></pre> <p>The program is run on a Raspberry Pi with Raspbian, using Pygame. Why do I get this error?</p>
0
2016-08-21T15:05:37Z
39,065,780
<ol> <li>Make sure your .wavs are in the same folder as your .py</li> <li>Make sure they are called 'Song1.wav'</li> <li>Try <code>if not x == '':</code></li> <li>Put <code>break</code> under <code>songs.append</code></li> </ol>
0
2016-08-21T15:19:05Z
[ "python", "raspberry-pi", "pygame", "music" ]
Pygame can't open song on Raspberry Pi
39,065,639
<p>I have a program that plays a random wav file when it is run. However, when I run the program I get this error:</p> <pre><code>Traceback (most recent call last): File "pi/home/python_music/music.py", line 19, in &lt;module&gt; pygame.mixer.music.load(song) error: Couldn't open 'Song1.wav ' </code></pre> <p>This is the code:</p> <pre><code>import pygame import random f = open('List.txt', 'r+') songs = [] while True: x = f.readline() if x == '': break songs.append(x) f.close() while True: y = randint(0, len(songs)) song = songs[y] pygame.mixer.init() pygame.mixer.music.load(song) pygame.mixer.music.play() while True: if pygame.mixer.music.get_busy()== False: pygame.mixer.quit() break </code></pre> <p>and List.txt looks like this:</p> <pre><code>Song1.wav Song2.wav Song3.wav . . . Song12.wav </code></pre> <p>The program is run on a Raspberry Pi with Raspbian, using Pygame. Why do I get this error?</p>
0
2016-08-21T15:05:37Z
39,065,799
<p>In your code, you're reading the entire line of the file including the newline character. You can avoid this by instead of having </p> <pre><code>f = open('List.txt', 'r+') songs = [] while True: x = f.readline() if x == '': break songs.append(x) f.close() y = randint(0, len(songs)) </code></pre> <p>It can be replaced with</p> <pre><code>f = open('List.txt', 'r+') songs = f.read().splitlines() f.close() </code></pre> <p>You will also reach an index out of range at some point because the max index of a list is one less than the length, meaning you need:</p> <pre><code>y = random.randint(0, len(songs) - 1) </code></pre> <p>In my code I had to put random.randint instead of randint (I don't know if it's like this for you)</p>
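The root cause is easy to reproduce without pygame at all: `readline` keeps the trailing newline, so the newline silently becomes part of the filename, which is exactly what the quoted error message (`'Song1.wav '`) hints at. A minimal sketch using an in-memory file:

```python
import io

# Stand-in for List.txt
fake_list = io.StringIO('Song1.wav\nSong2.wav\n')

raw = fake_list.readline()
print(repr(raw))  # 'Song1.wav\n' -- the newline is part of the name

fake_list.seek(0)
songs = fake_list.read().splitlines()
print(songs)      # ['Song1.wav', 'Song2.wav'] -- clean names
```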
1
2016-08-21T15:20:52Z
[ "python", "raspberry-pi", "pygame", "music" ]
python- pandas- concatenate columns with a loop
39,065,656
<p>I have a list of columns that I need to concatenate. An example table would be:</p> <pre><code>import numpy as np cats1=['T_JW', 'T_BE', 'T_FI', 'T_DE', 'T_AP', 'T_KI', 'T_HE'] data=np.array([random.sample(range(0,2)*7,7)]*3) df_=pd.DataFrame(data, columns=cats1) </code></pre> <p>So I need to get the concatenation of each line (if it's possible with a blank space between each value). I tried:</p> <pre><code>listaFin=['']*1000 for i in cats1: lista=list(df_[i]) listaFin=zip(listaFin,lista) </code></pre> <p>But I get a list of tuples:</p> <pre><code>listaFin: [((((((('', 0), 0), 1), 0), 1), 0), 1), ((((((('', 0), 0), 1), 0), 1), 0), 1), ((((((('', 0), 0), 1), 0), 1), 0), 1)] </code></pre> <p>And I need to get something like </p> <pre><code>[0 0 1 0 1 0 1, 0 0 1 0 1 0 1, 0 0 1 0 1 0 1] </code></pre> <p>How can I do this only using one loop or less (i don't want to use a double loop)?</p> <p>Thanks.</p>
1
2016-08-21T15:07:03Z
39,065,941
<p>I don't think you can have a list of space delimited integers in Python without them being in a string (I might be wrong). Having said that, the answer I have is:</p> <pre><code>output = [] for i in range(0,df_.shape[0]): output.append(' '.join(str(x) for x in list(df_.loc[i]))) print(output) </code></pre> <p>output looks like this: ['1 0 0 0 1 0 1', '1 0 0 0 1 0 1', '1 0 0 0 1 0 1']</p>
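For larger frames the explicit row loop can be avoided entirely. A sketch of the vectorised equivalent (toy data), which casts every cell to `str` and then joins each row with a space:

```python
import pandas as pd

df_ = pd.DataFrame([[0, 0, 1, 0], [1, 0, 1, 1]],
                   columns=['T_JW', 'T_BE', 'T_FI', 'T_DE'])

# astype(str) makes every cell a string; ' '.join then runs once per row.
output = df_.astype(str).apply(' '.join, axis=1).tolist()
print(output)  # ['0 0 1 0', '1 0 1 1']
```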
0
2016-08-21T15:38:55Z
[ "python", "pandas", "dataframe", "zip", "concatenation" ]
Pandas Cumulative Time Series Range in Data Frame
39,065,904
<p>I'm looking to have an "expanding" date range based on the values in a starttime and endcolumn.</p> <p>If any part of a record occurs in a prior record, I want to return a starttime that is the minimum of the two starttime records and an endtime that is the maximum of the two endtime records.</p> <p>These will be grouped by an order id</p> <pre><code>Order starttime endtime RollingStart RollingEnd 1 2015-07-01 10:24:43.047 2015-07-01 10:24:43.150 2015-07-01 10:24:43.047 2015-07-01 10:24:43.150 1 2015-07-01 10:24:43.137 2015-07-01 10:24:43.200 2015-07-01 10:24:43.047 2015-07-01 10:24:43.200 1 2015-07-01 10:24:43.197 2015-07-01 10:24:57.257 2015-07-01 10:24:43.047 2015-07-01 10:24:57.257 1 2015-07-01 10:24:57.465 2015-07-01 10:25:13.470 2015-07-01 10:24:57.465 2015-07-01 10:25:13.470 1 2015-07-01 10:24:57.730 2015-07-01 10:25:13.485 2015-07-01 10:24:57.465 2015-07-01 10:25:13.485 2 2015-07-01 10:48:57.465 2015-07-01 10:48:13.485 2015-07-01 10:48:57.465 2015-07-01 10:48:13.485 </code></pre> <p>So, in the above example, Order 1 has an initial range that runs from 2015-07-01 10:24:43.047 to 2015-07-01 10:24:57.257 and then another one from 2015-07-01 10:24:57.465 to 2015-07-01 10:25:13.485</p> <p>Note that while the starttimes are in order, the endtimes are not necessarily due to the nature of the data (there are short term events and long term events)</p> <p>In the end, I only want the last record of each orderid,rolling start combination (so in this case, the last two records</p> <p>I tried </p> <pre><code>df['RollingStart'] = np.where((df['endtime'] &gt;= df['RollingStart'].shift()) &amp; (df['RollingEnd'].shift()&gt;= df['starttime']), min(df['starttime'],df['RollingStart']),df['starttime']) </code></pre> <p>(this obviously doesn't include the order id)</p> <p>But the error I receive is</p> <pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). 
</code></pre> <p>Any ideas would be much appreciated</p> <p>Code to replicate follows:</p> <pre><code>from io import StringIO import io text = """Order starttime endtime 1 2015-07-01 10:24:43.047 2015-07-01 10:24:43.150 1 2015-07-01 10:24:43.137 2015-07-01 10:24:43.200 1 2015-07-01 10:24:43.197 2015-07-01 10:24:57.257 1 2015-07-01 10:24:57.465 2015-07-01 10:25:13.470 1 2015-07-01 10:24:57.730 2015-07-01 10:25:13.485 2 2015-07-01 10:48:57.465 2015-07-01 10:48:13.485""" df = pd.read_csv(StringIO(text), sep='\s{2,}', engine='python', parse_dates=[1, 2]) df['RollingStart'] = np.where((df['endtime'] &gt;= df['RollingStart'].shift()) &amp; (df['RollingEnd'].shift()&gt;= df['start']), min(df['starttime'],df['RollingStart']),df['starttime']) df = pd.read_csv(StringIO(text), sep='\s{2,}', engine='python', parse_dates=[1, 2]) df['RollingStart']=df['starttime'] df['RollingEnd']=df['endtime'] df['RollingStart'] = np.where((df['endtime'] &gt;= df['RollingStart'].shift()) &amp; (df['RollingEnd'].shift()&gt;= df['starttime']),min(df['starttime'],df['RollingStart']),df['starttime']) </code></pre> <p>Error is:</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 2, in &lt;module&gt; File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 731, in __nonzero__ .format(self.__class__.__name__)) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>Thanks</p>
0
2016-08-21T15:32:45Z
39,066,017
<p>It looks like you are trying to return a value based on a value that isn't set yet:</p> <p><code>df['start'] = ...conditions... df['start'].shift()</code></p> <p>That is, you are setting a condition on a column that pandas doesn't know anything about yet.</p> <p>If you are just trying to set the "start" value to the latest time in these columns, use the element-wise maximum; note that Python's built-in <code>max</code>/<code>min</code> on two Series is exactly what triggers the "truth value of a Series is ambiguous" error:</p> <p><code>df['start'] = np.maximum(df['starttime'], df['endtime'])</code></p> <p>If the above is way off, do you have the code to reproduce this df so I can see if I get the same error?</p>
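<p>To illustrate the point above with a sketch of my own (the values are made up): <code>np.maximum</code> compares two Series element-wise, whereas the built-in <code>max</code> needs a single truth value for the whole Series and raises the ambiguity error.</p>

```python
import numpy as np
import pandas as pd

a = pd.Series([1, 5, 3])
b = pd.Series([4, 2, 6])

# Element-wise maximum of the two Series.
elementwise = np.maximum(a, b).tolist()

# The built-in max() needs bool(a > b), which is ambiguous for a
# whole Series and raises a ValueError.
try:
    max(a, b)
    raised = False
except ValueError:
    raised = True
```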
1
2016-08-21T15:46:59Z
[ "python", "datetime", "pandas", "time-series" ]
Pandas Cumulative Time Series Range in Data Frame
39,065,904
<p>I'm looking to have an "expanding" date range based on the values in a starttime and endcolumn.</p> <p>If any part of a record occurs in a prior record, I want to return a starttime that is the minimum of the two starttime records and an endtime that is the maximum of the two endtime records.</p> <p>These will be grouped by an order id</p> <pre><code>Order starttime endtime RollingStart RollingEnd 1 2015-07-01 10:24:43.047 2015-07-01 10:24:43.150 2015-07-01 10:24:43.047 2015-07-01 10:24:43.150 1 2015-07-01 10:24:43.137 2015-07-01 10:24:43.200 2015-07-01 10:24:43.047 2015-07-01 10:24:43.200 1 2015-07-01 10:24:43.197 2015-07-01 10:24:57.257 2015-07-01 10:24:43.047 2015-07-01 10:24:57.257 1 2015-07-01 10:24:57.465 2015-07-01 10:25:13.470 2015-07-01 10:24:57.465 2015-07-01 10:25:13.470 1 2015-07-01 10:24:57.730 2015-07-01 10:25:13.485 2015-07-01 10:24:57.465 2015-07-01 10:25:13.485 2 2015-07-01 10:48:57.465 2015-07-01 10:48:13.485 2015-07-01 10:48:57.465 2015-07-01 10:48:13.485 </code></pre> <p>So, in the above example, Order 1 has an initial range that runs from 2015-07-01 10:24:43.047 to 2015-07-01 10:24:57.257 and then another one from 2015-07-01 10:24:57.465 to 2015-07-01 10:25:13.485</p> <p>Note that while the starttimes are in order, the endtimes are not necessarily due to the nature of the data (there are short term events and long term events)</p> <p>In the end, I only want the last record of each orderid,rolling start combination (so in this case, the last two records</p> <p>I tried </p> <pre><code>df['RollingStart'] = np.where((df['endtime'] &gt;= df['RollingStart'].shift()) &amp; (df['RollingEnd'].shift()&gt;= df['starttime']), min(df['starttime'],df['RollingStart']),df['starttime']) </code></pre> <p>(this obviously doesn't include the order id)</p> <p>But the error I receive is</p> <pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). 
</code></pre> <p>Any ideas would be much appreciated</p> <p>Code to replicate follows:</p> <pre><code>from io import StringIO import io text = """Order starttime endtime 1 2015-07-01 10:24:43.047 2015-07-01 10:24:43.150 1 2015-07-01 10:24:43.137 2015-07-01 10:24:43.200 1 2015-07-01 10:24:43.197 2015-07-01 10:24:57.257 1 2015-07-01 10:24:57.465 2015-07-01 10:25:13.470 1 2015-07-01 10:24:57.730 2015-07-01 10:25:13.485 2 2015-07-01 10:48:57.465 2015-07-01 10:48:13.485""" df = pd.read_csv(StringIO(text), sep='\s{2,}', engine='python', parse_dates=[1, 2]) df['RollingStart'] = np.where((df['endtime'] &gt;= df['RollingStart'].shift()) &amp; (df['RollingEnd'].shift()&gt;= df['start']), min(df['starttime'],df['RollingStart']),df['starttime']) df = pd.read_csv(StringIO(text), sep='\s{2,}', engine='python', parse_dates=[1, 2]) df['RollingStart']=df['starttime'] df['RollingEnd']=df['endtime'] df['RollingStart'] = np.where((df['endtime'] &gt;= df['RollingStart'].shift()) &amp; (df['RollingEnd'].shift()&gt;= df['starttime']),min(df['starttime'],df['RollingStart']),df['starttime']) </code></pre> <p>Error is:</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 2, in &lt;module&gt; File "C:\Anaconda3\lib\site-packages\pandas\core\generic.py", line 731, in __nonzero__ .format(self.__class__.__name__)) ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>Thanks</p>
0
2016-08-21T15:32:45Z
39,068,728
<p>Try this: </p> <p>Version 1</p> <pre><code>NaT = pd.NaT df['Rolling2'] = np.where(df['starttime'].shift(-1) &gt; df['endtime'], NaT,'drop') df['Rolling2'] = df['Rolling2'].shift(1) df['RollingStart'] = np.where(df['Rolling2'] =='drop',None,df['starttime']) df['RollingStart'] = pd.to_datetime(df['RollingStart']).ffill() df['RollingEnd'] = df['endtime'] del df['Rolling2'] </code></pre> <p>Version 2. </p> <pre><code>df['RollingStart'] = df['starttime'] df['RollingEnd'] = df['endtime'] df['RollingStart'] = np.where(df['RollingEnd'].shift()&gt;= df['starttime'] ,pd.NaT , df['RollingStart']) df['RollingStart'] = pd.to_datetime(df['RollingStart']).ffill() Order starttime endtime RollingStart RollingEnd 0 1 2015-07-01 10:24:43.047 2015-07-01 10:24:43.150 2015-07-01 10:24:43.047 2015-07-01 10:24:43.150 1 1 2015-07-01 10:24:43.137 2015-07-01 10:24:43.200 2015-07-01 10:24:43.047 2015-07-01 10:24:43.200 2 1 2015-07-01 10:24:43.197 2015-07-01 10:24:57.257 2015-07-01 10:24:43.047 2015-07-01 10:24:57.257 3 1 2015-07-01 10:24:57.465 2015-07-01 10:25:13.470 2015-07-01 10:24:57.465 2015-07-01 10:25:13.470 4 1 2015-07-01 10:24:57.730 2015-07-01 10:25:13.485 2015-07-01 10:24:57.465 2015-07-01 10:25:13.485 5 2 2015-07-01 10:48:57.465 2015-07-01 10:48:13.485 2015-07-01 10:48:57.465 2015-07-01 10:48:13.485 </code></pre>
0
2016-08-21T20:48:37Z
[ "python", "datetime", "pandas", "time-series" ]
Running lines of code inside code for new thread instead of specifying a target function to run
39,066,005
<p>I've tried a lot of searching but didn't really know how to word my problem, so it may be that there's a solution that I couldn't find because I didn't know how to search for it.</p> <p>I have a single line of code that I'd like to run in a separate thread. So far I have the impression that to create a new thread you have to put the code you want to run inside its own function and then call that function using the <code>target</code> argument when starting the thread:</p> <pre><code>threading.Thread(target = functionName).start() </code></pre> <p>This is fine and I have it working like this, however because I'm only running a single line of code it seems a bit pointless to have it in its own function, and I'd like to get rid of this. I want to create the thread and effectively specify the actual line of code itself as the <code>target</code> instead of the function name.</p> <p>I can do this in C#:</p> <pre><code>new Thread(delegate() { // any amount of code goes here and it will be run in its own thread }).Start(); </code></pre> <p>But is there any way I can do this in Python?</p>
0
2016-08-21T15:45:53Z
39,066,032
<p>In Python you can make inline functions using <code>lambda</code>:</p> <pre><code>threading.Thread(target = lambda: print('hi')).start() </code></pre> <p><em>Note that a lambda can only contain a single expression, and this requires <code>print</code> to be a function (Python 3); the 'hi' is printed from the other thread, so it may interleave with output from the main thread.</em></p>
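<p>A sketch of my own (not from the original answer): for anything beyond a single expression the idiomatic route is still a named function, but arguments can be passed inline through the <code>args</code> parameter rather than baked into a lambda:</p>

```python
import threading

results = []

def worker(n, out):
    # Runs in the new thread; appends the square of n.
    out.append(n * n)

t = threading.Thread(target=worker, args=(6, results))
t.start()
t.join()  # wait for the thread so results is populated
```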
0
2016-08-21T15:49:35Z
[ "python" ]
What are the difference between django-compressor and django-sass processor?
39,066,027
<p>I'm using django-compressor to compress JS and CSS. As the introduction to django-compressor says: "Django Compressor processes, combines and minifies linked and inline Javascript or CSS in a Django template into cacheable static files."</p> <p>It looks like django-sass-processor has the same goal. The problem when I use django-compressor is that during development my application is slow (more than 2 seconds to render a page that has no DB access in the view and doesn't process anything). Is that normal?</p> <p>I thought I could speed up the app by using the sass processor, which checks the timestamp and compiles the sass file only if something has changed (that is the purpose of the tool, right?).</p> <p>By the way, I'm a bit confused; can you explain what these tools are and how to use them?</p> <h1>Edit 1:</h1> <p>When I request the index ( <a href="http://localhost:8000/" rel="nofollow">http://localhost:8000/</a>) for example, the following resources are requested:</p> <pre><code>[21/Aug/2016 16:09:43] "GET / HTTP/1.1" 200 2346 [21/Aug/2016 16:09:43] "GET /static/CACHE/css/bootstrap.min.35ea483046e0.css HTTP/1.1" 200 145948 [21/Aug/2016 16:09:43] "GET /static/CACHE/css/bitdepot.762c234abcad.css HTTP/1.1" 200 5359 [21/Aug/2016 16:09:43] "GET /static/CACHE/css/core.d64c40e32055.css HTTP/1.1" 200 6517 </code></pre> <p>On each request, the files in CACHE change, even if I don't touch anything. I think the app is slower because of this.</p>
0
2016-08-21T15:48:41Z
39,066,080
<p>I haven't used Django before; however, the SASS processor compiles SASS code, which is essentially CSS extended with features such as variables, nesting and mixins.</p>
0
2016-08-21T15:55:13Z
[ "python", "django", "sass", "django-compressor" ]
Why are these tuple values seemingly losing their precision
39,066,064
<p>I have the following:</p> <pre><code>import pandas as pd def TupFirst(x): return x[0] def TupSecond(x): return x[1] df = pd.DataFrame(data = {'colX': [(51.2990505474, 0.802680507953),(51.7491674401, -4.96357522689)]}) df['colY'] = df['colX'].apply(TupFirst) df['colZ'] = df['colX'].apply(TupSecond) df </code></pre> <p>Which returns the following dataframe:</p> <pre><code> colX colY colZ 0 (51.2990505474, 0.802680507953) 51.299051 0.802681 1 (51.7491674401, -4.96357522689) 51.749167 -4.963575 </code></pre> <p>Why are the values in colY and colZ being rounded?</p>
2
2016-08-21T15:53:22Z
39,066,157
<p>What you're seeing is the result of a display configuration in pandas. The full precision is still there. Check it with:</p> <p><code>print(df.loc[1, 'colZ'])</code></p>
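<p>A small demonstration of my own (not part of the original answer): the value is stored at full float precision, and the number of digits shown in the DataFrame's repr can be raised via the <code>display.precision</code> option.</p>

```python
import pandas as pd

df = pd.DataFrame({'colZ': [0.802680507953, -4.96357522689]})

# The value is stored at full float precision; only the repr rounds it.
stored = df.loc[1, 'colZ']

# Raising the display option makes the repr show more digits.
pd.set_option('display.precision', 12)
rendered = repr(df)
```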
2
2016-08-21T16:02:33Z
[ "python", "pandas" ]
Django - default field value depends on other field value
39,066,086
<p><br>I have a problem setting a default field value. What do I want to do?<br>I want the price in class Packages to be the default value of priceNoTax in class Bill. As you can see, all three classes are logically connected.<br>Example: Account 1 has a package with id 1. The price of this package is 100. The default value of priceNoTax for Account 1 is then 100.<br><br>How can I do that? I am relatively new at this, so I need help.<br></p> <p>models.py</p> <pre><code>class Packages(models.Model): #other fields price = models.IntegerField(validators=[MinValueValidator(1)], verbose_name="Price of package") class Account(models.Model): startDate = models.DateField(verbose_name="Start date") finishDate = models.DateField(verbose_name="Finish date") idPackage = models.ForeignKey(Packages, on_delete=models.CASCADE, verbose_name="Package") class Bill(models.Model): date = models.DateField(default=datetime.now()) tax = models.FloatField(default=0.20) priceNoTax = models.IntegerField() priceTax = models.FloatField(default=priceNoTax+(priceNoTax*tax)) idAccount = models.ForeignKey(Account, on_delete=models.CASCADE, verbose_name="Account") def __str__(self): return self.date </code></pre> <p>Thanks a lot!!!</p>
2
2016-08-21T15:55:31Z
39,086,622
<p>perhaps add this to your Bill class?</p> <pre><code>def save(self, *args, **kwargs): if self.priceNoTax is None: self.priceNoTax = self.idAccount.idPackage.price super(Bill, self).save(*args, **kwargs) </code></pre>
0
2016-08-22T18:44:32Z
[ "python", "django", "default-value" ]
Command similar to pygame's get_rel() in tkinter?
39,066,114
<p>I was wondering whether, when I bind the <code>&lt;Motion&gt;</code> event to a function, I could define <code>x</code> and <code>y</code> variables in that function using something like pygame's <code>mouse.get_rel()</code>, but in tkinter.</p>
1
2016-08-21T15:58:51Z
39,067,890
<p>Event processing is different in <code>tkinter</code> than in <code>pygame</code>, so instead of direct <code>mouse.get_rel()</code> equivalent, it seemed more appropriate to create something that would call a callback function every time the mouse moved (which is what binding to the <code>'&lt;Motion&gt;'</code> event accomplishes). To assist in doing this a <code>RelativeMotion</code> class is defined to hide and encapsulate as many of the messy details as possible. Far below is a screenshot of it running.</p> <pre><code>import tkinter as tk class RelativeMotion(object): """ Relative mouse motion controller. """ def __init__(self, callback): self.__call__ = self._init_location # first call self.callback = callback def __call__(self, *args, **kwargs): # Implicit invocations of special methods resolve to instance's type. self.__call__(*args, **kwargs) # redirect call to instance itself def _init_location(self, event): self.x, self.y = event.x, event.y # initialize current position self.__call__ = self._update_location # change for any subsequent calls def _update_location(self, event): dx, dy = event.x-self.x, event.y-self.y self.x, self.y = event.x, event.y self.callback(dx, dy) class Application(tk.Frame): # usage demo def __init__(self, master=None): tk.Frame.__init__(self, master) self.grid() self.createWidgets() self.relative_motion = RelativeMotion(self.canvas_mouse_motion) self.canvas.bind('&lt;Motion&gt;', self.relative_motion) # when over canvas def createWidgets(self): self.motion_str = tk.StringVar() self.canvas_mouse_motion(0, 0) # initialize motion_str text label_width = len(self.motion_str.get()) self.motion_lbl = tk.Label(self, bg='white', width=label_width, textvariable=self.motion_str) self.motion_lbl.grid(row=0, column=0) self.quit_btn = tk.Button(self, text='Quit', command=self.quit) self.quit_btn.grid(row=1, column=0) self.canvas = tk.Canvas(self, width='2i', height='2i', bg='white') self.canvas.grid(row=1, column=1) def 
canvas_mouse_motion(self, dx, dy): self.motion_str.set('{:3d}, {:3d}'.format(dx, dy)) app = Application() app.master.title('Relative Motion Demo') app.mainloop() </code></pre> <p><strong>Running</strong></p> <p><a href="http://i.stack.imgur.com/cTAuX.png" rel="nofollow"><img src="http://i.stack.imgur.com/cTAuX.png" alt="relative motion demo screenshot"></a></p>
0
2016-08-21T19:07:42Z
[ "python", "tkinter", "pygame", "python-3.5" ]
Display response(xml) by making a HTTP GET request using javascript?
39,066,181
<p>I am very new to JS and I have done my research, but I guess I'm using the wrong technique or something. In Python, to make a GET request we do:</p> <pre><code>request_text = requests.get(url).text </code></pre> <p>I want to do the same thing using JS, i.e. display the content from <code>"http://synd.cricbuzz.com/j2me/1.0/livematches.xml"</code> in raw (XML) format. I found this script somewhere but it doesn't work. </p> <pre><code>&lt;h2&gt;AJAX&lt;/h2&gt; &lt;button type="button" onclick="loadDoc()"&gt;Request data&lt;/button&gt; &lt;p id="demo"&gt;&lt;/p&gt; &lt;script&gt; function loadDoc() { var xhttp = new XMLHttpRequest(); xhttp.onreadystatechange = function() { if (xhttp.readyState == 4 &amp;&amp; xhttp.status == 200) { document.getElementById("demo").innerHTML = xhttp.responseText; } }; xhttp.open("GET", "http://synd.cricbuzz.com/j2me/1.0/livematches.xml", false); xhttp.send(); } &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>I just need direction on how to do the same, i.e. how to send a GET/POST request using JS and render the result on a webpage.</p>
0
2016-08-21T16:05:10Z
39,066,520
<p>When I use</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>function test(url) { var req = new XMLHttpRequest(); req.open('GET', url); req.onload = function() { var section = document.createElement('section'); var h2 = document.createElement('h2'); h2.textContent = 'Received from ' + url; section.appendChild(h2); var pre = document.createElement('pre'); pre.textContent = req.responseText; section.appendChild(pre); document.body.appendChild(section); }; req.onerror = function(evt) { document.body.insertAdjacentHTML('beforeEnd', '&lt;p&gt;Error requesting ' + url + '.&lt;\/p&gt;'); }; req.send(); } document.addEventListener('DOMContentLoaded', function() { test('http://home.arcor.de/martin.honnen/cdtest/test2011060701.xml'); test('http://synd.cricbuzz.com/j2me/1.0/livematches.xml'); }, false);</code></pre> </div> </div> </p> <p>the first URL works as the server is set up to allow the <a href="https://en.wikipedia.org/wiki/Cross-origin_resource_sharing" rel="nofollow">CORS</a> request for that directory while the second fails as the server does not allow it. So unless you serve your HTML with the script from <code>synd.cricbuzz.com</code> or unless you can change the configuration of <code>synd.cricbuzz.com</code> to allow a CORS request you won't be able to request the XML from that server.</p> <p>Note also that in modern browsers (current versions of Mozilla, Chrome, Edge) you can use the <code>Promise</code> based <code>fetch</code> method instead of <code>XMLHttpRequest</code>, as shown below. 
But the same origin policy is not different for <code>fetch</code>, so the same as stated above holds.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>function test(url) { fetch(url).then(function(response) { if(response.ok) { response.text().then(function(text) { var section = document.createElement('section'); var h2 = document.createElement('h2'); h2.textContent = 'Received from ' + url; section.appendChild(h2); var pre = document.createElement('pre'); pre.textContent = text; section.appendChild(pre); document.body.appendChild(section); }); } else { document.body.insertAdjacentHTML('beforeEnd', '&lt;p&gt;Error requesting ' + url + '; status: ' + response.status + '.&lt;\/p&gt;'); } }) .catch(function(error) { document.body.insertAdjacentHTML('beforeEnd', '&lt;p&gt;Error "' + error.message + '" requesting ' + url + '.&lt;\/p&gt;'); }); } document.addEventListener('DOMContentLoaded', function() { test('http://home.arcor.de/martin.honnen/cdtest/test2011060701.xml'); test('http://synd.cricbuzz.com/j2me/1.0/livematches.xml'); }, false);</code></pre> </div> </div> </p>
1
2016-08-21T16:43:01Z
[ "javascript", "python", "xml" ]
How can I use a SplitArrayField in a ModelForm
39,066,199
<p>I am trying to show a postgresql ArrayField as multiple input fields in a form through which users can submit data.</p> <p>Let's say I have a model:</p> <pre><code>class Venue(models.Model): additional_links = ArrayField(models.URLField(validators=[URLValidator]), null=True) </code></pre> <p>which a form is using:</p> <pre><code>class VenueForm(forms.ModelForm): class Meta: model = Venue exclude = ['created_date'] widgets = { 'additional_links': forms.Textarea(), } </code></pre> <p>How would I make the ArrayField use a SplitArrayField in the ModelForm?</p> <p>I tried:</p> <pre><code>class VenueForm(forms.ModelForm): additional_links = SplitArrayField(forms.TextInput(), size=5, remove_trailing_nulls=True) class Meta: .. </code></pre> <p>and the same in the Meta class widgets:</p> <pre><code> widgets = { 'additional_links': forms.SplitArrayField(forms.TextInput(), size=3, remove_trailing_nulls=True) } </code></pre> <p>I also tried different form inputs/fields, but I always get the following error:</p> <pre><code>/lib/python3.5/site-packages/django/contrib/postgres/forms/array.py", line 155, in __init__ widget = SplitArrayWidget(widget=base_field.widget, size=size) AttributeError: 'TextInput' object has no attribute 'widget' </code></pre>
0
2016-08-21T16:07:36Z
39,556,532
<p>The SplitArrayField and the SplitArrayWidget are very different things. Using a SplitArrayField in a forms.ModelForm adds a new field to the form.</p> <p>The SplitArrayWidget is the default widget for SplitArrayField, so you don't need to change it.</p> <p>The problem comes when you want to set this widget on the "additional_links" field, because the value for the ArrayField must be comma-separated values without spaces.</p> <p><strong>Example</strong>:</p> <p><em><a href="https://www.google.com,https://www.google.com.ru,https://www.google.com.ua" rel="nofollow">https://www.google.com,https://www.google.com.ru,https://www.google.com.ua</a></em></p> <p>Hence Django uses django.forms.widgets.TextInput by default for an ArrayField. That is very uncomfortable for humans.</p> <p>But if you still want to use the SplitArrayWidget for the ArrayField, you need to modify the widget.</p> <p><strong>My version:</strong></p> <pre><code>from django.contrib import postgres class SplitInputsArrayWidget(postgres.forms.SplitArrayWidget): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def value_from_datadict(self, data, files, name): value_from_datadict = super().value_from_datadict(data, files, name) # convert the list to a string of comma-separated values value_from_datadict = ','.join(value_from_datadict) return value_from_datadict def render(self, name, value, attrs=None): # if the object has a value, convert the string # back to a list by splitting on commas if value is not None: value = value.split(',') return super().render(name, value, attrs=attrs) </code></pre> <p>How to use it:</p> <p>Model:</p> <pre><code>class Article(models.Model): """ Model for article """ links = ArrayField( models.URLField(max_length=1000), size=MAX_COUNT_LINKS, verbose_name=_('Links'), help_text=_('Useful links'), ) </code></pre> <p>Form:</p> <pre><code>class ArticleAdminModelForm(forms.ModelForm): """ """ def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) # field 'links' widget_base_field = self.fields['links'].base_field.widget count_inputs = self.fields['links'].max_length self.fields['links'].widget = SplitInputsArrayWidget( widget_base_field, count_inputs, attrs={'class': 'span12'} ) </code></pre> <p>Result in the admin:</p> <p>Django 1.10 (Django-Suit skin), Python 3.4</p> <p><a href="http://i.stack.imgur.com/2FJGP.png" rel="nofollow"><img src="http://i.stack.imgur.com/2FJGP.png" alt="Without data"></a> <a href="http://i.stack.imgur.com/RUHNK.png" rel="nofollow"><img src="http://i.stack.imgur.com/RUHNK.png" alt="Filled data"></a></p> <p>Useful links on the web:</p> <p><a href="https://docs.djangoproject.com/en/1.10/_modules/django/contrib/postgres/forms/array/#SplitArrayField" rel="nofollow">https://docs.djangoproject.com/en/1.10/_modules/django/contrib/postgres/forms/array/#SplitArrayField</a></p> <p><a href="https://bradmontgomery.net/blog/nice-arrayfield-widgets-choices-and-chosenjs/" rel="nofollow">https://bradmontgomery.net/blog/nice-arrayfield-widgets-choices-and-chosenjs/</a></p> <p>More of my widgets are in my new Django project (see the file <a href="https://github.com/setivolkylany/programmerHelper/blob/master/utils/django/widgets.py" rel="nofollow">https://github.com/setivolkylany/programmerHelper/blob/master/utils/django/widgets.py</a>)</p> <p>Unfortunately, right now this code is without tests, so later I will update my answer.</p>
0
2016-09-18T10:16:31Z
[ "python", "django", "postgresql", "python-3.x" ]
How to decrypt a RC2 ciphertext?
39,066,220
<p>Python 3.5, pycrypto 2.7a1, Windows, RC2 ciphering</p> <p>Example:</p> <pre><code>print('Введите текс, который хотите зашифровать:') text = input() with open('plaintext.txt', 'w') as f: f.write(text) key = os.urandom(32) with open('rc2key.bin', 'wb') as keyfile: keyfile.write(key) iv = Random.new().read(ARC2.block_size) cipher = ARC2.new(key, ARC2.MODE_CFB, iv) ciphertext = iv + cipher.encrypt(bytes(text, "utf-8")) with open('iv.bin', 'wb') as f: f.write(iv) with open('ciphertext.bin', 'wb') as f: f.write(ciphertext) print(ciphertext.decode("cp1251")) </code></pre> <p>I'd like to know how I can decrypt this text; I tried, but couldn't do it. </p> <p>My attempt to decrypt:</p> <pre><code>os.system('cls') print('Дешифруем значит') with open('ciphertext.bin', 'rb') as f: ciphertext = f.read() with open('rc2key.bin', 'rb') as f: key = f.read() with open('iv.bin', 'rb') as f: iv = f.read() ciphertext = ciphertext.decode('cp1251') iv = iv.decode('cp1251') text = ciphertext.replace(iv, '') text = cipher.decrypt(text) with open('plaintext.txt', 'w') as f: f.write(text) print(text.decode("ascii")) </code></pre> <p>But I realised that I need the cipher variable, and I can't save it to a .txt or .bin file, which is why I'm asking for help. </p>
-1
2016-08-21T16:09:47Z
39,066,539
<p>The IV is a non-secret value and is commonly written in front of the ciphertext. Since, you've done that already, you don't need to write an additional IV file. RC2 has a block size of 64 bit, so the IV will always be 8 byte long.</p> <pre><code>with open('ciphertext.bin', 'rb') as f: ciphertext = f.read() with open('rc2key.bin', 'rb') as f: key = f.read() iv = ciphertext[:ARC2.block_size] ciphertext = ciphertext[ARC2.block_size:] cipher = ARC2.new(key, ARC2.MODE_CFB, iv) text = cipher.decrypt(ciphertext).decode("utf-8") with open('plaintext.txt', 'w') as f: f.write(text) print(text) </code></pre> <p>Other problems:</p> <ul> <li><p>Don't simply decode binary data such as ciphertexts, keys or IV, because those are most likely not printable.</p></li> <li><p>Don't re-use the same <code>cipher</code> object if you're doing something different. The decryption needs a freshly initialized <code>ARC2</code> object.</p></li> </ul>
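<p>The prepend-the-IV framing described above is independent of the cipher; here is a minimal sketch of mine (pure Python, no real crypto, with RC2's 8-byte block size and made-up byte values) showing just the slicing:</p>

```python
BLOCK_SIZE = 8  # RC2's block size in bytes

iv = b'\x01' * BLOCK_SIZE
body = b'secret ciphertext bytes'

# Writing: prepend the (non-secret) IV to the ciphertext.
stored = iv + body

# Reading: the first BLOCK_SIZE bytes are the IV, the rest is ciphertext.
recovered_iv = stored[:BLOCK_SIZE]
recovered_body = stored[BLOCK_SIZE:]
```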
0
2016-08-21T16:44:55Z
[ "python", "python-3.x", "encryption", "pycrypto", "rc2-cipher" ]
Get (first and) second highest values in pandas columns
39,066,260
<p>I am using pandas to analyse some election results. I have a DF, Results, which has a row for each constituency and columns representing the votes for the various parties (over 100 of them):</p> <pre><code>In[60]: Results.columns Out[60]: Index(['Constituency', 'Region', 'Country', 'ID', 'Type', 'Electorate', 'Total', 'Unnamed: 9', '30-50', 'Above', ... 'WP', 'WRP', 'WVPTFP', 'Yorks', 'Young', 'Zeb', 'Party', 'Votes', 'Share', 'Turnout'], dtype='object', length=147) </code></pre> <p>So...</p> <pre><code>In[63]: Results.head() Out[63]: Constituency Region Country ID Type \ PAID 1 Aberavon Wales Wales W07000049 County 2 Aberconwy Wales Wales W07000058 County 3 Aberdeen North Scotland Scotland S14000001 Burgh 4 Aberdeen South Scotland Scotland S14000002 Burgh 5 Aberdeenshire West &amp; Kincardine Scotland Scotland S14000058 County Electorate Total Unnamed: 9 30-50 Above ... WP WRP WVPTFP \ PAID ... 1 49821 31523 NaN NaN NaN ... NaN NaN NaN 2 45525 30148 NaN NaN NaN ... NaN NaN NaN 3 67745 43936 NaN NaN NaN ... NaN NaN NaN 4 68056 48551 NaN NaN NaN ... NaN NaN NaN 5 73445 55196 NaN NaN NaN ... NaN NaN NaN Yorks Young Zeb Party Votes Share Turnout PAID 1 NaN NaN NaN Lab 15416 0.489040 0.632725 2 NaN NaN NaN Con 12513 0.415052 0.662230 3 NaN NaN NaN SNP 24793 0.564298 0.648550 4 NaN NaN NaN SNP 20221 0.416490 0.713398 5 NaN NaN NaN SNP 22949 0.415773 0.751528 [5 rows x 147 columns] </code></pre> <p>The per-constituency results for each party are given in the columns <code>Results.ix[:, 'Unnamed: 9': 'Zeb']</code></p> <p>I can find the winning party (i.e. the party which polled highest number of votes) and the number of votes it polled using:</p> <pre><code>RawResults = Results.ix[:, 'Unnamed: 9': 'Zeb'] Results['Party'] = RawResults.idxmax(axis=1) Results['Votes'] = RawResults.max(axis=1).astype(int) </code></pre> <p>But, I also need to know how many votes the second-place party got (and ideally its index/name). 
So is there any way in pandas to return the <em>second</em> highest value/index in a set of columns for each row?</p>
1
2016-08-21T16:14:55Z
39,066,373
<p>You could just sort your results, such that the first rows will contain the max. Then you can simply use indexing to get the first n places.</p> <pre><code>RawResults = Results.ix[:, 'Unnamed: 9': 'Zeb'].sort_values(by='votes', ascending=False) RawResults.iloc[0, :] # First place RawResults.iloc[1, :] # Second place RawResults.iloc[n, :] # nth place </code></pre>
1
2016-08-21T16:25:41Z
[ "python", "pandas", "numpy", "dataframe" ]
Get (first and) second highest values in pandas columns
39,066,260
<p>I am using pandas to analyse some election results. I have a DF, Results, which has a row for each constituency and columns representing the votes for the various parties (over 100 of them):</p> <pre><code>In[60]: Results.columns Out[60]: Index(['Constituency', 'Region', 'Country', 'ID', 'Type', 'Electorate', 'Total', 'Unnamed: 9', '30-50', 'Above', ... 'WP', 'WRP', 'WVPTFP', 'Yorks', 'Young', 'Zeb', 'Party', 'Votes', 'Share', 'Turnout'], dtype='object', length=147) </code></pre> <p>So...</p> <pre><code>In[63]: Results.head() Out[63]: Constituency Region Country ID Type \ PAID 1 Aberavon Wales Wales W07000049 County 2 Aberconwy Wales Wales W07000058 County 3 Aberdeen North Scotland Scotland S14000001 Burgh 4 Aberdeen South Scotland Scotland S14000002 Burgh 5 Aberdeenshire West &amp; Kincardine Scotland Scotland S14000058 County Electorate Total Unnamed: 9 30-50 Above ... WP WRP WVPTFP \ PAID ... 1 49821 31523 NaN NaN NaN ... NaN NaN NaN 2 45525 30148 NaN NaN NaN ... NaN NaN NaN 3 67745 43936 NaN NaN NaN ... NaN NaN NaN 4 68056 48551 NaN NaN NaN ... NaN NaN NaN 5 73445 55196 NaN NaN NaN ... NaN NaN NaN Yorks Young Zeb Party Votes Share Turnout PAID 1 NaN NaN NaN Lab 15416 0.489040 0.632725 2 NaN NaN NaN Con 12513 0.415052 0.662230 3 NaN NaN NaN SNP 24793 0.564298 0.648550 4 NaN NaN NaN SNP 20221 0.416490 0.713398 5 NaN NaN NaN SNP 22949 0.415773 0.751528 [5 rows x 147 columns] </code></pre> <p>The per-constituency results for each party are given in the columns <code>Results.ix[:, 'Unnamed: 9': 'Zeb']</code></p> <p>I can find the winning party (i.e. the party which polled highest number of votes) and the number of votes it polled using:</p> <pre><code>RawResults = Results.ix[:, 'Unnamed: 9': 'Zeb'] Results['Party'] = RawResults.idxmax(axis=1) Results['Votes'] = RawResults.max(axis=1).astype(int) </code></pre> <p>But, I also need to know how many votes the second-place party got (and ideally its index/name). 
So is there any way in pandas to return the <em>second</em> highest value/index in a set of columns for each row?</p>
1
2016-08-21T16:14:55Z
39,067,057
<p>Here is a NumPy solution:</p> <pre><code>In [120]: df Out[120]: a b c d e f g h 0 1.334444 0.322029 0.302296 -0.841236 -0.360488 -0.860188 -0.157942 1.522082 1 2.056572 0.991643 0.160067 -0.066473 0.235132 0.533202 1.282371 -2.050731 2 0.955586 -0.966734 0.055210 -0.993924 -0.553841 0.173793 -0.534548 -1.796006 3 1.201001 1.067291 -0.562357 -0.794284 -0.554820 -0.011836 0.519928 0.514669 4 -0.243972 -0.048144 0.498007 0.862016 1.284717 -0.886455 -0.757603 0.541992 5 0.739435 -0.767399 1.574173 1.197063 -1.147961 -0.903858 0.011073 -1.404868 6 -1.258282 -0.049719 0.400063 0.611456 0.443289 -1.110945 1.352029 0.215460 7 0.029121 -0.771431 -0.285119 -0.018216 0.408425 -1.458476 -1.363583 0.155134 8 1.427226 -1.005345 0.208665 -0.674917 0.287929 -1.259707 0.220420 -1.087245 9 0.452589 0.214592 -1.875423 0.487496 2.411265 0.062324 -0.327891 0.256577 In [121]: np.sort(df.values)[:,-2:] Out[121]: array([[ 1.33444404, 1.52208164], [ 1.28237078, 2.05657214], [ 0.17379254, 0.95558613], [ 1.06729107, 1.20100071], [ 0.86201603, 1.28471676], [ 1.19706331, 1.57417327], [ 0.61145573, 1.35202868], [ 0.15513379, 0.40842477], [ 0.28792928, 1.42722604], [ 0.48749578, 2.41126532]]) </code></pre> <p>or as a pandas Data Frame:</p> <pre><code>In [122]: pd.DataFrame(np.sort(df.values)[:,-2:], columns=['2nd-largest','largest']) Out[122]: 2nd-largest largest 0 1.334444 1.522082 1 1.282371 2.056572 2 0.173793 0.955586 3 1.067291 1.201001 4 0.862016 1.284717 5 1.197063 1.574173 6 0.611456 1.352029 7 0.155134 0.408425 8 0.287929 1.427226 9 0.487496 2.411265 </code></pre> <p>or a <a href="http://stackoverflow.com/questions/39066260/get-first-and-second-highest-values-in-pandas-columns/39067057?noredirect=1#comment65482994_39067057">faster solution from @Divakar</a>:</p> <pre><code>In [6]: df Out[6]: a b c d e f g h 0 0.649517 -0.223116 0.264734 -1.121666 0.151591 -1.335756 -0.155459 -2.500680 1 0.172981 1.233523 0.220378 1.188080 -0.289469 -0.039150 1.476852 0.736908 2 -1.904024 0.109314 
0.045741 -0.341214 -0.332267 -1.363889 0.177705 -0.892018 3 -2.606532 -0.483314 0.054624 0.979734 0.205173 0.350247 -1.088776 1.501327 4 1.627655 -1.261631 0.589899 -0.660119 0.742390 -1.088103 0.228557 0.714746 5 0.423972 -0.506975 -0.783718 -2.044002 -0.692734 0.980399 1.007460 0.161516 6 -0.777123 -0.838311 -1.116104 -0.433797 0.599724 -0.884832 -0.086431 -0.738298 7 1.131621 1.218199 0.645709 0.066216 -0.265023 0.606963 -0.194694 0.463576 8 0.421164 0.626731 -0.547738 0.989820 -1.383061 -0.060413 -1.342769 -0.777907 9 -1.152690 0.696714 -0.155727 -0.991975 -0.806530 1.454522 0.788688 0.409516 In [7]: a = df.values In [8]: a[np.arange(len(df))[:,None],np.argpartition(-a,np.arange(2),axis=1)[:,:2]] Out[8]: array([[ 0.64951665, 0.26473378], [ 1.47685226, 1.23352348], [ 0.17770473, 0.10931398], [ 1.50132666, 0.97973383], [ 1.62765464, 0.74238959], [ 1.00745981, 0.98039898], [ 0.5997243 , -0.0864306 ], [ 1.21819904, 1.13162068], [ 0.98982033, 0.62673128], [ 1.45452173, 0.78868785]]) </code></pre>
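The question also asks for the name (column index) of the second-place party, which `np.sort` discards. A small sketch of the same idea that keeps track of column names via `np.argsort` — shown on a tiny made-up frame, not the OP's election data:

```python
import numpy as np
import pandas as pd

# hypothetical small frame standing in for the RawResults party columns
df = pd.DataFrame({'a': [1, 5], 'b': [3, 2], 'c': [2, 9]})

# argsort each row ascending; the last two positions index the largest
# and second-largest columns, so order[:, -2] points at the runner-up
order = np.argsort(df.values, axis=1)
second_idx = order[:, -2]

runner_up = pd.DataFrame({
    'Party2': df.columns[second_idx],                     # column name
    'Votes2': df.values[np.arange(len(df)), second_idx],  # its value
}, index=df.index)
```

The same trick extends to third place and beyond with `order[:, -3]` etc.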
2
2016-08-21T17:40:51Z
[ "python", "pandas", "numpy", "dataframe" ]
How to merge two tables as mentioned in the following post?
39,066,282
<p>I have two data sets: 1) Date::</p> <pre><code>01/03/16 00:00:01 01/03/16 00:00:11 01/03/16 00:00:21 01/03/16 00:00:31 01/03/16 00:00:41 01/03/16 00:00:51 01/03/16 00:01:01 01/03/16 00:01:11 01/03/16 00:01:21 ..... </code></pre> <p>until 31/03/16 23:59:58, with each date row having a difference of 10 sec.</p> <p>and</p> <p>2) Start Date::</p> <pre><code>29/02/16 21:58:03 01/03/16 07:07:18 01/03/16 07:07:37 01/03/16 07:07:38 01/03/16 07:07:47 01/03/16 07:10:06 01/03/16 07:10:36 01/03/16 08:46:09 </code></pre> <p>..... </p> <p>End Date::</p> <pre><code>01/03/16 07:07:18 01/03/16 07:07:37 01/03/16 07:07:37 01/03/16 07:07:38 01/03/16 07:09:56 01/03/16 07:10:06 01/03/16 08:46:09 01/03/16 08:46:29 ..... </code></pre> <p>Location::</p> <pre><code>Bedroom Living Room Bathroom Kitchen Bathroom Kitchen Bedroom Living Room Kitchen Bathroom ..... </code></pre> <p>How can I merge these two data sets by time, so that for every start/end time range in the second data set, the rows of the first data set that fall in that range show the corresponding location?</p> <p>E.g. for the first row in the 2nd data set the location is Bedroom from 29/02/16 21:58:03 to 01/03/16 07:07:18, so after joining it should show Bedroom from the first row until the end time, i.e. 01/03/16 07:07:18, in the 1st data set.</p>
-2
2016-08-21T16:16:47Z
39,067,793
<p>you could try this:</p> <pre><code>Option Explicit Sub main() Dim startDateRng As Range, endDateRng As Range, dateRng As Range, locationRng As Range Dim iniCell As Long, endCell As Long, iCell As Long SetRanges startDateRng, endDateRng, dateRng, locationRng For iCell = 1 To startDateRng.Count '&lt;--| iterate through startDateRng values (and corresponding endDateRng ones) iniCell = FindDateIndex(startDateRng(iCell), dateRng, 1) '&lt;--| get the first valid date in DateRng endCell = FindDateIndex(endDateRng(iCell), dateRng, -1) '&lt;--| get the last valid date in DateRng If endCell - iniCell + 1 &gt; 0 Then dateRng(iniCell).Resize(endCell - iniCell + 1).Offset(, 1).value = locationRng(iCell) '&lt;--| if a valid range has been found then write values Next iCell End Sub Function FindDateIndex(rngToSearchFor As Range, rngToSearchIn As Range, indexShift As Long) As Long Dim index As Variant index = Application.Match(rngToSearchFor.Value2, rngToSearchIn, 1) If IsError(index) Then FindDateIndex = 1 Else FindDateIndex = index If indexShift = 1 Then If rngToSearchIn(index) &lt; rngToSearchFor Then FindDateIndex = FindDateIndex + indexShift Else If rngToSearchIn(index) &gt; rngToSearchFor Then FindDateIndex = FindDateIndex + indexShift End If End If End Function Sub SetRanges(startDateRng As Range, endDateRng As Range, dateRng As Range, locationRng As Range) Set startDateRng = SetRange(Worksheets("Start Date")) Set endDateRng = SetRange(Worksheets("End Date")) Set dateRng = SetRange(Worksheets("Date")) Set locationRng = SetRange(Worksheets("Location")) End Sub Function SetRange(ws As Worksheet) As Range With ws Set SetRange = .Range("A1", .Cells(.Rows.Count, 1).End(xlUp)) End With End Function </code></pre>
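Since the question is also tagged python, the same interval join can be sketched in pandas with `merge_asof` (available from pandas 0.19 onwards) — the column names mirror the question, the sample timestamps are made up:

```python
import pandas as pd

dates = pd.DataFrame({'Date': pd.to_datetime(
    ['2016-03-01 10:00', '2016-03-01 10:05', '2016-03-01 10:15'])})
intervals = pd.DataFrame({
    'Start': pd.to_datetime(['2016-03-01 10:00', '2016-03-01 10:12']),
    'End':   pd.to_datetime(['2016-03-01 10:10', '2016-03-01 10:20']),
    'Location': ['Bedroom', 'Kitchen'],
})

# match each Date with the most recent Start at or before it,
# then blank out any row that falls after that interval's End
merged = pd.merge_asof(dates.sort_values('Date'),
                       intervals.sort_values('Start'),
                       left_on='Date', right_on='Start')
merged.loc[merged['Date'] > merged['End'], 'Location'] = None
```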
0
2016-08-21T18:56:55Z
[ "python", "excel", "vba" ]
Theano function not working on a simple, 4-value array
39,066,284
<p>I was working through the theano documentation/tutorial, and the very first example was this: </p> <pre><code>&gt;&gt;&gt; import numpy &gt;&gt;&gt; import theano.tensor as T &gt;&gt;&gt; from theano import function &gt;&gt;&gt; x = T.dscalar('x') &gt;&gt;&gt; y = T.dscalar('y') &gt;&gt;&gt; z = x + y &gt;&gt;&gt; f = function([x, y], z) </code></pre> <p>This seemed simple enough, and so I wrote my own program that expanded on it:</p> <pre><code>import numpy as np import theano.tensor as T from theano import function x = T.dscalar('x') y = T.dscalar('y') z = x + y f = function([x, y], z) print f(2, 3) print np.allclose(f(16.3, 12.1), 28.4) print "" r = (2, 3), (2, 2), (2, 1), (2, 0) for i in r: print i print f(i) </code></pre> <p>And for some reason, it won't iterate:</p> <pre><code>5.0 True (2, 3) Traceback (most recent call last): File "TheanoBase2.py", line 20, in &lt;module&gt; print f(i) File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 786, in __call__ allow_downcast=s.allow_downcast) File "/usr/local/lib/python2.7/dist-packages/theano/tensor/type.py", line 177, in filter data.shape)) TypeError: ('Bad input argument to theano function with name "TheanoBase2.py:9" at index 0(0-based)', 'Wrong number of dimensions: expected 0, got 1 with shape (2,).') </code></pre> <p>Why does <code>print f(2, 3)</code> work and <code>print f(i)</code> not work, when they're the exact same expression. I tried replacing/enclosing the brackets with square brackets, and the result was the same. </p>
0
2016-08-21T16:17:38Z
39,075,731
<p><code>function f</code> takes two scalars as input and returns their sum; each element of <code>r</code>, i.e. (x, y), is a <a href="https://docs.python.org/2/tutorial/datastructures.html#tuples-and-sequences" rel="nofollow">tuple</a>, not a scalar. This should work:</p> <pre><code>import numpy as np import theano.tensor as T from theano import function x = T.dscalar('x') y = T.dscalar('y') z = x + y f = function([x, y], z) print f(2, 3) print np.allclose(f(16.3, 12.1), 28.4) print "" r = (2, 3), (2, 2), (2, 1), (2, 0) for i in r: print i print f(i[0], i[1]) </code></pre>
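A slightly tidier variant is to unpack each tuple with `*`, which works the same way for a compiled Theano function; a plain-Python stand-in for `f` is used here so the idea is easy to try:

```python
# plain-Python stand-in for the compiled Theano function f
def f(x, y):
    return x + y

r = (2, 3), (2, 2), (2, 1), (2, 0)
# f(*pair) unpacks the tuple into the two scalar arguments
results = [f(*pair) for pair in r]
```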
1
2016-08-22T09:22:44Z
[ "python", "arrays", "theano", "brackets", "square-bracket" ]
bitwise left shift in python
39,066,369
<p>I have submitted a code-snippet shown below:</p> <pre><code>def shift(): m=8 low='10100111' lw=int(low,2) lw=lw&lt;&lt;1 l_bin=bin(lw)[2:].zfill(m) </code></pre> <p>Output ==&gt; 101001110(9-bits)</p> <p>while desired output ==&gt; 01001110(8-bits)</p> <p>I understand that left shifting lw causes the integer value to go from 167 to 334. But I want the output to be the desired output.</p>
-1
2016-08-21T16:25:12Z
39,066,466
<p>You have to mask the upper bits if you want to emulate a byte (like it would behave in C)</p> <pre><code>lw=(lw&lt;&lt;1) &amp; 0xFF </code></pre> <p>of course, that is, if you keep <code>m</code> set to 8 or it won't work.</p> <p>If <code>m</code> varies, you can compute (or pre-compute) the mask value like this:</p> <pre><code>mask_value = 2**m-1 lw=(lw&lt;&lt;1) &amp; mask_value </code></pre> <p>(the aim of all this is to prefer arithmetic operations to string opertations which are more CPU/memory intensive)</p>
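Putting the mask together with the question's snippet gives a complete function (hypothetical name, same 8-bit example as the question):

```python
def shift_left(bits):
    m = len(bits)
    # mask with 2**m - 1 so bits shifted above position m-1 are dropped,
    # emulating a fixed-width register
    lw = (int(bits, 2) << 1) & (2**m - 1)
    return bin(lw)[2:].zfill(m)

out = shift_left('10100111')  # '01001110'
```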
2
2016-08-21T16:36:00Z
[ "python", "python-2.7", "bit-manipulation", "bit-shift" ]
Normalising an array of dicts which contain both values directly and other dicts, across whole array
39,066,371
<p>I have a huge array which looks like (example rows):</p> <pre><code>[ { 'value':21, 'openValues':{ 'a':24, 'b':56, 'c':78 } }, { 'value':12, 'openValues':{ 'a':98, 'b':3 } }, { 'value':900, 'openValues':{ 'a':7811, 'b':171, 'c':11211, 'd':4231 } } ] </code></pre> <p>And I want to normalise all the values in each key and the values within each dict within a key to be between 0 and 1. So for example:</p> <p>Here are the calculations to be performed:</p> <pre><code> [{ 'value':(21-12)/(900-12), 'openValues':{'a':(24-24)/(7811-24),'b':(56-3)/(171-3),'c':(78-78)/(11211-78)} }, { 'value':(12-12)/(900-12), 'openValues':{'a':(98-24)/(7811-24),'b':(3-3)/(171-3)} }, { 'value':(900-12)/(900-12), 'openValues':{'a':(7811-24)/(7811-24),'b':(171-3)/(171-3),'c':(11211-78)/(11211-78),'d':(4231-4231)/(4231-4231)} }] </code></pre> <p>As you can see, each <code>value</code> has been normalised (subtract the minimum value and then divide by the range of values), and same with each key-value pair within <code>openValues</code>.</p> <p>How can I do this?</p> <p>I want to find a quicker method than having to create additional max/min/range values and dicts, as this has been my existing method (this is an example for calculating the max and min of the <code>openValues</code> dict:</p> <pre><code> openValuesMin = {} openValuesMax = {} for i, dict in enumerate(array): for property,value in dict['openValues'].items(): if property not in openValuesMax: openValuesMax[property] = 0 if openValuesMax[property]&lt;value: openValuesMax[property]=value if property not in openValuesMin: openValuesMin[property] = 0 if openValuesMin[property]&gt;value: openValuesMin[property] = value openValuesRange = {key: openValuesMax[key] - openValuesMin.get(key, 0) for key in openValuesMax.keys()} </code></pre> <p>Is there a one line solution to normalising everything in this way?</p>
-3
2016-08-21T16:25:34Z
39,067,821
<p>Not sure I've understood your question very well, but assuming you want to normalize between [0-1] considering the min &amp; max values from all possible items in your arrays, here's a possible solution:</p> <pre><code>array = [ { 'value': 21, 'openValues': { 'a': 24, 'b': 56, 'c': 78 } }, { 'value': 12, 'openValues': { 'a': 98, 'b': 3 } }, { 'value': 900, 'openValues': { 'a': 7811, 'b': 171, 'c': 11211, 'd': 4231 } } ] def normalize(v0, v1, t): return float(t - v0) / float(v1 - v0) def f(v0, v1, item): return { "value": normalize(v0, v1, item["value"]), "openValues": { k: normalize(v0, v1, v) for k, v in item["openValues"].iteritems() } } values = sum([[item["value"]] + item["openValues"].values() for item in array], []) v_min, v_max = min(values), max(values) output = [f(v_min, v_max, item) for item in array] print output </code></pre> <p>EDIT:</p> <p>If you want to normalize considering values and openValues separately, you could extend the above code like this:</p> <pre><code>array = [ { 'value': 21, 'openValues': { 'a': 24, 'b': 56, 'c': 78 } }, { 'value': 12, 'openValues': { 'a': 98, 'b': 3 } }, { 'value': 900, 'openValues': { 'a': 7811, 'b': 171, 'c': 11211, 'd': 4231 } } ] def normalize(v0, v1, t): return float(t - v0) / float(v1 - v0) def f(vmin0, vmax0, vmin1, vmax1, item): return { "value": normalize(vmin0, vmax0, item["value"]), "openValues": { k: normalize(vmin1, vmax1, v) for k, v in item["openValues"].iteritems() } } values = [item["value"] for item in array] v_min0, v_max0 = min(values), max(values) values = sum([item["openValues"].values() for item in array], []) v_min1, v_max1 = min(values), max(values) output = [f(v_min0, v_max0, v_min1, v_max1, item) for item in array] print output </code></pre>
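If instead each `openValues` key should be normalised against its own min/max (as in the question's worked example), here is a per-key sketch — helper names are made up, and it returns 0.0 when a key has only one value so the question's 0/0 case is defined:

```python
def norm(v, lo, hi):
    # guard the degenerate case where a key has a single value (0/0 in the question)
    return 0.0 if hi == lo else (v - lo) / float(hi - lo)

def normalise_per_key(array):
    values = [item['value'] for item in array]
    v_lo, v_hi = min(values), max(values)

    # collect every observed value per openValues key
    per_key = {}
    for item in array:
        for k, v in item['openValues'].items():
            per_key.setdefault(k, []).append(v)
    stats = {k: (min(vs), max(vs)) for k, vs in per_key.items()}

    return [{'value': norm(item['value'], v_lo, v_hi),
             'openValues': {k: norm(v, *stats[k])
                            for k, v in item['openValues'].items()}}
            for item in array]

array = [
    {'value': 21, 'openValues': {'a': 24, 'b': 56, 'c': 78}},
    {'value': 12, 'openValues': {'a': 98, 'b': 3}},
    {'value': 900, 'openValues': {'a': 7811, 'b': 171, 'c': 11211, 'd': 4231}},
]
out = normalise_per_key(array)
```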
1
2016-08-21T19:01:01Z
[ "python", "arrays", "dictionary", "normalization" ]
Why am I getting error 404 "Video not found" from the YouTube API?
39,066,378
<p>I'm currently writing a slack/youtube plugin to add posted youtube links to a playlist. I think the code is ok but I am just starting out and don't know if it's oauth or me.</p> <p>Here is the error:</p> <pre><code> Traceback (most recent call last): File "slackapi.py", line 124, in &lt;module&gt; add_video_to_playlist(youtube,vidID) File "slackapi.py", line 88, in add_video_to_playlist 'videoId': vidID File "/usr/local/lib/python3.5/dist-packages/oauth2client/util.py", line 137, in positional_wrapper return wrapped(*args, **kwargs) File "/usr/local/lib/python3.5/dist-packages/googleapiclient/http.py", line 838, in execute raise HttpError(resp, content, uri=self.uri) googleapiclient.errors.HttpError: &lt;HttpError 404 when requesting https://www.googleapis.com/youtube/v3/playlistItems?alt=json&amp;part=snippet returned "Video not found." </code></pre> <p>Here is the code:</p> <pre><code>#!/usr/bin/python # -*- coding: utf-8 -*- import httplib2 import os import sys import time import urllib import re from slackclient import SlackClient # yt cmds below from apiclient.discovery import build from apiclient.errors import HttpError from oauth2client.client import flow_from_clientsecrets from oauth2client.file import Storage from oauth2client.tools import argparser, run_flow from urllib.parse import urlparse, parse_qs # starterbot's ID as an environment variable BOT_ID = os.environ.get('BOT_ID') # constants AT_BOT = '&lt;@' + BOT_ID + '&gt;' EXAMPLE_COMMAND = 'do' # youtube constants plID = 'PL7KBspcfHWhvOPW-merPTB5vIT1KMK6dS' CLIENT_SECRETS_FILE = 'client_secrets.json' YT_COMMAND = 'youtube.' YOUTUBE_SCOPE = "https://www.googleapis.com/auth/youtube" YOUTUBE_API_SERVICE_NAME = "youtube" YOUTUBE_API_VERSION = "v3" # This variable defines a message to display if the CLIENT_SECRETS_FILE is # missing. 
MISSING_CLIENT_SECRETS_MESSAGE = \ """ WARNING: Please configure OAuth 2.0 To make this sample run you will need to populate the client_secrets.json file found at: %s with information from the Developers Console https://console.developers.google.com/ For more information about the client_secrets.json file format, please visit: https://developers.google.com/api-client-library/python/guide/aaa_client_secrets """ \ % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE)) # This OAuth 2.0 access scope allows for full read/write access to the # authenticated user's account. def get_authenticated_service(): flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=YOUTUBE_SCOPE, message=MISSING_CLIENT_SECRETS_MESSAGE) storage = Storage("%s-oauth2.json" % sys.argv[0]) credentials = storage.get() if credentials is None or credentials.invalid: credentials = run(flow, storage) return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, http=credentials.authorize(httplib2.Http())) # instantiate Slack client slack_client = SlackClient(os.environ.get('SLACK_BOT_TOKEN')) def add_video_to_playlist(youtube,vidID): add_video_request=youtube.playlistItems().insert( part="snippet", body={ 'snippet': { 'playlistId': plID, 'resourceId': { 'kind': 'youtube#video', 'videoId': vidID } #'position': 0 } } ).execute() def parse_slack_output(slack_rtm_output): output_list = slack_rtm_output if output_list and len(output_list) &gt; 0: for output in output_list: if output and 'text' in output and YT_COMMAND in output['text']: # return youtube link return output['text'].lower(), \ output['channel'] return None, None if __name__ == '__main__': READ_WEBSOCKET_DELAY = 1 # 1 second delay between reading from firehose if slack_client.rtm_connect(): print ('StarterBot connected and running!') while True: (command, channel) = \ parse_slack_output(slack_client.rtm_read()) if command and channel: youtube = get_authenticated_service() command = command.split('|', 1)[0] pattern = 
r'(?:https?:\/\/)?(?:[0-9A-Z-]+\.)?(?:youtube|youtu|youtube-nocookie)\.(?:com|be)\/(?:watch\?v=|watch\?.+&amp;v=|embed\/|v\/|.+\?v=)?([^&amp;=\n%\?]{11})' vidID = re.findall(pattern, command) response = "Your video ID is " + ' '.join(vidID) slack_client.api_call("chat.postMessage", channel=channel, text=response, as_user=True) add_video_to_playlist(youtube,vidID) #handle_command(command, channel) time.sleep(READ_WEBSOCKET_DELAY) else: print ('Connection failed. Invalid Slack token or bot ID?') </code></pre>
0
2016-08-21T16:26:15Z
39,066,464
<p>A 404 error means that the requested resource cannot be found. You will come across this a lot with APIs. Unfortunately I can't help you further, as I haven't used Python with APIs, but check what you're requesting and sending.</p>
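One concrete thing worth checking in the question's code: `re.findall` returns a *list*, so the request body ends up with `'videoId': ['...']` instead of a plain string, which the API can reject. A minimal illustration (simplified pattern, not the full one from the question):

```python
import re

# simplified pattern just for illustration
pattern = r'v=([0-9A-Za-z_-]{11})'
vid = re.findall(pattern, 'https://www.youtube.com/watch?v=dQw4w9WgXcQ')

# findall gives a list such as ['dQw4w9WgXcQ']; pick one element for the API body
video_id = vid[0] if vid else None
```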
0
2016-08-21T16:35:26Z
[ "python", "youtube-api", "python-3.5" ]
Why am I getting error 404 "Video not found" from the YouTube API?
39,066,378
<p>I'm currently writing a slack/youtube plugin to add posted youtube links to a playlist. I think the code is ok but I am just starting out and don't know if it's oauth or me.</p> <p>Here is the error:</p> <pre><code> Traceback (most recent call last): File "slackapi.py", line 124, in &lt;module&gt; add_video_to_playlist(youtube,vidID) File "slackapi.py", line 88, in add_video_to_playlist 'videoId': vidID File "/usr/local/lib/python3.5/dist-packages/oauth2client/util.py", line 137, in positional_wrapper return wrapped(*args, **kwargs) File "/usr/local/lib/python3.5/dist-packages/googleapiclient/http.py", line 838, in execute raise HttpError(resp, content, uri=self.uri) googleapiclient.errors.HttpError: &lt;HttpError 404 when requesting https://www.googleapis.com/youtube/v3/playlistItems?alt=json&amp;part=snippet returned "Video not found." </code></pre> <p>Here is the code:</p> <pre><code>#!/usr/bin/python # -*- coding: utf-8 -*- import httplib2 import os import sys import time import urllib import re from slackclient import SlackClient # yt cmds below from apiclient.discovery import build from apiclient.errors import HttpError from oauth2client.client import flow_from_clientsecrets from oauth2client.file import Storage from oauth2client.tools import argparser, run_flow from urllib.parse import urlparse, parse_qs # starterbot's ID as an environment variable BOT_ID = os.environ.get('BOT_ID') # constants AT_BOT = '&lt;@' + BOT_ID + '&gt;' EXAMPLE_COMMAND = 'do' # youtube constants plID = 'PL7KBspcfHWhvOPW-merPTB5vIT1KMK6dS' CLIENT_SECRETS_FILE = 'client_secrets.json' YT_COMMAND = 'youtube.' YOUTUBE_SCOPE = "https://www.googleapis.com/auth/youtube" YOUTUBE_API_SERVICE_NAME = "youtube" YOUTUBE_API_VERSION = "v3" # This variable defines a message to display if the CLIENT_SECRETS_FILE is # missing. 
MISSING_CLIENT_SECRETS_MESSAGE = \ """ WARNING: Please configure OAuth 2.0 To make this sample run you will need to populate the client_secrets.json file found at: %s with information from the Developers Console https://console.developers.google.com/ For more information about the client_secrets.json file format, please visit: https://developers.google.com/api-client-library/python/guide/aaa_client_secrets """ \ % os.path.abspath(os.path.join(os.path.dirname(__file__), CLIENT_SECRETS_FILE)) # This OAuth 2.0 access scope allows for full read/write access to the # authenticated user's account. def get_authenticated_service(): flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=YOUTUBE_SCOPE, message=MISSING_CLIENT_SECRETS_MESSAGE) storage = Storage("%s-oauth2.json" % sys.argv[0]) credentials = storage.get() if credentials is None or credentials.invalid: credentials = run(flow, storage) return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, http=credentials.authorize(httplib2.Http())) # instantiate Slack client slack_client = SlackClient(os.environ.get('SLACK_BOT_TOKEN')) def add_video_to_playlist(youtube,vidID): add_video_request=youtube.playlistItems().insert( part="snippet", body={ 'snippet': { 'playlistId': plID, 'resourceId': { 'kind': 'youtube#video', 'videoId': vidID } #'position': 0 } } ).execute() def parse_slack_output(slack_rtm_output): output_list = slack_rtm_output if output_list and len(output_list) &gt; 0: for output in output_list: if output and 'text' in output and YT_COMMAND in output['text']: # return youtube link return output['text'].lower(), \ output['channel'] return None, None if __name__ == '__main__': READ_WEBSOCKET_DELAY = 1 # 1 second delay between reading from firehose if slack_client.rtm_connect(): print ('StarterBot connected and running!') while True: (command, channel) = \ parse_slack_output(slack_client.rtm_read()) if command and channel: youtube = get_authenticated_service() command = command.split('|', 1)[0] pattern = 
r'(?:https?:\/\/)?(?:[0-9A-Z-]+\.)?(?:youtube|youtu|youtube-nocookie)\.(?:com|be)\/(?:watch\?v=|watch\?.+&amp;v=|embed\/|v\/|.+\?v=)?([^&amp;=\n%\?]{11})' vidID = re.findall(pattern, command) response = "Your video ID is " + ' '.join(vidID) slack_client.api_call("chat.postMessage", channel=channel, text=response, as_user=True) add_video_to_playlist(youtube,vidID) #handle_command(command, channel) time.sleep(READ_WEBSOCKET_DELAY) else: print ('Connection failed. Invalid Slack token or bot ID?') </code></pre>
0
2016-08-21T16:26:15Z
39,070,428
<p>Found it. It was the code. I was lower-casing all the output with <code>.lower()</code>, so the video IDs were not correct. Thanks for your help, guys.</p> <p>Also, <code>credentials = run(flow, storage)</code> is deprecated. It should be <code>credentials = run_flow(flow, storage)</code>.</p>
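To keep matching tolerant of URL capitalisation without corrupting the (case-sensitive) video ID, the case-folding can be restricted to the match itself with `re.IGNORECASE`, extracting from the original, un-lowered text — simplified pattern and hypothetical helper name:

```python
import re

# simplified, case-insensitive pattern; the captured ID keeps its original case
PATTERN = re.compile(r'(?:youtube\.com/watch\?v=|youtu\.be/)([0-9A-Za-z_-]{11})',
                     re.IGNORECASE)

def extract_video_id(text):
    m = PATTERN.search(text)  # search the ORIGINAL text, not text.lower()
    return m.group(1) if m else None
```

Lower-casing the whole message first would return a mangled ID, which is exactly the bug described above.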
0
2016-08-22T01:53:05Z
[ "python", "youtube-api", "python-3.5" ]
Reduce number of levels for large categorical variables
39,066,382
<p>Are there some ready-to-use libraries or packages for Python or R to reduce the number of levels of large categorical factors?</p> <p>I want to achieve something similar to <a href="http://stackoverflow.com/questions/28968983/r-binning-categorical-variables">R: &quot;Binning&quot; categorical variables</a>, but encoding into the most frequent top-k factors and "other".</p>
0
2016-08-21T16:26:41Z
39,066,699
<p>Here is an example in <code>R</code> using <code>data.table</code> a bit, but it should be easy without <code>data.table</code> also.</p> <pre><code># Load data.table require(data.table) # Some data set.seed(1) dt &lt;- data.table(type = factor(sample(c("A", "B", "C"), 10e3, replace = T)), weight = rnorm(n = 10e3, mean = 70, sd = 20)) # Decide the minimum frequency a level needs... min.freq &lt;- 3350 # Levels that don't meet minimum frequency (using data.table) fail.min.f &lt;- dt[, .N, type][N &lt; min.freq, type] # Call all these levels "Other" levels(dt$type)[fail.min.f] &lt;- "Other" </code></pre>
1
2016-08-21T17:03:42Z
[ "python", "encoding", "categorical-data", "binning" ]
Reduce number of levels for large categorical variables
39,066,382
<p>Are there some ready-to-use libraries or packages for Python or R to reduce the number of levels of large categorical factors?</p> <p>I want to achieve something similar to <a href="http://stackoverflow.com/questions/28968983/r-binning-categorical-variables">R: &quot;Binning&quot; categorical variables</a>, but encoding into the most frequent top-k factors and "other".</p>
0
2016-08-21T16:26:41Z
39,066,722
<p>Here's an approach using <code>base</code> R:</p> <pre><code>set.seed(123) d &lt;- data.frame(x = sample(LETTERS[1:5], 1e5, prob = c(.4, .3, .2, .05, .05), replace = TRUE)) recat &lt;- function(x, new_cat, threshold) { x &lt;- as.character(x) xt &lt;- prop.table(table(x)) factor(ifelse(x %in% names(xt)[xt &gt;= threshold], x, new_cat)) } d$new_cat &lt;- recat(d$x, "O", 0.1) table(d$new_cat) # A B C O # 40132 29955 19974 9939 </code></pre>
0
2016-08-21T17:06:07Z
[ "python", "encoding", "categorical-data", "binning" ]
Reduce number of levels for large categorical variables
39,066,382
<p>Are there some ready-to-use libraries or packages for Python or R to reduce the number of levels of large categorical factors?</p> <p>I want to achieve something similar to <a href="http://stackoverflow.com/questions/28968983/r-binning-categorical-variables">R: &quot;Binning&quot; categorical variables</a>, but encoding into the most frequent top-k factors and "other".</p>
0
2016-08-21T16:26:41Z
39,947,700
<p>The R package <code>forcats</code> has <code>fct_lump</code> for this purpose.</p> <pre><code>fct_lump(f, n) </code></pre> <p>Where <code>f</code> is the factor and <code>n</code> is the number of most common levels to be preserved. Others are recoded to <code>Other</code>.</p>
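On the Python side of the question, the same top-n lump is a few lines of pandas (hypothetical helper name; keeps the k most frequent levels and recodes the rest):

```python
import pandas as pd

def lump_top_k(s, k, other='Other'):
    # keep the k most frequent levels, recode everything else to `other`
    top = s.value_counts().nlargest(k).index
    return s.where(s.isin(top), other)

s = pd.Series(['A', 'A', 'A', 'B', 'B', 'C', 'D'])
lumped = lump_top_k(s, 2)  # A, A, A, B, B, Other, Other
```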
1
2016-10-09T19:32:28Z
[ "python", "encoding", "categorical-data", "binning" ]
Selenium webdriver with Python on chrome - Scroll to the exact middle of an element
39,066,399
<p>I'm trying to click on an element solely by getting it with XPath. I get an exception that the element is un-clickable in the given location.</p> <p>I know for sure that the center of the element is clickable, so how do I get the exact middle (x,y) of the element and click it with Selenium using Python?</p> <p>EDIT:</p> <p>I've found the solution for this issue:</p> <pre><code>driver.execute_script("arguments[0].scrollIntoView(true);", element) time.sleep(0.5) element.click() </code></pre> <p>The time.sleep was the missing link.</p>
0
2016-08-21T16:28:36Z
39,067,206
<p>Actually Selenium itself tries to click on the element at its center position, so this exception normally occurs when the target element is overlaid by another element, due to the size of the window or some other reason, e.g. it is hidden behind a scroll bar.</p> <p>So if you want to bring the element into the viewport so that you can click on it, try the <a href="https://developer.mozilla.org/en/docs/Web/API/Element/scrollIntoView" rel="nofollow"><code>scrollIntoView()</code></a> method, which scrolls the current element into the visible area of the browser window, as below:</p> <pre><code>element = driver.find_element.. driver.execute_script("arguments[0].scrollIntoView()", element) </code></pre>
0
2016-08-21T17:54:33Z
[ "python", "selenium-webdriver", "selenium-chromedriver" ]
virtualenvwrapper: Command "python setup.py egg_info" failed with error code 1
39,066,431
<p>When trying to install <strong>virtualenvwrapper</strong> on my mac os v10.11 using:</p> <blockquote> <p>sudo pip install virtualenvwrapper</p> </blockquote> <p>I got the following error message:</p> <blockquote> <p>Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-ktzs4x/virtualenvwrapper/</p> </blockquote> <pre><code>Here are my logs: Collecting virtualenvwrapper Downloading virtualenvwrapper-4.7.2.tar.gz (90kB) 100% |████████████████████████████████| 92kB 243kB/s Complete output from command python setup.py egg_info: ERROR:root:Error parsing Traceback (most recent call last): File "/usr/local/lib/python2.7/site-packages/pbr/core.py", line 111, in pbr attrs = util.cfg_to_args(path, dist.script_args) File "/usr/local/lib/python2.7/site-packages/pbr/util.py", line 264, in cfg_to_args wrap_commands(kwargs) File "/usr/local/lib/python2.7/site-packages/pbr/util.py", line 576, in wrap_commands for cmd, _ in dist.get_command_list(): File "/usr/local/lib/python2.7/site-packages/setuptools/dist.py", line 530, in get_command_list return _Distribution.get_command_list(self) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 772, in get_command_list klass = self.get_command_class(cmd) File "/usr/local/lib/python2.7/site-packages/setuptools/dist.py", line 514, in get_command_class return _Distribution.get_command_class(self, command) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 815, in get_command_class __import__ (module_name) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/check.py", line 13, in &lt;module&gt; from docutils.utils import Reporter File "/usr/local/lib/python2.7/site-packages/docutils/utils/__init__.py", line 20, in &lt;module&gt; import docutils.io File "/usr/local/lib/python2.7/site-packages/docutils/io.py", line 18, in 
&lt;module&gt; from docutils.utils.error_reporting import locale_encoding, ErrorString, ErrorOutput File "/usr/local/lib/python2.7/site-packages/docutils/utils/error_reporting.py", line 47, in &lt;module&gt; locale_encoding = locale.getlocale()[1] or locale.getdefaultlocale()[1] File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py", line 543, in getdefaultlocale return _parse_localename(localename) File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/locale.py", line 475, in _parse_localename raise ValueError, 'unknown locale: %s' % localename ValueError: unknown locale: UTF-8 error in setup command: Error parsing /private/tmp/pip-build-ktzs4x/virtualenvwrapper/setup.cfg: ValueError: unknown locale: UTF-8 ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-ktzs4x/virtualenvwrapper/ </code></pre>
1
2016-08-21T16:31:59Z
39,066,467
<p>You can try the solution explained <a href="https://coderwall.com/p/-k_93g/mac-os-x-valueerror-unknown-locale-utf-8-in-python" rel="nofollow">here</a></p> <blockquote> <p>add these lines to your ~/.bash_profile: <code> export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8</code></p> </blockquote>
3
2016-08-21T16:36:08Z
[ "python", "django", "web", "virtualenv", "virtualenvwrapper" ]
Scrapy crawls only one page
39,066,445
<p>I have made a Scrapy spider that I would like to crawl all the pages, but it only crawls to the second page and then stops. It seems that within the <code>if next_page:</code> block the URL only changes to the second page and then sticks there. I think I am misunderstanding how HTTP responses work, because it seems to only grab the next-page link on the starting page.</p> <pre><code>import scrapy
from tutorial.items import TriniCarsItem

class TCS(scrapy.Spider):
    name = "TCS"
    allowed_domains = ["TCS.com"]
    start_urls = [
        "http://www.TCS.com/database/featuredcarsList.php"]

    def parse(self, response):
        for href in response.css("table &gt; tr &gt; td &gt; a::attr('href')"):
            url = response.urljoin(href.extract())
            yield(scrapy.Request(url, callback=self.parse_dir_contents))
        next_page = response.css("body &gt; table &gt; tr &gt; td &gt; font &gt; b &gt; a::attr('href')")
        if next_page:
            url = response.urljoin(next_page[0].extract())
            print("THIS IS THE URL =----------------------------- " + url)
            yield(scrapy.Request(url, self.parse))

    def parse_dir_contents(self, response):
        for sel in response.xpath('//table[@width="543"]/tr/td/table/tr/td[2]/table'):
            item = TCSItem()
            item['id'] = sel.xpath('tr[1]/td[1]//text()').extract()
            item['make'] = sel.xpath('tr[3]/td[2]//text()').extract()
            item['model'] = sel.xpath('tr[4]/td[2]//text()').extract()
            item['year'] = sel.xpath('tr[5]/td[2]//text()').extract()
            item['colour'] = sel.xpath('tr[6]/td[2]//text()').extract()
            item['engine_size'] = sel.xpath('tr[7]/td[2]//text()').extract()
            item['mileage'] = sel.xpath('tr[8]/td[2]//text()').extract()
            item['transmission'] = sel.xpath('tr[9]/td[2]//text()').extract()
            item['features'] = sel.xpath('tr[11]/td[2]//text()').extract()
            item['additional_info'] = sel.xpath('tr[12]/td[2]//text()').extract()
            item['contact_name'] = sel.xpath('tr[14]/td[2]//text()').extract()
            item['contact_phone'] = sel.xpath('tr[15]/td[2]//text()').extract()
            item['contact_email'] = sel.xpath('tr[16]/td[2]//text()').extract()
            item['asking_price'] = sel.xpath('tr[17]/td[2]//text()').extract()
            item['date_added'] = sel.xpath('tr[19]/td[2]//text()').extract()
            item['page_views'] = sel.xpath('tr[20]/td[2]//text()').extract()
            #print(make, model, year, colour, engine_size, mileage, transmission, features,
            #additional_info, contact_name, contact_phone, contact_email, asking_price, date_added,
            #page_views)
            yield(item)
</code></pre>
0
2016-08-21T16:33:36Z
39,069,916
<p>On the 2nd page, the 1st link (the one you choose) is the one pointing back to the previous page. Just send all the links in order and let Scrapy's request de-duplicator cancel out any duplicates:</p> <pre><code>    if next_page:
        for i in next_page:
            url = response.urljoin(i.extract())
            print("THIS IS THE URL =----------------------------- " + url)
            yield(scrapy.Request(url, self.parse))
</code></pre> <p>P.S. In your case, consider also the significantly easier and massively parallel alternative of generating all the page URLs up front:</p> <pre><code>start_urls = [
    "http://www.trinicarsforsale.com/database/featuredcarsList.php?page=%d" % i
    for i in xrange(1, 460)]

def parse(self, response):
    return self.parse_dir_contents(response)
</code></pre>
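If it helps to see the pagination idea outside of Scrapy, the list-comprehension approach can be checked as plain Python. Note this is a sketch: the answer's snippet uses Python 2's `xrange`; in Python 3 that becomes `range`, and the upper bound (460 here, carried over from the answer) is exclusive and must match the real site's page count.

```python
# Python 3 version of the pre-generated URL list. range(1, 460) yields
# pages 1 through 459 -- the exact bound is an assumption about the site.
BASE = "http://www.trinicarsforsale.com/database/featuredcarsList.php?page=%d"

start_urls = [BASE % i for i in range(1, 460)]

print(len(start_urls))   # 459
print(start_urls[0])     # ...page=1
print(start_urls[-1])    # ...page=459
```

With each page as its own start URL, Scrapy fetches them concurrently instead of walking the "next" links one at a time.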
1
2016-08-22T00:07:07Z
[ "python", "web-scraping", "scrapy" ]
Create a dicom print server using python
39,066,485
<p>I am trying to create a DICOM print server using Python. As a client, I am using the QuantorMed demo software, which can act as scanning software for an X-ray machine. So far I have managed to create a PACS server using pynetdicom and have received DCM data from QuantorMed. But that doesn't register as a print server (I guess) and only works when I use the option to send my study to some PACS server, not when I use the print option in that software. Now my question is: is there any library that can help me create a print server/DICOM print server?</p>
1
2016-08-21T16:37:54Z
39,076,245
<p>You can try dcmtk, which has a print SCP:</p> <p><a href="http://dicom.offis.de/dcmtk.php.en" rel="nofollow">dcmtk</a></p>
0
2016-08-22T09:45:48Z
[ "python", "dicom" ]
Django: updated_at/created_at fields have TZ info are "too" precise
39,066,524
<p>My env: Django 1.9.7, Python 3.5.2</p> <p>I use the following model code to enforce values in the <code>updated_at</code> and <code>created_at</code> fields in my model class.</p> <pre><code>class MyObject(models.Model):
    ...
    created_at = models.DateTimeField(db_index=True, auto_now_add=True)
    updated_at = models.DateTimeField(db_index=True, auto_now=True)
</code></pre> <p>When creating new model objects, these fields in my database look like this:</p> <pre><code>|          created_at           |          updated_at           |
+-------------------------------+-------------------------------+
| 2016-08-20 15:21:34.959854-04 | 2016-08-20 15:21:34.959924-04 |
| 2016-08-20 15:21:34.977791-04 | 2016-08-20 15:21:34.97785-04  |
| 2016-08-20 15:21:34.979975-04 | 2016-08-20 15:21:34.980013-04 |
| 2016-08-20 15:21:34.981981-04 | 2016-08-20 15:21:34.982019-04 |
| 2016-08-20 15:21:34.983878-04 | 2016-08-20 15:21:34.983917-04 |
| 2016-08-20 15:21:34.985832-04 | 2016-08-20 15:21:34.98587-04  |
| 2016-08-20 15:21:34.987758-04 | 2016-08-20 15:21:34.987796-04 |
| 2016-08-20 15:21:34.989791-04 | 2016-08-20 15:21:34.989855-04 |
</code></pre> <p>It's great to know I can measure things up to the microsecond, but ideally I would like those fields to be accurate only up to the second. I don't really need more than that, and I don't need the TZ in there.</p> <pre><code>|      created_at     |      updated_at     |
+---------------------+---------------------+
| 2016-08-20 15:21:34 | 2016-08-20 15:21:34 |
| 2016-08-20 15:21:34 | 2016-08-20 15:21:34 |
| 2016-08-20 15:21:34 | 2016-08-20 15:21:34 |
</code></pre> <p>In my project's <code>settings.py</code> I have this defined:</p> <pre><code>USE_TZ = False
</code></pre> <p>and yet the time zone still appears in the database.</p> <p>My questions are:</p> <ol> <li>How can I make sure I have a YYYY-MM-DD HH:MM:SS output in my <code>updated_at</code> and <code>created_at</code> fields?</li> <li>If removing the microseconds isn't possible - how do I remove only the TZ from the timestamp?</li> <li>If I do manage to do it -- will there be any negative implications of not having the full timestamp in there? I.e. comparison functions already existing in Python that expect it, etc.</li> </ol>
0
2016-08-21T16:43:23Z
39,066,993
<p>You can define your own default value (script below), but with <code>auto_now</code> you cannot change anything, because the implementation doesn't offer more.</p> <pre><code>import datetime

def default_datetime():
    now = datetime.datetime.now()
    now = now.replace(microsecond=0)  # replace() returns a new datetime; it must be assigned
    return now

class MyObject(models.Model):
    ...
    created_at = models.DateTimeField(db_index=True, default=default_datetime)
    updated_at = models.DateTimeField(db_index=True, null=False)

my_object = MyObject(
    ...,
    updated_at=default_datetime()
)
</code></pre> <p>From the Django notes:</p> <blockquote> <p>The auto_now and auto_now_add options will always use the date in the default timezone at the moment of creation or update. If you need something different, you may want to consider simply using your own callable default or overriding save() instead of using auto_now or auto_now_add; or using a DateTimeField instead of a DateField and deciding how to handle the conversion from datetime to date at display time.</p> </blockquote>
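One subtlety worth checking in isolation (no Django required): `datetime` objects are immutable, so `replace(microsecond=0)` returns a new object rather than modifying the original in place, and the result has to be assigned. This small sketch demonstrates the truncation:

```python
from datetime import datetime

# a fixed timestamp so the behaviour is reproducible
stamp = datetime(2016, 8, 20, 15, 21, 34, 959854)

truncated = stamp.replace(microsecond=0)

print(stamp.microsecond)      # 959854 -- the original object is untouched
print(truncated.microsecond)  # 0
print(truncated.strftime("%Y-%m-%d %H:%M:%S"))  # 2016-08-20 15:21:34
```

For display purposes, `strftime("%Y-%m-%d %H:%M:%S")` alone already answers question 1, independently of what precision the database column stores.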
0
2016-08-21T17:34:26Z
[ "python", "django-models" ]
OpenCV, Python and Raspberry Pi
39,066,538
<p>I have a problem concerning Python and OpenCV programming on the Raspberry Pi. I want to track an object in video and get the position of that object.</p> <p>I wrote this code. I tried using HoughCircles, but I get only one tracked picture. I want to keep the tracking constant. I want to track a red triangle.</p> <pre><code>from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
import numpy as np
from matplotlib import pyplot as plt

camera=PiCamera()
camera.resolution=(320,240)
camera.framerate=32
rawCapture=PiRGBArray(camera,size=(320,240))
time.sleep(0.1)

for frame in camera.capture_continuous(rawCapture,format="bgr",use_video_port=True):
    image=frame.array
    hsv=cv2.cvtColor(image,cv2.COLOR_BGR2HSV)
    lower_red=np.array([150,150,1])
    upper_red=np.array([180,255,255])
    mask=cv2.inRange(hsv,lower_red,upper_red)
    res=cv2.bitwise_and(image,image,mask=mask)
    edges=cv2.Canny(res,100,200)
    contours,j=cv2.findContours(edges,1,2)
    for cnt in contours:
        approx=cv2.approxPolyDP(cnt,0.01*cv2.arcLength(cnt,True),True)
        if len(approx)==3:
            print "triangle"
    cv2.imshow("Original_slika",edges)
    #cv2.imshow("Samo_crvena",res)
    key=cv2.waitKey(1) &amp; 0xFF
    rawCapture.truncate(0)
    if key == ord("q"):
        break
</code></pre>
-1
2016-08-21T16:44:48Z
39,376,317
<p>I succeeded in tracking the center of the triangle, but now I have a different problem: my servo doesn't respond to the duty cycle I'm sending it. Can someone please help?</p> <pre><code>from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2
import numpy as np
from matplotlib import pyplot as plt
import RPi.GPIO as GPIO

camera=PiCamera()
camera.resolution=(320,256)
camera.framerate=32
rawCapture=PiRGBArray(camera,size=(320,256))

GPIO.setmode(GPIO.BOARD)
GPIO.setup(13,GPIO.OUT)
pwm=GPIO.PWM(13,50)
pwm.start(7.9)

time.sleep(0.1)

for frame in camera.capture_continuous(rawCapture,format="bgr",use_video_port=True):
    image=frame.array
    hsv=cv2.cvtColor(image,cv2.COLOR_BGR2HSV)
    lower_red=np.array([150,150,1])
    upper_red=np.array([180,255,255])
    mask=cv2.inRange(hsv,lower_red,upper_red)
    res=cv2.bitwise_and(image,image,mask=mask)
    edges=cv2.Canny(res,100,200)
    contours,j=cv2.findContours(edges,1,2)
    for cnt in contours:
        approx=cv2.approxPolyDP(cnt,0.04*cv2.arcLength(cnt,True),True)
        if len(approx)==3:
            M=cv2.moments(cnt)
            cx=int(M['m10']/M['m00'])
            cy=int(M['m01']/M['m00'])
            dc=12.4+(((float(cx)-86.)/154.)*(-8.9))  # &lt;&lt;&lt; Here I get the good duty cycle
            print '%d' %dc
            pwm.ChangeDutyCycle(dc)  # &lt;&lt;&lt; Here is the problem: this command doesn't change the servo position
            print '%d'' x*' '%d'' y*' % (cx ,cy)
        else:
            None
    cv2.imshow("Original_slika",edges)
    #cv2.imshow("Samo_crvena",res)
    key=cv2.waitKey(1) &amp; 0xFF
    pwm.stop()
    GPIO.cleanup()
    rawCapture.truncate(0)
    if key == ord("q"):
        break
</code></pre>
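The pixel-to-duty-cycle mapping in the snippet can be pulled out into a pure function and sanity-checked off the Pi, with no camera or GPIO attached. The constants 12.4, 86, 154 and -8.9 are taken from the snippet; whether they match a particular servo's pulse range is an assumption to verify on the hardware. Also worth checking: `pwm.stop()` and `GPIO.cleanup()` appear to run inside the frame loop, which would shut the PWM down after the first frame.

```python
def duty_cycle_from_x(cx, x_min=86.0, x_span=154.0,
                      dc_start=12.4, dc_span=-8.9):
    """Linearly map a pixel x-coordinate to a PWM duty cycle (percent).

    Constants mirror the snippet above; they are assumptions, not a
    calibrated servo profile.
    """
    return dc_start + ((float(cx) - x_min) / x_span) * dc_span

# At the left edge of the mapped range the duty cycle is dc_start;
# at x_min + x_span it is dc_start + dc_span.
print(duty_cycle_from_x(86))    # 12.4
print(duty_cycle_from_x(240))   # 3.5
```

Keeping the mapping as a separate function makes it easy to clamp the output to the servo's safe range before ever calling `ChangeDutyCycle`.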
0
2016-09-07T17:49:04Z
[ "python", "opencv", "numpy", "raspberry-pi2" ]
Python: How to remove duplicate words in a string that are not next to each other?
39,066,559
<p>In the example below, I need to remove only the third "animale", which is alone in the string. How can I do that?</p> <pre><code>a = 'animale animale eau toilette animale'
</code></pre> <p>Second "animale": don't remove</p> <p>Third "animale": remove</p>
1
2016-08-21T16:47:40Z
39,066,704
<pre><code>a = "animale animale eau toilette animale"
words = a.split()
cleaned_words = []
skip = False

for i in range(len(words)):
    word = words[i]
    print(word)
    if skip:
        cleaned_words.append(word)
        skip = False
    try:
        next_word = words[i+1]
        print(next_word)
    except IndexError:
        break
    if word == next_word:
        cleaned_words.append(word)
        skip = True
        continue
    if word not in cleaned_words:
        cleaned_words.append(word)

print(cleaned_words)
</code></pre> <p>Quite an ugly, rough solution, but it gets the job done.</p>
0
2016-08-21T17:04:03Z
[ "python", "string" ]
Python: How to remove duplicate words in a string that are not next to each other?
39,066,559
<p>In the example below, I need to remove only the third "animale", which is alone in the string. How can I do that?</p> <pre><code>a = 'animale animale eau toilette animale'
</code></pre> <p>Second "animale": don't remove</p> <p>Third "animale": remove</p>
1
2016-08-21T16:47:40Z
39,066,817
<p>If I understand your question correctly, you want to remove any occurrences of words that are duplicates but not adjacent. I think this solution works for that:</p> <pre><code>from collections import defaultdict

def remove_duplicates(s):
    result = []
    word_counts = defaultdict(int)
    words = s.split()
    # count the frequency of each word
    for word in words:
        word_counts[word] += 1
    # loop through all words, and only add to result if either it occurs only once
    # or occurs more than once and the next word is the same as the current word.
    for i in range(len(words)-1):
        curr_word = words[i]
        if word_counts[curr_word] &gt; 1:
            if words[i+1] == curr_word:
                result.append(curr_word)
                result.append(curr_word)
                word_counts[curr_word] = -1  # mark as -1 so as not to add again
                i += 1  # skip the next word by incrementing i manually because it has already been added
            # if there are only two occurrences of the word left but they aren't adjacent,
            # add one and mark the counts so you don't add it again.
            elif word_counts[curr_word] &lt; 3:
                result.append(curr_word)
                word_counts[curr_word] = -1  # mark as -1 so as not to add again
            # not adjacent but more than 2 occurrences left so decrement number of occurrences left
            else:
                word_counts[curr_word] -= 1
        elif word_counts[curr_word] == 1:
            result.append(curr_word)
            word_counts[curr_word] = -1
    # Fix off by one error by checking last index
    if word_counts[words[-1]] == 1:
        result.append(words[-1])
    return ' '.join(result)
</code></pre> <p>I think this works for any case where the repeated words aren't adjacent, including @Dartmouth's example of 'animale animale eau toilette animale eau eau'.</p> <p>Sample inputs and outputs:</p> <pre><code>                       Inputs                                         Outputs
=====================================================    ==========================================
'animale animale eau toilette animale'                ----&gt; 'animale animale eau toilette'
'animale animale eau toilette animale eau eau'        ----&gt; 'animale animale toilette eau eau'
'animale eau toilette animale eau eau'                ----&gt; 'animale toilette eau eau'
'animale eau toilette animale eau de eau de toilette' ----&gt; 'animale toilette eau de'
'animale animale eau toilette animale eau eau compte' ----&gt; 'animale animale toilette eau eau compte'
</code></pre>
0
2016-08-21T17:15:33Z
[ "python", "string" ]
Python: How to remove duplicate words in a string that are not next to each other?
39,066,559
<p>In the example below, I need to remove only the third "animale", which is alone in the string. How can I do that?</p> <pre><code>a = 'animale animale eau toilette animale'
</code></pre> <p>Second "animale": don't remove</p> <p>Third "animale": remove</p>
1
2016-08-21T16:47:40Z
39,067,545
<p>How about this:</p> <pre><code>from collections import defaultdict

def remove_no_adjacent_duplicates(string):
    position = defaultdict(list)
    words = string.split()
    for i,w in enumerate(words):
        position[w].append(i)
    for w,pos_list in position.items():
        adjacent = set()
        for i in range(1,len(pos_list)):
            if pos_list[i-1] +1 == pos_list[i]:
                adjacent.update( (pos_list[i-1],pos_list[i]) )
        if adjacent:
            position[w] = adjacent
        else:
            position[w] = pos_list[:1]
    return " ".join( w for i,w in enumerate(words) if i in position[w] )

print( remove_no_adjacent_duplicates('animale animale eau toilette animale') )
print( remove_no_adjacent_duplicates('animale animale eau toilette animale eau eau' ) )
print( remove_no_adjacent_duplicates('animale eau toilette animale eau eau' ) )
print( remove_no_adjacent_duplicates('animale eau toilette animale eau de eau de toilette' ) )
</code></pre> <p>output</p> <pre><code>animale animale eau toilette
animale animale toilette eau eau
animale toilette eau eau
animale eau toilette de
</code></pre> <p>explanation</p> <p>First I record the position of each word in the <code>position</code> dict. Then, for each word, I check whether any of its positions are adjacent; if so, I save both positions in a set. When that is finished, if any adjacent positions were found I replace the word's position list with that set; otherwise I drop all of its saved positions except the first. Finally, I reconstruct the string from the surviving positions.</p>
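For readers who want the same behaviour with less index bookkeeping, here is an alternative sketch (not taken from any of the answers) built on `itertools.groupby`: one pass finds which words ever occur as an adjacent run, and a second pass keeps runs intact, keeps the first occurrence of words that never form a run, and drops the rest. It reproduces the sample outputs shown in the answers.

```python
from itertools import groupby

def remove_lonely_duplicates(s):
    # collapse the sentence into (word, run_length) pairs
    runs = [(w, len(list(g))) for w, g in groupby(s.split())]
    # words that appear somewhere as an adjacent run of 2+ are "protected"
    has_adjacent_run = {w for w, n in runs if n > 1}
    out, seen = [], set()
    for w, n in runs:
        if n > 1:
            out.extend([w] * n)          # keep adjacent duplicates as-is
        elif w not in has_adjacent_run and w not in seen:
            out.append(w)                # first occurrence of a "lonely" word
        seen.add(w)
    return " ".join(out)

print(remove_lonely_duplicates('animale animale eau toilette animale'))
# animale animale eau toilette
print(remove_lonely_duplicates('animale animale eau toilette animale eau eau'))
# animale animale toilette eau eau
```

`groupby` does the run detection for free, which is what makes the two passes short.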
1
2016-08-21T18:29:22Z
[ "python", "string" ]
Python: How to remove duplicate words in a string that are not next to each other?
39,066,559
<p>In the example below, I need to remove only the third "animale", which is alone in the string. How can I do that?</p> <pre><code>a = 'animale animale eau toilette animale'
</code></pre> <p>Second "animale": don't remove</p> <p>Third "animale": remove</p>
1
2016-08-21T16:47:40Z
39,068,100
<p>This one works for both:</p> <p><code>'animale animale eau toilette animale'</code></p> <p>and</p> <p><code>'animale animale eau toilette animale eau eau'</code></p> <p>Here's the code:</p> <pre><code>from collections import Counter

def cleanup(words):
    splitted = words.split()
    counter = Counter(splitted)
    more_than_one = [x for x in counter.keys() if counter[x] &gt; 1]
    orphan_indexes = []
    before = True
    for i in range(len(splitted)):
        if i == len(splitted):
            break
        if i &gt; 0:
            before = splitted[i] != splitted[i-1]
        if i+1 &lt;= len(splitted):
            try:
                after = splitted[i] != splitted[i+1]
            except IndexError:
                after = True
        if before and after:
            if splitted[i] in more_than_one:
                orphan_indexes.append(i)
    return ' '.join([
        item for i, item in enumerate(splitted)
        if i not in orphan_indexes
    ])

print cleanup('animale animale eau toilette animale')
print cleanup('animale animale eau toilette animale eau eau')
</code></pre> <p>Result:</p> <pre><code>animale animale eau toilette
animale animale toilette eau eau
</code></pre>
0
2016-08-21T19:30:24Z
[ "python", "string" ]
Scikit-Learn: Std.Error, p-Value from LinearRegression
39,066,567
<p>I've been trying to get the standard error &amp; p-values by using LR from scikit-learn, but with no success.</p> <p>I ended up finding this <a href="https://regressors.readthedocs.io/en/latest/usage.html" rel="nofollow">article</a>, but its std errors &amp; p-values do not match those from the statsmodels.api OLS method:</p> <pre><code>import numpy as np
from sklearn import datasets
from sklearn import linear_model
import regressor
import statsmodels.api as sm

boston = datasets.load_boston()
which_betas = np.ones(13, dtype=bool)
which_betas[3] = False
X = boston.data[:,which_betas]
y = boston.target

#scikit + regressor stats
ols = linear_model.LinearRegression()
ols.fit(X,y)
xlables = boston.feature_names[which_betas]
regressor.summary(ols, X, y, xlables)

# statsmodel
x2 = sm.add_constant(X)
models = sm.OLS(y,x2)
result = models.fit()
print result.summary()
</code></pre> <p>Output as follows:</p> <pre><code>Residuals:
     Min      1Q  Median      3Q      Max
-26.3743 -1.9207  0.6648  2.8112  13.3794

Coefficients:
              Estimate  Std. Error  t value   p value
_intercept   36.925033    4.915647   7.5117  0.000000
CRIM         -0.112227    0.031583  -3.5534  0.000416
ZN            0.047025    0.010705   4.3927  0.000014
INDUS         0.040644    0.055844   0.7278  0.467065
NOX         -17.396989    3.591927  -4.8434  0.000002
RM            3.845179    0.272990  14.0854  0.000000
AGE           0.002847    0.009629   0.2957  0.767610
DIS          -1.485557    0.180530  -8.2289  0.000000
RAD           0.327895    0.061569   5.3257  0.000000
TAX          -0.013751    0.001055 -13.0395  0.000000
PTRATIO      -0.991733    0.088994 -11.1438  0.000000
B             0.009827    0.001126   8.7256  0.000000
LSTAT        -0.534914    0.042128 -12.6973  0.000000
---
R-squared:  0.73547,    Adjusted R-squared:  0.72904
F-statistic: 114.23 on 12 features

                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.735
Model:                            OLS   Adj. R-squared:                  0.729
Method:                 Least Squares   F-statistic:                     114.2
Date:                Sun, 21 Aug 2016   Prob (F-statistic):          7.59e-134
Time:                        21:56:26   Log-Likelihood:                -1503.8
No. Observations:                 506   AIC:                             3034.
Df Residuals:                     493   BIC:                             3089.
Df Model:                          12
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P&gt;|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
const         36.9250      5.148      7.173      0.000        26.811    47.039
x1            -0.1122      0.033     -3.405      0.001        -0.177    -0.047
x2             0.0470      0.014      3.396      0.001         0.020     0.074
x3             0.0406      0.062      0.659      0.510        -0.081     0.162
x4           -17.3970      3.852     -4.516      0.000       -24.966    -9.828
x5             3.8452      0.421      9.123      0.000         3.017     4.673
x6             0.0028      0.013      0.214      0.831        -0.023     0.029
x7            -1.4856      0.201     -7.383      0.000        -1.881    -1.090
x8             0.3279      0.067      4.928      0.000         0.197     0.459
x9            -0.0138      0.004     -3.651      0.000        -0.021    -0.006
x10           -0.9917      0.131     -7.547      0.000        -1.250    -0.734
x11            0.0098      0.003      3.635      0.000         0.005     0.015
x12           -0.5349      0.051    -10.479      0.000        -0.635    -0.435
==============================================================================
Omnibus:                      190.837   Durbin-Watson:                   1.015
Prob(Omnibus):                  0.000   Jarque-Bera (JB):              897.143
Skew:                           1.619   Prob(JB):                    1.54e-195
Kurtosis:                       8.663   Cond. No.                     1.51e+04
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.51e+04. This might indicate that there are
strong multicollinearity or other numerical problems.
</code></pre> <p>I've also found the following articles:</p> <ul> <li><p><a href="http://stackoverflow.com/questions/27928275/find-p-value-significance-in-scikit-learn-linearregression">Find p-value (significance) in scikit-learn LinearRegression</a></p></li> <li><p><a href="http://connor-johnson.com/2014/02/18/linear-regression-with-python/" rel="nofollow">http://connor-johnson.com/2014/02/18/linear-regression-with-python/</a></p></li> </ul> <p>Neither of the code samples in the SO link compiles.</p> <p>Here is my code &amp; the data that I'm working on - but I am not able to find the std error &amp; p-values:</p> <pre><code>import pandas as pd
import statsmodels.api as sm
import numpy as np
import scipy
from sklearn.linear_model import LinearRegression
from sklearn import metrics

def readFile(filename, sheetname):
    xlsx = pd.ExcelFile(filename)
    data = xlsx.parse(sheetname, skiprows=1)
    return data

def lr_statsmodel(X,y):
    X = sm.add_constant(X)
    model = sm.OLS(y,X)
    results = model.fit()
    print (results.summary())

def lr_scikit(X,y,featureCols):
    model = LinearRegression()
    results = model.fit(X,y)
    predictions = results.predict(X)

    print 'Coefficients'
    print 'Intercept\t' , results.intercept_
    df = pd.DataFrame(zip(featureCols, results.coef_))
    print df.to_string(index=False, header=False)

    # Query:: The numbers matches with Excel OLS but skeptical about relating score as rsquared
    rSquare = results.score(X,y)
    print '\nR-Square::', rSquare

    # This looks like a better option
    # source: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score
    r2 = metrics.r2_score(y,results.predict(X))
    print 'r2', r2

    # Query: No clue at all! http://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics
    print 'Rsquared?!' , metrics.explained_variance_score(y, results.predict(X))
    # INFO:: All three of them are providing the same figures!

    # Adj-Rsquare formula @ https://www.easycalculation.com/statistics/learn-adjustedr2.php
    # In ML, we don't use all of the data for training, and hence its highly unusual to find AdjRsquared.
    # Thus the need for manual calculation
    N = X.shape[0]
    p = X.shape[1]
    adjRsquare = 1 - ((1 - rSquare ) * (N - 1) / (N - p - 1))
    print "Adjusted R-Square::", adjRsquare

    # calculate standard errors
    # mean_absolute_error
    # mean_squared_error
    # median_absolute_error
    # r2_score
    # explained_variance_score
    mse = metrics.mean_squared_error(y,results.predict(X))
    print mse
    print 'Residual Standard Error:', np.sqrt(mse)

    # OLS in Matrix : https://github.com/nsh87/regressors/blob/master/regressors/stats.py
    n = X.shape[0]
    X1 = np.hstack((np.ones((n, 1)), np.matrix(X)))
    se_matrix = scipy.linalg.sqrtm(
        metrics.mean_squared_error(y, results.predict(X)) *
        np.linalg.inv(X1.T * X1)
    )
    print 'se',np.diagonal(se_matrix)

    # https://github.com/nsh87/regressors/blob/master/regressors/stats.py
    # http://regressors.readthedocs.io/en/latest/usage.html
    y_hat = results.predict(X)
    sse = np.sum((y_hat - y) ** 2)
    print 'Standard Square Error of the Model:', sse

if __name__ == '__main__':
    # read file
    fileData = readFile('Linear_regression.xlsx','Input Data')
    # list of independent variables
    feature_cols = ['Price per week','Population of city','Monthly income of riders','Average parking rates per month']
    # build dependent &amp; independent data set
    X = fileData[feature_cols]
    y = fileData['Number of weekly riders']
    # Statsmodel - OLS
    # lr_statsmodel(X,y)
    # ScikitLearn - OLS
    lr_scikit(X,y,feature_cols)
</code></pre> <p>My data-set:</p> <pre><code>        Y          X1              X2                  X3                      X4
City  Number of weekly riders  Price per week  Population of city  Monthly income of riders  Average parking rates per month
1   1,92,000  $15   18,00,000  $5,800   $50
2   1,90,400  $15   17,90,000  $6,200   $50
3   1,91,200  $15   17,80,000  $6,400   $60
4   1,77,600  $25   17,78,000  $6,500   $60
5   1,76,800  $25   17,50,000  $6,550   $60
6   1,78,400  $25   17,40,000  $6,580   $70
7   1,80,800  $25   17,25,000  $8,200   $75
8   1,75,200  $30   17,25,000  $8,600   $75
9   1,74,400  $30   17,20,000  $8,800   $75
10  1,73,920  $30   17,05,000  $9,200   $80
11  1,72,800  $30   17,10,000  $9,630   $80
12  1,63,200  $40   17,00,000  $10,570  $80
13  1,61,600  $40   16,95,000  $11,330  $85
14  1,61,600  $40   16,95,000  $11,600  $100
15  1,60,800  $40   16,90,000  $11,800  $105
16  1,59,200  $40   16,30,000  $11,830  $105
17  1,48,800  $65   16,40,000  $12,650  $105
18  1,15,696  $102  16,35,000  $13,000  $110
19  1,47,200  $75   16,30,000  $13,224  $125
20  1,50,400  $75   16,20,000  $13,766  $130
21  1,52,000  $75   16,15,000  $14,010  $150
22  1,36,000  $80   16,05,000  $14,468  $155
23  1,26,240  $86   15,90,000  $15,000  $165
24  1,23,888  $98   15,95,000  $15,200  $175
25  1,26,080  $87   15,90,000  $15,600  $175
26  1,51,680  $77   16,00,000  $16,000  $190
27  1,52,800  $63   16,10,000  $16,200  $200
</code></pre> <p>I've exhausted all my options and whatever I could make sense of, so any guidance on how to compute the std error &amp; p-values the same way statsmodels.api does is appreciated.</p> <p>EDIT: I'm trying to find the std error &amp; p-values for the intercept and all the independent variables.</p>
1
2016-08-21T16:48:43Z
39,136,835
<p>Clipped from a <a href="http://stats.stackexchange.com/questions/203740/logistic-regression-scikit-learn-vs-statsmodels">Cross Validated post</a>:</p> <pre><code># sklearn output
model = LogisticRegression(fit_intercept = False, C = 1e9)
mdl = model.fit(X, y)
model.coef_

# sm
logit = sm.Logit(y, X)
logit.fit().params
</code></pre>
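The clipped snippet compares logistic-regression coefficients rather than answering the linear-regression question directly. For the OLS case, the standard errors can be computed by hand with plain NumPy: the usual formulas are se_j = sqrt(sigma^2 * [(X'X)^-1]_jj with sigma^2 = RSS/(n-p), and two-sided p-values then come from the t distribution with n-p degrees of freedom (e.g. scipy.stats.t.sf(abs(t), n-p) * 2). The sketch below uses a tiny made-up data set, not the Boston data, so the exact numbers are illustrative only.

```python
import numpy as np

# toy data: y is roughly 1 + 2*x plus a little noise (made up for illustration)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.9])

X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
n, p = X.shape

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS coefficients
resid = y - X @ beta
sigma2 = resid @ resid / (n - p)               # unbiased residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)          # coefficient covariance matrix
se = np.sqrt(np.diag(cov))                     # standard errors, one per coefficient
t_stats = beta / se                            # t statistics (compare to t(n - p))

print(beta)   # roughly [1.0, 2.0]
print(se)
print(t_stats)
```

These are the same quantities statsmodels' `summary()` reports in its `coef`, `std err` and `t` columns, so the hand computation can be cross-checked against it.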
0
2016-08-25T04:45:20Z
[ "python", "machine-learning", "scikit-learn", "linear-regression" ]
How to avoid dependency injection in Django?
39,066,590
<p>I'm trying to figure out how to avoid dependency injection in my project. There is a file <code>notifications.py</code> in the app directory.</p> <p>The file <code>notifications.py</code> contains methods for sending emails to the admin and users. To get the admin's email, I need to check an object of the <code>SystemData</code> model. But in models, I use notifications.</p> <p><strong>models</strong></p> <pre><code>class SystemData(models.Model):
    admin_alerts_email = models.EmailField(verbose_name=u'Emailová adresa admina')
    contact_us_email = models.EmailField(verbose_name=u'Adresa kontaktujte nás')
    waiting_threshold = models.PositiveSmallIntegerField(verbose_name=u'Maximálny počet minút čakania')

class SomeModel(models.Model):
    ....
    def save(...):
        notifications.send_message_to_admin('message')
</code></pre> <p><strong>notifications.py</strong></p> <pre><code>from django.core.mail import EmailMessage
from models import SystemData

def send_message_to_admin(message):
    mail = EmailMessage(subject, message, to=[SystemData.objects.all().first().admin_email])
    mail.send()
</code></pre> <p>Django returns that it can't import <code>SystemData</code>.</p> <p>Do you know what to do?</p> <p><strong>EDIT:</strong></p> <pre><code>stacktrace
</code></pre> <p><a href="http://i.stack.imgur.com/Wm9zt.png" rel="nofollow"><img src="http://i.stack.imgur.com/Wm9zt.png" alt="stacktrace"></a></p>
1
2016-08-21T16:51:35Z
39,066,713
<p>You can solve circular dependencies in functions by using inline imports:</p> <pre><code>class SomeModel(models.Model):
    ....
    def save(...):
        from .notifications import send_message_to_admin
        send_message_to_admin('message')
</code></pre> <p>This will delay the import statement until the function is actually executed, so the <code>models</code> module has already been loaded. The <code>notifications</code> module can then safely import the <code>models</code> module.</p>
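The mechanism can be demonstrated outside Django with two throwaway modules (the module names here are invented purely for the demo). The module that would normally create the cycle defers its import into the function body, so by the time the function runs, both modules are fully initialised:

```python
import os
import sys
import tempfile
import textwrap

tmp = tempfile.mkdtemp()

# models_demo defers its import of notifications_demo into the function body,
# just like the inline import inside save() breaks the Django cycle.
with open(os.path.join(tmp, "models_demo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        def save():
            from notifications_demo import send  # deferred: breaks the cycle
            return send("message")
    """))

# notifications_demo imports models_demo at module level, the way
# notifications.py imports models in the question.
with open(os.path.join(tmp, "notifications_demo.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import models_demo  # fine: models_demo has no top-level import of us

        def send(msg):
            return "sent: " + msg
    """))

sys.path.insert(0, tmp)
import models_demo

print(models_demo.save())  # sent: message
```

If `models_demo` instead imported `notifications_demo` at module level, the two top-level imports would chase each other and one module would see the other half-initialised.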
2
2016-08-21T17:04:44Z
[ "python", "django", "python-import" ]
How to avoid dependency injection in Django?
39,066,590
<p>I'm trying to figure out how to avoid dependency injection in my project. There is a file <code>notifications.py</code> in the app directory.</p> <p>The file <code>notifications.py</code> contains methods for sending emails to the admin and users. To get the admin's email, I need to check an object of the <code>SystemData</code> model. But in models, I use notifications.</p> <p><strong>models</strong></p> <pre><code>class SystemData(models.Model):
    admin_alerts_email = models.EmailField(verbose_name=u'Emailová adresa admina')
    contact_us_email = models.EmailField(verbose_name=u'Adresa kontaktujte nás')
    waiting_threshold = models.PositiveSmallIntegerField(verbose_name=u'Maximálny počet minút čakania')

class SomeModel(models.Model):
    ....
    def save(...):
        notifications.send_message_to_admin('message')
</code></pre> <p><strong>notifications.py</strong></p> <pre><code>from django.core.mail import EmailMessage
from models import SystemData

def send_message_to_admin(message):
    mail = EmailMessage(subject, message, to=[SystemData.objects.all().first().admin_email])
    mail.send()
</code></pre> <p>Django returns that it can't import <code>SystemData</code>.</p> <p>Do you know what to do?</p> <p><strong>EDIT:</strong></p> <pre><code>stacktrace
</code></pre> <p><a href="http://i.stack.imgur.com/Wm9zt.png" rel="nofollow"><img src="http://i.stack.imgur.com/Wm9zt.png" alt="stacktrace"></a></p>
1
2016-08-21T16:51:35Z
39,066,733
<p>Apart from inline imports, you can do it like this:</p> <pre><code>from django.core.mail import EmailMessage
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import SystemData, SomeModel


@receiver(post_save, sender=SomeModel)
def send_message_to_admin(sender, instance, created, **kwargs):
    message = 'message'
    mail = EmailMessage(
        subject,
        message,
        to=[SystemData.objects.all().first().admin_email]
    )
    mail.send()
</code></pre> <p>and at the end of models.py put</p> <p><code>from .notifications import *</code></p> <p>or use the newer approach with AppConfig to register signals (which is what your notifications actually are):</p> <p>see: <a href="https://chriskief.com/2014/02/28/django-1-7-signals-appconfig/" rel="nofollow">https://chriskief.com/2014/02/28/django-1-7-signals-appconfig/</a></p> <p>That way it will load when the app registry is ready and you'll avoid circular imports, so this line:</p> <p><code>from .notifications import *</code></p> <p>can be dropped from models.py.</p> <p>AppConfig can be used in a more generic way as well, allowing you to import models like this:</p> <pre><code>from django.apps import apps

Model = apps.get_model('app_name', 'Model')
</code></pre>
2
2016-08-21T17:06:59Z
[ "python", "django", "python-import" ]
Python Guessing game - incomplete code
39,066,610
<p>Can someone please help me re-design this code so that the program prompts the user to choose Easy, Medium or Hard?</p> <pre><code>Easy:   maxNumber = 10
Medium: maxNumber = 50
Hard:   maxNumber = 100
</code></pre> <p>It should choose a random number between 0 and the maxNumber. The program will loop, calling one function to get the user's guess and another to check their guess. A function named &#8220;getGuess&#8221; will ask the user for their guess and re-prompt if the guess is not between 0 and the maxNumber. Another function named &#8220;checkGuess&#8221; will check the user's guess against the answer: it will return &#8220;higher&#8221; if the number is higher than the guess, &#8220;lower&#8221; if the number is lower than the guess, and &#8220;correct&#8221; if the number is equal to the guess. Once the user has guessed the number correctly, the program will display all their guesses and how many guesses it took them. Then the program will ask the user if they would like to try again and redisplay the difficulty menu.</p> <pre><code>import random

guessesTaken = 0

print('Hello! Welcome to the guessing game')
myName = input()

number = random.randint(1, 20)
print('Well, ' + myName + ', I am thinking of a number between 1 and 20.')

while guessesTaken &lt; 6:
    print('Take a guess.')
    guess = input()
    guess = int(guess)

    guessesTaken = guessesTaken + 1

    if guess &lt; number:
        print('Your guess is too low.')

    if guess &gt; number:
        print('Your guess is too high.')

    if guess == number:
        break

if guess == number:
    guessesTaken = str(guessesTaken)
    print('Good job, ' + myName + '! You guessed my number in ' + guessesTaken + ' guesses!')

if guess != number:
    number = str(number)
    print('Nope. The number I was thinking of was ' + number)
</code></pre>
-2
2016-08-21T16:53:38Z
39,066,764
<p>You could do something like this:</p> <pre><code>from random import randint

myName = input("what's your name? ")

def pre_game():
    difficulty = input("Choose difficulty: type easy medium or hard: ")
    main_loop(difficulty)

def main_loop(difficulty):
    if difficulty == "easy":
        answer = randint(0, 10)
    elif difficulty == "medium":
        answer = randint(0, 50)
    else:
        answer = randint(0, 100)

    times_guessed = 0
    guess = int()

    while times_guessed &lt; 6:
        print('Take a guess.')
        guess = input()
        guess = int(guess)
        times_guessed += 1
        if guess &lt; answer:
            print('Your guess is too low.')
        if guess &gt; answer:
            print('Your guess is too high.')
        if guess == answer:
            break

    if guess == answer:
        guessesTaken = str(times_guessed)
        print('Good job, ' + myName + '! You guessed my number in ' + guessesTaken + ' guesses!')
    if guess != answer:
        print('Nope. The number I was thinking of was ' + str(answer))

    next = input("Play again? y/n: ")
    if next == "y":
        pre_game()
    else:
        print("Thanks for playing!")

pre_game()
</code></pre>
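The answer keeps the higher/lower logic inline, while the question's spec asks for separate `getGuess` and `checkGuess` functions. Here is one hedged way to structure them (the names come from the question; the `read_input` parameter is an invented hook so the re-prompt loop can be exercised without a real user at the keyboard):

```python
def check_guess(guess, answer):
    """Return 'higher', 'lower' or 'correct' per the question's spec."""
    if answer > guess:
        return "higher"
    if answer < guess:
        return "lower"
    return "correct"

def get_guess(max_number, read_input=input):
    """Keep prompting until the guess is an int between 0 and max_number."""
    while True:
        raw = read_input("Guess a number between 0 and %d: " % max_number)
        try:
            guess = int(raw)
        except ValueError:
            continue  # not a number at all -- ask again
        if 0 <= guess <= max_number:
            return guess

# simulate a user typing "abc", then "999" (out of range), then "7"
fake_answers = iter(["abc", "999", "7"])
print(get_guess(10, read_input=lambda prompt: next(fake_answers)))  # 7
print(check_guess(7, 3))   # lower
print(check_guess(7, 9))   # higher
print(check_guess(7, 7))   # correct
```

The main loop then just calls `get_guess`, records the guess in a list, and keeps going until `check_guess` returns "correct", which also covers the "display all their guesses" requirement.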
-1
2016-08-21T17:09:45Z
[ "python" ]
"Segmentation Fault" in matplotlib running example Librosa script
39,066,625
<p>After many issues I've installed Librosa (<a href="https://github.com/librosa/librosa" rel="nofollow">https://github.com/librosa/librosa</a>) on Linux Mint 18 Mate x64. When I want to run an example script, e.g. <a href="http://librosa.github.io/librosa/generated/librosa.feature.tempogram.html#librosa.feature.tempogram" rel="nofollow">http://librosa.github.io/librosa/generated/librosa.feature.tempogram.html#librosa.feature.tempogram</a>, it crashes with a "Segmentation fault" error:</p> <pre><code>$ python librosa-feature-tempogram-1.py
/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_qt5.py:140: Warning: g_main_context_push_thread_default: assertion 'acquired_context' failed
  qApp = QtWidgets.QApplication([str(" ")])
Segmentation fault
</code></pre> <p>I've tried to debug it line-by-line and here is the result:</p> <pre><code>$ python
Python 2.7.12 (default, Jul  1 2016, 15:12:24)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import librosa
&gt;&gt;&gt; # Visualize an STFT power spectrum
...
&gt;&gt;&gt; import matplotlib.pyplot as plt
&gt;&gt;&gt; y, sr = librosa.load(librosa.util.example_audio_file())
&gt;&gt;&gt; plt.figure(figsize=(12, 8))
/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_qt5.py:140: Warning: g_main_context_push_thread_default: assertion 'acquired_context' failed
  qApp = QtWidgets.QApplication([str(" ")])
Segmentation fault
</code></pre> <p>Probably there is some issue with the matplotlib library and Qt (5.7.0). Moreover, I remember I had many issues when installing Librosa, including matplotlib, so it could be some installation issue. However, I don't know how to solve it. I hope somebody will have helpful clues for me.</p>
0
2016-08-21T16:55:26Z
39,259,837
<p>Finally, I've solved this issue by installing these packages: <code>sudo apt-get install tk-dev libpng-dev libffi-dev dvipng texlive-latex-base</code> and reinstalling <em>matplotlib</em> using <em>pip</em>. I have also changed the backend in <em>matplotlib</em> to <em>TkAgg</em>. Here is the beginning of the code with the import statements:</p> <pre><code>import librosa import matplotlib matplotlib.use('TkAgg') import matplotlib.pyplot as plt </code></pre> <p>Now it works perfectly.</p>
0
2016-08-31T22:17:12Z
[ "python", "python-2.7", "qt", "matplotlib", "librosa" ]
All-to-All comparison of two lists in Python
39,066,655
<p>I'm struggling with some performance complications. The task at hand is to extract the similarity value between two strings. For this I am using <code>fuzzywuzzy</code>:</p> <pre><code>from fuzzywuzzy import fuzz print fuzz.ratio("string one", "string two") print fuzz.ratio("string one", "string two which is significantly different") result1 80 result2 38 </code></pre> <p>However, this is OK. The problem that I'm facing is that I have two lists, one has 1500 rows and the other several thousand. I need to compare all elements of the first against all elements of the second one. A simple nested for loop will take a ridiculously long time to compute.</p> <p>If anyone has a suggestion on how I can speed this up, it would be highly appreciated.</p>
0
2016-08-21T16:59:05Z
39,067,127
<p>If you need to count the number of times each of the statements appears, then no, there is no way I know of to get a huge speedup over the n^2 operations needed to compare the elements in each list. You might be able to avoid some string matching by using the length to rule out the possibility that a match could occur, but you still have nested for loops. You would probably spend far more time optimizing it than the amount of processing time it would save you.</p>
1
2016-08-21T17:47:26Z
[ "python", "performance", "fuzzywuzzy" ]
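The length-based pruning the answer hints at can be sketched as below. Stdlib `difflib.SequenceMatcher` stands in for `fuzz.ratio` so the snippet runs without fuzzywuzzy installed; the 60-point threshold and the helper names are illustrative assumptions, not part of the original question.

```python
from difflib import SequenceMatcher

def ratio(a, b):
    # Stand-in for fuzzywuzzy's fuzz.ratio: similarity as a 0-100 score.
    return int(round(SequenceMatcher(None, a, b).ratio() * 100))

def best_matches(list_a, list_b, min_score=60):
    # Length pruning: SequenceMatcher's ratio is at most
    # 2 * min(len) / (len_a + len_b), so wildly different lengths can
    # never reach min_score and the expensive comparison is skipped.
    results = {}
    for a in list_a:
        for b in list_b:
            shorter, longer = sorted((len(a), len(b)))
            if longer and 200.0 * shorter / (shorter + longer) < min_score:
                continue  # upper bound already below the threshold
            score = ratio(a, b)
            if score >= min_score:
                results.setdefault(a, []).append((b, score))
    return results

matches = best_matches(["string one"], ["string two", "x"])
```

The pair `("string one", "x")` is rejected by the length bound alone, so `ratio` is never called for it.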
All-to-All comparison of two lists in Python
39,066,655
<p>I'm struggling with some performance complications. The task at hand is to extract the similarity value between two strings. For this I am using <code>fuzzywuzzy</code>:</p> <pre><code>from fuzzywuzzy import fuzz print fuzz.ratio("string one", "string two") print fuzz.ratio("string one", "string two which is significantly different") result1 80 result2 38 </code></pre> <p>However, this is OK. The problem that I'm facing is that I have two lists, one has 1500 rows and the other several thousand. I need to compare all elements of the first against all elements of the second one. A simple nested for loop will take a ridiculously long time to compute.</p> <p>If anyone has a suggestion on how I can speed this up, it would be highly appreciated.</p>
0
2016-08-21T16:59:05Z
39,067,506
<p>I've made something on my own for you (python 2.7):</p> <pre><code>from __future__ import division import time from itertools import izip from fuzzywuzzy import fuzz one = "different simliar" two = "similar" def compare(first, second): smaller, bigger = sorted([first, second], key=len) s_smaller= smaller.split() s_bigger = bigger.split() bigger_sets = [set(word) for word in s_bigger] counter = 0 for word in s_smaller: if set(word) in bigger_sets: counter += len(word) if counter: return counter/len(' '.join(s_bigger))*100 # percentage match return counter start_time = time.time() print "match: ", compare(one, two) compare_time = time.time() - start_time print "compare: --- %s seconds ---" % (compare_time) start_time = time.time() print "match: ", fuzz.ratio(one, two) fuzz_time = time.time() - start_time print "fuzzy: --- %s seconds ---" % (fuzz_time) print print "&lt;simliar or similar&gt;/&lt;length of bigger&gt;*100%" print 7/len(one)*100 print print "Equals?" print 7/len(one)*100 == compare(one, two) print print "Faster than fuzzy?" print compare_time &lt; fuzz_time </code></pre> <p>So I think mine is faster, but more accurate for you? You decide.</p> <p><strong>EDIT</strong> Now is not only faster, but also more accurate.</p> <p>Result:</p> <pre><code>match: 41.1764705882 compare: --- 4.19616699219e-05 seconds --- match: 50 fuzzy: --- 7.39097595215e-05 seconds --- &lt;simliar or similar&gt;/&lt;length of bigger&gt;*100% 41.1764705882 Equals? True Faster than fuzzy? 
True </code></pre> <p>Of course if you would have words check like fuzzywuzzy does, then here you go:</p> <pre><code>from __future__ import division from itertools import izip import time from fuzzywuzzy import fuzz one = "different simliar" two = "similar" def compare(first, second): smaller, bigger = sorted([first, second], key=len) s_smaller= smaller.split() s_bigger = bigger.split() bigger_sets = [set(word) for word in s_bigger] counter = 0 for word in s_smaller: if set(word) in bigger_sets: counter += 1 if counter: return counter/len(s_bigger)*100 # percentage match return counter start_time = time.time() print "match: ", compare(one, two) compare_time = time.time() - start_time print "compare: --- %s seconds ---" % (compare_time) start_time = time.time() print "match: ", fuzz.ratio(one, two) fuzz_time = time.time() - start_time print "fuzzy: --- %s seconds ---" % (fuzz_time) print print "Equals?" print fuzz.ratio(one, two) == compare(one, two) print print "Faster than fuzzy?" print compare_time &lt; fuzz_time </code></pre> <p>Result:</p> <pre><code>match: 50.0 compare: --- 7.20024108887e-05 seconds --- match: 50 fuzzy: --- 0.000125169754028 seconds --- Equals? True Faster than fuzzy? True </code></pre>
1
2016-08-21T18:25:03Z
[ "python", "performance", "fuzzywuzzy" ]
All-to-All comparison of two lists in Python
39,066,655
<p>I'm struggling with some performance complications. The task at hand is to extract the similarity value between two strings. For this I am using <code>fuzzywuzzy</code>:</p> <pre><code>from fuzzywuzzy import fuzz print fuzz.ratio("string one", "string two") print fuzz.ratio("string one", "string two which is significantly different") result1 80 result2 38 </code></pre> <p>However, this is OK. The problem that I'm facing is that I have two lists, one has 1500 rows and the other several thousand. I need to compare all elements of the first against all elements of the second one. A simple nested for loop will take a ridiculously long time to compute.</p> <p>If anyone has a suggestion on how I can speed this up, it would be highly appreciated.</p>
0
2016-08-21T16:59:05Z
39,082,168
<p>The best solution I can think of is to use the <a href="http://www.ibm.com/analytics/us/en/technology/stream-computing/" rel="nofollow">IBM Streams framework</a> to parallelize your essentially unavoidable O(n^2) solution.</p> <p>Using the framework, you would be able to write a single-threaded kernel similar to this</p> <pre><code>def matchStatements(tweet, statements): results = [] for s in statements: r = fuzz.ratio(tweet, s) results.append(r) return results </code></pre> <p>Then parallelize it using a setup similar to this</p> <pre><code>def main(): topo = Topology("tweet_compare") source = topo.source(getTweets) cpuCores = 4 match = source.parallel(cpuCores).transform(matchStatements) end = match.end_parallel() end.sink(print) </code></pre> <p>This multithreads the processing, speeding it up substantially while saving you the work of implementing the details of the multithreading yourself (which is the primary advantage of Streams).</p> <p>The idea is that each tweet is a Streams tuple to be processed across multiple processing elements.</p> <p>The Python topology framework documentation for Streams is <a href="http://ibmstreams.github.io/streamsx.documentation/docs/latest/python/python-appapi-devguide/" rel="nofollow">here</a> and the <code>parallel</code> operator in particular is described <a href="http://ibmstreams.github.io/streamsx.documentation/docs/latest/python/python-appapi-devguide/#user-defined-parallelism" rel="nofollow">here</a>.</p>
0
2016-08-22T14:29:42Z
[ "python", "performance", "fuzzywuzzy" ]
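If IBM Streams is not available, the same kernel-per-tweet idea can be sketched with the stdlib `concurrent.futures` (an assumption on my part, not part of the answer; `difflib` again stands in for `fuzz.ratio` so the snippet has no third-party dependencies). For CPU-bound pure-Python work, swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` gives real parallelism with the same shape.

```python
from concurrent.futures import ThreadPoolExecutor
from difflib import SequenceMatcher

statements = ["string two", "another statement"]

def match_statements(tweet):
    # The single-threaded kernel: one tweet against every statement.
    return [SequenceMatcher(None, tweet, s).ratio() for s in statements]

tweets = ["string one", "something else entirely"]

# The executor plays the role of Streams' parallel(): each tweet is a
# work item picked up by one of the worker threads, and map() keeps the
# results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(match_statements, tweets))
```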
how to get rid of unicode characters in python output?
39,066,761
<p>I am trying to load a JSON file and then parse it later. However, in the output I keep getting the 'u' characters. I tried to open the file with encoding='utf-8', which didn't solve the problem. I am using Python 2.7. Is there a straightforward approach or workaround to ignore and get rid of the 'u' characters in the output?</p> <pre><code>import json import io with io.open('/tmp/install-report.json', encoding='utf-8') as json_data: d = json.load(json_data) print d </code></pre> <p>o/p</p> <pre><code>{u'install': {u'Status': u'In Progress...', u'StartedAt': 1471772544,}} </code></pre> <p>p.s: I went through this post <a href="http://stackoverflow.com/questions/761361/suppress-the-uprefix-indicating-unicode-in-python-strings">Suppress the u&#39;prefix indicating unicode&#39; in python strings</a> but that doesn't have a solution for Python 2.7</p>
0
2016-08-21T17:09:35Z
39,066,833
<p>Use json.dumps and decode it to convert it to a string:</p> <pre><code>data = json.dumps(d, ensure_ascii=False).decode('utf8') print data </code></pre>
2
2016-08-21T17:17:06Z
[ "python", "json", "unicode" ]
how to get rid of unicode characters in python output?
39,066,761
<p>I am trying to load a JSON file and then parse it later. However, in the output I keep getting the 'u' characters. I tried to open the file with encoding='utf-8', which didn't solve the problem. I am using Python 2.7. Is there a straightforward approach or workaround to ignore and get rid of the 'u' characters in the output?</p> <pre><code>import json import io with io.open('/tmp/install-report.json', encoding='utf-8') as json_data: d = json.load(json_data) print d </code></pre> <p>o/p</p> <pre><code>{u'install': {u'Status': u'In Progress...', u'StartedAt': 1471772544,}} </code></pre> <p>p.s: I went through this post <a href="http://stackoverflow.com/questions/761361/suppress-the-uprefix-indicating-unicode-in-python-strings">Suppress the u&#39;prefix indicating unicode&#39; in python strings</a> but that doesn't have a solution for Python 2.7</p>
0
2016-08-21T17:09:35Z
39,067,349
<p><code>u</code> just indicates a Unicode string. If you print the string, the <code>u</code> prefix won't be displayed:</p> <pre><code>d = {u'install': {u'Status': u'In Progress...', u'StartedAt': 1471772544}} print 'Status:',d[u'install'][u'Status'] </code></pre> <p>Output:</p> <pre><code>Status: In Progress... </code></pre>
0
2016-08-21T18:09:27Z
[ "python", "json", "unicode" ]
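A minimal sketch of the point both answers make: the `u` prefix only shows up in `repr()` output on Python 2, never in printed values or in serialized JSON. (Written so it also runs on Python 3, where all strings are unicode and the prefix is a no-op.)

```python
import json

d = {u'install': {u'Status': u'In Progress...', u'StartedAt': 1471772544}}

# Accessing and printing the value itself never involves the u prefix;
# it is purely a repr()-level artifact of Python 2 unicode objects.
status_line = 'Status: ' + d['install']['Status']

# Serializing to JSON also produces plain quoted strings with no prefix.
as_json = json.dumps(d, ensure_ascii=False)
```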
Python 3.4 ctypes message box doesn't open with other code included
39,066,820
<p>Normally this code works fine when called.</p> <pre><code>import ctypes def message_box(title, text): ctypes.windll.user32.MessageBoxW(0, text, title, 1) </code></pre> <p>But when it's used with other code it hangs at the line where message_box is called.</p> <pre><code>import ctypes def message_box(title, text): ctypes.windll.user32.MessageBoxW(0, text, title, 1) while True: time = input("Enter time of the reminder in the format 'HH:MM': ") if (len(time) != 5): print("\nInvalid answer\n") continue if (time[2] != ":"): print("\nInvalid answer\n") continue try: hours = int(time[0:2]) minutes = int(time[3:5]) except: print("\nInvalid answer\n") continue if not (0 &lt; hours &lt; 23 or 0 &lt; minutes &lt; 59): print("\nInvalid answer\n") continue break message_box("Example_title", "Example_text") </code></pre>
0
2016-08-21T17:15:39Z
39,068,289
<p>I found how to do it.</p> <p>In the fourth argument for the message box, you need to combine flag values with the bitwise OR operator ('|'). From my limited testing, the MB arguments define the buttons that the user can click, apart from MB_SYSTEMMODAL, which brings the window to the front. The ICON arguments define what noise the window makes as it pops up, as well as a little image in the window denoting its purpose.</p> <pre><code>MB_OK = 0x0 MB_OKCXL = 0x01 MB_YESNOCXL = 0x03 MB_YESNO = 0x04 MB_HELP = 0x4000 MB_SYSTEMMODAL = 4096 ICON_EXCLAIM = 0x30 ICON_INFO = 0x40 ICON_STOP = 0x10 def message_box(title, text): ctypes.windll.user32.MessageBoxW(0, text, title, MB_OK | ICON_INFO | MB_SYSTEMMODAL) </code></pre>
0
2016-08-21T19:51:00Z
[ "python", "ctypes" ]
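The flag combination in the answer is plain bit arithmetic, which can be checked without Windows (the `MessageBoxW` call itself is Windows-only, so only the OR'ing of the bits is shown here, using the constant values listed in the answer):

```python
MB_OK = 0x0
MB_SYSTEMMODAL = 4096   # 0x1000: bring the window to the front
ICON_INFO = 0x40        # info chime plus the little "i" icon

# The fourth MessageBoxW argument is simply these bits OR'ed together
# into one integer; each constant occupies distinct bits.
flags = MB_OK | ICON_INFO | MB_SYSTEMMODAL
```

Because the constants do not share bits, each one can later be tested back out of `flags` with `&`.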
How to crawl multiple websites and create a text file with the plain text
39,066,865
<p>I started a new project and I'm new to using Python Scrapy. I'm trying to crawl through multiple websites and get the plain text from them. After that I would like to create a text file with the raw text.</p> <p>This is the code that I have. Maybe you can help me and give me some tips on how, e.g., I can read other links from the same website.</p> <pre><code>import scrapy class ForenSpider(scrapy.Spider): name = "foren" allowed_domains = ["dmoz.org", "pijamassurf.com", "indeed.com"] start_urls = [ "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/", "http://www.dmoz.org/Computers/Programming/Languages/Python"] def parse(self, response): hxs = HtmlXPathSelector(response) data = hxs.select('//body//text()').extract() with open('data', 'rw+') as f: for item in data: f.writelines(str(data)) </code></pre>
0
2016-08-21T17:20:49Z
39,067,006
<p>I used <strong>BeautifulSoup</strong> to get the parsed HTML, and this is how I retrieved the links from the current HTML page.</p> <pre><code>parsedHtml = BeautifulSoup(htmlSource, "lxml") for href in parsedHtml.find_all('a'): linkedUrl = href.get('href') </code></pre>
0
2016-08-21T17:35:45Z
[ "python", "scrapy", "web-crawler" ]
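For completeness, a dependency-free sketch of the same link extraction using the stdlib `html.parser`; this mirrors what `find_all('a')` plus `href.get('href')` does, under the assumption that only `<a href>` values are wanted:

```python
from html.parser import HTMLParser  # Python 3 stdlib; on 2.7 it lives in the HTMLParser module

class LinkExtractor(HTMLParser):
    # Collect the href attribute of every <a> tag encountered while parsing.
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<p><a href="/one">one</a> and <a href="/two">two</a></p>')
links = parser.links
```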
How to crawl multiple websites and create a text file with the plain text
39,066,865
<p>I started a new project and I'm new to using Python Scrapy. I'm trying to crawl through multiple websites and get the plain text from them. After that I would like to create a text file with the raw text.</p> <p>This is the code that I have. Maybe you can help me and give me some tips on how, e.g., I can read other links from the same website.</p> <pre><code>import scrapy class ForenSpider(scrapy.Spider): name = "foren" allowed_domains = ["dmoz.org", "pijamassurf.com", "indeed.com"] start_urls = [ "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/", "http://www.dmoz.org/Computers/Programming/Languages/Python"] def parse(self, response): hxs = HtmlXPathSelector(response) data = hxs.select('//body//text()').extract() with open('data', 'rw+') as f: for item in data: f.writelines(str(data)) </code></pre>
0
2016-08-21T17:20:49Z
39,069,989
<p>Just consider extending one of the later examples from the <a href="http://doc.scrapy.org/en/latest/intro/tutorial.html#following-links" rel="nofollow">same page</a>. For example:</p> <pre><code>import scrapy class DmozSpider(scrapy.Spider): name = "dmoz" allowed_domains = ["dmoz.org"] start_urls = [ "http://www.dmoz.org/Computers/Programming/Languages/Python/", ] def parse(self, response): for href in response.css("ul.directory.dir-col &gt; li &gt; a::attr('href')"): url = response.urljoin(href.extract()) yield scrapy.Request(url, callback=self.parse_dir_contents) def parse_dir_contents(self, response): yield { "link": response.url, "body": " ".join(filter(None, map(unicode.strip, response.xpath('//body//text()').extract()))), "links": response.css("a::attr('href')").extract() } </code></pre> <p>Note that this will be less resource intensive than opening and closing a file for each item as you do now. I'm not sure they really mean it as anything more than a debugging step to get started on the tutorial.</p>
0
2016-08-22T00:22:42Z
[ "python", "scrapy", "web-crawler" ]
Duplicate keys in dictionary from database
39,066,889
<p>When I make a query to my database, I return the values as such:</p> <pre><code>tuplematches = cursor.fetchall() for items in tuplematches: print(items) </code></pre> <p>prints as:</p> <pre><code> NAME/ PRICE/ HOURS/ 1. ('honda accord 2nd light', 20.00, 10.0) 2. ('honda accord 2nd light', 22.00, 17.0) 3. ('chevy silverado headlight', 30.00, 20.0) </code></pre> <p>What I want is to create a dictionary where the key is the name of the item, and the value is a list that includes the total sum of all prices for the item with this name, as well as the sum of the hours, and the number of times NAME has occurred, for each individual name: </p> <pre><code> mydict[name] = [summed price, summed hours, occurences] mydict['honda accord 2nd light'] = [42.00, 27.0, 2] </code></pre> <p>I've tried several different ways, but I can't figure out why, instead of giving me non-duplicate names as keys with all of their combined values, it gives me a dictionary with multiple different instances of the same name and their own different values, which does me no good. Like this:</p> <pre><code> mydict['honda accord 2nd light'] = [21.00, 13.5, 2] mydict['honda accord 2nd light'] = [44.00, 27.0, 2] mydict['honda accord 2nd light'] = [23.00, 19.0, 5] </code></pre> <p>The ultimate goal here is to get the average prices, average hours, and total occurrences for each item name: </p> <pre><code> dict[name] = [average price, average hours, total occurences] </code></pre> <p>Any idea what I can do to make this easier and a little less painful? </p>
0
2016-08-21T17:23:11Z
39,066,991
<p>Since you are dealing with database already, why not do the aggregation within the query?</p> <pre><code>select NAME, SUM(PRICE) AS PRICE, SUM(HOURS) AS HOURS, COUNT(*) AS OCCURENCE from ... group by NAME </code></pre>
0
2016-08-21T17:34:22Z
[ "python", "sqlite3" ]
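The grouped query from the answer can feed a dict comprehension to get exactly the `mydict[name] = [average price, average hours, total occurrences]` shape the question asks for. A sketch with an in-memory SQLite table standing in for the real database (the table and column names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE items (name TEXT, price REAL, hours REAL)")
cur.executemany("INSERT INTO items VALUES (?, ?, ?)", [
    ('honda accord 2nd light', 20.00, 10.0),
    ('honda accord 2nd light', 22.00, 17.0),
    ('chevy silverado headlight', 30.00, 20.0),
])

# Let the database do the aggregation instead of post-processing rows
# in Python: one output row per distinct name.
cur.execute("""
    SELECT name, AVG(price), AVG(hours), COUNT(*)
    FROM items
    GROUP BY name
""")
mydict = {name: [avg_price, avg_hours, n]
          for name, avg_price, avg_hours, n in cur.fetchall()}
```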
Copy the result of a selection grouped by into a table
39,066,947
<p>Ok, first of all: I am quite new to PostgreSQL and programming in general. So I have two tables. One table (cars) is:</p> <pre><code> id | brand | model | price ----+---------+-------+------- 1 | Opel | Astra | 12000 2 | Citroen | C1 | 12000 3 | Citroen | C2 | 15000 4 | Citroen | C3 | 18000 5 | Audi | A3 | 20000 </code></pre> <p>And the other is:</p> <pre><code> id | brand | max_price ----+---------+----------- 4 | Opel | 5 | Citroen | 6 | Audi | </code></pre> <p>What I would like to do is make a selection on cars so that I have the max price grouped by brand, and then I would like to insert the price for the corresponding brand in max_price. I tried to use Python and this is what I have done:</p> <pre><code>cur = conn.cursor() cur.execute ("""DROP TABLE IF EXISTS temp """) cur.execute ("""CREATE TABLE temp (brand text, max_price integer)""") conn.commit() cur.execute ("""SELECT cars.brand, MAX(cars.price) FROM cars GROUP BY brand;""") results = cur.fetchall() for results in results: cur.execute ("""INSERT INTO temp (brand, max_price) VALUES %s""" % str(results)) conn.commit() cur.execute ("""UPDATE max_price SET max_price.max_price=temp.max_price WHERE max_price.brand = temp.brand;""") conn.commit() </code></pre> <p>It gets stuck in the update part, signalling an error at <code>max_price.brand = temp.brand</code> </p> <p>Can anybody help me?</p> <p>EDIT: thanks to the suggestion of domino I changed the last line to <code>cur.execute ("""UPDATE max_price SET max_price.max_price=temp.max_price_int from temp WHERE max_price.brand = temp.brand;""")</code> Now I have the problem that temp.max_price is recognised not as an integer but as a tuple.
So, to solve the problem I tried to add before this last line the following code:</p> <pre><code>for results in results: results =results[0] results = int(results) cur.execute ("""INSERT INTO temp (max_price_int) VALUES %s""" % str(results)) conn.commit() </code></pre> <p>It gives me an error </p> <pre><code>cur.execute ("""INSERT INTO temp (max_price_int) VALUES %s""" % str(results)) psycopg2.ProgrammingError: syntax error at or near "12000" LINE 1: INSERT INTO temp (max_price_int) VALUES 12000 </code></pre> <p>12000 is exactly the first value I want it to insert!</p>
1
2016-08-21T17:29:55Z
39,069,034
<p>When using <code>cur.execute</code>, you should never use the <code>%</code> operator. It opens up your queries to SQL injection attacks. </p> <p>Instead, use the built-in query parameterization like so:</p> <p><code>cur.execute ("""INSERT INTO temp (max_price_int) VALUES (%s)""",(results,)) </code></p> <p>See documentation here: <a href="http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries" rel="nofollow">http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries</a></p> <hr> <p>A different approach would be to use SQL to do your update in a single query using the <code>with</code> clauses. The single query would look like this:</p> <pre><code>with max (brand, max_price) as ( select brand, max(price) from cars group by brand ) update max_price set max_price = max.max_price from max where max_price.brand = max.brand ; </code></pre> <p>Read more about Common Table Expressions (CTEs) here: <a href="https://www.postgresql.org/docs/9.5/static/queries-with.html" rel="nofollow">https://www.postgresql.org/docs/9.5/static/queries-with.html</a></p>
1
2016-08-21T21:29:28Z
[ "python", "postgresql" ]
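A runnable sketch of the parameterization point from the answer, using stdlib `sqlite3` standing in for psycopg2 (SQLite uses `?` placeholders where psycopg2 uses `%s`, but the principle is identical: let the driver pass values instead of formatting them into the SQL text):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE temp (max_price_int INTEGER)")

# Parameterized: the driver handles quoting and typing, so 12000 is
# passed as a value rather than spliced into the SQL string. This is
# what fixes the "syntax error at or near 12000" in the question.
cur.execute("INSERT INTO temp (max_price_int) VALUES (?)", (12000,))
conn.commit()

cur.execute("SELECT max_price_int FROM temp")
value = cur.fetchone()[0]
```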
iterating through list - splitting strings
39,066,980
<p>I have a list of <code>drawingnumbers</code>, I am attempting to split these strings and then append to a number of lists. </p> <p>I am hoping to end up with a number of lists, which contains each relevant piece of the original string. </p> <p>At the minute my definition is iterating through the list, but overwriting the variables, not appending them. So I have a single entry for each variable and these correspond to the final entry of the list.</p> <p>Could anybody please help?</p> <pre><code># drawingnumber split drawingnumber = ["AAA601-XXX-A-L00-1028-DR-GA-200-001", "AAA601-XXX-A-L10-1028-DR-GA-200-001", "AAA601-XXX-A-L00-1029-DR-GA-200-001", "AAA601-XXX-A-L00-1029-DR-GA-200-XXX"] building = [] buildinglist = [] originator = [] discipline = [] level = [] scope = [] drawingtype = [] drawingsubtype = [] numbera = [] numberb = [] for i in drawingnumber: building, originator, discipline, level, scope, \ drawingtype,drawingsubtype, numbera, numberb = i.split("-") print("building:", building) print("originator: ", originator) print("discipline: ", discipline) print("level: ", level) print("scope: ", scope) print("drawingtype: ", drawingtype) print("drawingsubtype", drawingsubtype) print("drawingident", numbera, "-", numberb) </code></pre>
2
2016-08-21T17:33:18Z
39,067,031
<p>You can use <code>zip</code> after splitting each element in the list to transpose your lists as:</p> <pre><code>zip(*[i.split("-") for i in drawingnumber]) </code></pre> <p>And assign them to lists names:</p> <pre><code>building, originator, discipline, level, scope, \ drawingtype, drawingsubtype, numbera, numberb = zip(*[i.split("-") for i in drawingnumber]) </code></pre> <p>Example output:</p> <pre><code>building # ('AAA601', 'AAA601', 'AAA601', 'AAA601') originator # ('XXX', 'XXX', 'XXX', 'XXX') numberb # ('001', '001', '001', 'XXX') </code></pre>
2
2016-08-21T17:38:13Z
[ "python", "list", "split" ]
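Concretely, with the four drawing numbers from the question, the `zip(*...)` transpose from the answer above looks like this:

```python
drawingnumber = ["AAA601-XXX-A-L00-1028-DR-GA-200-001",
                 "AAA601-XXX-A-L10-1028-DR-GA-200-001",
                 "AAA601-XXX-A-L00-1029-DR-GA-200-001",
                 "AAA601-XXX-A-L00-1029-DR-GA-200-XXX"]

# Each split produces 9 fields; zip(*...) transposes the rows into
# 9 tuples, one per field position across all drawing numbers.
(building, originator, discipline, level, scope,
 drawingtype, drawingsubtype, numbera, numberb) = zip(
    *[i.split("-") for i in drawingnumber])
```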
iterating through list - splitting strings
39,066,980
<p>I have a list of <code>drawingnumbers</code>, I am attempting to split these strings and then append to a number of lists. </p> <p>I am hoping to end up with a number of lists, which contains each relevant piece of the original string. </p> <p>At the minute my definition is iterating through the list, but overwriting the variables, not appending them. So I have a single entry for each variable and these correspond to the final entry of the list.</p> <p>Could anybody please help?</p> <pre><code># drawingnumber split drawingnumber = ["AAA601-XXX-A-L00-1028-DR-GA-200-001", "AAA601-XXX-A-L10-1028-DR-GA-200-001", "AAA601-XXX-A-L00-1029-DR-GA-200-001", "AAA601-XXX-A-L00-1029-DR-GA-200-XXX"] building = [] buildinglist = [] originator = [] discipline = [] level = [] scope = [] drawingtype = [] drawingsubtype = [] numbera = [] numberb = [] for i in drawingnumber: building, originator, discipline, level, scope, \ drawingtype,drawingsubtype, numbera, numberb = i.split("-") print("building:", building) print("originator: ", originator) print("discipline: ", discipline) print("level: ", level) print("scope: ", scope) print("drawingtype: ", drawingtype) print("drawingsubtype", drawingsubtype) print("drawingident", numbera, "-", numberb) </code></pre>
2
2016-08-21T17:33:18Z
39,067,059
<p>Just change</p> <pre><code> for i in drawingnumber: building, originator, discipline, level, scope, drawingtype,drawingsubtype, numbera, numberb = i.split("-") </code></pre> <p>to:</p> <pre><code> for i in drawingnumber: building_, originator_, discipline_, level_, scope_, drawingtype_,drawingsubtype_, numbera_, numberb_ = i.split("-") building.append(building_) originator.append(originator_) ...etc... </code></pre> <p>The split values redefine your variables each time; what you want to do here is append them to the lists you created. Also, pick plural names for lists, like <code>buildings</code>, and append singular variables to them.</p>
0
2016-08-21T17:41:10Z
[ "python", "list", "split" ]
iterating through list - splitting strings
39,066,980
<p>I have a list of <code>drawingnumbers</code>, I am attempting to split these strings and then append to a number of lists. </p> <p>I am hoping to end up with a number of lists, which contains each relevant piece of the original string. </p> <p>At the minute my definition is iterating through the list, but overwriting the variables, not appending them. So I have a single entry for each variable and these correspond to the final entry of the list.</p> <p>Could anybody please help?</p> <pre><code># drawingnumber split drawingnumber = ["AAA601-XXX-A-L00-1028-DR-GA-200-001", "AAA601-XXX-A-L10-1028-DR-GA-200-001", "AAA601-XXX-A-L00-1029-DR-GA-200-001", "AAA601-XXX-A-L00-1029-DR-GA-200-XXX"] building = [] buildinglist = [] originator = [] discipline = [] level = [] scope = [] drawingtype = [] drawingsubtype = [] numbera = [] numberb = [] for i in drawingnumber: building, originator, discipline, level, scope, \ drawingtype,drawingsubtype, numbera, numberb = i.split("-") print("building:", building) print("originator: ", originator) print("discipline: ", discipline) print("level: ", level) print("scope: ", scope) print("drawingtype: ", drawingtype) print("drawingsubtype", drawingsubtype) print("drawingident", numbera, "-", numberb) </code></pre>
2
2016-08-21T17:33:18Z
39,067,136
<pre><code>drawingnumber = ["AAA601-XX1-A-L00-1028-DR-GA-200-001", "AAA602-XX2-A-L10-1028-DR-GA-200-001", "AAA603-XX3-A-L00-1029-DR-GA-200-001", "AAA604-XX4-A-L00-1029-DR-GA-200-XXX"] building = [] buildinglist = [] originator = [] discipline = [] level = [] scope = [] drawingtype = [] drawingsubtype = [] numbera = [] numberb = [] for i in drawingnumber: j = i.split('-') building.append(j[0]) buildinglist.append(j[1]) for i in range(len(drawingnumber)): print("building:", building[i]) print("buildinglist:", buildinglist[i]) </code></pre>
0
2016-08-21T17:48:12Z
[ "python", "list", "split" ]
Python program to multiply two polynomial where each term of the polynomial is represented as a pair of integers (coefficient, exponent)?
39,066,982
<p>The function takes two lists (having tuples as values) as input. I have the following algorithm in mind for this, but I don't know how to write it properly.</p> <p>--> Firstly, make the required number of dictionaries so that the coefficient of each power is multiplied with all coefficients of polynomial p2.</p> <p>Then all dictionary coefficients which have the same power are added together.</p> <pre><code>def multpoly(p1,p2): dp1=dict(map(reversed, p1)) dp2=dict(map(reversed, p2)) kdp1=list(dp1.keys()) kdp2=list(dp2.keys()) rslt={} if len(kdp1)&gt;=len(kdp2): kd1=kdp1 kd2=kdp2 elif len(kdp1)&lt;len(kdp2): kd1=kdp2 kd2=kdp1 for n in kd2: for m in kd1: rslt[n]={m:0} if len(dp1)&lt;=len(dp2): rslt[n][m+n]=rslt[n][m+n] + dp1[n]*dp2[m] elif len(dp1)&gt;len(dp2): rslt[n][m+n]=rslt[n][m+n] + dp2[n]*dp1[m] return(rslt)
-2
2016-08-21T17:33:39Z
39,067,151
<p>If I understand correctly, you want a function to multiply two polynomials and return the result. In the future, try to post a specific question. Here is code that will work for you:</p> <pre><code>def multiply_terms(term_1, term_2): new_c = term_1[0] * term_2[0] new_e = term_1[1] + term_2[1] return (new_c, new_e) def multpoly(p1, p2): """ @params p1,p2 are lists of tuples where each tuple represents a pair of term coefficient and exponent """ # multiply terms result_poly = [] for term_1 in p1: for term_2 in p2: result_poly.append(multiply_terms(term_1, term_2)) # collect like terms (deduplicate exponents so each one is collected once) collected_terms = [] exps = set(term[1] for term in result_poly) for e in exps: count = 0 for term in result_poly: if term[1] == e: count += term[0] collected_terms.append((count, e)) return collected_terms </code></pre> <p>Note however, there are definitely much better ways to represent these polynomials such that the multiplication is faster and easier to code. Your idea with the dict is slightly better but still messy. You could use a list where the index represents the exponent and the value represents the coefficient. For ex. you could represent <code>2x^4 + 3x + 1</code> as <code>[1, 3, 0, 0, 2]</code>. </p>
1
2016-08-21T17:49:20Z
[ "python", "python-2.7", "python-3.x" ]
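The list representation suggested at the end of the answer (index is the exponent, value is the coefficient) makes the multiplication a compact double loop. A sketch; the function name is chosen here for illustration:

```python
def multpoly_list(p1, p2):
    # p[i] is the coefficient of x**i, e.g. 2x^4 + 3x + 1 -> [1, 3, 0, 0, 2].
    result = [0] * (len(p1) + len(p2) - 1)
    for i, c1 in enumerate(p1):
        for j, c2 in enumerate(p2):
            result[i + j] += c1 * c2  # x**i * x**j contributes to x**(i+j)
    return result

# (2x^4 + 3x + 1) * (x + 1) = 2x^5 + 2x^4 + 3x^2 + 4x + 1
product = multpoly_list([1, 3, 0, 0, 2], [1, 1])
```

Collecting like terms happens for free: coefficients land at the right index and simply accumulate.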
bot.polling funciton in a try-except block, do not run the rest of the code
39,067,029
<p>In my Python code, the polling function is in a try-except block: </p> <pre><code>bot = telebot.TeleBot(TOKEN) while True: try: status = "Conected" bot.polling(none_stop=False, interval=1) except: status = "failure" print status #do something.. time.sleep(1) </code></pre> <p>but when <code>bot.polling</code> is executed, the script never prints a status and does not run the rest of the code.</p> <p>I tried adding "block=True" <code>bot.polling(none_stop=False, interval=1, block=True)</code>, but in that case the polling doesn't get the Telegram messages.</p>
0
2016-08-21T17:38:05Z
39,067,049
<p>Apologies, my last answer wasn't as clear as it could have been.</p> <p>So when working with try/except you'll want to make sure that you've got the indentation under the While loop and the Try: and Except: (and Else, Finally if you're using them too).</p> <p>Next, you want the action that you're looking for to take place in the try or except. So, here's what I'd do:</p> <p>This would run and continuously loop once per second. If the loop's status was successful it would show connected, otherwise it would print "failure" then try the loop again one second later. If it returns an error it will print that it has an error. However, if you encounter an error it will then loop back to the top of the while True: loop. This is why you were not getting a print, since the except instruction didn't include a print. Once the exception happens your code goes back to the top and tries again. </p> <pre><code>while True: try: status = "Connected" bot.polling(none_stop=False, interval=1) pass except: status = "failure" print status else: print status #do something..
time.sleep(1) </code></pre> <p>This tutorial is extremely helpful for getting up and running with these try catch loops.</p> <p><a href="https://docs.python.org/2/tutorial/errors.html" rel="nofollow">https://docs.python.org/2/tutorial/errors.html</a></p> <p>Here's an example of a try except I use that works:</p> <pre><code>token = 0 while token == 0: print("Welcome to the back office.\nYou will need to log in to contine.") sleep(1) print("="*5 + " Please log in " +"="*5) print(" ") email = raw_input("Email: ") print(" ") password = getpass.getpass("Password: ") authpayload = "grant_type=password&amp;username=" + email + "&amp;password=" + password login = requests.post(url+'/token', data=authpayload) #testing token = login.json() try: token = token["access_token"] pass except: print(" ") print("="*5 + " ERROR " +"="*5) print(token) print("Sorry please try logging in again.") logging.info("user login failed " + str(token)) logging.info("user tried email: " + email) token = 0 sleep(1) else: print(" ") print("="*5 + " You're now logged in " +"="*5) print(" ") logging.info("user login succeeded") logging.info("user email: " + email) sleep(.5) pass </code></pre> <p>In my case, I'm "Try"ing to see if the json response has an object with the key "access_token" if not, then I know something has gone wrong and I don't let the user continue. This then sends them back to the top since in the except area I make sure the token is set back to 0. Meaning this loop will run until my program receives a value for the access token.</p> <p>I truly hope this helps! If it solves your problem please accept it! </p>
0
2016-08-21T17:40:18Z
[ "python", "bots", "polling", "telegram", "try-except" ]
Squaring values of dictionary
39,067,114
<p>I am using Python 2.7, still learning about dictionaries. I am focusing on performing numerical computations for dictionaries and need some help.</p>
<p>I have a dictionary and I would like to square the values in it:</p>
<pre><code>dict1 = {'dog': {'shepherd': 5, 'collie': 15, 'poodle': 3, 'terrier': 20},
         'cat': {'siamese': 3, 'persian': 2, 'dsh': 16, 'dls': 16},
         'bird': {'budgie': 20, 'finch': 35, 'cockatoo': 1, 'parrot': 2}}
</code></pre>
<p>I want:</p>
<pre><code>dict1 = {'dog': {'shepherd': 25, 'collie': 225, 'poodle': 9, 'terrier': 400},
         'cat': {'siamese': 9, 'persian': 4, 'dsh': 256, 'dls': 256},
         'bird': {'budgie': 400, 'finch': 1225, 'cockatoo': 1, 'parrot': 4}}
</code></pre>
<p>I tried:</p>
<pre><code>dict1_squared = dict1**2.
dict1_squared = pow(dict1, 2.)
dict1_squared = {key: pow(value, 2.) for key, value in dict1.items()}
</code></pre>
<p>I did not have any success with my attempts.</p>
3
2016-08-21T17:46:29Z
39,067,165
<p>It's because you have nested dictionaries: the outer values are themselves dicts, so you need a second level of iteration. Look:</p>
<pre><code>results = {}
for key, data_dict in dict1.iteritems():
    # use different names in the comprehension so they don't shadow `key`
    results[key] = {breed: pow(count, 2.) for breed, count in data_dict.iteritems()}
</code></pre>
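<p>For reference, here is the same two-level approach as a self-contained sketch, checked against the numbers from the question. It is written with Python 3's <code>items()</code>, since <code>iteritems()</code> exists only in Python 2:</p>

```python
dict1 = {'dog': {'shepherd': 5, 'collie': 15, 'poodle': 3, 'terrier': 20},
         'cat': {'siamese': 3, 'persian': 2, 'dsh': 16, 'dls': 16},
         'bird': {'budgie': 20, 'finch': 35, 'cockatoo': 1, 'parrot': 2}}

results = {}
for animal, breeds in dict1.items():  # outer dict: animal -> {breed: count}
    # square each count in the inner dict; dict1 itself is left untouched
    results[animal] = {breed: count ** 2 for breed, count in breeds.items()}

print(results['dog'])  # {'shepherd': 25, 'collie': 225, 'poodle': 9, 'terrier': 400}
```

<p>Note that this builds a new dict, so the original <code>dict1</code> keeps its unsquared values.</p>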
4
2016-08-21T17:50:35Z
[ "python", "dictionary" ]
Squaring values of dictionary
39,067,114
<p>I am using Python 2.7, still learning about dictionaries. I am focusing on performing numerical computations for dictionaries and need some help.</p>
<p>I have a dictionary and I would like to square the values in it:</p>
<pre><code>dict1 = {'dog': {'shepherd': 5, 'collie': 15, 'poodle': 3, 'terrier': 20},
         'cat': {'siamese': 3, 'persian': 2, 'dsh': 16, 'dls': 16},
         'bird': {'budgie': 20, 'finch': 35, 'cockatoo': 1, 'parrot': 2}}
</code></pre>
<p>I want:</p>
<pre><code>dict1 = {'dog': {'shepherd': 25, 'collie': 225, 'poodle': 9, 'terrier': 400},
         'cat': {'siamese': 9, 'persian': 4, 'dsh': 256, 'dls': 256},
         'bird': {'budgie': 400, 'finch': 1225, 'cockatoo': 1, 'parrot': 4}}
</code></pre>
<p>I tried:</p>
<pre><code>dict1_squared = dict1**2.
dict1_squared = pow(dict1, 2.)
dict1_squared = {key: pow(value, 2.) for key, value in dict1.items()}
</code></pre>
<p>I did not have any success with my attempts.</p>
3
2016-08-21T17:46:29Z
39,067,171
<p>You were very close with the dictionary comprehension. The issue is that <em>value</em> in your solution is a dictionary itself, so you have to iterate over it too.</p> <pre><code>dict1_squared = {key: {k: pow(v,2) for k,v in value.items()} for key, value in dict1.items()} </code></pre>
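<p>A quick check of that comprehension with a trimmed-down version of the question's data (Python 3 shown; the same line also works on 2.7, where <code>items()</code> exists alongside <code>iteritems()</code>):</p>

```python
dict1 = {'dog': {'shepherd': 5, 'collie': 15},
         'cat': {'siamese': 3, 'persian': 2}}

# outer comprehension walks the animals, inner one squares each count
dict1_squared = {key: {k: pow(v, 2) for k, v in value.items()}
                 for key, value in dict1.items()}

print(dict1_squared['dog'])  # {'shepherd': 25, 'collie': 225}
```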
3
2016-08-21T17:51:27Z
[ "python", "dictionary" ]
Squaring values of dictionary
39,067,114
<p>I am using Python 2.7, still learning about dictionaries. I am focusing on performing numerical computations for dictionaries and need some help.</p>
<p>I have a dictionary and I would like to square the values in it:</p>
<pre><code>dict1 = {'dog': {'shepherd': 5, 'collie': 15, 'poodle': 3, 'terrier': 20},
         'cat': {'siamese': 3, 'persian': 2, 'dsh': 16, 'dls': 16},
         'bird': {'budgie': 20, 'finch': 35, 'cockatoo': 1, 'parrot': 2}}
</code></pre>
<p>I want:</p>
<pre><code>dict1 = {'dog': {'shepherd': 25, 'collie': 225, 'poodle': 9, 'terrier': 400},
         'cat': {'siamese': 9, 'persian': 4, 'dsh': 256, 'dls': 256},
         'bird': {'budgie': 400, 'finch': 1225, 'cockatoo': 1, 'parrot': 4}}
</code></pre>
<p>I tried:</p>
<pre><code>dict1_squared = dict1**2.
dict1_squared = pow(dict1, 2.)
dict1_squared = {key: pow(value, 2.) for key, value in dict1.items()}
</code></pre>
<p>I did not have any success with my attempts.</p>
3
2016-08-21T17:46:29Z
39,067,216
<p>One of those cases where I might prefer loops:</p>
<pre><code>for d in dict1.values():
    for k in d:
        d[k] **= 2
</code></pre>
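<p>Unlike the comprehension-based approaches, this mutates <code>dict1</code> in place, which matches what the question asked for. A quick check with the question's 'bird' data:</p>

```python
dict1 = {'bird': {'budgie': 20, 'finch': 35, 'cockatoo': 1, 'parrot': 2}}

for d in dict1.values():  # d is one inner dict, e.g. the 'bird' one
    for k in d:
        d[k] **= 2        # squares the value in place

print(dict1['bird'])  # {'budgie': 400, 'finch': 1225, 'cockatoo': 1, 'parrot': 4}
```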
5
2016-08-21T17:55:38Z
[ "python", "dictionary" ]
Squaring values of dictionary
39,067,114
<p>I am using Python 2.7, still learning about dictionaries. I am focusing on performing numerical computations for dictionaries and need some help.</p>
<p>I have a dictionary and I would like to square the values in it:</p>
<pre><code>dict1 = {'dog': {'shepherd': 5, 'collie': 15, 'poodle': 3, 'terrier': 20},
         'cat': {'siamese': 3, 'persian': 2, 'dsh': 16, 'dls': 16},
         'bird': {'budgie': 20, 'finch': 35, 'cockatoo': 1, 'parrot': 2}}
</code></pre>
<p>I want:</p>
<pre><code>dict1 = {'dog': {'shepherd': 25, 'collie': 225, 'poodle': 9, 'terrier': 400},
         'cat': {'siamese': 9, 'persian': 4, 'dsh': 256, 'dls': 256},
         'bird': {'budgie': 400, 'finch': 1225, 'cockatoo': 1, 'parrot': 4}}
</code></pre>
<p>I tried:</p>
<pre><code>dict1_squared = dict1**2.
dict1_squared = pow(dict1, 2.)
dict1_squared = {key: pow(value, 2.) for key, value in dict1.items()}
</code></pre>
<p>I did not have any success with my attempts.</p>
3
2016-08-21T17:46:29Z
39,067,237
<p>Based on your question I think it would be a good idea to work through a tutorial. <a href="http://www.tutorialspoint.com/python/index.htm" rel="nofollow">Here is one from tutorialspoint</a>. You said you are trying to square the dictionary, but that is not what you are trying to do. You are trying to square the values within a dictionary. To square the values within the dictionary, you first need to get the values. Python's <code>for</code> loops can help with this.</p>
<pre><code># just an example
test_dict = {'a': {'aa': 2}, 'b': {'bb': 4}}

# go through every key in the outer dictionary
for key1 in test_dict:
    # set a variable equal to the inner dictionary
    nested_dict = test_dict[key1]
    # get the values you want to square
    for key2 in nested_dict:
        # square the values
        nested_dict[key2] = nested_dict[key2] ** 2
</code></pre>
1
2016-08-21T17:58:10Z
[ "python", "dictionary" ]
Squaring values of dictionary
39,067,114
<p>I am using Python 2.7, still learning about dictionaries. I am focusing on performing numerical computations for dictionaries and need some help.</p> <p>I have a dictionary and I would like to square the values in it:</p> <pre><code> dict1 = {'dog': {'shepherd': 5,'collie': 15,'poodle': 3,'terrier': 20}, 'cat': {'siamese': 3,'persian': 2,'dsh': 16,'dls': 16}, 'bird': {'budgie': 20,'finch': 35,'cockatoo': 1,'parrot': 2} </code></pre> <p>I want:</p> <pre><code> dict1 = {'dog': {'shepherd': 25,'collie': 225,'poodle': 9,'terrier': 400}, 'cat': {'siamese': 9,'persian': 4,'dsh': 256,'dls': 256}, 'bird': {'budgie': 400,'finch': 1225,'cockatoo': 1,'parrot': 4} </code></pre> <p>I tried:</p> <pre><code> dict1_squared = dict**2. dict1_squared = pow(dict,2.) dict1_squared = {key: pow(value,2.) for key, value in dict1.items()} </code></pre> <p>I did not have any success with my attempts. </p>
3
2016-08-21T17:46:29Z
39,067,247
<p>If your structure is always the same you can do it this way:</p>
<pre><code>for k, w in dict1.items():
    for k1, w1 in w.items():
        print w1, pow(w1, 2)

20 400
1 1
2 4
35 1225
5 25
15 225
20 400
3 9
3 9
16 256
2 4
16 256
</code></pre>
0
2016-08-21T17:59:13Z
[ "python", "dictionary" ]
variables defined in a %time statement inside a method are not accessible after the statement
39,067,203
<p>This seems to work fine - </p>
<pre><code>%time a = "abc"
print(a)

CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 19.1 µs
abc
</code></pre>
<p>This doesn't - </p>
<pre><code>def func():
    %time b = "abc"
    print(b)

func()

CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 31 µs
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
&lt;ipython-input-8-57f7d48952b8&gt; in &lt;module&gt;()
      3     print(b)
      4
----&gt; 5 func()

&lt;ipython-input-8-57f7d48952b8&gt; in func()
      1 def func():
      2     get_ipython().magic(u'time b = "abc"')
----&gt; 3     print(b)
      4
      5 func()

NameError: global name 'b' is not defined
</code></pre>
<p>Here's a link to a <a href="https://gist.github.com/jayantj/2ead360ffb326f5e5e78e9e58b8a153e" rel="nofollow">notebook</a></p>
<p>I'm using Python 2.7, haven't tried it with Python 3 yet.</p>
<p>Is this expected behaviour?</p>
1
2016-08-21T17:54:27Z
39,067,622
<p>I am almost certain this is an IPython bug. <a href="https://github.com/ipython/ipython/issues/9892" rel="nofollow">Reported here</a>.</p>
<pre><code>In [31]: def func():
    ...:     a = 2
    ...:     %time b = 1
    ...:     print(locals())
    ...:     print a
    ...:     print b

In [32]: func()
CPU times: user 3 µs, sys: 0 ns, total: 3 µs
Wall time: 6.2 µs
{'a': 2, 'b': 1}
2
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
&lt;ipython-input-32-08a2da4138f6&gt; in &lt;module&gt;()
----&gt; 1 func()

&lt;ipython-input-31-13da62c18a7e&gt; in func()
      4     print(locals())
      5     print a
----&gt; 6     print b
      7
      8

NameError: global name 'b' is not defined
</code></pre>
1
2016-08-21T18:36:58Z
[ "python", "python-2.7", "ipython" ]
Pygame for Python3.5 on CentOS 7
39,067,306
<p>Thanks in advance for the help.</p> <p>I am trying to install Pygame for Python 3.5. </p> <p>I have spent many hours doing research and found that it was possible for Windows, yet nothing about CentOS. </p> <p><strong>Is it possible to install Pygame for Python 3.5 on CentOS 7?</strong></p> <p><strong>If so, how can i do it?</strong> </p> <p>I have tried many commands, all of which have not worked. Thanks for the help</p>
0
2016-08-21T18:05:58Z
39,067,446
<p>You can compile pygame from source code.</p>
<p>1) Get the dependencies:</p>
<pre><code>yum install python3 python3-tools python3-devel SDL SDL-devel portmidi portmidi-devel ffmpeg ffmpeg-devel SDL_image-devel SDL_mixer-devel SDL_ttf-devel libjpeg-turbo-devel
cd /usr/lib
ln -s libportmidi.so libporttime.so
</code></pre>
<p>2) Get the pygame source code:</p>
<pre><code>svn co svn://seul.org/svn/pygame/trunk pygame
cd pygame
</code></pre>
<p>3) Then configure, compile and install:</p>
<pre><code>python3 config.py
python3 setup.py build
python3 setup.py install
</code></pre>
1
2016-08-21T18:18:56Z
[ "python", "python-3.x", "pygame", "install", "centos7" ]
how to draw rectangles using list in python
39,067,400
<pre><code>for line, images_files in zip(lines, image_list):
    info = line.split(',')
    image_index = [int(info[0])]
    box_coordiante1 = [info[2]]
    box_coordiante2 = [info[3]]
    box_coordiante3 = [info[4]]
    box_coordiante4 = [info[5]]

    prev_image_num = 1
    for image_number in image_index:  #### read each other image_number
        if prev_image_num != image_number:  # if read 11111 but a different number such as 2, 3 etc. appears
            prev_image_num = image_number   # the different number becomes prev_image_num (it was 1)
            #box_coordinate = []            # empty box_coordinate
            #box_coordinate.append(info[2:6])
            #print box_coordinate
            # box_coordinate.append()       # insert axes 2 to 6
            rect = plt.Rectangle((int(box_coordiante1), int(box_coordiante2)),
                                 int(box_coordiante3), int(box_coordiante4),
                                 linewidth=1, edgecolor='r', facecolor='none')
            ax.add_patch(rect)

    im = cv2.imread(images_files)
    im = im[:, :, (2, 1, 0)]

    # Display the image
    plt.imshow(im)
    plt.draw()
    plt.pause(0.1)
    plt.cla()
</code></pre>
<p>I am supposed to draw boxes on each picture. To show the boxes on each picture, I gather the box locations and display them at the same time. So I tried passing lists to plt.Rectangle, but it said "TypeError: int() argument must be a string or a number, not 'list'". Are there other ways?</p>
2
2016-08-21T18:14:13Z
39,069,104
<p>I'm not very familiar with Python, but it seems like you want a plain number in the variables <code>image_index</code> and <code>box_coordianteN</code>. It looks like you're assigning single-element lists to them. Try changing:</p>
<pre><code>image_index = [int(info[0])]  # list containing one element: int(info[0])
box_coordiante1 = [info[2]]
box_coordiante2 = [info[3]]
box_coordiante3 = [info[4]]
box_coordiante4 = [info[5]]
</code></pre>
<p>to:</p>
<pre><code>image_index = int(info[0])  # number: int(info[0])
box_coordiante1 = info[2]
box_coordiante2 = info[3]
box_coordiante3 = info[4]
box_coordiante4 = info[5]
</code></pre>
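<p>The error can be reproduced without matplotlib at all, which is a quick way to confirm the diagnosis. The <code>info</code> line below is made up for illustration, not taken from the original data:</p>

```python
info = "0,frame.jpg,10,20,30,40".split(',')

wrapped = [int(info[0])]  # one-element list: [0]
try:
    int(wrapped)          # the same failure plt.Rectangle runs into
    failed = False
except TypeError:
    failed = True
print(failed)             # True

image_index = int(info[0])  # a plain number instead
print(image_index)          # 0
```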
0
2016-08-21T21:41:14Z
[ "python", "image", "python-2.7", "draw", "drawrectangle" ]
Python + BeautifulSoup: Encoding Error
39,067,445
<p>If I run this code:</p>
<pre><code>for link in soup.findAll('a'):
    href = link.get('href')
    href = str(href)
</code></pre>
<p>I'm getting the following error in the last line</p>
<pre><code>href = str(href)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2018' in position 68: ordinal not in range(128)
</code></pre>
<p>When I try to encode the variable, as shown below:</p>
<pre><code>for link in soup.findAll('a'):
    href = link.get('href')
    href = href.encode('utf-8')
    href = str(href)
</code></pre>
<p>I get the following error:</p>
<pre><code>href = href.encode('utf-8')
AttributeError: 'NoneType' object has no attribute 'encode'
</code></pre>
<p>I've looked at multiple posts in here and elsewhere, but none of them provided a suitable solution. I'm fairly new to python. Please help.</p>
0
2016-08-21T18:18:46Z
39,067,608
<p>In my native language we have a lot of "áçéàó" characters, so I frequently found myself in a similar situation, and most of the decoding/encoding tips didn't work all the way.</p>
<p>Found my way out by resetting the sys default encoding at the beginning of my code using:</p>
<pre><code>import sys
reload(sys)
sys.setdefaultencoding('latin-1')
</code></pre>
<p>Hope this can also help with your problem.</p>
0
2016-08-21T18:35:50Z
[ "python", "python-2.7", "encoding", "beautifulsoup" ]
Python + BeautifulSoup: Encoding Error
39,067,445
<p>If I run this code:</p> <pre><code>for link in soup.findAll('a'): href = link.get('href') href = str(href) </code></pre> <p>I'm getting the following error in the last line</p> <pre><code>href = str(href) UnicodeEncodeError: 'ascii' codec can't encode character u'\u2018' in position 68: ordinal not in range(128) </code></pre> <p>When I try to encode the variable, as shown below:</p> <pre><code>for link in soup.findAll('a'): href = link.get('href') href = href.encode('utf-8') href = str(href) </code></pre> <p>I get the following error:</p> <pre><code>href = href.encode('utf-8') AttributeError: 'NoneType' object has no attribute 'encode' </code></pre> <p>I've looked at multiple posts in here and elsewhere, but none of them provided a suitable solution. I'm fairly new to python. Please help.</p>
0
2016-08-21T18:18:46Z
39,091,779
<p>If anyone ever faces this issue, here's how I resolved it:</p>
<p>Ideally, for the encoding issue, this should've worked:</p>
<pre><code>href = href.encode('utf-8')
href = str(href)
</code></pre>
<p>But in the set of webpages I was scraping, there were a few pages which didn't store any value in the <code>href</code> variable, resulting in a few NoneType returns. And that was failing the <code>str(href)</code> statement. So I finally did this:</p>
<pre><code>for link in soup.findAll('a'):
    href = link.get('href')
    if href is None:
        href = ""
    href = str(href.encode('utf-8'))
</code></pre>
<p>If <code>href</code> is a <code>NoneType</code>, it's best to assign it to an empty string to prevent any type-specific issue further in the code.</p>
<p>One of the observations I've made about the u\2018 and u\2019 characters is that more often than not, they don't occur in the link itself, but in the attribute attached to the links. Which is generally the text after <code>?attribute=</code>. So if attributes are not important in your scraping, using a statement like the one below could solve all your problems.</p>
<pre><code>href = href.split("?")[0]
</code></pre>
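<p>The None-guard plus the query-stripping step can be collected into one small helper. This is a Python 3 sketch (where the separate <code>encode</code> step is unnecessary), and <code>clean_href</code> is a name I made up, not from the original code:</p>

```python
def clean_href(href):
    """Guard against a missing href, then drop any ?attribute= query part."""
    if href is None:
        href = ""
    return href.split("?")[0]

print(repr(clean_href(None)))                      # ''
print(clean_href("/page?attr=\u2018quoted\u2019")) # /page
print(clean_href("/plain/link"))                   # /plain/link
```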
0
2016-08-23T03:17:51Z
[ "python", "python-2.7", "encoding", "beautifulsoup" ]
"SyntaxError: non-keyword arg after keyword arg" when trying to print values of variables in a Label
39,067,466
<p>I was just trying to print the values of variables in a <code>Label</code> declaration as given below</p> <pre><code>c = Label(root, text="Enter The Number Of Fruits In Basket%d Of Type%d\n"%j,i) </code></pre> <p>but I am getting the below error</p> <blockquote> <p>SyntaxError: non-keyword arg after keyword arg</p> </blockquote> <p>Am I missing anything, or declaring any arg wrongly?</p>
0
2016-08-21T18:21:06Z
39,067,504
<p>Because you haven't used parentheses around <code>j, i</code> for the format string, Python thinks that <code>i</code> is a separate variable being passed to the <code>Label()</code> function as the third positional argument, instead of part of the format string. And since you've put <code>text=</code> (a keyword argument) already, all subsequent arguments also have to be keyword arguments.</p>
<p>Add the parentheses around <code>j, i</code> and then it should be okay:</p>
<pre><code>c = Label(root, text="Enter The Number Of Fruits In Basket%d Of Type%d\n" % (j, i))
</code></pre>
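<p>The difference is easy to see outside Tkinter. Without the parentheses, <code>%</code> binds to <code>j</code> alone and the second <code>%d</code> is left unfilled; with them, both values feed the format string:</p>

```python
j, i = 2, 3

try:
    "Basket%d Of Type%d" % j  # only one value for two %d specifiers
    raised = False
except TypeError:
    raised = True
print(raised)  # True

good = "Basket%d Of Type%d" % (j, i)  # the tuple supplies both values
print(good)  # Basket2 Of Type3
```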
1
2016-08-21T18:24:48Z
[ "python", "python-2.7", "tkinter" ]
"rethinkdb.errors.ReqlServerCompileError: Expected 2 arguments but found 1 in:" when trying to .update() with Python rethink
39,067,492
<p>I'm working with RethinkDB using the Python module and right now I'm trying to update a model with this statement:</p>
<pre><code>results = rethink.table(model + "s").filter(id=results["id"]).update(data).run(g.rdb_conn)
</code></pre>
<p><code>model</code> is something being defined earlier in the function, in this case it's <code>quote</code>, and <code>data</code> is a dict of JSON data:</p>
<pre><code>{
    "channelId": "paradigmshift3d",
    "quoteId": "1",
    "quote": "Testing 123",
    "userId": "123",
    "messageId": "456"
}
</code></pre>
<p>According to the <a href="https://www.rethinkdb.com/api/python/update/" rel="nofollow">RethinkDB API reference</a> that statement I'm using <em>should</em> work, but it's not. Here's the full traceback:</p>
<pre><code>Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 2000, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1991, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1567, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.5/dist-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1988, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1641, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1544, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.5/dist-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1639, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1625, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/nate/CactusAPI/views.py", line 309, in chan_quote
    fields=fields
  File "/home/nate/CactusAPI/helpers.py", line 403, in generate_response
    ).update(data).run(g.rdb_conn)
  File "/usr/local/lib/python3.5/dist-packages/remodel/monkey.py", line 18, in remodel_run
    return run(self, c, **global_optargs)
  File "/usr/local/lib/python3.5/dist-packages/rethinkdb/ast.py", line 118, in run
    return c._start(self, **global_optargs)
  File "/usr/local/lib/python3.5/dist-packages/rethinkdb/net.py", line 620, in _start
    return self._instance.run_query(q, global_optargs.get('noreply', False))
  File "/usr/local/lib/python3.5/dist-packages/rethinkdb/net.py", line 466, in run_query
    raise res.make_error(query)
rethinkdb.errors.ReqlServerCompileError: Expected 2 arguments but found 1 in:
r.table('quotes').filter(id='92c5160a-db57-4c3b-b2b2-2704cdcfc2b7').update(r.expr({'channelId': 'paradigmshift3d', 'quoteId': '1', 'quote': 'Testing 123', 'userId': '123', 'messageId': '456'}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
</code></pre>
<p>I've done some googling around, but there don't seem to be any questions/issues about this problem.</p>
2
2016-08-21T18:24:00Z
39,067,872
<p>This was caused by me trying to call <code>.filter()</code> with a keyword argument. <code>.filter()</code> is expecting a dictionary (or a predicate) and I was simply providing <code>id='92c5160a-db57-4c3b-b2b2-2704cdcfc2b7'</code>.</p>
<p>I changed the query around to</p>
<p><code>rethink.table(model + "s").get(results["id"]).update(data).run(g.rdb_conn)</code></p>
<p>and it's working now!</p>
2
2016-08-21T19:05:38Z
[ "python", "rethinkdb", "rethinkdb-python" ]
Pyspark: display a spark data frame in a table format
39,067,505
<p>I am using pyspark to read a parquet file like below:</p> <pre><code>my_df = sqlContext.read.parquet('hdfs://myPath/myDB.db/myTable/**') </code></pre> <p>Then when I do <code>my_df.take(5)</code>, it will show <code>[Row(...)]</code>, instead of a table format like when we use the pandas data frame.</p> <p>Is it possible to display the data frame in a table format like pandas data frame? Thanks!</p>
1
2016-08-21T18:24:58Z
39,071,773
<p>Yes: call the <code>toPandas</code> method on your dataframe and you'll get an <strong>actual</strong> pandas dataframe!</p>
0
2016-08-22T05:12:05Z
[ "python", "pandas", "pyspark", "spark-dataframe" ]
Files not being received correctly when sent through sockets over LAN
39,067,537
<p>I am trying to build an application that will send files over wireless LAN through the Python 3.4 socket module. It appears to be able to send files within one workstation, but when attempting to send files between two computers on the same network, the computer only appears to receive the first few thousand bytes, the amount dependent on the total size of the file when sent.</p>
<p>For example, a 127 byte file was sent with no issues, but of a 34,846 byte file, only 1,431 or 4,327 bytes were received (only these two numbers of bytes seemed to be received, seemingly randomly switching between the two) and of a 65,182 byte file, only 5,772 or 4,324 bytes were received (the same situation). From looking at the contents of the received files, it appeared that it was the first few bytes that were received.</p>
<p>Both systems have free, accessible RAM exceeding 2GB and sufficient storage space. The server is being run on Windows 8.1, Python 3.4.2, and the client is Ubuntu 14.04 Linux, Python 3.4.0.</p>
<p>My code may be piecemeal and generally poorly written, as I am a beginner without a formal computer science education or any notable experience, especially in network programming. However, I have rooted through the code and racked my brain with no clear solution presenting itself.</p>
<p>Server (Host):</p>
<pre><code>import os
import socket
import struct
import time
import hashlib

def md5(fname):
    hash_md5 = hashlib.md5()
    with open(fname, "rb") as readFile:
        for chunk in iter(lambda: readFile.read(4096), b""):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()

try:
    s = socket.socket()
    host = socket.gethostname()
    port = 26
    s.bind(("0.0.0.0", port))
    filename = input("File to send? ")
    fileLength = os.stat(filename).st_size
    print("Length:", fileLength, "bytes")
    fileLengthBytes = struct.pack("&gt;L", fileLength)
    filenameBytes = bytes(filename, "ascii")
    filenameLength = struct.pack("&gt;L", len(filenameBytes))
    checksum = md5(filename)
    checksumBytes = bytes(bytearray.fromhex(checksum))
    s.listen(5)
    while True:
        c, addr = s.accept()
        print("Connection from", addr)
        c.send(checksumBytes)
        time.sleep(0.1)
        c.send(filenameLength + fileLengthBytes)
        time.sleep(0.1)
        with open(filename, "rb") as f:
            c.send(filenameBytes + f.read())
        c.close()
finally:
    try:
        c.close()
    except NameError:
        pass
    except Exception as e:
        print(type(e), e.args, e)
    try:
        f.close()
    except NameError:
        pass
    except Exception as e:
        print(type(e), e.args, e)
</code></pre>
<p>Client (Recipient):</p>
<pre><code>import socket
import struct
import hashlib

def md5(fname):
    hash_md5 = hashlib.md5()
    with open(fname, "rb") as readFile:
        for chunk in iter(lambda: readFile.read(4096), b""):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()

CHECKSUM_LENGTH = 16

try:
    s = socket.socket()
    host = socket.gethostname()
    port = 26
    ip = input("Connect to IP: ")
    s.connect((ip, port))
    initialChecksum = s.recv(CHECKSUM_LENGTH)
    received = s.recv(8)
    # 1st 4 bytes: filename length
    # 2nd 4 bytes: file contents length
    filenameLength = struct.unpack("&gt;L", received[0:4])[0]
    fileLength = struct.unpack("&gt;L", received[4:])[0]
    print("Length:", fileLength)
    bytesToReceive = filenameLength + fileLength
    receivedBytes = s.recv(bytesToReceive)
    filename = str(receivedBytes[0:filenameLength], "ascii")
    f = open(filename, "wb")
    f.write(receivedBytes[filenameLength:])  # Write file contents
    actualChecksum = bytes(bytearray.fromhex(md5(filename)))
    if initialChecksum == actualChecksum:
        print("Fully received", filename)
    else:
        print(filename, "not received correctly")
finally:
    try:
        f.close()
    except NameError:
        pass
    except Exception as e:
        print(type(e), e.args, e)
    try:
        s.close()
    except NameError:
        pass
    except Exception as e:
        print(type(e), e.args, e)
</code></pre>
<p>I am aware that the <code>md5</code> function that I use doesn't appear to work for small files below, I assume, 4 kilobytes.</p>
<p>Where is the problem? How can it be solved? Am I missing something important?</p>
0
2016-08-21T18:28:46Z
39,067,867
<p>You are receiving only the first few packages of the transmission. This is because <code>s.recv()</code> will receive the bytes that have reached the client already, but at most the number given as argument. On a local machine the transmission is fast, so you will receive more.</p>
<p>To get all parts of the transmission you should collect bytes until the expected length has been reached.</p>
<p>A very simplistic approach would be something like:</p>
<pre><code>buffer = ''
while len(buffer) &lt; bytes_to_receive:
    buffer += s.recv(1500)
</code></pre>
<p>1500 is the typical LAN MTU and a good value to get at least a full package. The code given is only a simple example for Python 2 to explain the concept. It needs to be refined and optimized (for example, <code>recv()</code> returning an empty string means the peer closed the connection, which should end the loop) and, if you are using Python 3, adapted to bytes.</p>
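<p>A runnable Python 3 version of that idea, with a guard so a prematurely closed connection can't loop forever. The helper name <code>recv_exactly</code> is my own, and the demonstration runs over a local <code>socketpair</code> so there is no network dependency:</p>

```python
import socket

def recv_exactly(sock, bytes_to_receive, bufsize=1500):
    """Collect data from sock until bytes_to_receive bytes have arrived."""
    buffer = b""
    while len(buffer) < bytes_to_receive:
        chunk = sock.recv(min(bufsize, bytes_to_receive - len(buffer)))
        if not chunk:  # peer closed the connection before all data arrived
            raise ConnectionError("connection closed before full message")
        buffer += chunk
    return buffer

# Demonstration over a connected local pair of sockets:
a, b = socket.socketpair()
payload = b"x" * 5000          # bigger than a single 1500-byte read
a.sendall(payload)
received = recv_exactly(b, len(payload))
print(len(received))           # 5000
a.close()
b.close()
```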
0
2016-08-21T19:05:11Z
[ "python", "sockets", "lan" ]