| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) | `__index_level_0__` (int64) |
|---|---|---|---|---|---|---|
53,632,979
|
I recently started learning Python and made a simple program of 2 balls on a canvas which move according to 2D vector rules. I want to multiply the number of balls using a list in Python. Here is the source:
```
import time
import random
from tkinter import *
import numpy as np
import math

window = Tk()
canvas = Canvas(window,width=600,height=400)
canvas.pack()
canvas.create_rectangle(50,50,550,350)

R = 15
x1 = random.randrange(50+R,550-R)
y1 = random.randrange(50+R,350-R)
x2 = random.randrange(50+R,550-R)
y2 = random.randrange(50+R,350-R)
vx1 = random.randrange(1 , 10)
vy1 = random.randrange(1 , 10)
vx2 = random.randrange(1 , 10)
vy2 = random.randrange(1 , 10)

ntime = 100000
dt = .1

for iter in range(ntime):
    x1 += vx1*dt
    y1 += vy1*dt
    x2 += vx2*dt
    y2 += vy2*dt
    c1 = canvas.create_oval(x1-R,y1-R,x1+R,y1+R,fill="red")
    c2 = canvas.create_oval(x2-R,y2-R,x2+R,y2+R,fill="blue")
    if (x1 > 550-R):
        vx1 = -vx1
    if (x1 < 50+R ):
        vx1 = -vx1
    if (x2 > 550-R):
        vx2 = -vx2
    if (x2 < 50+R ):
        vx2 = -vx2
    if (y1 > 350-R) or (y1 < 50+R):
        vy1 = -vy1
    if (y2 > 350-R) or (y2 < 50+R):
        vy2 = -vy2
    if (x2-x1)**2 + (y2-y1)**2 <= 4*R*R:
        vector1 = np.array([x1,y1])
        vector2 = np.array([x2,y2])
        vvector1 = np.array([vx1,vy1])
        vvector2 = np.array([vx2,vy2])
        nvector = np.array([x2-x1,y2-y1])
        un = (nvector)/((sum(nvector*nvector))**(1/2))
        tvector = np.array([y1-y2,x2-x1])
        ut = tvector/((sum(nvector*nvector))**(1/2))
        vector1midn = sum(vvector1*un)
        vector2midn = sum(vvector2*un)
        vector1midt = sum(vvector1*ut)
        vector2midt = sum(vvector2*ut)
        vector1after = vector2midn*un + vector1midt*ut
        vector2after = vector1midn*un + vector2midt*ut
        vx1 = vector1after[0]
        vy1 = vector1after[1]
        vx2 = vector2after[0]
        vy2 = vector2after[1]
    txt = canvas.create_text(100,30,text=str(iter),font=('consolas', '20', 'bold'))
    window.update()
    time.sleep(0.002)
    if iter == ntime-1 : break
    canvas.delete(c1)
    canvas.delete(c2)
    canvas.delete(txt)

window.mainloop()
```
The exact question is: how do I change c1, c2 above into many of them without simply typing out every single ball?
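For illustration, a minimal sketch of the list-based approach being asked about (the ball count and the dict layout are my own choices, not from the original code):
```
import random
from tkinter import *

window = Tk()
canvas = Canvas(window, width=600, height=400)
canvas.pack()
R = 15
N = 10  # arbitrary number of balls

# one dict per ball replaces the x1/y1/vx1/... variable pairs
balls = [{'x': random.randrange(50+R, 550-R),
          'y': random.randrange(50+R, 350-R),
          'vx': random.randrange(1, 10),
          'vy': random.randrange(1, 10)} for _ in range(N)]

dt = .1
for b in balls:
    b['x'] += b['vx']*dt
    b['y'] += b['vy']*dt
    b['oval'] = canvas.create_oval(b['x']-R, b['y']-R, b['x']+R, b['y']+R, fill='red')
window.update()

# later, delete them all without naming c1, c2, ... individually
for b in balls:
    canvas.delete(b['oval'])
```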
|
2018/12/05
|
[
"https://Stackoverflow.com/questions/53632979",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10749521/"
] |
1) You should be. I highly recommend avoiding secondary indexes and ALLOW FILTERING; consider them advanced features for corner cases.
2) It can be more efficient with an index, but still horrible, and horrible in new ways. There are only very few scenarios where secondary indexes are acceptable. There are very few scenarios where ALLOW FILTERING is acceptable. You are looking at an overlap of the two.
Maybe take a step back. You're building POJOs to represent objects and trying to map that into Cassandra. The approach you should take when data modeling with Cassandra is to think of the queries you are going to make and design tables to match those - not the data. It is normal to end up with multiple tables that you update on changes (disk space and writes are cheap) so that your reads can hit 1 partition efficiently and get everything you need in one hit. Denormalize the data; Cassandra is not relational, and 3rd normal form is generally a bad thing here.
|
As worst case for your use case, consider searching for an Austrian composer born in 1756. Yes, you can find him (Mozart) in a table of all humans who ever lived by intersecting the index of nationality=Austria, the index of birth=1756 and the index of profession=composer. But Cassandra will implement such a query very inefficiently - it either needs to retrieve huge lists and intersect them, or, what it really does, retrieve just one huge list (e.g., list of all Austrians who ever lived) and then filter them according to the other criteria (birth and profession). This is why you need the "ALLOW FILTERING". And why it is not a recommended use case for Cassandra's original Secondary Index.
Unlike Cassandra's original Secondary Index, search engines are geared toward exactly these sorts of intersections, and have special algorithms for calculating them efficiently. In particular, search engines usually have "skip lists", allowing them to find a small intersection of two lengthy lists by quickly skipping in one of the lists based on entries in the second list. They also have logic for which list (the shorter list, i.e., the rarer word) to start the process with.
As you may know, Cassandra has a **second** secondary-index implementation, known as SASI. SASI (see <https://github.com/apache/cassandra/blob/trunk/doc/SASI.md>) has many search-engine-oriented improvements over Cassandra's original secondary index implementation, and if I understand correctly (I never tried myself), efficient intersections is one of these features. So maybe switching to SASI is a good idea in your use case.
| 10,240
|
69,301,316
|
I'm currently trying to get the mean() of a group in my dataframe (tdf), but I have a mix of NaN values and filled values in my dataset. Example shown below:
| Test # | a | b |
| --- | --- | --- |
| 1 | 1 | 1 |
| 1 | 2 | NaN |
| 1 | 3 | 2 |
| 2 | 4 | 3 |
My code needs to take this dataset, and make a new dataset containing the mean, std, and 95% interval of the set.
```
i = 0
num_timeframes = 2 #writing this in for example's sake
new_df = pd.DataFrame(columns = tdf.columns)
while i < num_timeframes:
    results = tdf.loc[tdf["Test #"] == i].groupby(["Test #"]).mean()
    new_df = pd.concat([new_df,results])
    results = tdf.loc[tdf["Test #"] == i].groupby(["Test #"]).std()
    new_df = pd.concat([new_df,results])
    results = 2*tdf.loc[tdf["Test #"] == i].groupby(["Test #"]).std()
    new_df = pd.concat([new_df,results])
    new_df['Test #'] = new_df['Test #'].fillna(i) #fill out test number values
    i += 1
```
For simplicity, I will show the desired output for the first pass of the while loop, only calculating the mean. The problem impacts every row, however. The expected output for the mean of Test # 1 is shown below:
| Test # | a | b |
| --- | --- | --- |
| 1 | 2 | 1.5 |
However, for columns which contain any NaN values, the entire mean is calculated as NaN, resulting in the output shown below:
| Test # | a | b |
| --- | --- | --- |
| 1 | 2 | NaN |
I have tried passing skipna=True, but got an error stating that mean doesn't have a skipna argument. I'm really at a loss here because it was my understanding that df.mean() ignores NaN rows by default. I have limited experience with Python, so any help is greatly appreciated.
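For reference, a minimal sketch reproducing the example table above (the column dtypes are an assumption about the real data); with numeric columns, `groupby().mean()` does skip NaN by default:
```
import numpy as np
import pandas as pd

tdf = pd.DataFrame({"Test #": [1, 1, 1, 2],
                    "a": [1, 2, 3, 4],
                    "b": [1, np.nan, 2, 3]})

# NaN is skipped per column: mean of b for Test # 1 is (1 + 2) / 2 = 1.5
print(tdf.groupby("Test #").mean())
```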
|
2021/09/23
|
[
"https://Stackoverflow.com/questions/69301316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16985105/"
] |
You can use
```py
(?<![a-zA-Z0-9_.-])@(?=([A-Za-z]+[A-Za-z0-9_-]*))\1(?![@\w])
(?a)(?<![\w.-])@(?=([A-Za-z][\w-]*))\1(?![@\w])
```
See the [regex demo](https://regex101.com/r/KFHwo8/3). *Details*:
* `(?<![a-zA-Z0-9_.-])` - a negative lookbehind that matches a location that is not immediately preceded with ASCII digits, letters, `_`, `.` and `-`
* `@` - a `@` char
* `(?=([A-Za-z]+[A-Za-z0-9_-]*))` - a positive lookahead with a capturing group inside that captures one or more ASCII letters and then zero or more ASCII letters, digits, `-` or `_` chars
* `\1` - the Group 1 value (backreferences are atomic, no backtracking is allowed through them)
* `(?![@\w])` - a negative lookahead that fails the match if there is a word char (letter, digit or `_`) or a `@` char immediately to the right of the current location.
Note that I put the hyphens at the end of the character classes; this is best practice.
The `(?a)(?<![\w.-])@(?=([A-Za-z][\w-]*))\1(?![@\w])` alternative uses shorthand character classes and the `(?a)` inline modifier (the equivalent of `re.ASCII` / `re.A`), which makes `\w` only match ASCII chars (as in the original version). Remove `(?a)` if you plan to match any Unicode digits/letters.
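For illustration, a quick sketch of using the first pattern from Python (the sample text and expected matches are my own):
```py
import re

pattern = r"(?<![a-zA-Z0-9_.-])@(?=([A-Za-z]+[A-Za-z0-9_-]*))\1(?![@\w])"
text = "mail me at foo@bar.com, ping @alice or @bob-2"

# findall returns the Group 1 captures, i.e. the names without the @
print(re.findall(pattern, text))  # ['alice', 'bob-2']
```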
|
Another option is to assert a whitespace boundary to the left, and assert no word char or @ sign to the right.
```
(?<!\S)@([A-Za-z]+[\w-]+)(?![@\w])
```
The pattern matches:
* `(?<!\S)` Negative lookbehind, assert not a non whitespace char to the left
* `@` Match literally
* `([A-Za-z]+[\w-]+)` Capture group1, match 1+ chars A-Za-z and then 1+ word chars or `-`
* `(?![@\w])` Negative lookahead, assert not @ or word char to the right
[Regex demo](https://regex101.com/r/v2OGdL/1)
Or match a non word boundary `\B` before the @ instead of a lookbehind.
```
\B@([A-Za-z]+[\w-]+)(?![@\w])
```
[Regex demo](https://regex101.com/r/GRV8x4/1)
| 10,241
|
29,240,807
|
What is the complexity of the function `most_common` provided by the `collections.Counter` object in Python?
More specifically, is `Counter` keeping some kind of sorted list while it's counting, allowing it to perform the `most_common` operation faster than `O(n)` when `n` is the number of (unique) items added to the counter? For your information, I am processing a large amount of text data trying to find the n-th most frequent tokens.
I checked the [official documentation](https://docs.python.org/library/collections.html#collections.Counter) and the [TimeComplexity article](https://wiki.python.org/moin/TimeComplexity) on the CPython wiki but I couldn't find the answer.
|
2015/03/24
|
[
"https://Stackoverflow.com/questions/29240807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3397458/"
] |
From the source code of [collections.py](https://github.com/python/cpython/blob/a8e814db96ebfeb1f58bc471edffde2176c0ae05/Lib/collections/__init__.py#L571), we see that if we don't specify a number of returned elements, `most_common` returns a sorted list of the counts. This is an `O(n log n)` algorithm.
If we use `most_common` to return `k > 1` elements, then we use [`heapq.nlargest`](https://github.com/python/cpython/blob/a8e814db96ebfeb1f58bc471edffde2176c0ae05/Lib/heapq.py#L521). This is an `O(k) + O((n - k) log k) + O(k log k)` algorithm, which is very good for a small constant `k`, since it's essentially linear. The `O(k)` part comes from heapifying the initial `k` counts, the second part from `n - k` calls to the `heappushpop` method and the third part from sorting the final heap of `k` elements. Since `k <= n` we can conclude that the complexity is:
> O(n log k)

If `k = 1` then it's easy to show that the complexity is:

> O(n)
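For illustration, a minimal usage sketch (the token list is made up):
```
from collections import Counter

tokens = "the quick brown fox jumps over the lazy dog the fox".split()
counts = Counter(tokens)

# k = 2, so this runs in O(n log k) via heapq.nlargest
print(counts.most_common(2))  # [('the', 3), ('fox', 2)]
```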
|
[The source](https://github.com/python/cpython/blob/1b85f4ec45a5d63188ee3866bd55eb29fdec7fbf/Lib/collections/__init__.py#L575) shows exactly what happens:
```
def most_common(self, n=None):
    '''List the n most common elements and their counts from the most
    common to the least. If n is None, then list all element counts.

    >>> Counter('abracadabra').most_common(3)
    [('a', 5), ('r', 2), ('b', 2)]

    '''
    # Emulate Bag.sortedByCount from Smalltalk
    if n is None:
        return sorted(self.iteritems(), key=_itemgetter(1), reverse=True)
    return _heapq.nlargest(n, self.iteritems(), key=_itemgetter(1))
```
`heapq.nlargest` is defined in [heapq.py](https://github.com/python/cpython/blob/1b85f4ec45a5d63188ee3866bd55eb29fdec7fbf/Lib/heapq.py#L524)
| 10,242
|
54,565,417
|
When I'm visiting my website (<https://osm-messaging-platform.appspot.com>), I get this error on the main webpage:
```
502 Bad Gateway. nginx/1.14.0 (Ubuntu).
```
It's really weird, since when I run it locally
```
python app.py
```
I get no errors, and my app and the website load fine.
I've already tried looking it up, but most of the answers I've found on stack overflow either have no errors or don't relate to me. Here is the error when I look at my GCloud logs:
```
2019-02-07 02:07:05 default[20190206t175104] Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
    worker.init_process()
  File "/env/lib/python3.7/site-packages/gunicorn/workers/gthread.py", line 104, in init_process
    super(ThreadWorker, self).init_process()
  File "/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 129, in init_process
    self.load_wsgi()
  File "/env/lib/python3.7/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/env/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
    return self.load_wsgiapp()
  File "/env/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/env/lib/python3.7/site-packages/gunicorn/util.py", line 350, in import_app
    __import__(module)
ModuleNotFoundError: No module named 'main'
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05 +0000] [25] [INFO] Worker exiting (pid: 25)
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05 +0000] [8] [INFO] Shutting down: Master
2019-02-07 02:07:05 default[20190206t175104] [2019-02-07 02:07:05 +0000] [8] [INFO] Reason: Worker failed to boot.
```
And here are the contents of my app.yaml file:
```
runtime: python37
handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
- url: /static
  static_dir: static

- url: /.*
  script: auto
```
I expected it to show my website, but it didn't. Can anyone help?
|
2019/02/07
|
[
"https://Stackoverflow.com/questions/54565417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10808093/"
] |
The error is produced because the App Engine Standard Python37 runtime handles the requests in the `main.py` file by default. I guess that you don't have this file and you are handling the requests in the `app.py` file.
Also the logs traceback is pointing to it: `ModuleNotFoundError: No module named 'main'`
Change the name of the `app.py` file to `main.py` and try again.
As a general rule it is recommended to follow the file structure presented in the [App Engine Standard documentation](https://cloud.google.com/appengine/docs/standard/python3/building-app/writing-web-service#structuring_your_web_service_files):
* `your-app/`
+ `app.yaml`
+ `main.py`
+ `requirements.txt`
+ `static/`
- `script.js`
- `style.css`
+ `templates/`
- `index.html`
I believe this would be overkill for your situation, but if you need a custom [entrypoint](https://cloud.google.com/appengine/docs/standard/python3/config/appref#entrypoint), read [this Python3 runtime documentation](https://cloud.google.com/appengine/docs/standard/python3/runtime#application_startup) to learn more about how to configure it.
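For reference, a minimal `main.py` sketch (assuming a Flask app, as in the documentation linked above; the route and message are placeholders):
```
# main.py - the module the Python 3.7 runtime's default entrypoint imports
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from App Engine!'
```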
|
My mistake was naming the main app "main", which conflicted with main.py. It worked fine locally as it did not use main.py. I changed it to "root" and everything worked fine. It took me a whole day to solve this.
| 10,243
|
70,147,775
|
I am just learning Python and was trying to loop through a list and use pop() to remove all items from the original list and add them, formatted, to a new list. Here is the code I use:
```
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []
for a in languages:
    formatted_languages.append(languages.pop().title())
print(formatted_languages)
```
The result is this :
`['Korean', 'French', 'Spanish']`
I do not understand why do I only get 3 out of 5 items in the list returned. Can someone please explain?
Thank you,
Ruslan
|
2021/11/28
|
[
"https://Stackoverflow.com/questions/70147775",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17534746/"
] |
Look:
```
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []
for a in languages:
    print(a)
    print(languages)
    formatted_languages.append(languages.pop().title())
    print(formatted_languages)
```
Notice what "a" is:
```
russian
['russian', 'english', 'spanish', 'french', 'korean']
['Korean']
english
['russian', 'english', 'spanish', 'french']
['Korean', 'French']
spanish
['russian', 'english', 'spanish']
['Korean', 'French', 'Spanish']
```
So `a` is moving ahead through a list that is shrinking at the same time, and eventually the loop cannot continue.
|
What you're doing here is changing the size of the list whilst the list is being processed, so that once you've got to spanish in the list (3rd item), you've already removed french and korean; once you've removed spanish, there are no items left to loop through. You can use the code below to get around this.
```py
languages = ['russian', 'english', 'spanish', 'french', 'korean']
formatted_languages = []
for a in range(len(languages)):
    formatted_languages.append(languages.pop().title())
print(formatted_languages)
```
This iterates through the length of the list instead, which is only calculated once and therefore doesn't change.
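If emptying the original list is not actually a requirement, a list comprehension sketch (my own variation) sidesteps the mutate-while-iterating issue entirely:
```py
languages = ['russian', 'english', 'spanish', 'french', 'korean']
# build the new list without touching the original
formatted_languages = [language.title() for language in reversed(languages)]
print(formatted_languages)
# ['Korean', 'French', 'Spanish', 'English', 'Russian']
```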
| 10,246
|
17,509,607
|
I have seen questions like this asked many, many times but none are helpful.
I'm trying to submit data to a form on the web. I've tried requests and urllib and none have worked.
For example, here is code that should search for the [python] tag on SO:
```
import urllib
import urllib2
url = 'http://stackoverflow.com/'
# Prepare the data
values = {'q' : '[python]'}
data = urllib.urlencode(values)
# Send HTTP POST request
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
html = response.read()
# Print the result
print html
```
Yet when I run it I get the HTML source of the home page.
Here is an example of using requests:
```
import requests
data= {
'q': '[python]'
}
r = requests.get('http://stackoverflow.com', data=data)
print r.text
```
Same result! I don't understand why these methods aren't working. I've tried them on various sites with no success, so if anyone has successfully done this, please show me how!
Thanks so much!
|
2013/07/07
|
[
"https://Stackoverflow.com/questions/17509607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2292723/"
] |
If you want to pass `q` as a parameter in the URL using [`requests`](http://docs.python-requests.org/), use the `params` argument, not `data` (see [Passing Parameters In URLs](http://docs.python-requests.org/en/latest/user/quickstart.html#passing-parameters-in-urls)):
```
r = requests.get('http://stackoverflow.com', params=data)
```
This will request <https://stackoverflow.com/?q=%5Bpython%5D> , which isn't what you are looking for.
You really want to *`POST`* to a *form*. Try this:
```
r = requests.post('https://stackoverflow.com/search', data=data)
```
This is essentially the same as *`GET`*-ting <https://stackoverflow.com/questions/tagged/python> , but I think you'll get the idea from this.
|
```
import urllib
import urllib2
url = 'http://www.someserver.com/cgi-bin/register.cgi'
values = {'name' : 'Michael Foord',
'location' : 'Northampton',
'language' : 'Python' }
data = urllib.urlencode(values)
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
the_page = response.read()
```
This makes a POST request with the data specified in `values`. We need urllib to encode the data and then urllib2 to send the request.
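For comparison, a sketch of the same POST using the `requests` library (the URL is the placeholder from the example above):
```
import requests

values = {'name': 'Michael Foord',
          'location': 'Northampton',
          'language': 'Python'}

# requests form-encodes the dict and sends it as the POST body
r = requests.post('http://www.someserver.com/cgi-bin/register.cgi', data=values)
print(r.text)
```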
| 10,248
|
17,847,869
|
I have the following Python Tkinter code which redraws the label every 10 seconds. My question is, to me it seems like it is drawing the new label over and over again on top of the old label. So, eventually, after a few hours there will be hundreds of drawings overlapping (at least from what I understand). Will this use more memory or cause problems?
```
import Tkinter as tk
import threading
def Draw():
    frame=tk.Frame(root,width=100,height=100,relief='solid',bd=1)
    frame.place(x=10,y=10)
    text=tk.Label(frame,text='HELLO')
    text.pack()

def Refresher():
    print 'refreshing'
    Draw()
    threading.Timer(10, Refresher).start()

root=tk.Tk()
Refresher()
root.mainloop()
```
Here in my example, I am just using a single label. I am aware that I can use `textvariable` to update the text of the label, or even `text.config`. But what I am actually doing is refreshing a grid of labels (like a table) plus buttons and other widgets to match the latest data available.
From my beginner understanding, if I wrote this `Draw()` function as a class, I could destroy the `frame` by using `frame.destroy` whenever I execute the `Refresher` function. But the code I currently have is written as functions without a class (and I don't wish to rewrite the whole code into a class).
The other option I can think of is to declare `frame` in `Draw()` as global and use `frame.destroy()` (which I am reluctant to do, as this could cause name conflicts if I have too many frames (which I do)).
If overdrawing over the old drawing doesn't cause any problem (except that I can't see the old drawing), I can live with that.
These are all just my beginner thoughts. Should I destroy the frame before redrawing the updated one? If so, in what way should I destroy it if the code structure is just like in my sample code? Or is overdrawing the old label fine?
**EDIT**
Someone mentioned that Python Tkinter is not thread safe and my code is likely to fail randomly.
I actually took [this link](https://stackoverflow.com/questions/2223157/how-to-execute-a-function-asynchronously-every-60-seconds-in-python) as a reference to use `threading` as my solution, and I didn't find anything about thread safety in that post.
I am wondering what the general cases are in which I should not use `threading`, and in which I could use `threading`?
|
2013/07/25
|
[
"https://Stackoverflow.com/questions/17847869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180836/"
] |
The correct way to run a function or update a label in tkinter is to use the [after](http://effbot.org/tkinterbook/widget.htm#Tkinter.Widget.after-method) method. This puts an event on the event queue to be executed at some time in the future. If you have a function that does some work, then puts itself back on the event queue, you have created something that will run forever.
Here's a quick example based on your example:
```
import Tkinter as tk
import time
def Draw():
    global text
    frame=tk.Frame(root,width=100,height=100,relief='solid',bd=1)
    frame.place(x=10,y=10)
    text=tk.Label(frame,text='HELLO')
    text.pack()

def Refresher():
    global text
    text.configure(text=time.asctime())
    root.after(1000, Refresher) # every second...

root=tk.Tk()
Draw()
Refresher()
root.mainloop()
```
There are a lot of things I would change about that program from a coding style point of view, but I wanted to keep it as close to your original question as possible. The point is, you can use `after` to call a function that updates the label without having to create new labels. Plus, that function can arrange for itself to be called again at some interval. In this example I picked one second just so that the effect is easier to see.
You also asked "I am wondering what are the general cases that i should not use threading and what are the general cases i could use threading?"
To put a blunt point on it, you should never use threading if you have to ask a question about when to use threading. Threading is an advanced technique; it is complicated, and it is easy to get wrong. It's quite possible for threading to make your program slower rather than faster. It has subtle consequences, such as your programs failing mysteriously if you do things that aren't thread safe.
To be more specific to your situation: you should avoid using threads with tkinter. You *can* use them, but you can't access widgets from these other threads. If you need to do something with a widget, you must put an instruction in a thread-safe queue, and then in the main thread you need to periodically check that queue to see if there's an instruction to be processed. There are examples of that on this website if you search for them.
If all that sounds complicated, it is. For most of the GUIs you write, you won't need to worry about that. If you can take large processes and break them down into chunks that execute in 100 ms or less, you only need `after`, and never need threads.
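For illustration, a minimal sketch of that queue pattern (Python 2, to match the code above; the worker and its messages are made up):
```
import Tkinter as tk
import threading
import Queue
import time

q = Queue.Queue()

def worker():
    # background thread: never touches widgets, only puts messages on the queue
    for i in range(10):
        time.sleep(1)
        q.put("tick %d" % i)

def poll_queue():
    # main thread: drain the queue and update the widget safely
    try:
        while True:
            label.configure(text=q.get_nowait())
    except Queue.Empty:
        pass
    root.after(100, poll_queue)  # check again in 100 ms

root = tk.Tk()
label = tk.Label(root, text="waiting...")
label.pack()
t = threading.Thread(target=worker)
t.daemon = True  # don't keep the process alive after the window closes
t.start()
poll_queue()
root.mainloop()
```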
|
To allow the cleanup with minimal code changes, you could pass previous frames explicitly:
```
import Tkinter as tk
def Draw(oldframe=None):
    frame = tk.Frame(root,width=100,height=100,relief='solid',bd=1)
    frame.place(x=10,y=10)
    tk.Label(frame, text='HELLO').pack()
    frame.pack()
    if oldframe is not None:
        oldframe.destroy() # cleanup
    return frame

def Refresher(frame=None):
    print 'refreshing'
    frame = Draw(frame)
    frame.after(10000, Refresher, frame) # refresh in 10 seconds

root = tk.Tk()
Refresher()
root.mainloop()
```
| 10,251
|
8,008,829
|
I am working on a project in Python in which I need to extract only a subfolder of a tar archive, not all the files.
I tried to use
```
tar = tarfile.open(tarfile)
tar.extract("dirname", targetdir)
```
But this does not work: it does not extract the given subdirectory, and no exception is thrown. I am a beginner in Python.
Also, if the above function doesn't work for directories, what's the difference between this command and tar.extractfile()?
|
2011/11/04
|
[
"https://Stackoverflow.com/questions/8008829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/517272/"
] |
Building on the second example from the [tarfile module documentation](https://docs.python.org/3.5/library/tarfile.html#examples), you could extract the contained sub-folder and all of its contents with something like this:
```
with tarfile.open("sample.tar") as tar:
    subdir_and_files = [
        tarinfo for tarinfo in tar.getmembers()
        if tarinfo.name.startswith("subfolder/")
    ]
    tar.extractall(members=subdir_and_files)
```
This creates a list of the subfolder and its contents, and then uses the recommended `extractall()` method to extract just them. Of course, replace `"subfolder/"` with the actual path (relative to the root of the tar file) of the sub-folder you want to extract.
|
The other answer will retain the subfolder path, meaning that `subfolder/a/b` will be extracted to `./subfolder/a/b`. To extract a subfolder to the root, so `subfolder/a/b` would be extracted to `./a/b`, you can rewrite the paths with something like this:
```
def members(tf):
    l = len("subfolder/")
    for member in tf.getmembers():
        if member.path.startswith("subfolder/"):
            member.path = member.path[l:]
            yield member

with tarfile.open("sample.tar") as tar:
    tar.extractall(members=members(tar))
```
| 10,252
|
31,767,292
|
I may be way off here - but would appreciate insight on just how far ..
In the following `getFiles` method, we have an anonymous function being passed as an argument.
```
def getFiles(baseDir: String, filter: (File, String) => Boolean ) = {
  val ffilter = new FilenameFilter {
    // How to assign to the anonymous function argument 'filter' ?
    override def accept(dir: File, name: String): Boolean = filter
  }
  ..
```
So that `override` is quite incorrect: that syntax tries to `evaluate` the filter() function which results in a Boolean.
Naturally we could simply evaluate the anonymous function as follows:
```
override def accept(dir: File, name: String): Boolean = filter(dir, name)
```
But that approach does not actually `replace` the method.
So: how to assign the `accept` method to the `filter` anonymous function?
**Update** the error message is
```
Error:(56, 64) type mismatch;
found : (java.io.File, String) => Boolean
required: Boolean
override def accept(dir: File, name: String): Boolean = filter // { filter(dir, name) }
```
**Another update** Thinking more on this - and am going to take a swag: dynamic languages like Python and Ruby can handle assignment of class methods to arbitrary functions. But Scala requires compilation, and thus the methods are actually static. A definitive answer on this hunch would be appreciated.
|
2015/08/01
|
[
"https://Stackoverflow.com/questions/31767292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1056563/"
] |
There is no easy or type-safe way (that I know of) to assign a function to a method, as they are different types. In Python or JavaScript you could do something like this:
```
var fnf = new FilenameFilter();
fnf.accepts = filter;
```
But in Scala you have to do the delegation:
```
val fnf = new FilenameFilter {
  override def accept(dir: File, name: String): Boolean = filter(dir, name)
}
```
|
I think you are actually misunderstanding the meaning of the override keyword.
It is for redefining methods defined in the superclass of a class, not redefining a method in an instance of a class.
If you want to redefine the accept method in the instances of FilenameFilter, you will need to add the filter method to the constructor of the class, like:
```
class FilenameFilter(val filter: (File, String) => Boolean) {
  def accept(dir: File, name: String): Boolean = filter(dir, name)
}
```
| 10,253
|
35,201,647
|
How do I use a hook in bottle?
<https://pypi.python.org/pypi/bottle-session/0.4>
I am trying to implement a session plugin with a bottle hook.
```
@bottle.route('/loginpage')
def loginpage():
    return '''
        <form action="/login" method="post">
            Username: <input name="username" type="text" />
            Password: <input name="password" type="password" />
            <input value="Login" type="submit" />
        </form>
    '''

@bottle.route('/login', method='POST')
def do_login(session, rdb):
    username = request.forms.get('username')
    password = request.forms.get('password')
    session['name'] = username
    return session['name']

@bottle.route('/usernot')
def nextPage(session):
    return session['name']
```
and below is my hook:
```
@hook('before_request')
def setup_request():
    try:
        request.session = session['name']
        request.session.cookie_expires = False
        response.set_cookie(session['name'], "true")
        #request.session.save()
    except Exception, e:
        print ('setup_request--> ', e)
```
I am unable to access the session in the hook. Is it possible to pass the session as a parameter to the hook?
|
2016/02/04
|
[
"https://Stackoverflow.com/questions/35201647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3282945/"
] |
Try this:
Your **initializers/carrierwave.rb** should look like this.
```
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS',                         # required
    :aws_access_key_id => 'xxx',                # required
    :aws_secret_access_key => 'yyy',            # required
    :region => 'eu-west-1',                     # optional, defaults to 'us-east-1'
    :host => 's3.example.com',                  # optional, defaults to nil
    :endpoint => 'https://s3.example.com:8080'  # optional, defaults to nil
  }
  config.fog_directory = 'name_of_directory'                            # required
  config.fog_public = false                                             # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'}        # optional, defaults to {}
end
```
**In your uploader, set the storage to :fog**
```
class AvatarUploader < CarrierWave::Uploader::Base
  storage :fog
end
```
|
`CarrierWave::Uploader::Base:Class#fog_provider=` is not released yet. It is only available on the CarrierWave `master` branch.
**Solution 1 (Use master):**
Change your `Gemfile` entry to
```
gem "carrierwave", git: "git@github.com:carrierwaveuploader/carrierwave.git"
```
But this is not recommended since it is not as stable as a release version.
---
**Solution 2 (Check 0.10 documentation):**
In `0.10` you set the `:provider` using `fog_credentials=`
```
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS',                         # required
    :aws_access_key_id => 'xxx',                # required
    :aws_secret_access_key => 'yyy',            # required
    :region => 'eu-west-1',                     # optional, defaults to 'us-east-1'
    :host => 's3.example.com',                  # optional, defaults to nil
    :endpoint => 'https://s3.example.com:8080'  # optional, defaults to nil
  }
  config.fog_directory = 'name_of_directory'                            # required
  config.fog_public = false                                             # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'}        # optional, defaults to {}
end
```
View documentation for [v0.10 here](https://github.com/carrierwaveuploader/carrierwave/tree/v0.10.0).
| 10,254
|
10,043,636
|
A common antipattern in Python is to concatenate a sequence of strings using `+` in a loop. This is bad because the Python interpreter has to create a new string object for each iteration, and it ends up taking quadratic time. (Recent versions of CPython can apparently optimize this in some cases, but other implementations can't, so programmers are discouraged from relying on this.) `''.join` is the right way to do this.
However, I've heard it said ([including here on Stack Overflow](https://stackoverflow.com/questions/10001301/list-slicing-python/10001401#10001401)) that you should *never, ever* use `+` for string concatenation, but instead always use `''.join` or a format string. I don't understand why this is the case if you're only concatenating two strings. If my understanding is correct, it shouldn't take quadratic time, and I think `a + b` is cleaner and more readable than either `''.join((a, b))` or `'%s%s' % (a, b)`.
Is it good practice to use `+` to concatenate two strings? Or is there a problem I'm not aware of?
|
2012/04/06
|
[
"https://Stackoverflow.com/questions/10043636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064659/"
] |
There is nothing wrong with concatenating *two* strings with `+`. Indeed it's easier to read than `''.join([a, b])`.
You are right though that concatenating more than 2 strings with `+` is an O(n^2) operation (compared to O(n) for `join`) and thus becomes inefficient. However, this has nothing to do with using a loop. Even `a + b + c + ...` is O(n^2), the reason being that each concatenation produces a new string.
CPython2.4 and above try to mitigate that, but it's still advisable to use `join` when concatenating more than 2 strings.
|
The assumption that one should never, ever use `+` for string concatenation, and instead always use `''.join`, may be a myth. It is true that using `+` creates unnecessary temporary copies of the immutable string object, but the other, not oft-quoted fact is that calling `join` in a loop would generally add the overhead of a *function call*. Let's take your example.
Create two lists, one from the linked SO question and another, bigger, fabricated one:
```
>>> import random
>>> myl1 = ['A','B','C','D','E','F']
>>> myl2=[chr(random.randint(65,90)) for i in range(0,10000)]
```
Let's create two functions, `UseJoin` and `UsePlus`, to use the respective `join` and `+` functionality.
```
>>> def UsePlus():
        return [myl[i] + myl[i + 1] for i in range(0,len(myl), 2)]

>>> def UseJoin():
        [''.join((myl[i],myl[i + 1])) for i in range(0,len(myl), 2)]
```
Let's run timeit with the first list:
```
>>> import timeit
>>> myl=myl1
>>> t1=timeit.Timer("UsePlus()","from __main__ import UsePlus")
>>> t2=timeit.Timer("UseJoin()","from __main__ import UseJoin")
>>> print "%.2f usec/pass" % (1000000 * t1.timeit(number=100000)/100000)
2.48 usec/pass
>>> print "%.2f usec/pass" % (1000000 * t2.timeit(number=100000)/100000)
2.61 usec/pass
>>>
```
They have almost the same runtime.
Let's use cProfile:
```
>>> import cProfile
>>> myl=myl2
>>> cProfile.run("UsePlus()")
5 function calls in 0.001 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.001 0.001 0.001 0.001 <pyshell#1376>:1(UsePlus)
1 0.000 0.000 0.001 0.001 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 {range}
>>> cProfile.run("UseJoin()")
5005 function calls in 0.029 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.015 0.015 0.029 0.029 <pyshell#1388>:1(UseJoin)
1 0.000 0.000 0.029 0.029 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
5000 0.014 0.000 0.014 0.000 {method 'join' of 'str' objects}
1 0.000 0.000 0.000 0.000 {range}
```
And it looks like using `join` results in unnecessary function calls, which could add to the overhead.
Now coming back to the question: should one discourage the use of `+` over `join` in all cases?
I believe no; two things should be taken into consideration:
1. The length of the strings in question
2. The number of concatenation operations.
And of course, in development, premature optimization is evil.
| 10,255
|
11,566,355
|
Why does it execute the script every 2 minutes?
Shouldn't it execute every 10 minutes?
```
*\10 * * * * /usr/bin/python /var/www/py/LSFchecker.py
```
|
2012/07/19
|
[
"https://Stackoverflow.com/questions/11566355",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1131053/"
] |
```
*\10 * * * *
```
Should probably be
```
*/10 * * * *
```
|
You can try:
`00,10,20,30,40,50 * * * * /usr/bin/python /var/www/py/LSFchecker.py`
| 10,265
|
39,003,106
|
Comrades,
I'd like to capture images from the laptop camera in Python. Currently all signs point to OpenCV. Problem is OpenCV is a nightmare to install, and it's a nightmare that reoccurs every time you reinstall your code on a new system.
Is there a more lightweight way to capture camera data in Python? I'm looking for something to the effect of:
```
$ pip install something
$ python
>>> import something
>>> im = something.get_camera_image()
```
I'm on Mac, but solutions on any platform are welcome.
|
2016/08/17
|
[
"https://Stackoverflow.com/questions/39003106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/851699/"
] |
I've done this before using [pygame](http://www.pygame.org/download.shtml). But I can't seem to find the script that I used... It seems there is a tutorial [here](http://www.pygame.org/docs/tut/camera/CameraIntro.html) which uses the native camera module and is not dependent on OpenCV.
Try this:
```
import pygame
import pygame.camera
from pygame.locals import *
pygame.init()
pygame.camera.init()
cam = pygame.camera.Camera("/path/to/camera",(640,480))
cam.start()
image = cam.get_image()
```
If you don't know the path to the camera, you can also get a list of all available cameras, which should include your laptop webcam:
```
camlist = pygame.camera.list_cameras()
if camlist:
    cam = pygame.camera.Camera(camlist[0],(640,480))
```
|
Videocapture for Windows
========================
I've used [videocapture](http://videocapture.sourceforge.net/) in the past and it was perfect:
```
from VideoCapture import Device
cam = Device()
cam.saveSnapshot('image.jpg')
```
| 10,266
|
32,231,589
|
*Question:* I am trying to debug a function in a `class` to make sure a list is implemented, and if not, I want an error to be thrown. I am new to programming and am trying to figure out how to effectively add an `assert` for the debugging purposes mentioned above.
In the method body of `class BaseAPI()` I have to iterate over the list and call `retrieve_category` for each item in `ConvenienceAPI(BaseAPI)`, passing the new list on to `super()`. I think that I am doing something wrong because I get the error: `TypeError: create_assessment() got an unexpected keyword argument 'category'`
*Question 2*
I also want `category` to have a list to hold multiple categories (strings). This is not happening right now. The list in the method is not being iterated or created. I am unsure... see the traceback error below.
*CODE*
instantiating a list of `categories`:
=====================================
```
api = BaseAPI()
api.create_assessment('Situation Awareness', 'Developing better skills', 'baseball', 'this is a video','selecting and communicating options', category=['Leadership', 'Decision-Making', 'Situation Awareness', 'Teamwork and Communication'])
```
BaseAPI() method where `assert` and `create` lives
==================================================
```
class BaseAPI(object):
    def create_assessment(self, name, text, user, video, element, category):
        category = [] # Trying to check if category isn't a list
        assert category, 'Category is not a list'
        new_assessment = Assessment(name, text, user, video, element, category)
        self.session.add(new_assessment)
        self.session.commit()
```
NEW ADD TO iterate through list but still gives `Error`!
========================================================
```
class ConvenienceAPI(BaseAPI):
    def create_assessment(self, name, text, username, videoname, element_text, category_name):
        user = self.retrieve_user(username)
        video = self.retrieve_video(videoname)
        element = self.retrieve_element(element_text)
        categories = self.retrieve_category(category_name)
        for items in categories: # < ==== added
            return categories # < ==== added
        return super(ConvenienceAPI, self).create_assessment(name, text, user, video, element, categories)
```
Table
=====
```
class Assessment(Base):
    __tablename__ = 'assessments'
    #some code
    category_id = Column(Integer, ForeignKey('categories.category_id'))
    category = relationship('AssessmentCategoryLink', backref='assessments')

    def __init__(self, name, text, user, video, element, category): # OBJECTS !!
        #some code
        self.category = []
```
TracebackError
==============
```
ERROR: db.test.test.test1
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/Users/ack/code/venv/NotssDB/notssdb/test/test.py", line 69, in test1
    api.create_assessment('Situation Awareness', 'Developing better skills', 'baseball', 'this is a video','selecting and communicating options', category=['Leadership', 'Decision-Making', 'Situation Awareness', 'Teamwork and Communication'])
TypeError: create_assessment() got an unexpected keyword argument 'category'
```
|
2015/08/26
|
[
"https://Stackoverflow.com/questions/32231589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5140076/"
] |
I am going to go for my Self Learner badge, here, and answer my own question.
I did about 8 hours of research on this and came up with a great solution. Three things had to be done.
1. Set the HTML5 video src to something other than the original URL. This will trigger the player to close the open socket connection.
2. Set the HTML5 video src to a Base64 encoded data object. This will prevent the Error Code 4 issue MEDIA\_ERR\_SRC\_NOT\_SUPPORTED.
3. For issues in older versions of Firefox, also trigger the .load() event.
My Code:
```
// Handling Bootstrap Modal Window Close Event
// Trigger player destroy
$("#xr-interaction-detail-modal").on("hidden.bs.modal", function () {
    var player = xr.ui.mediaelement.xrPlayer();
    if (player) {
        player.pause();
        $("video,audio").attr("src", "data:video/mp4;base64,AAAAHG...MTAw");
        player.load();
        player.remove();
    }
});
```
I came up with the idea to load the data object as the src. However, I'd like to thank kud on GitHub for the super small base64 video.
<https://github.com/kud/blank-video>
|
Added a line between pause() and remove():
```
// Handling Bootstrap Modal Window Close Event
// Trigger player destroy
$("#xr-interaction-detail-modal").on("hidden.bs.modal", function () {
    var player = xr.ui.mediaelement.xrPlayer();
    if (player) {
        player.pause();
        $("video,audio").attr("src", "");
        player.remove();
    }
});
```
| 10,269
|
44,472,252
|
I am a very beginner in Python, so my question could be quite naive.
I just started to study this language, principally due to mathematical tools such as Numpy and Matplotlib, which seem to be very useful.
In fact I don't see how Python works in fields other than maths.
I wonder if it is possible (and if yes, how?) to use Python for problems such as text file processing.
And more precisely, is it possible to solve the following problem:
I have two files A.txt and B.txt. The A.txt file contains three columns of numbers and looks like this:
```
0.22222000 0.11111000 0.00000000
0.22222000 0.44444000 0.00000000
0.22222000 0.77778000 0.00000000
0.55556000 0.11111000 0.00000000
0.55556000 0.44444000 0.00000000
.....
```
The B.txt file contains three columns of letters F or T and looks like this:
```
F F F
F F F
F F F
F F F
T T F
......
```
The number of lines is the same in files A.txt and B.txt
I need to create a file which would look like this
```
0.22222000 0.11111000 0.00000000 F F F
0.22222000 0.44444000 0.00000000 F F F
0.22222000 0.77778000 0.00000000 F F F
0.55556000 0.11111000 0.00000000 F F F
0.55556000 0.44444000 0.00000000 T T F
.......
```
In other words, I need to create a file containing the 3 columns of A.txt followed by the 3 columns of B.txt.
Could someone help me write the lines in Python needed for this?
I could easily do it in Fortran, but have heard that the script in Python would be much smaller.
And since I started with the math tools in Python, I also wish to extend my knowledge to other opportunities that this language offers.
Thanks in advance
|
2017/06/10
|
[
"https://Stackoverflow.com/questions/44472252",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8141006/"
] |
If you want to concatenate them the good ol' fashioned way, and put them in a new file, you can do this:
```
a = open('A.txt')
b = open('B.txt')
c = open('C.txt', 'w')
for a_line, b_line in zip(a, b):
    c.write(a_line.rstrip() + ' ' + b_line)
a.close()
b.close()
c.close()
```
|
Try this: read the files as numpy arrays.
```
import numpy as np

a = np.loadtxt('a.txt')
b = np.genfromtxt('b.txt',dtype='str')
```
In the case of b you need genfromtxt because of the string content. Then
```
np.concatenate((a, b), axis=1)
```
Finally, you will get
```
np.concatenate((a, b), axis=1)
array([['0.22222', '0.11111', '0.0', 'F', 'F', 'F'],
['0.22222', '0.44444', '0.0', 'F', 'F', 'F'],
['0.22222', '0.77778', '0.0', 'F', 'F', 'F'],
['0.55556', '0.11111', '0.0', 'F', 'F', 'F'],
['0.55556', '0.44444', '0.0', 'T', 'T', 'F']],
dtype='<U32')
```
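To finish the job and write the combined array to a new file, a short sketch (the output name `C.txt` is arbitrary):
```
# write the combined rows back out, space-separated, one row per line
np.savetxt('C.txt', np.concatenate((a, b), axis=1), fmt='%s')
```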
| 10,270
|
3,999,829
|
I wouldn't call myself programmer, but I've started learning Python recently and really enjoy it.
I mainly use it for small tasks so far - scripting, text processing, KML generation and ArcGIS.
From my experience with R (working with excellent Notepad++ and [NppToR](http://sourceforge.net/projects/npptor/) combo) I usually try to work with my scripts line by line (or region by region) in order to understand what each step of my script is doing.. and to check results on the fly.
My question: is there an IDE (or editor?) for Windows that lets you evaluate a single line of a Python script?
I [have](https://stackoverflow.com/questions/81584/what-ide-to-use-for-python) [seen](https://stackoverflow.com/questions/60784/poll-which-python-ide-editor-is-the-best) [quite](https://stackoverflow.com/questions/126753/is-there-a-good-free-python-ide-for-windows) a lot of discussion regarding IDEs in the Python context, but haven't stumbled upon this specific question so far.
Thanks for help!
|
2010/10/22
|
[
"https://Stackoverflow.com/questions/3999829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319487/"
] |
I use Notepad++ for most of my Windows-based Python development and for debugging I use [Winpdb](http://winpdb.org/about/). It's a cross-platform GUI-based debugger. You can actually set up a keyboard shortcut in Notepad++ to launch the debugger on your current script:
To do this go to "Run" -> "Run ..." in the menu and enter the following, making sure the path points to your winpdb\_.pyw file:
```
C:\python26\Scripts\winpdb_.pyw "$(FULL_CURRENT_PATH)"
```
Then choose "Save..." and pick a shortcut that you wish to use to launch the debugger.
PS: You can also set up a shortcut to execute your Python scripts similarly using this string instead:
```
C:\python26\python.exe "$(FULL_CURRENT_PATH)"
```
|
[Light Table](http://lighttable.com/) was doing that for me, unfortunately it is discontinued:
> INLINE EVALUATION: No more printing to the console in order to view your results. Simply evaluate your code and the results will be displayed inline.
[](https://i.stack.imgur.com/H0VlW.png)
| 10,273
|
18,149,636
|
I have a CentOS 6 server, and I'm trying to configure apache + wsgi + django, but I can't.
As I have Python 2.6 on my system and I am using Python 2.7.5, I can't install with yum. I downloaded a tar file and tried to compile using:
```
./configure --with-python=/usr/local/bin/python2.7
```
But it does not work. The system responds:
```
/usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC
```
I don't understand where I have to use `-fPIC`. I executed:
```
./configure -fPIC --with-python=/usr/local/bin/python2.7
```
But it does not work.
Can anyone help me?
|
2013/08/09
|
[
"https://Stackoverflow.com/questions/18149636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1623769/"
] |
**I assume** you are on a shared hosting server; **mod\_wsgi** is not supported on most shared hosting providers.
|
Try using [nginx](http://wiki.nginx.org/Main) server, it seems to be much easier to deploy.
Here is a [good tutorial for deploying to EC2](http://www.yaconiello.com/blog/setting-aws-ec2-instance-nginx-django-uwsgi-and-mysql/#sthash.E2Gg8vGi.dpbs) but you can use parts of it to configure the server.
| 10,283
|
41,320,092
|
I am new to `python` and `boto3`, and I want to get the latest snapshot ID.
I am not sure if I wrote the sort with the lambda correctly or how to access the last snapshot; or maybe I can do it in the first part of the script, where I print the `snapshot_id` and `snapshot_date`?
Thanks.
Here is my script
```
import boto3
ec2client = mysession.client('ec2', region_name=region)
ec2resource = mysession.resource('ec2', region_name=region)
def find_snapshots():
    for snapshot in ec2client_describe_snapshots['Snapshots']:
        snapshot_volume = snapshot['VolumeId']
        mnt_vol = "vol-xxxxx"
        if mnt_vol == snapshot_volume:
            snapshot_date = snapshot['StartTime']
            snapshot_id = snapshot['SnapshotId']
            print(snapshot_id)
            print(snapshot_date)

find_snapshots()

snapshots = ec2resource.snapshots.filter(Filters=[{'Name': 'volume-id', 'Values': [mnt_vol]}]).all()
print(snapshots)
snapshots = sorted(snapshots, key=lambda ss:ss.start_time)
print(snapshots)
snapshot_ids = map(lambda ss:ss.id, snapshots)
print(snapshot_ids)
last_snap_id = ?
```
output:
```
snap-05a8e27b15161d3d5
2016-12-25 05:00:17+00:00
snap-0b87285592e21f0
2016-12-25 03:00:17+00:00
snap-06fa39b86961ffa89
2016-12-24 03:00:17+00:00
ec2.snapshotsCollection(ec2.ServiceResource(), ec2.Snapshot)
[]
<map object at 0x7f8d91ea9cc0>
```
\*Update, following @roshan's answer:
```
def find_snapshots():
    list_of_snaps = []
    for snapshot in ec2client_describe_snapshots['Snapshots']:
        snapshot_volume = snapshot['VolumeId']
        if mnt_vol == snapshot_volume:
            snapshot_date = snapshot['StartTime']
            snapshot_id = snapshot['SnapshotId']
            list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
    return(list_of_snaps)

find_snapshots()
print(find_snapshots())

#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
print(newlist)
```
output:
```
[{'date': datetime.datetime(2016, 12, 25, 14, 23, 37, tzinfo=tzutc()), 'snap_id': 'snap-0de26a40c1d1e53'}, {'date': datetime.datetime(2016, 12, 24, 22, 9, 34, tzinfo=tzutc()), 'snap_id': 'snap-0f0341c53f47a08'}]
Traceback (most recent call last):
File "test.py", line 115, in <module>
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
NameError: name 'list_to_be_sorted' is not defined
```
If I do:
```
list_to_be_sorted = (find_snapshots())
print(list_to_be_sorted)
#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['snap_id'])
print(newlist[0])
```
the latest snapshot does not appear in the output:
```
{'date': datetime.datetime(2016, 12, 23, 3, 0, 18, tzinfo=tzutc()), 'snap_id': 'snap-0225cff1675c369'}
```
this one is the latest:
```
[{'date': datetime.datetime(2016, 12, 25, 5, 0, 17, tzinfo=tzutc()), 'snap_id': 'snap-05a8e27b15161d5'}
```
How do I get the latest snapshot (`snap_id`) ?
Thanks
|
2016/12/25
|
[
"https://Stackoverflow.com/questions/41320092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4308473/"
] |
You should sort it in `reverse` order.
```
sorted(list_to_be_sorted, key=lambda k: k['date'], reverse=True)[0]
```
sorts the list by the date (in ascending order - oldest to the latest)
If you add `reverse=True`, then it sorts in descending order (latest to the oldest). `[0]` returns the first element in the list which is the latest snapshot.
If you want the `snap_id` of the latest snapshot, just access that key.
```
sorted(list_to_be_sorted, key=lambda k: k['date'], reverse=True)[0]['snap_id']
```
|
You can append the snapshots to list, then sort the list based on date
```
def find_snapshots():
    list_of_snaps = []
    for snapshot in ec2client_describe_snapshots['Snapshots']:
        snapshot_volume = snapshot['VolumeId']
        mnt_vol = "vol-xxxxx"
        if mnt_vol == snapshot_volume:
            snapshot_date = snapshot['StartTime']
            snapshot_id = snapshot['SnapshotId']
            list_of_snaps.append({'date':snapshot['StartTime'], 'snap_id': snapshot['SnapshotId']})
            print(snapshot_id)
            print(snapshot_date)

#sort snapshots order by date
newlist = sorted(list_to_be_sorted, key=lambda k: k['name'])
```
**Hope it helps !!**
| 10,286
|
25,985,491
|
I have a sentiment analysis task and I need to specify how much data (in my case text) scikit-learn can handle. I have a corpus of 2500 opinions, already tagged. I know that it's a small corpus, but my thesis advisor is asking me to specifically argue how much data scikit-learn can handle. My advisor has her doubts about Python/scikit-learn and wants facts about how many text parameters, features and related things scikit-learn can handle.
|
2014/09/23
|
[
"https://Stackoverflow.com/questions/25985491",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3930105/"
] |
Here are some timings for scikit-learn's document classification example on my machine (Python 2.7, NumPy 1.8.2, SciPy 0.13.3, scikit-learn 0.15.2, Intel Core i7-3540M laptop running on battery power). The dataset is twenty newsgroups; I've trimmed the output quite a bit.
```
$ python examples/document_classification_20newsgroups.py --all_categories
data loaded
11314 documents - 22.055MB (training set)
7532 documents - 13.801MB (test set)
20 categories
Extracting features from the training dataset using a sparse vectorizer
done in 2.849053s at 7.741MB/s
n_samples: 11314, n_features: 129792
Extracting features from the test dataset using the same vectorizer
done in 1.526641s at 9.040MB/s
n_samples: 7532, n_features: 129792
________________________________________________________________________________
Training:
LinearSVC(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, loss='l2', multi_class='ovr', penalty='l2',
random_state=None, tol=0.001, verbose=0)
train time: 5.274s
test time: 0.033s
f1-score: 0.860
dimensionality: 129792
density: 1.000000
________________________________________________________________________________
Training:
SGDClassifier(alpha=0.0001, class_weight=None, epsilon=0.1, eta0=0.0,
fit_intercept=True, l1_ratio=0.15, learning_rate='optimal',
loss='hinge', n_iter=50, n_jobs=1, penalty='l2', power_t=0.5,
random_state=None, shuffle=False, verbose=0, warm_start=False)
train time: 3.521s
test time: 0.038s
f1-score: 0.857
dimensionality: 129792
density: 0.390184
________________________________________________________________________________
Training:
MultinomialNB(alpha=0.01, class_prior=None, fit_prior=True)
train time: 0.161s
test time: 0.036s
f1-score: 0.836
dimensionality: 129792
density: 1.000000
________________________________________________________________________________
Training:
BernoulliNB(alpha=0.01, binarize=0.0, class_prior=None, fit_prior=True)
train time: 0.167s
test time: 0.153s
f1-score: 0.761
dimensionality: 129792
density: 1.000000
```
Timings for dataset loading aren't shown, but it didn't take more than half a second; the input is a zipfile containing texts. "Extracting features" includes tokenization and stopword filtering. So in all, I can load up 18.8k documents and train a naive Bayes classifier on 11k of them in five seconds, or an SVM in ten seconds. That means solving a 20×130k dimensional optimization problem.
I advise you to re-run this example on your machine, because the actual time taken depends on a lot of factors including the speed of the disk.
[Disclaimer: I'm one of the scikit-learn developers.]
|
[This Scikit page](http://scikit-learn.org/stable/modules/scaling_strategies.html) may provide some answers if you are experiencing some issues with the amount of data that you are trying to load.
As stated in your other question on data size [concerning Weka](https://stackoverflow.com/questions/25979182/how-much-text-can-weka-handle), I agree with ealdent that your dataset does seem small (unless you have an exceptionally large number of features) and it should not be a problem to load a dataset of this size into memory.
Hope this helps!
| 10,291
|
9,291,036
|
I'm putting together a basic photo album on App Engine using Python 2.7. I have written the following method to retrieve image details from the datastore matching a particular "adventure". I'm using limits and offsets for pagination; however, it is very inefficient. After browsing 5 pages (of 5 photos per page) I've already used 16% of my Datastore Small Operations. Interestingly, I've only used 1% of my datastore read operations. How can I make this more efficient for datastore small operations - I'm not sure what these consist of.
```
def grab_images(adventure, the_offset=0, the_limit = 10):
logging.info("grab_images")
the_photos = None
the_photos = PhotosModel.all().filter("adventure =", adventure)
total_number_of_photos = the_photos.count()
all_photos = the_photos.fetch(limit = the_limit, offset = the_offset)
total_number_of_pages = total_number_of_photos / the_limit
all_photo_keys = []
for photo in all_photos:
all_photo_keys.append(str(photo.blob_key.key()))
return all_photo_keys, total_number_of_photos, total_number_of_pages
```
|
2012/02/15
|
[
"https://Stackoverflow.com/questions/9291036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/714852/"
] |
A few things:
1. You don't need to call `count` each time; you can cache it.
2. The same goes for the query: why run it every time? Cache it as well.
3. Cache the pages too; you shouldn't recalculate the per-page data each time.
4. You only need the `blob_key`, but you are loading the entire photo entity; try to model it in a way that you won't need to load all the Photo attributes.
Nitpick: you don't need `the_photos = None`.
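For points 1-3, a minimal sketch using App Engine's memcache (the cache key naming here is hypothetical):
```
from google.appengine.api import memcache

def grab_images_cached(adventure, the_offset=0, the_limit=10):
    cache_key = 'photos:%s:%s:%s' % (adventure, the_offset, the_limit)
    result = memcache.get(cache_key)
    if result is None:
        result = grab_images(adventure, the_offset, the_limit)
        memcache.set(cache_key, result, time=300)  # cache each page for 5 minutes
    return result
```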
|
The way you handle paging is inefficient, as it goes through every record before the offset to deliver the data. You should consider building the paging mechanism using the bookmark methods described by Google: <http://code.google.com/appengine/articles/paging.html>
Using this method you only go through the items you need for each page; see the sketch below. I also urge you to cache properly, as suggested by Shay; it's both faster and cheaper.
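A rough sketch of cursor-based paging with the old `db` API (assuming the `PhotosModel` from the question):
```
def grab_images_page(adventure, cursor=None, page_size=10):
    query = PhotosModel.all().filter("adventure =", adventure)
    if cursor:
        query.with_cursor(cursor)          # resume where the last page ended
    photos = query.fetch(page_size)
    next_cursor = query.cursor()           # bookmark for the next page
    keys = [str(photo.blob_key.key()) for photo in photos]
    return keys, next_cursor
```
The caller stores `next_cursor` (e.g. in the page URL) and passes it back to fetch the following page, so no records are skipped over.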
| 10,293
|
35,390,152
|
I am new to Selenium with Python. I tried this sample test script.
```
from selenium import webdriver
def browser():
driver= webdriver.Firefox()
driver.delete_all_cookies()
driver.get('http://www.gmail.com/')
driver.maximize_window()
driver.save_screenshot('D:\Python Programs\Screen shots\TC_01.png')
driver.find_element_by_xpath("//*[@id='next']").click()
message=driver.find_element_by_xpath("//*[@id='errormsg_0_Email']")
driver.save_screenshot('D:\Python Programs\Screen shots\TC_03.png')
name= driver.find_element_by_xpath("//*[@id='Email']").send_keys('gmail')
driver.save_screenshot('D:\Python Programs\Screen shots\TC_02.png')
print name
driver.find_element_by_xpath("//*[@id='next']").click()
password=driver.find_element_by_xpath("//*[@id='Passwd']").send_keys('password')
driver.save_screenshot('D:\Python Programs\Screen shots\TC_03.png')
print password
driver.find_element_by_xpath("//*[@id='signIn']").click()
driver.implicitly_wait(10)
driver.quit()
i=browser()
```
The script runs fine up to a point; after that I get this error:
```
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"xpath","selector":"//*[@id='Passwd']"}
Stacktrace:.
```
|
2016/02/14
|
[
"https://Stackoverflow.com/questions/35390152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5861605/"
] |
Just use [pandas](http://pandas.pydata.org/):
```
In [1]: import pandas as pd
```
Change the number of decimals:
```
In [2]: pd.options.display.float_format = '{:.3f}'.format
```
Make a data frame:
```
In [3]: df = pd.DataFrame({'gof_test': gof_test, 'gof_train': gof_train})
```
and display:
```
In [4]: df
Out [4]:
```
[](https://i.stack.imgur.com/WnX8M.png)
Another option would be the use of the [engineering prefix](http://pandas.pydata.org/pandas-docs/stable/options.html#number-formatting):
```
In [5]: pd.set_eng_float_format(use_eng_prefix=True)
df
Out [5]:
```
[](https://i.stack.imgur.com/p9i0s.png)
```
In [6]: pd.set_eng_float_format()
df
Out [6]:
```
[](https://i.stack.imgur.com/lZQLq.png)
|
Indeed you cannot affect the display of the Python output with CSS, but you can give your results to a formatting function that will take care of making it "beautiful". In your case, you could use something like this:
```
def compare_dicts(dict1, dict2, col_width):
print('{' + ' ' * (col_width-1) + '{')
for k1, v1 in dict1.items():
col1 = u" %s: %.3f," % (k1, v1)
padding = u' ' * (col_width - len(col1))
line = col1 + padding
if k1 in dict2:
line = u"%s %s: %.3f," % (line, k1, dict2[k1])
print(line)
print('}' + ' ' * (col_width-1) + '}')
dict1 = {
'foo': 0.43657,
'foobar': 1e6,
'bar': 0,
}
dict2 = {
'bar': 1,
'foo': 0.43657,
}
compare_dicts(dict1, dict2, 25)
```
This gives:
```
{ {
foobar: 1000000.000,
foo: 0.437, foo: 0.437,
bar: 0.000, bar: 1.000,
} }
```
| 10,296
|
42,449,296
|
In OpenAPI, inheritance is achieved with `allof`. For instance, in [this example](https://github.com/OAI/OpenAPI-Specification/blob/master/examples/v2.0/json/petstore-simple.json):
```js
"definitions": {
"Pet": {
"type": "object",
"allOf": [
{
"$ref": "#/definitions/NewPet" # <--- here
},
[...]
]
},
"NewPet": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
}
},
```
I didn't find anything in the spec about multiple inheritance. For instance, what if Pet inherits from both NewPet and OldPet?
In Python, I would write
```python
class Pet(NewPet, OldPet):
...
```
and the Method Resolution Order is deterministic about which parent class should be checked first for methods/attributes, so I can tell if Pet.age will be NewPet.age or OldPet.age.
So what happens if I let Pet inherit from both NewPet and OldPet, where the name property is defined in both schemas with a different type in each? What will Pet.name be?
```js
"definitions": {
"Pet": {
"type": "object",
"allOf": [
        {
            "$ref": "#/definitions/NewPet" # <--- multiple inheritance
        },
        {
            "$ref": "#/definitions/OldPet"
        },
[...]
]
},
"NewPet": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
}
},
"OldPet": {
"type": "object",
"properties": {
"name": {
"type": "integer" # <-- name redefined here, different type
},
}
},
```
Will OldPet take precedence? NewPet? Is it undefined/invalid? Is it application-defined?
I tried this in [swagger-editor](http://editor.swagger.io/). Apparently, the schema with two refs is valid, but swagger-editor does not resolve the properties; it just displays the allOf with the two references.
Edit: According to [this tutorial](https://apihandyman.io/writing-openapi-swagger-specification-tutorial-part-4-advanced-data-modeling/#combining-multiple-definitions-to-ensure-consistency), having several refs in the same allOf is valid. But nothing is said about the case where both schemas define an attribute with the same name but a different type.
|
2017/02/24
|
[
"https://Stackoverflow.com/questions/42449296",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4653485/"
] |
JSON Schema doesn't support inheritance. The behavior of `allOf` can sometimes look like inheritance, but if you think about it that way you will end up getting confused.
The `allOf` keyword means just what it says: all of the schemas must validate. Nothing gets overridden or takes precedence over anything else. Everything must validate.
In your example, a JSON value must validate against both "NewPet" and "OldPet" in full. Because there is no JSON value that can validate as both a string and an integer, validation of the "name" property will always fail to validate against either "NewPet" or "OldPet" (or both). Therefore the "Pet" schema will never validate against any given JSON value.
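A quick way to see this, as a sketch with the Python `jsonschema` package (assuming it is installed):
```
from jsonschema import validate, ValidationError

schema = {
    "allOf": [
        {"properties": {"name": {"type": "string"}}},
        {"properties": {"name": {"type": "integer"}}},
    ]
}

for instance in [{"name": "Rex"}, {"name": 3}]:
    try:
        validate(instance, schema)
    except ValidationError as e:
        # every instance fails one branch or the other
        print(e.message)
```
No value for "name" can satisfy both branches, so nothing with a "name" property ever validates.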
|
One thing to consider is what inheritance means. In the Eclipse Modeling Framework, trying to create a class that extends 2 classes with the same attribute is an error. Nevertheless, I consider that multiple inheritance.
This is called the Diamond Problem. See <https://en.wikipedia.org/wiki/Multiple_inheritance>
| 10,297
|
60,942,530
|
I'm working on a Django project, building and testing with a database on GCP. It's full of test data and kind of a mess.
Now I want to release the app with a new and fresh database.
How do I migrate to the new database, given all those `migrations/` folders?
I don't want to delete the folder because development might continue.
Data does not need to be preserved; it's test data only.
Django version is 2.2;
Python 3.7
Thank you.
========= update
After changing the `settings.py`, `python manage.py makemigrations` says no changes detected.
Then I did `python manage.py migrate`, and now it complains about relation does not exist.
=============== update2
The problem seems to be that I had a table named `Customer`, and I changed it to `Client`. Now it's complaining that "psycopg2.errors.UndefinedTable: relation "app\_customer" does not exist".
How can I fix it, preferably without deleting all the files in `migrations/`?
================ update final
After eliminating all possibilities, I found out that the "new" database was not new at all; I had migrated on that database some months ago.
Now I created a fresh new one and `migrate` worked like a charm.
Again, thank you all for your suggestions.
|
2020/03/31
|
[
"https://Stackoverflow.com/questions/60942530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4971866/"
] |
>
> or maybe a different data structure to keep both the name and the value of the variable?
>
>
>
Ding ding! You want to use a map for this. A map stores a key and a value as a pair. To give you a visual representation, a map looks like this:
```
Cats: 5
Dogs: 3
Horses: 2
```
Using a map, you can access both the key (the name of the word you found) and its paired value (the number of times you found it in your file).
[How to use maps](http://tutorials.jenkov.com/java-collections/map.html#implementations)
|
You might want to consider a priority queue in Java. But first you need a class which has 2 attributes (the word and its quantity). This class should implement Comparable, and the compareTo method should be overridden. Here's an example: <https://howtodoinjava.com/java/collections/java-priorityqueue/>
| 10,298
|
37,972,029
|
[PEP 440](https://www.python.org/dev/peps/pep-0440) lays out what is the accepted format for version strings of Python packages.
These can be simple, like: `0.0.1`
Or complicated, like: `2016!1.0-alpha1.dev2`
What is a suitable regex which could be used for finding and validating such strings?
|
2016/06/22
|
[
"https://Stackoverflow.com/questions/37972029",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1728179/"
] |
I had the same question. This is the most thorough regex pattern I could find. PEP 440 links to the codebase of the packaging library in its references section.
```
pip install packaging
```
To access just the pattern string you can use the global
```
from packaging import version
version.VERSION_PATTERN
```
See: <https://github.com/pypa/packaging/blob/21.3/packaging/version.py#L225-L254>
```py
VERSION_PATTERN = r"""
v?
(?:
(?:(?P<epoch>[0-9]+)!)? # epoch
(?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
(?P<pre> # pre-release
[-_\.]?
(?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
[-_\.]?
(?P<pre_n>[0-9]+)?
)?
(?P<post> # post release
(?:-(?P<post_n1>[0-9]+))
|
(?:
[-_\.]?
(?P<post_l>post|rev|r)
[-_\.]?
(?P<post_n2>[0-9]+)?
)
)?
(?P<dev> # dev release
[-_\.]?
(?P<dev_l>dev)
[-_\.]?
(?P<dev_n>[0-9]+)?
)?
)
(?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version
"""
```
Of course this example is specific to Python's flavor of regex.
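To use the pattern, compile it the same way `packaging` itself does, with the verbose and case-insensitive flags:
```
import re
from packaging import version

pattern = re.compile(
    r"^\s*" + version.VERSION_PATTERN + r"\s*$",
    re.VERBOSE | re.IGNORECASE,
)

print(bool(pattern.match("0.0.1")))                 # True
print(bool(pattern.match("2016!1.0-alpha1.dev2")))  # True
print(bool(pattern.match("not-a-version")))         # False
```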
|
I think this should comply with PEP440:
```
^(\d+!)?(\d+)(\.\d+)+([\.\-\_])?((a(lpha)?|b(eta)?|c|r(c|ev)?|pre(view)?)\d*)?(\.?(post|dev)\d*)?$
```
### Explained
Epoch, e.g. `2016!`:
```
(\d+!)?
```
Version parts (major, minor, patch, etc.):
```
(\d+)(\.\d+)+
```
Acceptable separators (`.`, `-` or `_`):
```
([\.\-\_])?
```
Possible pre-release flags (and their normalisations, as well as the post-release flags `r` or `rev`), optionally followed by digits:
```
((a(lpha)?|b(eta)?|c|r(c|ev)?|pre(view)?)\d*)?
```
Post-release or dev flags, optionally followed by digits:
```
(\.?(post|dev)\d*)?
```
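As a quick sanity check against the examples from the question (a sketch; the third candidate is just an invented negative case):
```
import re

pattern = re.compile(
    r'^(\d+!)?(\d+)(\.\d+)+([\.\-\_])?'
    r'((a(lpha)?|b(eta)?|c|r(c|ev)?|pre(view)?)\d*)?'
    r'(\.?(post|dev)\d*)?$'
)

for candidate in ['0.0.1', '2016!1.0-alpha1.dev2', 'banana']:
    print(candidate, bool(pattern.match(candidate)))  # True, True, False
```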
| 10,299
|
19,512,457
|
What can I do to optimize this function and make it look more Pythonic?
```
def flatten_rows_to_file(filename, rows):
f = open(filename, 'a+')
temp_ls = list()
for i, row in enumerate(rows):
temp_ls.append("%(id)s\t%(price)s\t%(site_id)s\t%(rating)s\t%(shop_id)s\n" % row)
if i and i % 100000 == 0:
f.writelines(temp_ls)
temp_ls = []
f.writelines(temp_ls)
f.close()
```
|
2013/10/22
|
[
"https://Stackoverflow.com/questions/19512457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2000477/"
] |
A few things that come to mind immediately:
1. Use a `with` statement, rather than manually closing your file.
2. Pass a generator expression to `f.writelines` rather than building up a 100000 row list over and over (let the standard library handle how much, if any, it buffers the output).
3. Or, better yet, use the `csv` module to handle writing your tab-separated output.
Here's a quick stab at some improved code:
```
from csv import DictWriter
def flatten_rows_to_file(filename, rows):
with open(filename, 'ab') as f:
writer = DictWriter(f, ['id','price','site_id','rating','shop_id'],
delimiter='\t')
writer.writerows(rows)
```
Note that if you're using Python 3, you need slightly different code for opening the file: use mode `'a'` rather than `'ab'` and add the keyword argument `newline=""`. You didn't need the `+` in the mode you were using (you are only writing, not both reading and writing).
If the values in your `rows` argument may have extra keys beyond the ones you were writing, you'll need to pass some extra arguments to the `DictWriter` constructor as well.
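For example, passing `extrasaction='ignore'` (a standard `DictWriter` argument) makes the writer silently drop any keys beyond the listed fieldnames instead of raising `ValueError`:
```
writer = DictWriter(f, ['id', 'price', 'site_id', 'rating', 'shop_id'],
                    delimiter='\t', extrasaction='ignore')
```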
|
It is generally a good idea to use the `with` statement to make sure the file is closed properly. Also, unless I'm mistaken there should be no need to manually buffer the lines. You can just as well specify a buffer size when opening the file, determining [how often the file is flushed](https://stackoverflow.com/q/3167494/1639625).
```
def flatten_rows_to_file(filename, rows, buffsize=100000):
with open(filename, 'a+', buffsize) as f:
for row in rows:
f.write("%(id)s\t%(price)s\t%(site_id)s\t%(rating)s\t%(shop_id)s\n" % row)
```
| 10,300
|
36,380,144
|
I'm new here and new to coding in node.js. My question is how to read the brackets in documentation. I'm working on a Steam trade bot and I need to understand how to use options and callbacks. Let me give you an example:
[](https://i.stack.imgur.com/VMV5q.png)
For example, I want to take a trade offer.
Is it going to be like this?
```
makeOffer.partnerAccountId: 'mysteamid';
makeOffer(accessToken[, itemsFromThem]
```
or something like that? I really can't understand it. I've never been a pro at programming languages; I've worked a little bit with Python, which was more understandable than this. Please help; if I can understand a little, I can solve it. Thanks. Sorry for my bad English.
|
2016/04/03
|
[
"https://Stackoverflow.com/questions/36380144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
The brackets are documentation notation indicating that those parameters are optional, and can be omitted from any given invocation.
They are not indicative of syntax you should be using in your program.
Both of these styles should work, given that documentation. The callback being optional.
```
makeOffer({ ... });
makeOffer({ ... }, function (...) { ... });
```
*Dots indicate more code - in this case an object definition, function parameters, and the function body.*
Some other examples of this type of documentation notation:
MDN Array: [`concat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat), [`slice`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice), [`reduce`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce)
|
~~`makeOffer(accessToken[, itemsFromThem])` is not JavaScript syntax. It's just common notation that's used to indicate that the function can take any number of arguments, like so:~~
```
makeOffer(accessToken, something, somethingElse);
makeOffer(accessToken, something, secondThing, thirdThing);
makeOffer(accessToken);
```
Check the documentation for more details on how that library behaves.
| 10,301
|
62,919,733
|
I have an ASCII file containing 2 columns, as follows:
```
id value
1 15.1
1 12.1
1 13.5
2 12.4
2 12.5
3 10.1
3 10.2
3 10.5
4 15.1
4 11.2
4 11.5
4 11.7
5 12.5
5 12.2
```
I want to estimate the average value of column "value" for each id (i.e. group by id)
Is it possible to do that in Python using numpy or pandas?
|
2020/07/15
|
[
"https://Stackoverflow.com/questions/62919733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5834711/"
] |
If you don't know how to read the file, there are several methods you could use, as you can see [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html); try one of them, e.g. `pd.read_csv()`.
Once you have read the file, you could try this using pandas functions as `pd.DataFrame.groupby` and `pd.Series.mean()`:
```
df.groupby('id').mean()
#if df['id'] is the index, try this:
#df.reset_index().groupby('id').mean()
```
Output:
```
value
id
1 13.566667
2 12.450000
3 10.266667
4 12.375000
5 12.350000
```
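If you'd rather stay in NumPy, a sketch (assuming the two columns load cleanly as numeric arrays from the file, here called `data.txt`):
```
import numpy as np

ids, values = np.loadtxt("data.txt", skiprows=1, unpack=True)
unique_ids, inverse = np.unique(ids, return_inverse=True)
# sum of values per group divided by the group sizes
means = np.bincount(inverse, weights=values) / np.bincount(inverse)
print(dict(zip(unique_ids.astype(int), means)))
```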
|
```
import pandas as pd
filename = "data.txt"
df = pd.read_fwf(filename)
df.groupby(['id']).mean()
```
Output
```
value
id
1 13.566667
2 12.450000
3 10.266667
4 12.375000
5 12.350000
```
| 10,304
|
15,316,886
|
I'm new to python and would like to take this
```
K=[['d','o','o','r'], ['s','t','o','p']]
```
to print:
```
door, stop
```
|
2013/03/09
|
[
"https://Stackoverflow.com/questions/15316886",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2152618/"
] |
How about:
`', '.join(''.join(x) for x in K)`
|
Using `str.join()` and `map()`:
```
In [26]: K=[['d','o','o','r'], ['s','t','o','p']]
In [27]: ", ".join(map("".join,K))
Out[27]: 'door, stop'
```
>
> S.join(iterable) -> string
>
>
> Return a string which is the concatenation of the strings in the
> iterable. The separator between elements is S.
>
>
>
| 10,305
|
30,331,919
|
Can someone explain why the following occurs? My use case is that I have a python list whose elements are all numpy ndarray objects and I need to search through the list to find the index of a particular ndarray obj.
Simplest Example:
```
>>> import numpy as np
>>> a,b = np.arange(0,5), np.arange(1,6)
>>> a
array([0, 1, 2, 3, 4])
>>> b
array([1, 2, 3, 4, 5])
>>> l = list()
>>> l.append(a)
>>> l.append(b)
>>> l.index(a)
0
>>> l.index(b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
Why can `l` find the index of `a`, but not `b`?
|
2015/05/19
|
[
"https://Stackoverflow.com/questions/30331919",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2383529/"
] |
Applying the idea in <https://stackoverflow.com/a/17703076/901925> (see the Related sidebar),
```
[np.array_equal(b,x) for x in l].index(True)
```
should be more reliable. It ensures a correct array-to-array comparison. (The reason for the original error: `list.index` compares each element to the target with `==`, short-circuiting on object identity, so `a` is found immediately at position 0, while comparing `b` against `a` yields a boolean array whose truth value is ambiguous.)
Or `[id(b)==id(x) for x in l].index(True)` if you want to ensure it compares ids.
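Applied to the arrays from the question, a quick check:
```
>>> l = [a, b]
>>> [np.array_equal(b, x) for x in l].index(True)
1
```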
|
The idea is to convert the numpy arrays to lists and transform the problem into finding a list in another list:
```py
def find_array(list_of_numpy_array, target_numpy_array):
    # compare plain-list representations, which support element-wise equality
    out = [x.tolist() for x in list_of_numpy_array].index(target_numpy_array.tolist())
    return out
```
| 10,306
|
53,723,025
|
I have Anaconda installed, and I use the Anaconda Prompt to install Python packages. But I'm unable to install RASA-NLU using the conda prompt. Please let me know the command for it.
I have used the below command:
```
conda install rasa_nlu
```
Error:
```
PackagesNotFoundError: The following packages are not available from current channels:
- rasa_nlu
```
|
2018/12/11
|
[
"https://Stackoverflow.com/questions/53723025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7905329/"
] |
Turns out the framework search path (under Project->Target->Build Settings) was indeed the culprit. Removing my custom overrides solved the issue. Interestingly, if I remember correctly, I had added them, because Xcode wasn't able to find my frameworks...
See also <https://forums.developer.apple.com/message/328635#328779>
|
Set "Always Search User Paths" to NO and problem will be solved
| 10,307
|
68,299,665
|
I am new to the webdataset library for PyTorch. I have created .tar files of a sample dataset stored locally on my system using webdataset.TarWriter(). The .tar file creation seems to be successful, as I could extract the files separately on Windows and verify the same dataset files.
Now I create `train_dataset = wds.Dataset(url)`, where url is the local file path of the .tar files. After this, I perform the following operations:
```
train_loader = torch.utils.data.DataLoader(train_dataset, num_workers=0, batch_size=10)
sample = next(iter(train_loader))
print(sample)
```
It results in an error like this: [](https://i.stack.imgur.com/8uaz7.png)
The same code works fine if I use a web URL, for example `"http://storage.googleapis.com/nvdata-openimages/openimages-train-000000.tar"`, mentioned in the webdataset documentation: `https://reposhub.com/python/deep-learning/tmbdev-webdataset.html`
I couldn't understand the error so far. Any idea how to solve this problem?
|
2021/07/08
|
[
"https://Stackoverflow.com/questions/68299665",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2562870/"
] |
I have had the same error since yesterday, and I finally found the culprit. [WebDataset/tarIterators.py](https://github.com/webdataset/webdataset/blob/master/webdataset/tariterators.py) makes use of [WebDataset/gopen.py](https://github.com/webdataset/webdataset/blob/master/webdataset/gopen.py). In gopen.py, `urllib.parse.urlparse` is called to parse the URL to be opened; in your case the URL is `D:/PhD/...`.
```
gopen_schemes = dict(
__default__=gopen_error,
pipe=gopen_pipe,
http=gopen_curl,
https=gopen_curl,
sftp=gopen_curl,
ftps=gopen_curl,
scp=gopen_curl)
def gopen(url, mode="rb", bufsize=8192, **kw):
"""Open the URL. This uses the `gopen_schemes` dispatch table to dispatch based on scheme.
Support for the following schemes is built-in: pipe, file, http, https, sftp, ftps, scp.
When no scheme is given the url is treated as a file. You can use the OPEN_VERBOSE argument to get info about files being opened.
:param url: the source URL
:param mode: the mode ("rb", "r")
:param bufsize: the buffer size
"""
global fallback_gopen
verbose = int(os.environ.get("GOPEN_VERBOSE", 0))
if verbose:
print("GOPEN", url, info, file=sys.stderr)
assert mode in ["rb", "wb"], mode
if url == "-":
if mode == "rb":
return sys.stdin.buffer
elif mode == "wb":
return sys.stdout.buffer
else:
raise ValueError(f"unknown mode {mode}")
pr = urlparse(url)
if pr.scheme == "":
bufsize = int(os.environ.get("GOPEN_BUFFER", -1))
return open(url, mode, buffering=bufsize)
if pr.scheme == "file":
bufsize = int(os.environ.get("GOPEN_BUFFER", -1))
return open(pr.path, mode, buffering=bufsize)
handler = gopen_schemes["__default__"]
handler = gopen_schemes.get(pr.scheme, handler)
return handler(url, mode, bufsize, **kw)
```
As you can see in the dictionary, the `__default__` function is `gopen_error`; this is the function returning the error you are seeing. `pr = urlparse(url)` on your URL produces a parse result whose scheme (`pr.scheme`) is `'d'`, because your disk is named D. However, it should be `'file'` for the function to work as intended. Since `'d'` is not equal to `'file'` or any of the other schemes in the dictionary (http, https, sftp, etc.), the default function is used, which returns the error. I circumvented this issue by adding `d=gopen_file` to the `gopen_schemes` dictionary. I hope this helps you temporarily. I will raise this issue on the WebDataset GitHub page as well and keep this answer updated if I find a more practical fix.
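You can see the scheme confusion directly with `urlparse` (the path here is just a made-up example):
```
from urllib.parse import urlparse

print(urlparse("D:/PhD/data/00000.tar").scheme)       # 'd'    -> hits gopen_error
print(urlparse("file:D:/PhD/data/00000.tar").scheme)  # 'file' -> opened as a file
```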
Good luck!
|
Add "file:" in the front of local file path, like this:
```
from itertools import islice
import webdataset as wds
import os
import tqdm
path = "file:D:/Dataset/00000.tar"
dataset = wds.WebDataset(path)
for sample in islice(dataset, 0, 3):
for key, value in sample.items():
print(key, repr(value)[:50])
print()
```
| 10,308
|
66,178,922
|
I am trying to run Django tests on GitLab CI but am getting this error. Last week it was working perfectly, but suddenly I get this error during the test run:
>
> django.db.utils.OperationalError: could not connect to server: Connection refused
> Is the server running on host "database" (172.19.0.3) and accepting
> TCP/IP connections on port 5432?
>
>
>
**My gitlab-ci file is like this**
```
image: docker:latest
services:
- docker:dind
variables:
DOCKER_HOST: tcp://docker:2375
DOCKER_DRIVER: overlay2
test:
stage: test
image: tiangolo/docker-with-compose
script:
- docker-compose -f docker-compose.yml build
- docker-compose run app python3 manage.py test
```
my docker-compose is like this:
```
version: '3'
volumes:
postgresql_data:
services:
database:
image: postgres:12-alpine
environment:
- POSTGRES_DB=test
- POSTGRES_USER=test
- POSTGRES_PASSWORD=123
- POSTGRES_HOST=database
- POSTGRES_PORT=5432
volumes:
- postgresql_data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -e \"SHOW DATABASES;\""]
interval: 5s
timeout: 5s
retries: 5
ports:
- "5432"
restart: on-failure
app:
container_name: proj
hostname: proj
build:
context: .
dockerfile: Dockerfile
image: sampleproject
command: >
bash -c "
python3 manage.py migrate &&
python3 manage.py wait_for_db &&
gunicorn sampleproject.wsgi:application -c ./gunicorn.py
"
env_file: .env
ports:
- "8000:8000"
volumes:
- .:/srv/app
depends_on:
- database
- redis
```
So why is it refusing the connection? I have no idea, and it was working last week.
|
2021/02/12
|
[
"https://Stackoverflow.com/questions/66178922",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15200334/"
] |
Unsure if it would help in your case but I was getting the same issue with docker-compose. What solved it for me was explicitly specifying the hostname for postgres.
```
services:
database:
image: postgres:12-alpine
hostname: database
environment:
- POSTGRES_DB=test
- POSTGRES_USER=test
- POSTGRES_PASSWORD=123
- POSTGRES_HOST=database
- POSTGRES_PORT=5432
...
```
|
Could you run `docker container ls` and check whether the container name of the database is, in fact, "database"?
You've skipped setting the `container_name` for that container, and it may be that Docker isn't creating it with the default name of the service, i.e. "database", so the DNS isn't able to find it under that name in the network.
| 10,309
|
44,130,318
|
Environment: Linux + Python 3 + Tornado
When I run client.py, the connection is refused; I've tried a few ports. Using 127.0.0.1:7233, the server shows no response, but the client reports that the connection was refused. Can someone tell me why?
server.py
```
# coding:utf8
import socket
import time
import threading
# accept conn
def get_hart(host, port):
global clien_list
print('begin get_hart')
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
s.bind((host, port))
s.listen(5)
print(clien_list)
while True:
try:
clien, address = s.accept()
try:
clien_data = clien.recv(1024).decode('utf8')
if clien_data == str(0):
clien_id = clien_reg()
clien.send(str(clien_id))
print(clien_list)
else:
clien_list[int(clien_data)]['time'] = time.time()
# print clien_data
except:
print('send fail!')
clien.close()
except:
print("accept fail!")
continue
# client reg
def clien_reg():
global clien_list
tim = str(time.time() / 100).split('.')
id = int(tim[1])
clien_list[id] = {"time": time.time(), "state": 0}
return id
# client dict
def check_hart(clien_list, delay, lost_time):
while True:
for id in clien_list:
if abs(clien_list[id]['time'] - time.time()) > lost_time:
clien_list[id]['state'] = 0
del clien_list[id] # del offline client
break # err
else:
clien_list[id]['state'] = 1
print(clien_list)
time.sleep(delay)
if __name__ == '__main__':
host = '127.0.0.1'
port = 7233
global clien_list # Dict: client info
clien_list = {}
lost_time = 30 # timeout
print('begin threading')
try:
threading.Thread(target=get_hart,args=(host,port,),name='getHart')
threading.Thread(target=check_hart,args=(clien_list, 5, lost_time,))
except Exception as e:
print("thread error!"+e)
while 1:
pass
print('e')
```
client.py
```
# coding:utf8
import socket
import time
# sent pkg
def send_hart(host, port, delay):
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print(host,port)
global clien_id
try:
s.connect((host, port))
s.send(clien_id)
if clien_id == 0:
try:
to_clien_id = s.recv(1024)
clien_id = to_clien_id
except:
print('send fail!')
print(to_clien_id) # test id
time.sleep(delay)
except Exception as e:
print('connect fail!:'+e)
time.sleep(delay)
if __name__ == '__main__':
host = '127.0.0.1'
port = 7233
global clien_id
clien_id = 0 # client reg id
while True:
send_hart(host, port, 5)
```
Error
```
Traceback (most recent call last):
File "/home/ubuntu/XPlan/socket_test/client.py", line 13, in send_hart
s.connect((host, port))
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/XPlan/socket_test/client.py", line 34, in <module>
send_hart(host, port, 5)
File "/home/ubuntu/XPlan/socket_test/client.py", line 24, in send_hart
print('connect fail!:'+e)
TypeError: Can't convert 'ConnectionRefusedError' object to str implicitly
Process finished with exit code 1
```
|
2017/05/23
|
[
"https://Stackoverflow.com/questions/44130318",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5472490/"
] |
Are the `feature_value_ids` array values unique inside the array? If not, does it make a difference if you give the planner a little hand by making them unique?
```
select distinct c.id
from
dematerialized_products
cross join lateral
(
select distinct id
from unnest(feature_value_ids) u (id)
) c
```
|
You didn't show execution plans, but presumably the time is spent sorting the values to eliminate duplicates.
If `EXPLAIN (ANALYZE)` shows that the sort is performed using temporary files, you can improve the performance by raising `work_mem` so that the sort can be performed in memory.
You will still experience a performance hit with `DISTINCT`.
| 10,311
|
74,428,888
|
`polars.LazyFrame.var` returns the variance for each column of a table, as below:
```
>>> df = pl.DataFrame({"a": [1, 2, 3, 4], "b": [1, 2, 1, 1], "c": [1, 1, 1, 1]}).lazy()
>>> df.collect()
shape: (4, 3)
┌─────┬─────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪═════╡
│ 1 ┆ 1 ┆ 1 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ 2 ┆ 2 ┆ 1 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ 3 ┆ 1 ┆ 1 │
├╌╌╌╌╌┼╌╌╌╌╌┼╌╌╌╌╌┤
│ 4 ┆ 1 ┆ 1 │
└─────┴─────┴─────┘
>>> df.var().collect()
shape: (1, 3)
┌──────────┬──────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞══════════╪══════╪═════╡
│ 1.666667 ┆ 0.25 ┆ 0.0 │
└──────────┴──────┴─────┘
```
I wish to select the columns with variance > 0 from the LazyFrame, but I couldn't find a solution.
I can iterate over the columns of a polars DataFrame and then filter the columns by condition, as below:
```
>>> data.var()
shape: (1, 3)
┌──────────┬──────┬─────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 │
╞══════════╪══════╪═════╡
│ 1.666667 ┆ 0.25 ┆ 0.0 │
└──────────┴──────┴─────┘
>>> cols = pl.select([s for s in data.var() if (s > 0).all()]).columns
>>> cols
['a', 'b']
>>> data.select(cols)
shape: (4, 2)
┌─────┬─────┐
│ a ┆ b │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪═════╡
│ 1 ┆ 1 │
├╌╌╌╌╌┼╌╌╌╌╌┤
│ 2 ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌┤
│ 3 ┆ 1 │
├╌╌╌╌╌┼╌╌╌╌╌┤
│ 4 ┆ 1 │
└─────┴─────┘
```
But it doesn't work in LazyFrame:
```
>>> data = data.lazy()
>>> data
<polars.internals.lazyframe.frame.LazyFrame object at 0x7f0e3d9966a0>
>>> cols = pl.select([s for s in data.var() if (s > 0).all()]).columns
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
File "/home/jasmine/miniconda3/envs/jupyternb/lib/python3.9/site-packages/polars/internals/lazyframe/frame.py", line 421, in __getitem__
raise TypeError(
TypeError: 'LazyFrame' object is not subscriptable (aside from slicing). Use 'select()' or 'filter()' instead.
```
The reason for doing this on the LazyFrame is that we want to maximize performance. Any advice would be much appreciated. Thanks!
|
2022/11/14
|
[
"https://Stackoverflow.com/questions/74428888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20497072/"
] |
Polars doesn't know what the variance is until after it is calculated, but that's the same time it displays the results, so there's no way to filter the columns reported and also have it be more performant than just displaying all the columns, at least with respect to the polars calculation. It could be that Python/Jupyter takes longer to display more results than fewer.
With that said you could do something like this:
```
df.var().melt().filter(pl.col('value')>0).collect()
```
which gives you what you want in one line but it's a different shape.
You could also do something like this:
```
dfvar=df.var()
dfvar.select(dfvar.melt().filter(pl.col('value')>0).select('variable').collect().to_series().to_list()).collect()
```
|
Building on the answer from @dean MacGregor, we:
* do the var calculation
* melt
* apply the filter
* extract the `variable` column with column names
* pass it as a list to `select`
```py
df.select(
(
df.var().melt().filter(pl.col('value')>0).collect()
["variable"]
)
.to_list()
).collect()
```
```
shape: (4, 2)
┌─────┬─────┐
│ a ┆ b │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞═════╪═════╡
│ 1 ┆ 1 │
├╌╌╌╌╌┼╌╌╌╌╌┤
│ 2 ┆ 2 │
├╌╌╌╌╌┼╌╌╌╌╌┤
│ 3 ┆ 1 │
├╌╌╌╌╌┼╌╌╌╌╌┤
│ 4 ┆ 1 │
└─────┴─────┘
```
```
| 10,312
|
16,748,592
|
I'm trying to build a simple Scapy script which manually manages 3-way-handshake, makes an HTTP GET request (by sending a single packet) to a web server and manually manages response packets (I need to manually send ACK packets for the response).
Here is the beginning of the script:
```
#!/usr/bin/python
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
# Import scapy
from scapy.all import *
# beginning of 3-way-handshake
ip=IP(dst="www.website.org")
TCP_SYN=TCP(sport=1500, dport=80, flags="S", seq=100, options=[('NOP', None), ('MSS', 1448)])
TCP_SYNACK=sr1(ip/TCP_SYN)
my_ack = TCP_SYNACK.seq + 1
TCP_ACK=TCP(sport=1500, dport=80, flags="A", seq=101, ack=my_ack)
send(ip/TCP_ACK)
TCP_PUSH=TCP(sport=1500, dport=80, flags="PA", seq=102, ack=my_ack)
send(ip/TCP_PUSH)
# end of 3-way-handshake
# beginning of GET request
getString = 'GET / HTTP/1.1\r\n\r\n'
request = ip / TCP(dport=80, sport=1500, seq=103, ack=my_ack + 1, flags='A') / getString
# help needed from here...
```
The script above completes the 3-way-handshake and prepares the HTTP GET request.
Then, I've to send the request through:
```
send(request)
```
Now, after sending, I should obtain/manually read the response.
My purpose is to read the response while manually sending the ACK packets for it (this is the main focus of the script).
How can I do that?
Is the `send(_)` function appropriate, or is it better to use something like `response = sr1(request)` (but in that case, I suppose the ACKs are sent automatically)?
|
2013/05/25
|
[
"https://Stackoverflow.com/questions/16748592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1194426/"
] |
Use sr(), read the data in from every ans that is received (you'll need to parse, as you go, the Content-Length or the chunk lengths depending on whether you're using transfer chunking) then send ACKs in response to the *last* element in the ans list, then repeat until the end criteria is satisfied (don't forget to include a global timeout, I use eight passes without getting any replies which is 2 seconds).
Make sure sr is sr(req, multi=True, timeout=0.25) or thereabouts. Occasionally as you loop around there will be periods where no packets turn up, that's why HTTP has all those length specs. If the timeout is too high then you might find the server gets impatient waiting for the ACKs and starts re-sending.
Note that when you're using transfer chunking you can also attempt to cheat and just read the 0\r\n\r\n but then if that's genuinely in the data stream... yeah. Don't rely on the TCP PSH, however tempting that may look after a few Wireshark sessions with trivial use-cases.
Obviously the end result isn't a completely viable user-agent, but it's close enough for most purposes. Don't forget to send Keep-Alive headers, RST on exceptions and politely close down the TCP session if everything works.
Remember that RFC 2616 isn't 176 pages long just for giggles.
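For illustration, a rough sketch of that loop, reusing `ip`, `request`, `getString` and `my_ack` from the question's handshake code; this is only a sketch under those assumptions (it ignores the Content-Length/chunk parsing described above and just relies on the quiet-pass timeout):
```
ans, unans = sr(request, multi=True, timeout=0.25)
seq = 103 + len(getString)   # our sequence number after sending the GET
data = ''
quiet_passes = 0
while quiet_passes < 8:      # global timeout: eight empty passes of 0.25s each
    if not ans:
        quiet_passes += 1
    else:
        quiet_passes = 0
        for snd, rcv in ans:
            data += str(rcv[TCP].payload)            # accumulate the body
        last = ans[-1][1]                            # ACK the last reply
        my_ack = last[TCP].seq + len(str(last[TCP].payload))
    ack = ip / TCP(sport=1500, dport=80, flags='A', seq=seq, ack=my_ack)
    ans, unans = sr(ack, multi=True, timeout=0.25)
```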
|
Did you try:
```
response = sr1(request)
response.show2()
```
Also try using a different sniffer like Wireshark or tcpdump, as netcat has a few bugs.
| 10,313
|
57,327,185
|
I'm using Telethon in Python to automate replies in a Telegram group. I want to report an account for spam or abuse automatically via Telethon. I read the Telethon documentation and googled it, but I can't find any example.
If this can be done, please provide an example with sample code.
|
2019/08/02
|
[
"https://Stackoverflow.com/questions/57327185",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5704891/"
] |
`display: flex;`
This CSS property lays out all child elements inline.
```
<div style="display:flex; flex-direction: row;">
<div>1</div>
<div>2</div>
</div>
```
This is an example of inline children.
For more info, please check this link: [CSS Flexbox Layout](https://www.w3schools.com/css/css3_flexbox.asp)
|
Can you describe what you mean by "when the 'Add Dates' button is pressed I want another line to be available"?
In any case, when trying to inline two objects, you should put the style="display: inline" property on every element you want displayed inline.
Hope this helps.
| 10,316
|
66,510,970
|
I've tried relentlessly for about 1-2 hours to get this piece of code to work. I need to add a role to a user, simple enough, right?
This code searches for the role but cannot find it. Is this because I am sending the command from a channel which that role doesn't have access to? I need help, please.
Edit 1: removed the quotation marks around the ID
```
@bot.command()
async def addrole(ctx, user: discord.User):
#Add the customer role to the user
role_id = 810264985258164255
guild = ctx.guild
role = discord.utils.get(guild.roles, id=810264985258164255)
await user.add_roles(role)
```
```
2021-03-06T21:12:51.811235+00:00 app[worker.1]: Your bot is ready.
2021-03-06T21:14:09.010057+00:00 app[worker.1]: Ignoring exception in command addrole:
2021-03-06T21:14:09.012477+00:00 app[worker.1]: Traceback (most recent call last):
2021-03-06T21:14:09.012553+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped
2021-03-06T21:14:09.012554+00:00 app[worker.1]: ret = await coro(*args, **kwargs)
2021-03-06T21:14:09.012592+00:00 app[worker.1]: File "bot.py", line 92, in addrole
2021-03-06T21:14:09.012593+00:00 app[worker.1]: await user.add_roles(user, role)
2021-03-06T21:14:09.012621+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles
2021-03-06T21:14:09.012621+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason)
2021-03-06T21:14:09.012651+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 243, in request
2021-03-06T21:14:09.012652+00:00 app[worker.1]: raise NotFound(r, data)
2021-03-06T21:14:09.012699+00:00 app[worker.1]: discord.errors.NotFound: 404 Not Found (error code: 10011): Unknown Role
2021-03-06T21:14:09.012735+00:00 app[worker.1]:
2021-03-06T21:14:09.012735+00:00 app[worker.1]: The above exception was the direct cause of the following exception:
2021-03-06T21:14:09.012736+00:00 app[worker.1]:
2021-03-06T21:14:09.012772+00:00 app[worker.1]: Traceback (most recent call last):
2021-03-06T21:14:09.012832+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke
2021-03-06T21:14:09.012833+00:00 app[worker.1]: await ctx.command.invoke(ctx)
2021-03-06T21:14:09.012863+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke
2021-03-06T21:14:09.012863+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs)
2021-03-06T21:14:09.012890+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped
2021-03-06T21:14:09.012891+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc
2021-03-06T21:14:09.012932+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: NotFound: 404 Not Found (error code: 10011): Unknown Role
```
Edit 2: removed user from add\_role
```
2021-03-06T22:22:45.062172+00:00 app[worker.1]: Your bot is ready.
2021-03-06T22:22:52.026199+00:00 app[worker.1]: Ignoring exception in command addrole:
2021-03-06T22:22:52.028082+00:00 app[worker.1]: Traceback (most recent call last):
2021-03-06T22:22:52.028089+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped
2021-03-06T22:22:52.028089+00:00 app[worker.1]: ret = await coro(*args, **kwargs)
2021-03-06T22:22:52.028092+00:00 app[worker.1]: File "bot.py", line 94, in addrole
2021-03-06T22:22:52.028100+00:00 app[worker.1]: await user.add_roles(role)
2021-03-06T22:22:52.028104+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles
2021-03-06T22:22:52.028108+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason)
2021-03-06T22:22:52.028111+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 241, in request
2021-03-06T22:22:52.028111+00:00 app[worker.1]: raise Forbidden(r, data)
2021-03-06T22:22:52.028152+00:00 app[worker.1]: discord.errors.Forbidden: 403 Forbidden (error code: 50013): Missing Permissions
2021-03-06T22:22:52.028160+00:00 app[worker.1]:
2021-03-06T22:22:52.028161+00:00 app[worker.1]: The above exception was the direct cause of the following exception:
2021-03-06T22:22:52.028161+00:00 app[worker.1]:
2021-03-06T22:22:52.028164+00:00 app[worker.1]: Traceback (most recent call last):
2021-03-06T22:22:52.028223+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke
2021-03-06T22:22:52.028224+00:00 app[worker.1]: await ctx.command.invoke(ctx)
2021-03-06T22:22:52.028225+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke
2021-03-06T22:22:52.028225+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs)
2021-03-06T22:22:52.028225+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped
2021-03-06T22:22:52.028226+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc
2021-03-06T22:22:52.028228+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: Forbidden: 403 Forbidden (error code: 50013): Missing Permissions
```
|
2021/03/06
|
[
"https://Stackoverflow.com/questions/66510970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15250234/"
] |
Use `in_array`, checking the last character of the string:
```
if (in_array(substr($var, -1, 1), ['s', 'z']))
```
|
Or maybe:
```
in_array(array_pop(str_split($var)), ['s', 'z'])
```
not really more readable or elegant, but what do I know? :)
| 10,319
|
49,510,815
|
I am trying to create a class that returns the class name together with the attribute. This needs to work both with instance attributes and class attributes
```
class TestClass:
obj1 = 'hi'
```
I.e. I want the following (note: both with and without class instantiation)
```
>>> TestClass.obj1
('TestClass', 'hi')
>>> TestClass().obj1
('TestClass', 'hi')
```
A similar effect is obtained with the Enum package in Python, but if I inherit from Enum, I cannot create an `__init__` function, which I also want to do.
If I use Enum I would get:
```
from enum import Enum
class TestClass2(Enum):
obj1 = 'hi'
>>> TestClass2.obj1
<TestClass2.obj1: 'hi'>
```
I've already tried overriding the `__getattribute__` magic method in a metaclass, as suggested here: [How can I override class attribute access in python](https://stackoverflow.com/questions/9820314/how-can-i-override-class-attribute-access-in-python). However, this breaks the `__dir__` magic method, which then won't return anything; furthermore, it seems to return the name of the metaclass rather than the child class. Example below:
```py
class BooType(type):
def __getattribute__(self, attr):
if attr == '__class__':
return super().__getattribute__(attr)
else:
return self.__class__.__name__, attr
class Boo(metaclass=BooType):
asd = 'hi'
>>> print(Boo.asd)
('BooType', 'asd')
>>> print(dir(Boo))
AttributeError: 'tuple' object has no attribute 'keys'
```
I have also tried overriding the `__setattr__` magic method, but this seems to affect only instance attributes, not class attributes.
I should state that I am looking for a general solution, not something where I need to write a `@property` or `@classmethod` function or something similar for each attribute.
|
2018/03/27
|
[
"https://Stackoverflow.com/questions/49510815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9490769/"
] |
I got help from a colleague with defining metaclasses and came up with the following solution:
```
class MyMeta(type):
def __new__(mcs, name, bases, dct):
c = super(MyMeta, mcs).__new__(mcs, name, bases, dct)
c._member_names = []
for key, value in c.__dict__.items():
if type(value) is str and not key.startswith("__"):
c._member_names.append(key)
setattr(c, key, (c.__name__, value))
return c
def __dir__(cls):
return cls._member_names
class TestClass(metaclass=MyMeta):
a = 'hi'
b = 'hi again'
print(TestClass.a)
# ('TestClass', 'hi')
print(TestClass.b)
# ('TestClass', 'hi again')
print(dir(TestClass))
# ['a', 'b']
```
|
Way 1
-----
You can use the [`classmethod`](https://www.programiz.com/python-programming/methods/built-in/classmethod) decorator to define methods callable on the class itself:
```
class TestClass:
_obj1 = 'hi'
@classmethod
def obj1(cls):
return cls.__name__, cls._obj1
class TestSubClass(TestClass):
pass
print(TestClass.obj1())
# ('TestClass', 'hi')
print(TestSubClass.obj1())
# ('TestSubClass', 'hi')
```
Way 2
-----
Maybe you should use the [`property`](https://www.programiz.com/python-programming/property) decorator so the desired output is accessible through instances of a class instead of the class itself:
```
class TestClass:
_obj1 = 'hi'
@property
def obj1(self):
return self.__class__.__name__, self._obj1
class TestSubClass(TestClass):
pass
a = TestClass()
b = TestSubClass()
print(a.obj1)
# ('TestClass', 'hi')
print(b.obj1)
# ('TestSubClass', 'hi')
```
| 10,324
|
40,615,604
|
I am looking for a nice, efficient and pythonic way to go from something like this:
`('zone1', 'pcomp110007')`
to this:
`'ZONE 1, PCOMP 110007'`
without the use of `regex` if possible (unless it makes a big difference, that is..). So: turn every letter to uppercase, put a space between the letters and the numbers, and join with a comma.
What I wrote is the following:
```
tags = ('zone1', 'pcomp110007')
def sep(astr):
chars = ''.join([x.upper() for x in astr if x.isalpha()])
nums = ''.join([x for x in astr if x.isnumeric()])
return chars + ' ' + nums
print(', '.join(map(sep, tags)))
```
This does produce the desired result but looks like a bit too much machinery for the task.
**The tuple might vary in length but the numbers are always going to be at the end of each string.**
|
2016/11/15
|
[
"https://Stackoverflow.com/questions/40615604",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6162307/"
] |
Using `re`:
```
import re
tupl = ('zone1', 'pcomp110007')
", ".join(map(lambda x: " ".join(re.findall('([A-Z]+)([0-9])+',x.upper())[0]), tupl))
#'ZONE 1, PCOMP 7'
```
|
A recursive solution:
```
def non_regex_split(s,i=0):
if len(s) == i:
return s
try:
return '%s %d' %(s[:i], int(s[i:]))
except:
return non_regex_split(s,i+1)
', '.join(non_regex_split(s).upper() for s in tags)
```
| 10,325
|
23,407,824
|
I have been building a website with Django for months. So now I thought it was time to test it with some friends. While deploying it to Ubuntu 14.04 64-bit LTS (same as the development environment), I found a strange error,
called
```
OSError at /accounts/edit/
[Errno 21] Is a directory: '/var/www/media/'
```
I also tried a different path, `BASE_PATH + "/media"`, which was OK in development.
I am running the default Django development server (as it's a test!).
It would be really awesome if someone could correct me and teach me what's wrong here.
Thanks.
Edit: (Traceback)
```
Environment:
Request Method: POST
Request URL: http://playbox.asia:8000/accounts/edit/
Django Version: 1.6.4
Python Version: 2.7.6
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'south',
'easy_pjax',
'bootstrap3',
'djangoratings',
'taggit',
'imagekit',
'playbox',
'accounts',
'musics',
'playlist')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "/root/playbox/venv/local/lib/python2.7/site- packages/django/core/handlers/base.py" in get_response
114. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/root/playbox/venv/local/lib/python2.7/site- packages/django/contrib/auth/decorators.py" in _wrapped_view
22. return view_func(request, *args, **kwargs)
File "/root/playbox/accounts/views.py" in edit
189. os.remove(settings.MEDIA_ROOT + "/" + profile.avatar.name)
Exception Type: OSError at /accounts/edit/
Exception Value: [Errno 21] Is a directory: '/var/www/media/'
```
|
2014/05/01
|
[
"https://Stackoverflow.com/questions/23407824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2665252/"
] |
Looks like `profile.avatar.name` evaluates to a **blank string**.
So no file name is being passed to `os.remove`, which **can't remove a directory** and **raises OSError**: see here: <https://docs.python.org/2/library/os.html#os.remove>
You can rectify this error by **applying a conditional**, as follows:
```
if profile.avatar.name:
os.remove(settings.MEDIA_ROOT + "/" + profile.avatar.name)
```
|
This just happened to me for another reason. Answering here in case it is not the situation above.
I had a file named "admin" in my static folder, which matched the directory "admin" where Django's admin app stores its static files.
When running `collectstatic` it would conflict between those two. I had to remove my "admin" file (or rename it) so it didn't clash.
In general, make sure your files don't clash with any static files provided by other apps.
| 10,335
|
73,401,346
|
In the following code I am having a problem: the log file should be generated each day with a timestamp in its name. Right now the first file is named **dashboard\_asset\_logs**, and subsequent log files are named **dashboard\_asset\_logs.2022\_08\_18.log**, **dashboard\_asset\_logs.2022\_08\_19.log**.
How can the first file also get a name like **dashboard\_asset\_logs\_2022\_08\_17.log**, and subsequent files have the **.** between name and date replaced by **\_**, so that **dashboard\_asset\_logs.2022\_08\_18.log** becomes **dashboard\_asset\_logs\_2022\_08\_18.log**?
Here is the logger.py code with the TimedRotatingFileHandler:
```
import logging
import os
from logging.handlers import TimedRotatingFileHandler
class Logger():
loggers = {}
logger = None
def getlogger(self, name):
# file_formatter = logging.Formatter('%(asctime)s~%(levelname)s~%(message)s~module:%(module)s~function:%(
# module)s')
file_formatter = logging.Formatter('%(asctime)s %(levelname)s [%(module)s : %(funcName)s] : %('
'message)s')
console_formatter = logging.Formatter('%(levelname)s -- %(message)s')
print(os.getcwd())
# file_name = "src/logs/"+name+".log"
# file_handler = logging.FileHandler(file_name)
#
# file_handler.setLevel(logging.DEBUG)
# file_handler.setFormatter(file_formatter)
# console_handler = logging.StreamHandler()
# console_handler.setLevel(logging.DEBUG)
# console_handler.setFormatter(console_formatter)
logger = logging.getLogger(name)
# logger.addHandler(file_handler)
# logger.addHandler(console_handler)
logger.setLevel(logging.DEBUG)
file_name = "src/logs/dashboard_asset_logs"
handler = TimedRotatingFileHandler(
file_name, when="midnight", interval=1)
handler.suffix = "%Y_%m_%d.log"
handler.setLevel(logging.DEBUG)
handler.setFormatter(file_formatter)
logger.addHandler(handler)
logger.propagate = False
self.loggers[name] = logger
return logger
def getCommonLogger(self):
if self.loggers.get("dashboard_asset_logs"):
return self.loggers.get("dashboard_asset_logs")
else:
self.logger = self.getlogger("dashboard_asset_logs")
return self.logger
```
Here is the app\_log code
```
from src.main import logger
import time
logger_obj = logger.Logger()
logger = logger_obj.getCommonLogger()
def log_trial():
while True:
time.sleep(10)
logger.info("----------Log - start ---------------------")
if __name__ == '__main__':
log_trial()
```
I run the code by using **python -m src.main.app\_log**
|
2022/08/18
|
[
"https://Stackoverflow.com/questions/73401346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8838167/"
] |
In short, you will have to write your own `TimedRotatingFileHandler` implementation (or customize the existing one) to make it happen.
The more important question is why you need it. The file name without a date is the file with the current day's log, and this log is incomplete during the day. When the log is rotated at midnight, the old file (the one without the date) is renamed, and the date part is added. Just after that, a new empty file without the date part in the name is created. This behavior is very common across systems because it makes it easy to see which file is the current one and which are the archived ones (those with dates). Adding a date to the current file's name will break this convention and can cause problems when the file is read by people who don't know the idea behind this naming scheme.
|
You might be able to use the `namer` property of the handler, which you can set to a callable to customise naming. See [the documentation](https://docs.python.org/3/library/logging.handlers.html#logging.handlers.BaseRotatingHandler.namer) for it.
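A minimal sketch of that approach, reusing the handler setup from the question (the helper name is made up):
```
import logging
from logging.handlers import TimedRotatingFileHandler

handler = TimedRotatingFileHandler(
    "src/logs/dashboard_asset_logs", when="midnight", interval=1)
handler.suffix = "%Y_%m_%d.log"

def underscore_namer(default_name):
    # default rotated name: src/logs/dashboard_asset_logs.2022_08_18.log
    # desired name:         src/logs/dashboard_asset_logs_2022_08_18.log
    return default_name.replace("dashboard_asset_logs.", "dashboard_asset_logs_")

handler.namer = underscore_namer
```
Note this only renames the rotated files; the active file keeps the plain base name, which (as the other answer explains) is the conventional behavior.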
| 10,336
|
27,656,401
|
I've got a rather small Flask application which I run using:
```
$ python wsgi.py
```
When editing files, the server reloads on each file save. This reload can take up to 10 seconds.
Here's the system section from my VirtualBox:
```
Base Memory: 2048Mb
Processors: 4
Acceleration: VT-x/AMD-V, Nested Paging, PAE/NX
```
How can I speed it up, or where do I look for issues?
|
2014/12/26
|
[
"https://Stackoverflow.com/questions/27656401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2743105/"
] |
Your problem might be the virtualenv being synced too.
I stumbled upon the same problem, and the issue was that VirtualBox's default synchronization implementation is very, very slow when dealing with too many files in the mounted directory. Upon investigating, I found:
```
$ cd my-project
$ tree | tail -n 1
220 directories, 2390 files
```
That looks like too many files for a simple Flask project, right? So, as it turns out, I was putting my virtualenv directory inside my project directory too, meaning everything got synced.
```
$ cd my-project/env
$ tree | tail -n 1
203 directories, 2313 files
$ cd my-project
$ rm -Rf my-project/env
$ tree | tail -n 1
17 directories, 77 files
```
Now it looks much more manageable and is indeed much faster. Sure, we still need to store the virtualenv somewhere, but it actually makes much more sense to create it somewhere *inside* the guest machine, and not mounted from the host, especially if you consider that the host and the guest could be different OSes anyway.
Hope this helps.
|
Try changing the synced folder type to NFS. I had this problem; I switched to NFS and it was fixed.
```
config.vm.synced_folder ".", "/vagrant", type: "nfs"
```
[ENABLING NFS SYNCED FOLDERS](http://docs.vagrantup.com/v2/synced-folders/nfs.html)
| 10,337
|
19,077,580
|
I use Python 2.7.5.
I have some files in a directory and its subdirectories. A sample of `file1` is given below:
```
Title file name
path1 /path/to/file
options path2=/path/to/file1,/path/to/file2,/path/to/file3,/path/to/file4 some_vale1 some_vale2 some_value3=abcdefg some_value4=/path/to/value some_value5
```
I would like to insert the text `/root/directory` into the text file. The final outcome I would like to have is as follows:
```
Title file name
path1 /root/directory/path/to/file
path2=/root/directory/path/to/file1,/root/directory/path/to/file2,/root/directory/path/to/file3,/root/directory/path/to/file4
options some_vale1 some_vale2 some_value3=abcdefg some_value4=/path/to/value some_value5
```
The names `path1, options and path2` are the same in all files. The files in the directory/subdirectory need to be modified with the same outcome as above. I tried to use `re.sub` to find and replace the string, but I never got the output I wanted.
|
2013/09/29
|
[
"https://Stackoverflow.com/questions/19077580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2221360/"
] |
This one-liner does the entire transformation:
```
str = re.sub(r'(options) (\S+)', r'\2\n \1', str.replace('/path/', '/root/directory/path/'))
```
See a [live demo](http://codepad.org/I5IpoYcW) of this code
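Since the files to fix live in a directory tree, here is a minimal sketch applying that same transformation to every file under a root directory (the root path is an assumption):
```
import os
import re

def fix_file(path):
    with open(path) as f:
        text = f.read()
    # prefix the paths, then move the path2=... token onto its own line
    text = re.sub(r'(options) (\S+)', r'\2\n \1',
                  text.replace('/path/', '/root/directory/path/'))
    with open(path, 'w') as f:
        f.write(text)

for dirpath, dirnames, filenames in os.walk('/some/root'):  # assumed root
    for name in filenames:
        fix_file(os.path.join(dirpath, name))
```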
|
You can try this:
```
result = re.sub(r'([ \t =,])/', replace_text, text, 1)
```
The last `1` is to indicate the first match only, so that only the first path is substituted.
By the way, I think that you want to preserve the space/tab or comma, right? Make replace\_text like this:
```
replace_text = r'\1/root/directory/'
```
| 10,338
|
30,265,557
|
I have a matrix with x rows (i.e. the number of draws) and y columns (the number of observations). They represent a distribution of y forecasts.
Now I would like to make sort of a 'heat map' of the draws. That is, I want to plot a 'confidence interval' (not really a confidence interval, but just all the values with shading in between), but as a 'heat map' (an example of a [heat map](http://climate.ncas.ac.uk/~andy/python/pictures/ex13.png)). That means that if, for instance, a lot of draws for observation y=y\* were around 1 but there was also a draw of 5 for that same observation, then the area of the confidence interval around 1 is darker (but the whole area between 1 and 5 is still shaded).
To be totally clear: I like for instance the plot in the answer [here](https://stackoverflow.com/questions/16463325/shading-confidence-intervals-manually-with-ggplot2), but then I would want the grey confidence interval to instead be colored as intensities (i.e. some areas are darker).
Could someone please tell me how I could achieve that?
Thanks in advance.
**Edit:** As per request: example data.
Example of the first 20 values of the first column (i.e. y[1:20,1]):
```
 [1]  0.032067416 -0.064797792  0.035022338  0.016347263  0.034373065  0.024793101 -0.002514447  0.091411355 -0.064263536 -0.026808208
[11]  0.125831185 -0.039428744  0.017156454 -0.061574540 -0.074207109 -0.029171227  0.018906181  0.092816957  0.028899699 -0.004535961
```
|
2015/05/15
|
[
"https://Stackoverflow.com/questions/30265557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2151205/"
] |
So, the hard part of this is transforming your data into the right shape, which is why it's nice to share something that really looks like your data, not just a single column.
Let's say your data is a matrix with 10,000 rows and 10 columns. I'll just use a uniform distribution, so it will be a boring plot at the end.
```
n = 10000
k = 10
mat = matrix(runif(n * k), nrow = n)
```
Next, we'll calculate quantiles for each column using `apply`, transpose, and make it a data frame:
```
dat = as.data.frame(t(apply(mat, MARGIN = 2, FUN = quantile, probs = seq(.1, 0.9, 0.1))))
```
Add an `x` variable (since we transposed, each x value corresponds to a column in the original data)
```
dat$x = 1:nrow(dat)
```
We now need to get it into a "long" form, grouped by the min and max values for a certain deviation group around the median, and of course get rid of the pesky percent signs introduced by `quantile`:
```
library(dplyr)
library(tidyr)
dat_long = gather(dat, "quantile", value = "y", -x) %>%
mutate(quantile = as.numeric(gsub("%", "", quantile)),
group = abs(50 - quantile))
dat_ribbon = dat_long %>% filter(quantile < 50) %>%
mutate(ymin = y) %>%
select(x, ymin, group) %>%
left_join(
dat_long %>% filter(quantile > 50) %>%
mutate(ymax = y) %>%
select(x, ymax, group)
)
dat_median = filter(dat_long, quantile == 50)
```
And finally we can plot. We'll plot a transparent ribbon for each "group", that is 10%-90% interval, 20%-80% interval, ... 40%-60% interval, and then a single line at the median (50%). Using transparency, the middle will be darker as it has more ribbons overlapping on top of it. This doesn't go from the minimum to the maximum, but it will if you set the `probs` in the `quantile` call to go from 0 to 1 instead of .1 to .9.
```
library(ggplot2)
ggplot(dat_ribbon, aes(x = x)) +
geom_ribbon(aes(ymin = ymin, ymax = ymax, group = group), alpha = 0.2) +
geom_line(aes(y = y), data = dat_median, color = "white")
```

Worth noting that this is *not* a conventional heatmap. A heatmap usually implies that you have 3 variables, x, y, and z (color), where there is a z-value for every x-y pair. Here you have two variables, x and y, with y depending on x.
|
That is not a lot to go on, but I would probably start with the `hexbin` or `hexbinplot` package. Several alternatives are presented in this SO post.
[Formatting and manipulating a plot from the R package "hexbin"](https://stackoverflow.com/questions/15504983/formatting-and-manipulating-a-plot-from-the-r-package-hexbin)
| 10,340
|
38,729,374
|
My `.profile` defines a function
```
myps () {
ps -aef|egrep "a|b"|egrep -v "c\-"
}
```
I'd like to execute it from my python script
```
import subprocess
subprocess.call("ssh user@box \"$(typeset -f); myps\"", shell=True)
```
Getting an error back
```
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
```
Escaping ; results in
```
bash: ;: command not found
```
|
2016/08/02
|
[
"https://Stackoverflow.com/questions/38729374",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/359862/"
] |
```
script='''
. ~/.profile # load local function definitions so typeset -f can emit them
ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''
import subprocess
subprocess.call(['ksh', '-c', script]) # no shell=True
```
---
There are a few pertinent items here:
* The dotfile defining this function needs to be locally invoked *before* you run `typeset -f` to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (any specified by the `ENV` environment variable is an exception).
In the given example, this is served by the `. ~/.profile` command within the script.
* The shell needs to be one supporting `typeset`, so it has to be `bash` or `ksh`, not `sh` (as used by `shell=True` by default), which may be provided by `ash` or `dash`, lacking this feature.
In the given example, this is served by passing `['ksh', '-c']` as the first two arguments of the argv array.
* `typeset` needs to be run locally, so it can't be in an argv position other than the first with `shell=True`. (To provide an example: `subprocess.Popen(['''printf '%s\n' "$@"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True)` evaluates only `printf '%s\n' "$@"` as a shell script; `This is just literal data!` and `$(touch /tmp/this-is-not-executed)` are passed as literal data, so no file named `/tmp/this-is-not-executed` is created).
In the given example, this is mooted by *not using* `shell=True`.
* Explicitly invoking `ksh -s` (or `bash -s`, as appropriate) ensures that the shell evaluating your function definitions matches the shell you *wrote* those functions against, rather than passing them to `sh -c`, as would happen otherwise.
In the given example, this is served by `ssh user@box ksh -s` inside the script.
|
The original command was not interpreting the `;` before `myps` properly. Using `sh -c` fixes that, but... (please see Charles Duffy's comments below).
Using a combination of single/double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command (provided the functions in `.profile` are actually accessible in the shell started by the subprocess.Popen object):
```
subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
```
An alternative (less safe) method would be to use `sh -c` for the subshell command:
```
subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
```
This seemingly returned the same result:
```
subprocess.call('ssh user@box "sh -c typeset -f; myps"', shell=True)
```
There are definitely alternative methods for accomplishing these types of tasks; however, this might give you an idea of what the issue was with the original command.
| 10,341
|
21,083,746
|
How do you cache a paginated Django queryset, specifically in a ListView?
I noticed one query was taking a long time to run, so I'm attempting to cache it. The queryset is huge (over 100k records), so I'm attempting to only cache paginated subsections of it. I can't cache the entire view or template because there are sections that are user/session specific and need to change constantly.
ListView has a couple standard methods for retrieving the queryset, `get_queryset()`, which returns the non-paginated data, and `paginate_queryset()`, which filters it by the current page.
I first tried caching the query in `get_queryset()`, but quickly realized calling `cache.set(my_query_key, super(MyView, self).get_queryset())` was causing the entire query to be serialized.
So then I tried overriding `paginate_queryset()` like:
```
import time
from functools import partial
from django.core.cache import cache
from django.views.generic import ListView
class MyView(ListView):
...
def paginate_queryset(self, queryset, page_size):
cache_key = 'myview-queryset-%s-%s' % (self.page, page_size)
print 'paginate_queryset.cache_key:',cache_key
t0 = time.time()
ret = cache.get(cache_key)
if ret is None:
print 're-caching'
ret = super(MyView, self).paginate_queryset(queryset, page_size)
cache.set(cache_key, ret, 60*60)
td = time.time() - t0
print 'paginate_queryset.time.seconds:',td
(paginator, page, object_list, other_pages) = ret
print 'total objects:',len(object_list)
return ret
```
However, this takes almost a minute to run, even though only 10 objects are retrieved, and every request shows "re-caching", implying nothing is being saved to cache.
My `settings.CACHE` looks like:
```
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
```
and `service memcached status` shows memcached is running and `tail -f /var/log/memcached.log` shows absolutely nothing.
What am I doing wrong? What is the proper way to cache a paginated query so that the entire queryset isn't retrieved?
Edit: I think there may be a bug in either memcached or the Python wrapper. Django appears to support two different memcached backends, one using python-memcached and one using pylibmc. The python-memcached one seems to silently hide the error caching the `paginate_queryset()` value. When I switched to the pylibmc backend, I now get an explicit error message "error 10 from memcached\_set: SERVER ERROR" tracing back to django/core/cache/backends/memcached.py in set, line 78.
|
2014/01/13
|
[
"https://Stackoverflow.com/questions/21083746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247542/"
] |
The problem turned out to be a combination of factors. Mainly, the result returned by the `paginate_queryset()` contains a reference to the unlimited queryset, meaning it's essentially uncachable. When I called `cache.set(mykey, (paginator, page, object_list, other_pages))`, it was trying to serialize thousands of records instead of just the `page_size` number of records I was expecting, causing the cached item to exceed memcached's limits and fail.
The other factor was the horrible default error reporting in the memcached/python-memcached, which silently hides all errors and turns cache.set() into a nop if anything goes wrong, making it very time-consuming to track down the problem.
I fixed this by essentially rewriting `paginate_queryset()` to ditch Django's builtin paginator functionality altogether and calculate the queryset myself with:
```
object_list = queryset[page_size*(page-1):page_size*(page-1)+page_size]
```
and then caching **that** `object_list`.
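A minimal sketch of that rewrite (key names and timeout are assumptions, and it deliberately bypasses Django's `Paginator` as described above, so templates must not rely on the usual paginator/page objects):
```
from django.core.cache import cache
from django.views.generic import ListView

class MyView(ListView):

    def paginate_queryset(self, queryset, page_size):
        page = int(self.request.GET.get(self.page_kwarg) or 1)
        cache_key = 'myview-objects-%s-%s' % (page, page_size)
        object_list = cache.get(cache_key)
        if object_list is None:
            # slice first, then force evaluation with list() so only
            # page_size records (not the whole queryset) get pickled
            object_list = list(queryset[page_size*(page-1):page_size*page])
            cache.set(cache_key, object_list, 60*60)
        return None, None, object_list, len(object_list) == page_size
```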
|
I wanted to paginate my infinite scrolling view on my home page and this is the solution I came up with. It's a mix of Django CCBVs and the author's initial solution.
The response times, however, didn't improve as much as I would've hoped for but that's probably because I am testing it on my local with just 6 posts and 2 users haha.
```
# Imports
from django.core.cache import cache
from django.core.paginator import InvalidPage
from django.utils.translation import gettext as _
from django.views.generic.list import ListView
from django.http import Http404
class MyListView(ListView):
template_name = 'MY TEMPLATE NAME'
model = MY POST MODEL
paginate_by = 10
def paginate_queryset(self, queryset, page_size):
"""Paginate the queryset"""
paginator = self.get_paginator(
queryset, page_size, orphans=self.get_paginate_orphans(),
allow_empty_first_page=self.get_allow_empty())
page_kwarg = self.page_kwarg
page = self.kwargs.get(page_kwarg) or self.request.GET.get(page_kwarg) or 1
try:
page_number = int(page)
except ValueError:
if page == 'last':
page_number = paginator.num_pages
else:
raise Http404(_("Page is not 'last', nor can it be converted to an int."))
try:
page = paginator.page(page_number)
cache_key = 'mylistview-%s-%s' % (page_number, page_size)
retreive_cache = cache.get(cache_key)
if retreive_cache is None:
print('re-caching')
retreive_cache = super(MyListView, self).paginate_queryset(queryset, page_size)
# Caching for 1 day
cache.set(cache_key, retreive_cache, 86400)
return retreive_cache
except InvalidPage as e:
raise Http404(_('Invalid page (%(page_number)s): %(message)s') % {
'page_number': page_number,
'message': str(e)
})
```
| 10,344
|
35,795,663
|
**[What I want]** is to find the only one smallest positive real root of quartic function **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**
**[Existing Method]**
My equation is for collision prediction, the maximum degree is quartic function as **f(x)** = **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**
and **a,b,c,d,e** coef can be positive/negative/zero (**real float value**). So my function **f(x)** can be quartic, cubic, or quadratic depending on a, b, c ,d ,e input coefficient.
Currently, I use NumPy to find roots as below.
```py
import numpy
root_output = numpy.roots([a, b, c ,d ,e])
```
The "**root\_output**" from the NumPy module can be all possible real/complex roots depending on the input coefficient. So I have to look at "**root\_output**" one by one, and check which root is the smallest real positive value (root>0?)
**[The Problem]**
My program needs to execute **numpy.roots([a, b, c, d, e])** many times, and executing numpy.roots that often is too slow for my project. The (a, b, c, d, e) values change on every call to numpy.roots.
My attempt is to run the code on Raspberry Pi2. Below is an example of processing time.
* Running many many times of numpy.roots on PC: **1.2 seconds**
* Running many many times of numpy.roots on Raspberry Pi2: **17 seconds**
Could you please guide me on how to find the smallest positive real root in the **fastest way**? Using scipy.optimize, implementing some algorithm to speed up root finding, or any other advice from you would be great.
Thank you.
**[Solution]**
* **Quadratic function** We only need real positive roots (please be aware of division by zero):
```py
import math

def SolvQuadratic(a, b, c):
d = (b**2) - (4*a*c)
if d < 0:
return []
if d > 0:
square_root_d = math.sqrt(d)
t1 = (-b + square_root_d) / (2 * a)
t2 = (-b - square_root_d) / (2 * a)
if t1 > 0:
if t2 > 0:
if t1 < t2:
return [t1, t2]
return [t2, t1]
return [t1]
elif t2 > 0:
return [t2]
else:
return []
else:
t = -b / (2*a)
if t > 0:
return [t]
return []
```
* **Quartic Function** For the quartic function, you can use the pure python/numba version from **the answer below by @B.M.**. I also added a cython version based on @B.M.'s code. You can save the code below as a .pyx file and compile it to get about a 2x speedup over pure python (please be aware of rounding issues).
```py
import cmath
cdef extern from "complex.h":
double complex cexp(double complex)
cdef double complex J=cexp(2j*cmath.pi/3)
cdef double complex Jc=1/J
cdef Cardano(double a, double b, double c, double d):
cdef double z0
cdef double a2, b2
cdef double p ,q, D
cdef double complex r
cdef double complex u, v, w
cdef double w0, w1, w2
cdef double complex r1, r2, r3
z0=b/3/a
a2,b2 = a*a,b*b
p=-b2/3/a2 +c/a
q=(b/27*(2*b2/a2-9*c/a)+d)/a
D=-4*p*p*p-27*q*q
r=cmath.sqrt(-D/27+0j)
u=((-q-r)/2)**0.33333333333333333333333
v=((-q+r)/2)**0.33333333333333333333333
w=u*v
w0=abs(w+p/3)
w1=abs(w*J+p/3)
w2=abs(w*Jc+p/3)
if w0<w1:
if w2<w0 : v = v*Jc
elif w2<w1 : v = v*Jc
else: v = v*J
r1 = u+v-z0
r2 = u*J+v*Jc-z0
r3 = u*Jc+v*J-z0
return r1, r2, r3
cdef Roots_2(double a, double complex b, double complex c):
cdef double complex bp
cdef double complex delta
cdef double complex r1, r2
bp=b/2
delta=bp*bp-a*c
r1=(-bp-delta**.5)/a
r2=-r1-b/a
return r1, r2
def SolveQuartic(double a, double b, double c, double d, double e):
"Ferrarai's Method"
"resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals"
"First shift : x= z-b/4/a => P=z^4+pz^2+qz+r"
cdef double z0
cdef double a2, b2, c2, d2
cdef double p, q, r
cdef double A, B, C, D
cdef double complex y0, y1, y2
cdef double complex a0, b0
cdef double complex r0, r1, r2, r3
z0=b/4.0/a
a2,b2,c2,d2 = a*a,b*b,c*c,d*d
p = -3.0*b2/(8*a2)+c/a
q = b*b2/8.0/a/a2 - 1.0/2*b*c/a2 + d/a
r = -3.0/256*b2*b2/a2/a2 + c*b2/a2/a/16 - b*d/a2/4+e/a
"Second find y so P2=Ay^3+By^2+Cy+D=0"
A=8.0
B=-4*p
C=-8*r
D=4*r*p-q*q
y0,y1,y2=Cardano(A,B,C,D)
if abs(y1.imag)<abs(y0.imag): y0=y1
if abs(y2.imag)<abs(y0.imag): y0=y2
a0=(-p+2*y0)**.5
if a0==0 : b0=y0**2-r
else : b0=-q/2/a0
r0,r1=Roots_2(1,a0,y0+b0)
r2,r3=Roots_2(1,-a0,y0-b0)
return (r0-z0,r1-z0,r2-z0,r3-z0)
```
**[Problem of Ferrari's method]** We face a problem when the coefficients of the quartic equation are [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869]: the output from numpy.roots and the ferrari method are entirely different (numpy.roots gives the correct output).
```py
import numpy as np
import cmath
J=cmath.exp(2j*cmath.pi/3)
Jc=1/J
def ferrari(a,b,c,d,e):
"Ferrarai's Method"
"resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals"
"First shift : x= z-b/4/a => P=z^4+pz^2+qz+r"
z0=b/4/a
a2,b2,c2,d2 = a*a,b*b,c*c,d*d
p = -3*b2/(8*a2)+c/a
q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a
r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a
"Second find y so P2=Ay^3+By^2+Cy+D=0"
A=8
B=-4*p
C=-8*r
D=4*r*p-q*q
y0,y1,y2=Cardano(A,B,C,D)
if abs(y1.imag)<abs(y0.imag): y0=y1
if abs(y2.imag)<abs(y0.imag): y0=y2
a0=(-p+2*y0)**.5
if a0==0 : b0=y0**2-r
else : b0=-q/2/a0
r0,r1=Roots_2(1,a0,y0+b0)
r2,r3=Roots_2(1,-a0,y0-b0)
return (r0-z0,r1-z0,r2-z0,r3-z0)
#~ @jit(nopython=True)
def Cardano(a,b,c,d):
z0=b/3/a
a2,b2 = a*a,b*b
p=-b2/3/a2 +c/a
q=(b/27*(2*b2/a2-9*c/a)+d)/a
D=-4*p*p*p-27*q*q
r=cmath.sqrt(-D/27+0j)
u=((-q-r)/2)**0.33333333333333333333333
v=((-q+r)/2)**0.33333333333333333333333
w=u*v
w0=abs(w+p/3)
w1=abs(w*J+p/3)
w2=abs(w*Jc+p/3)
if w0<w1:
if w2<w0 : v*=Jc
elif w2<w1 : v*=Jc
else: v*=J
return u+v-z0, u*J+v*Jc-z0, u*Jc+v*J-z0
#~ @jit(nopython=True)
def Roots_2(a,b,c):
bp=b/2
delta=bp*bp-a*c
r1=(-bp-delta**.5)/a
r2=-r1-b/a
return r1,r2
coef = [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869]
print("Coefficient A, B, C, D, E", coef)
print("")
print("numpy roots: ", np.roots(coef))
print("")
print("ferrari python ", ferrari(*coef))
```
|
2016/03/04
|
[
"https://Stackoverflow.com/questions/35795663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6017946/"
] |
Another answer:
do it with analytic methods ([Ferrari](https://fr.wikipedia.org/wiki/M%C3%A9thode_de_Ferrari), [Cardan](https://fr.wikipedia.org/wiki/M%C3%A9thode_de_Cardan#Formules_de_Cardan)), and speed up the code with Just-in-Time compilation ([Numba](http://numba.pydata.org)):
Let's see the improvement first:
```
In [2]: P=poly1d([1,2,3,4],True)
In [3]: roots(P)
Out[3]: array([ 4., 3., 2., 1.])
In [4]: %timeit roots(P)
1000 loops, best of 3: 465 µs per loop
In [5]: ferrari(*P.coeffs)
Out[5]: ((1+0j), (2-0j), (3+0j), (4-0j))
In [5]: %timeit ferrari(*P.coeffs) #pure python without jit
10000 loops, best of 3: 116 µs per loop
In [6]: %timeit ferrari(*P.coeffs) # with numba.jit
100000 loops, best of 3: 13 µs per loop
```
Then the ugly code:
For order 4:
```
@jit(nopython=True)
def ferrari(a,b,c,d,e):
"resolution of P=ax^4+bx^3+cx^2+dx+e=0"
"CN all coeffs real."
"First shift : x= z-b/4/a => P=z^4+pz^2+qz+r"
z0=b/4/a
a2,b2,c2,d2 = a*a,b*b,c*c,d*d
p = -3*b2/(8*a2)+c/a
q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a
r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a
"Second find X so P2=AX^3+BX^2+C^X+D=0"
A=8
B=-4*p
C=-8*r
D=4*r*p-q*q
y0,y1,y2=cardan(A,B,C,D)
if abs(y1.imag)<abs(y0.imag): y0=y1
if abs(y2.imag)<abs(y0.imag): y0=y2
a0=(-p+2*y0.real)**.5
if a0==0 : b0=y0**2-r
else : b0=-q/2/a0
r0,r1=roots2(1,a0,y0+b0)
r2,r3=roots2(1,-a0,y0-b0)
return (r0-z0,r1-z0,r2-z0,r3-z0)
```
For order 3:
```
J=exp(2j*pi/3)
Jc=1/J
@jit(nopython=True)
def cardan(a,b,c,d):
u=empty(2,complex128)
z0=b/3/a
a2,b2 = a*a,b*b
p=-b2/3/a2 +c/a
q=(b/27*(2*b2/a2-9*c/a)+d)/a
D=-4*p*p*p-27*q*q
r=sqrt(-D/27+0j)
u=((-q-r)/2)**0.33333333333333333333333
v=((-q+r)/2)**0.33333333333333333333333
w=u*v
w0=abs(w+p/3)
w1=abs(w*J+p/3)
w2=abs(w*Jc+p/3)
if w0<w1:
if w2<w0 : v*=Jc
elif w2<w1 : v*=Jc
else: v*=J
return u+v-z0, u*J+v*Jc-z0,u*Jc+v*J-z0
```
For order 2:
```
@jit(nopython=True)
def roots2(a,b,c):
bp=b/2
delta=bp*bp-a*c
u1=(-bp-delta**.5)/a
u2=-u1-b/a
return u1,u2
```
Probably needs further testing, but it is efficient.
|
The numpy solution to do that without a loop is:
```
p=array([a,b,c,d,e])
r=roots(p)
r[(r.imag==0) & (r.real>=0) ].real.min()
```
`scipy.optimize` methods will be slower, unless you don't need precision:
```
In [586]: %timeit r=roots(p);r[(r.imag==0) & (r.real>=0) ].real.min()
1000 loops, best of 3: 334 µs per loop
In [587]: %timeit newton(poly1d(p),10,tol=1e-8)
1000 loops, best of 3: 555 µs per loop
In [588]: %timeit newton(poly1d(p),10,tol=1)
10000 loops, best of 3: 177 µs per loop
```
And then you still have to find the min...
**EDIT**
For a 2x factor, do yourself what roots does:
```
In [638]: b=zeros((4,4),float);b[1:,:-1]=eye(3)
In [639]: c=b.copy();c[0]=-(p/p[0])[1:];eig(c)[0]
Out[639]:
array([-7.40849430+0.j , 5.77969794+0.j ,
-0.18560182+3.48995646j, -0.18560182-3.48995646j])
In [640]: roots(p)
Out[640]:
array([-7.40849430+0.j , 5.77969794+0.j ,
-0.18560182+3.48995646j, -0.18560182-3.48995646j])
In [641]: %timeit roots(p)
1000 loops, best of 3: 365 µs per loop
In [642]: %timeit c=b.copy();c[0]=-(p/p[0])[1:];eig(c)
10000 loops, best of 3: 181 µs per loop
```
| 10,346
|
68,400,475
|
I am trying to display a 2d list as CSV like structure in a new tkinter window
here is my code
```
import tkinter as tk
import requests
import pandas as pd
import numpy as np
from tkinter import messagebox
from finta import TA
from math import exp
import xlsxwriter
from tkinter import *
from tkinter.ttk import *
import yfinance as yf
import csv
from operator import itemgetter
def Createtable(root,rows):
# code for creating table
newWindow = Toplevel(root)
newWindow.title("New Window")
# sets the geometry of toplevel
newWindow.geometry("600x600")
for i in range(len(rows)):
for j in range(3):
self.e = Entry(root, width=20, fg='blue',
font=('Arial',16,'bold'))
self.e.grid(row=i, column=j)
self.e.insert(END, rows[i][j])
Label(newWindow, text ="Results").pack()
```
This is my code for displaying the 2d list in a new window; here `rows` is the 2d list.
This is how I am calling the function
```
def clicked():
selected_stocks=[]
converted_tickers=[]
selected = box.curselection()
for idx in selected:
selected_stocks.append(box.get(idx))
for i in selected_stocks:
converted_tickers.append(name_to_ticker[i])
rows=compute(converted_tickers)
Createtable(app,rows)
btn = Button(app, text = 'Submit',command = clicked)
btn.pack(side = 'bottom')
```
`rows` works; I printed it out separately and confirmed.
When I execute the program this is the error I receive
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/tkinter/__init__.py", line 1883, in __call__
return self.func(*args)
File "pair.py", line 144, in clicked
Createtable(app,rows)
File "pair.py", line 31, in Createtable
self.e = Entry(root, width=20, fg='blue',
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/tkinter/ttk.py", line 669, in __init__
Widget.__init__(self, master, widget or "ttk::entry", kw)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/tkinter/ttk.py", line 557, in __init__
tkinter.Widget.__init__(self, master, widgetname, kw=kw)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/tkinter/__init__.py", line 2567, in __init__
self.tk.call(
_tkinter.TclError: unknown option "-fg"
```
I looked up the error and it says that this happens when you are using both the old and new versions of tkinter. I am not able to understand this, as I am not using `tk` and `ttk` separately anywhere in the `Createtable` function.
How can I solve this issue to display the 2d list on the new window?
For a reference I am attaching a screenshot here
[](https://i.stack.imgur.com/QhoAA.png)
UPDATE: Updated code as per comment
1) I commented out `from tkinter.ttk import *`
2) I also changed the `Createtable` function into a class like this
```
class Table:
def __init__(self,root,rows):
# code for creating table
newWindow = Toplevel(root)
newWindow.title("New Window")
# sets the geometry of toplevel
newWindow.geometry("600x600")
for i in range(len(rows)):
for j in range(3):
self.e=Entry(root, width=20, fg='blue',
font=('Arial',16,'bold'))
self.e.grid(row=i, column=j)
self.e.insert(END, rows[i][j])
Label(newWindow, text ="Results").pack()
```
Now, two errors are happening
1. The 2d list is showing on top of the main window instead of the new window.
2. After removing `ttk`, the words on top of the button have become white for some reason. You can see in the picture attached below (compare with the old picture, you will see "Submit" has become white).
How do I solve these?
Attaching picture for reference
[](https://i.stack.imgur.com/Jpjvp.png)
UPDATE 2: minimum reproducible code
```
import tkinter as tk
import requests
import pandas as pd
import numpy as np
from tkinter import messagebox
from finta import TA
from math import exp
import xlsxwriter
from tkinter import *
import tkinter.ttk as ttk
import yfinance as yf
import csv
from operator import itemgetter
class Table:
def __init__(self,root,rows):
# code for creating table
newWindow = Toplevel(root)
newWindow.title("New Window")
# sets the geometry of toplevel
newWindow.geometry("600x600")
for i in range(len(rows)):
for j in range(3):
self.e=Entry(newWindow)
self.e.grid(row=i, column=j)
self.e.insert(END, rows[i][j])
Label(newWindow, text ="Results").pack()
#######################LIST TO DISPLAY##############
final_sorted_list=['Allahabad Bank', 'Andhra Bank', 'Axis Bank Limited','Bank of Baroda','Bank of India Limited']
########TKINTER##############################
app = Tk()
app.title('Test')
app.geometry("500x800")
def clicked():
rows=[['Stock 1', 'Stock 2', 'Value'], ['AXISBANK.NS', 'MAHABANK.NS', 81.10000000000001], ['AXISBANK.NS', 'BANKINDIA.NS', 82.3], ['BANKBARODA.NS', 'MAHABANK.NS', 84.8], ['MAHABANK.NS', 'CANBK.NS', 85.5], ['BANKBARODA.NS', 'BANKINDIA.NS', 90.4], ['BANKINDIA.NS', 'CANBK.NS', 90.9], ['AXISBANK.NS', 'CANBK.NS', 91.5], ['AXISBANK.NS', 'BANKBARODA.NS', 93.30000000000001], ['BANKINDIA.NS', 'MAHABANK.NS', 95.8], ['BANKBARODA.NS', 'CANBK.NS', 97.6]]
Table(app,rows)
print("Finished")
box = Listbox(app, selectmode=MULTIPLE, height=4)
for val in final_sorted_list:
box.insert(tk.END, val)
box.pack(padx=5,pady=8,side=LEFT,fill=BOTH,expand=True)
scrollbar = Scrollbar(app)
scrollbar.pack(side = RIGHT, fill = BOTH)
box.config(yscrollcommand = scrollbar.set)
# scrollbar.config(command = box.yview)
btn = ttk.Button(app, text = 'Submit',command = clicked)
btn.pack(side = 'bottom')
exit_button = ttk.Button(app, text='Close', command=app.destroy)
exit_button.pack(side='top')
clear_button = ttk.Button(app, text='Clear Selection',command=lambda: box.selection_clear(0, 'end'))
clear_button.pack(side='top')
app.mainloop()
```
If you run this, you get a frontend like in the picture above. You don't need to select anything as a sample result already has been hard coded in `rows`. The data in `rows` needs to be displayed in another window as a table. To recreate the error (new window not popping up) - you can just click on the "Submit" button, you don't need to select anything from the list.
|
2021/07/15
|
[
"https://Stackoverflow.com/questions/68400475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15530206/"
] |
The problem in update 2 is that you are using pack and grid in the same window. When I click submit, the new window comes up (I couldn't reproduce that problem) but the word "Results" is missing. This is because you are trying to use pack and grid on the same window. To fix this, show "Results" using grid like this:
```
class Table:
def __init__(self,root,rows):
# code for creating table
newWindow = Toplevel(root)
newWindow.title("New Window")
# sets the geometry of toplevel
newWindow.geometry("600x600")
Label(newWindow, text ="Results").grid(row = 0, column = 0, columnspan = 3)
for i in range(len(rows)):
for j in range(3):
self.e=Entry(newWindow)
self.e.grid(row=i+1, column=j)
self.e.insert(END, rows[i][j])
```
Because the "results" label is now in column 1 I've had to change `self.e.grid` to use `row=i+1` so it displays properly. I also set the `columnspan` of the "results" label to 3 so it will be centered.
|
I've organized your imports. This should remove naming conflicts.
Note: You will have to declare `tkinter` objects with `tk.` and `ttk` objects with `ttk.`, which should also help to remove naming conflicts; see the sketch after the import block.
```py
import tkinter as tk
from tkinter import ttk
from tkinter import messagebox
import requests
import csv
import pandas as pd
import numpy as np
import xlsxwriter
import yfinance as yf
from finta import TA
from math import exp
from operator import itemgetter
```
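With those imports in place, every widget creation states its toolkit explicitly; a small sketch:
```py
app = tk.Tk()

box = tk.Listbox(app, selectmode=tk.MULTIPLE, height=4)  # classic tk widget
box.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)

btn = ttk.Button(app, text='Submit')                     # themed ttk widget
btn.pack(side='bottom')

app.mainloop()
```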
| 10,351
|
61,529,832
|
I have a python script called script.py that has two optional arguments (-a, -b) and has a single positional argument that either accepts a file or uses stdin. I want to make it so that a flag/file cannot be used multiple times. Thus, something like this shouldn't be allowed
```
./script.py -a 5 -a 7 test.txt
./script.py -a 5 test1.txt test2.txt
```
Is there a way I can do this? I've been combing through argparse (<https://docs.python.org/3/library/argparse.html>) but can't seem to find anything that specifically meets my needs. I thought maybe I could use nargs=1 in the add\_argument(..) function and potentially check the length of the list generated. Any help would be much appreciated.
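A minimal sketch of one way to do that duplicate check (the `Once` action here is illustrative, not something built into argparse):
```
import argparse
import sys

class Once(argparse.Action):
    """Reject an option that is supplied more than once."""
    def __call__(self, parser, namespace, values, option_string=None):
        if getattr(namespace, self.dest, None) is not None:
            parser.error('%s may only be given once' % option_string)
        setattr(namespace, self.dest, values)

parser = argparse.ArgumentParser()
parser.add_argument('-a', action=Once, default=None)
parser.add_argument('-b', action=Once, default=None)
# a single positional argument: a file, or stdin when omitted;
# a second file (e.g. "test1.txt test2.txt") is rejected automatically
parser.add_argument('file', nargs='?', type=argparse.FileType('r'),
                    default=sys.stdin)
args = parser.parse_args()
```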
|
2020/04/30
|
[
"https://Stackoverflow.com/questions/61529832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12912748/"
] |
@ajrwhite adding attachments had one trick: you need to use `Alias` from mactypes to convert a string/path object to a mactypes path. I'm not sure why, but it works.
Here's a working example which creates messages with recipients and can add attachments:
```
from appscript import app, k
from mactypes import Alias
from pathlib import Path
def create_message_with_attachment():
subject = 'This is an important email!'
body = 'Just kidding its not.'
to_recip = ['myboss@mycompany.com', 'theguyih8@mycompany.com']
msg = Message(subject=subject, body=body, to_recip=to_recip)
# attach file
p = Path('path/to/myfile.pdf')
msg.add_attachment(p)
msg.show()
class Outlook(object):
def __init__(self):
self.client = app('Microsoft Outlook')
class Message(object):
def __init__(self, parent=None, subject='', body='', to_recip=[], cc_recip=[], show_=True):
if parent is None: parent = Outlook()
client = parent.client
self.msg = client.make(
new=k.outgoing_message,
with_properties={k.subject: subject, k.content: body})
self.add_recipients(emails=to_recip, type_='to')
self.add_recipients(emails=cc_recip, type_='cc')
if show_: self.show()
def show(self):
self.msg.open()
self.msg.activate()
def add_attachment(self, p):
# p is a Path() obj, could also pass string
p = Alias(str(p)) # convert string/path obj to POSIX/mactypes path
attach = self.msg.make(new=k.attachment, with_properties={k.file: p})
def add_recipients(self, emails, type_='to'):
if not isinstance(emails, list): emails = [emails]
for email in emails:
self.add_recipient(email=email, type_=type_)
def add_recipient(self, email, type_='to'):
msg = self.msg
if type_ == 'to':
recipient = k.to_recipient
elif type_ == 'cc':
recipient = k.cc_recipient
msg.make(new=recipient, with_properties={k.email_address: {k.address: email}})
```
|
Figured it out using [py-appscript](http://appscript.sourceforge.net/py-appscript/doc_3x/appscript-manual/03_quicktutorial.html)
```
pip install appscript
```
```
from appscript import app, k
outlook = app('Microsoft Outlook')
msg = outlook.make(
new=k.outgoing_message,
with_properties={
k.subject: 'Test Email',
k.plain_text_content: 'Test email body'})
msg.make(
new=k.recipient,
with_properties={
k.email_address: {
k.name: 'Fake Person',
k.address: 'fakeperson@gmail.com'}})
msg.open()
msg.activate()
```
Also very useful to download py-appscript's ASDictionary and ASTranslate tools to convert examples of AppleScript to the python version.
| 10,352
|
22,699,040
|
I'm trying to pull the parsable-cite info from this [webpage](http://deepbills.cato.org/api/1/bill?congress=113&billnumber=499&billtype=s&billversion=is) using python. For example, for the page listed I would pull pl/111/148 and pl/111/152. My current regex is listed below, but it seems to return everything after parsable cite. It's probably something simple, but I'm relatively new to regexes. Thanks in advance.
```
re.findall(r'^parsable-cite=.*>$',page)
```
|
2014/03/27
|
[
"https://Stackoverflow.com/questions/22699040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1153018/"
] |
I highly recommend using this regex, which will capture what you want:
```
re.findall(r'parsable-cite=\\\"(.*?)\\\"\>',page)
```
explanation:
```
parsable-cite= matches the characters parsable-cite= literally (case sensitive)
\\ matches the character \ literally
\" matches the character " literally
1st Capturing group (.*?)
.*? matches any character (except newline)
Quantifier: Between zero and unlimited times, as few times as possible,
expanding as needed
\\ matches the character \ literally
\" matches the character " literally
\> matches the character > literally
```
Using **?** is the key ;)
Hope this helps.
|
Make your regex lazy:
```
re.findall(r'^parsable-cite=.*?>$',page)
^
```
Or use a negated class (preferable):
```
re.findall(r'^parsable-cite=[^>]*>$',page)
```
`.*` is greedy by default and will try to match as much as possible before concluding a match.
[regex101 demo](http://regex101.com/r/pZ7fW1)
If you want to get the parts you need only, you can use capture groups:
```
re.findall(r'^parsable-cite=([^>]*)>$',page)
```
[regex101 demo](http://regex101.com/r/nV6cW6)
---
Though, from the layout of your webpage, it doesn't seem that you need the anchors (`^` and `$`) (unless the newlines were somehow removed on the site...)
| 10,355
|
45,954
|
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information.
I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working.
I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org.
Is there a suitable module available somewhere?
|
2008/09/05
|
[
"https://Stackoverflow.com/questions/45954",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4761/"
] |
You might be interested in the DWARF library from [pydevtools](http://code.google.com/p/pydevtools/):
```
>>> from bintools.dwarf import DWARF
>>> dwarf = DWARF('test/test')
>>> dwarf.get_loc_by_addr(0x8048475)
('/home/emilmont/Workspace/dbg/test/main.c', 36, 0)
```
|
I don't know of any, but if all else fails you could use [ctypes](http://docs.python.org/lib/module-ctypes.html) to directly use libdwarf, libelf or libbfd.
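A minimal sketch of just the loading step with ctypes (the actual function prototypes would still have to be declared from the library's C headers, which is not shown here):
```
import ctypes
import ctypes.util

# Locate and load the shared library; the name can differ per distribution
path = ctypes.util.find_library('dwarf')  # or 'elf', or 'bfd'
libdwarf = ctypes.CDLL(path)

# From here you would set .argtypes/.restype on each function you need,
# mirroring the prototypes in libdwarf.h, before calling it.
```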
| 10,361
|
45,107,439
|
This is more a question out of interest.
I observed the following behaviour and would like to know why / how this happens (tried in python 2.7.3 & python 3.4.1)
```
ph = {1:0, 2:0}
d1 = {'a':ph, 'b':ph}
d2 = {'a':{1:0,2:0}, 'b':{1:0,2:0}}
>>> d1
{'a':{1:0,2:0}, 'b':{1:0,2:0}}
```
so d1 and d2 are the same.
However when using del or replacing values this happens
```
>>> del d1['a'][1]
>>> d1
{'a':{2:0}, 'b':{2:0}}
>>> del d2['a'][1]
>>> d2
{'a':{2:0}, 'b':{1:0,2:0}}
```
The behaviour for d2 is as expected, but the nested dictionaries in d1 (ph) seem to work differently.
Is it because all variables in python are really references?
Is there a workaround if I can't specify the placeholder dictionary for every instance?
Thank you!
|
2017/07/14
|
[
"https://Stackoverflow.com/questions/45107439",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8308726/"
] |
Try using [pythontutor](http://www.pythontutor.com). If you put your code into it you will see this:
[](https://i.stack.imgur.com/z3sr1.png)
You can see that as you suspected the dictionaries in `d1` are references to the same object, `ph`.
This means that when you `del d1['a'][1]` it really deletes a key in `ph` so you get this:
[](https://i.stack.imgur.com/iRL3u.png)
To work around this you need to initialise `d1` with *copies* of the `ph` dictionary, as discussed in other answers here...
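For example, a minimal sketch of that initialisation, giving each key its own copy:
```
ph = {1: 0, 2: 0}
d1 = {k: dict(ph) for k in ('a', 'b')}  # dict(ph) makes a fresh shallow copy

del d1['a'][1]
print(d1)  # {'a': {2: 0}, 'b': {1: 0, 2: 0}}
```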
|
Both keys in `d1` point to the same reference, so when you delete from `d1` it affects `ph` whether you reach it by `d1['a']` or `d1['b']` - you're getting to the same place.
`d2` on the other hand instantiates two separate objects, so you're only affecting one of those in your example.
Your workaround would be [`copy.deepcopy()`](https://docs.python.org/2/library/copy.html) for your second reference in `d1`.
```
import copy
d1 = {'a': ph, 'b': copy.deepcopy(ph)}
```
(You don't really need deepcopy in this example - `ph.copy()` would be sufficient - but if `ph` were any more complex you would - see the above linked docs.)
| 10,370
|
69,687,722
|
How can I get **1 minute** gold price data **for a specific time and date interval** (such as a 1 hour interval on 18th October: 2021-10-18 09:30:00 to 2021-10-18 10:30:00) from yfinance or any other source in python?
my code is:
```
gold = yf.download(tickers="GC=F", period="5d", interval="1m")
```
It seems it's only possible to set a **period**, while I want to set a **specific date and time interval**.
thanks
|
2021/10/23
|
[
"https://Stackoverflow.com/questions/69687722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4688178/"
] |
Your call to `yfinance` returns a Pandas `DataFrame` with `datetime` as the index. We can use this to filter the dataframe to only entries between our `start` and `end` times.
```
import yfinance as yf
from datetime import datetime
gold = yf.download(tickers="GC=F", period="5d", interval="1m")
start = datetime(2021, 10, 18, 9, 30, 0)
end = datetime(2021, 10, 18, 10, 30, 0)
filtered = gold[start: end]
```
Outputs
```
Open High ... Adj Close Volume
Datetime ...
2021-10-18 09:30:00-04:00 1770.099976 1770.099976 ... 1767.599976 1035
2021-10-18 09:31:00-04:00 1767.900024 1769.099976 ... 1768.500000 467
2021-10-18 09:32:00-04:00 1768.599976 1769.300049 ... 1769.199951 428
2021-10-18 09:33:00-04:00 1769.300049 1770.199951 ... 1769.099976 750
2021-10-18 09:34:00-04:00 1769.199951 1769.300049 ... 1767.800049 549
... ... ... ... ... ...
2021-10-18 10:26:00-04:00 1770.300049 1770.500000 ... 1769.900024 147
2021-10-18 10:27:00-04:00 1769.800049 1769.800049 ... 1769.400024 349
2021-10-18 10:28:00-04:00 1769.400024 1770.400024 ... 1770.199951 258
2021-10-18 10:29:00-04:00 1770.300049 1771.000000 ... 1770.099976 382
2021-10-18 10:30:00-04:00 1770.300049 1771.000000 ... 1770.900024 180
[61 rows x 6 columns]
```
|
Edit 2021-10-25
===============
To clarify my answer, the question was:
>
> i wanna set specific date and time intervals. thanks
>
>
>
All you need is in the code documentation.
So `start` and `end` could be date or **\_datetime**
```
start: str
Download start date string (YYYY-MM-DD) or _datetime.
Default is 1900-01-01
```
Example code:
>
Note: something is wrong with timezones; I've tried to pass the correct timezone with start and end, but the lib didn't handle it correctly, so I ended up converting it manually.
>
>
>
```py
import pandas as pd
import yfinance as yf
import pendulum
pd.options.display.max_rows=10 # To decrease printouts
start = pendulum.parse('2021-10-18 09:30').add(hours=7) # My tz is UTC+03:00, original TZ UTC-04:00. So adds to my local time 7 hours
end = pendulum.parse('2021-10-18 10:30').add(hours=7) # Same
print(start)
print(yf.download(tickers="GC=F", interval="1m", start=start, end=end))
```
Result and you can pass whatever datetime ranges you want:
```
2021-10-18T16:30:00+00:00
[*********************100%***********************] 1 of 1 completed
Open High Low Close \
Datetime
2021-10-18 09:30:00-04:00 1770.099976 1770.099976 1767.400024 1767.800049
2021-10-18 09:31:00-04:00 1767.900024 1769.099976 1767.800049 1768.500000
2021-10-18 09:32:00-04:00 1768.599976 1769.300049 1768.199951 1769.199951
2021-10-18 09:33:00-04:00 1769.300049 1770.199951 1768.900024 1769.099976
2021-10-18 09:34:00-04:00 1769.199951 1769.300049 1767.599976 1767.800049
... ... ... ... ...
2021-10-18 10:25:00-04:00 1769.900024 1770.400024 1769.800049 1770.300049
2021-10-18 10:26:00-04:00 1770.300049 1770.500000 1769.900024 1769.900024
2021-10-18 10:27:00-04:00 1769.800049 1769.800049 1769.099976 1769.400024
2021-10-18 10:28:00-04:00 1769.400024 1770.400024 1769.400024 1770.199951
2021-10-18 10:29:00-04:00 1770.300049 1771.000000 1769.900024 1770.099976
Adj Close Volume
Datetime
2021-10-18 09:30:00-04:00 1767.800049 0
2021-10-18 09:31:00-04:00 1768.500000 459
2021-10-18 09:32:00-04:00 1769.199951 428
2021-10-18 09:33:00-04:00 1769.099976 750
2021-10-18 09:34:00-04:00 1767.800049 549
... ... ...
2021-10-18 10:25:00-04:00 1770.300049 134
2021-10-18 10:26:00-04:00 1769.900024 147
2021-10-18 10:27:00-04:00 1769.400024 349
2021-10-18 10:28:00-04:00 1770.199951 258
2021-10-18 10:29:00-04:00 1770.099976 382
[60 rows x 6 columns]
```
PS: with `start` and `end` you are not limited to the last 7 days, but you are still limited to the last 30 days:
```
1 Failed download:
- GC=F: 1m data not available for startTime=1631980800 and endTime=1631998800. The requested range must be within the last 30 days.
```
Original
========
This lib lacks documentation. But this is python, and as a result it is somewhat self-documented.
Read the definition of the download function here:
<https://github.com/ranaroussi/yfinance/blob/6654a41a8d5c0c9e869a9b9acb3e143786c765c7/yfinance/multi.py#L32>
PS: this function has `start=` and `end=` params that I hope will help you.
| 10,371
|
59,819,633
|
I want to log in to instagram using selenium with python. I tried to find elements by name, tag and css selector, but selenium doesn't find any element (I think). I also tried to switch to the iframe, but nothing worked.
This is the error:
>
> Traceback (most recent call last):
> File "C:/Users/anton/Desktop/Instabot/chrome instagram bot/main.py", line 8, in
> my\_driver.sign\_in(username=USERNAME, password=PASSWORD)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\chrome\_driver\_cli.py", line 39, in sign\_in
> username\_input = self.driver.find\_element\_by\_name("username")
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 496, in find\_element\_by\_name
> return self.find\_element(by=By.NAME, value=name)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find\_element
> 'value': value})['value']
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
> self.error\_handler.check\_response(response)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check\_response
> raise exception\_class(message, screen, stacktrace)
> selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[name="username"]"}
> (Session info: chrome=79.0.3945.130)
>
>
>
This is the code
```
from selenium import webdriver
from selenium.common.exceptions import SessionNotCreatedException
from selenium.webdriver.common.keys import Keys
supported_versions_path = [
"..\\chrome driver\\CHROME 80\\chromedriver.exe",
"..\\chrome driver\\CHROME 79\\chromedriver.exe",
"..\\chrome driver\\CHROME 78\\chromedriver.exe"
]
instagram_link = "https://www.instagram.com/accounts/login/?source=auth_switcher"
class ChromeDriver:
def __init__(self):
self.driver = self.__startup()
def __startup(self):
self.driver = None
for current_checking_version in supported_versions_path:
if self.driver is None:
try:
self.driver = webdriver.Chrome(current_checking_version)
pass
except SessionNotCreatedException:
self.driver = None
return self.driver
def sign_in(self, username, password):
self.driver.get(instagram_link)
frame = self.driver.find_element_by_tag_name("iframe")
self.driver.switch_to.frame(frame)
username_input = self.driver.find_element_by_name("username")
# password_input = self.driver.find_element_by_name('password')
# username_input.send_keys(username)
# password_input.send_keys(password)
# password_input.send_keys(Keys.ENTER)
def get_page(self, url):
self.driver.get(url)
def quit(self):
self.driver.quit()
```
Can you help me?
|
2020/01/20
|
[
"https://Stackoverflow.com/questions/59819633",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12507302/"
] |
You are trying to render an Object directly, and this is not possible.
```
function Main(){
// create an array that holds objects
const people = [
{
firstName: 'Fathi',
lastName: 'Noor',
age: 27,
colors: ['red', 'blue'],
bodyAttributes: {
weight: 64,
height: 171
}
},
{
firstName: 'Hanafi',
lastName: 'Noor',
age: 26,
colors: ['white', 'black'],
bodyAttributes: {
weight: 62,
height: 172
}
}
]
const filterdPeople = people.filter(person => person.age == 27)
return(
<main>
<div>
{filterdPeople.map(person => {
return (
<div key={person.lastName}>
<p>First name: {person.firstName}</p>
<p>Last name: {person.lastName}</p>
<p>Age: {person.age}</p>
<p>Colors: {person.colors.map(color => (<span>{`${color}, `}</span>))}</p>
<p>
<span>Body Attributes: </span>
<span>Weight: {person.bodyAttributes.weight}</span>
<span>Height: {person.bodyAttributes.height}</span>
</p>
</div>
)
})}
</div>
</main>
)
}
```
|
You have to `map` the array. See this [doc](https://reactjs.org/docs/lists-and-keys.html).
For each person, you decide how to display each field, here I used `<p>` tags, you can display them in a table, or in a custom card.. Your choice!
Remember that inside `{}` brackets goes Javascript expressions while in `()` goes JSX, very similar to `HTML` tags as you can see!
Read the documents, follow a tutorial, it'll help a lot!
```
{people.filter(person => person.age == 27).map(person => { return(
<div>
<p>First name: {person.firstName}</p>
<p>Last name: {person.lastName}</p>
<p>Age: {person.age}</p>
<p>Colors: {person.colors.map(color => (<span>{color+" "}</span>))}</p>
<p>
<span>Body Attributes: </span>
<span>Weight: {person.bodyAttributes.weight}</span>
<span>Height: {person.bodyAttributes.height}</span>
</p>
</div>
)})}
```
| 10,372
|
50,091,373
|
I am new to using Pandas on Windows and I'm not sure what I am doing wrong here.
My data is located at 'C:\Users\me\data\lending\_club\loan.csv'
```
path = 'C:\\Users\\me\\data\\lending_club\\loan.csv'
pd.read_csv(path)
```
And I get this error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-107-b5792b17a3c3> in <module>()
1 path = 'C:\\Users\\me\\data\\lending_club\\loan.csv'
----> 2 pd.read_csv(path)
C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision)
707 skip_blank_lines=skip_blank_lines)
708
--> 709 return _read(filepath_or_buffer, kwds)
710
711 parser_f.__name__ = name
C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds)
447
448 # Create the parser.
--> 449 parser = TextFileReader(filepath_or_buffer, **kwds)
450
451 if chunksize or iterator:
C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds)
816 self.options['has_index_names'] = kwds['has_index_names']
817
--> 818 self._make_engine(self.engine)
819
820 def close(self):
C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine)
1047 def _make_engine(self, engine='c'):
1048 if engine == 'c':
-> 1049 self._engine = CParserWrapper(self.f, **self.options)
1050 else:
1051 if engine == 'python':
C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds)
1693 kwds['allow_leading_cols'] = self.index_col is not False
1694
-> 1695 self._reader = parsers.TextReader(src, **kwds)
1696
1697 # XXX
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source()
FileNotFoundError: File b'C:\\Users\\me\\data\\lending_club\\loan.csv' does not exist
```
**EDIT:**
I re-installed Anaconda and the error went away. Not sure exactly what was going on but could potentially have been related to the initial install being global vs. user specific in my second install. Thanks for the help everybody!
|
2018/04/29
|
[
"https://Stackoverflow.com/questions/50091373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3971910/"
] |
Just use forward slash(`'/'`) instead of backslash(`'\'`)
```
path = 'C:/Users/me/data/lending_club/loan.csv'
```
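A raw string also avoids backslash-escaping issues if you prefer to keep the Windows-style path (note that a raw string cannot end with a single backslash):
```
path = r'C:\Users\me\data\lending_club\loan.csv'
```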
|
Python only has access to files in the current working folder by default.
If you want to access files from another folder, try this:
```
import sys
sys.path.insert(0, 'C:/Users/myFolder')
```
Those lines allow your script to access another folder. Be careful, you should use slashes /, not backslashes \
| 10,374
|
51,947,506
|
I have been trying to install a module for python-3.6 through pip. I've read these posts from stackoverflow and from the python website, which seemed promising, but they didn't work for me.
[Install a module using pip for specific python version](https://stackoverflow.com/questions/10919569/install-a-module-using-pip-for-specific-python-version)
[python website](https://docs.python.org/3.4/installing/index.html)
I've added the python3.6 main folder, Scripts and Lib to PATH and I've tried these commands. But somehow they refer to the anaconda installation.
```
C:\Program Files (x86)\Python36-32\Scripts> pip3 install xlrd
C:\Program Files (x86)\Python36-32\Scripts> pip install xlrd
C:\Program Files (x86)\Python36-32\Scripts> pip3.6 install xlrd
C:\Program Files (x86)\Python36-32\Scripts> py -3.6 -m pip install xlrd
C:\Program Files (x86)\Python36-32\Scripts> py -3 -m pip install xlrd
```
but they give same answer.
```
Requirement already satisfied: xlrd in c:\programdata\anaconda3\lib\site-packages (1.1.0)
```
|
2018/08/21
|
[
"https://Stackoverflow.com/questions/51947506",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10088290/"
] |
To install a package for a specific python installation, you need a package installer shipped with that installation. In your case, `pip` belongs to the anaconda installation; use `pip.exe` or `easy_install.exe` from the python3.6 installation's `Scripts` directory instead.
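The most reliable variant is to invoke pip through the specific interpreter, so there is no doubt which installation receives the package, e.g. with the path from the question:
```
"C:\Program Files (x86)\Python36-32\python.exe" -m pip install xlrd
```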
|
First, uninstall all the versions you have!
Then go to <https://www.python.org/downloads/>.
Select the required version's MSI file and run it as administrator!
| 10,375
|
42,776,294
|
I need to extract a list of IP addresses and port numbers, as well as other information, from the following html table. I am currently using python 2.7 with lxml, but have no idea how to find the proper path to these elements.
Here is the address of the table:
[link to table](https://hidester.com/proxylist/)
|
2017/03/14
|
[
"https://Stackoverflow.com/questions/42776294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1624681/"
] |
Looking at the examples at [github](https://github.com/kivy/kivy/search?utf8=%E2%9C%93&q=background_color "github"), it seems the values aren't 0-255 RGB like you might expect, but 0.0-1.0.
```
bubble.background_color = (1, 0, 0, .5) #50% translucent red
background_color: .8, .8, 0, 1
```
etc.
You'll probably need something like
```
background_color: .2, .1, .73, 1
```
|
Delete `background_normal: ' '` and just use `background_color: (R, G, B, A)`. Pick a color in this [tool](https://developer.mozilla.org/es/docs/Web/CSS/CSS_Colors/Herramienta_para_seleccionar_color) and divide R, G and B by 100; A is always 1 (transparency). For example, if you choose (255, 79, 25, 1), write (2.55, 0.79, 0.25, 1) instead.
| 10,377
|
21,821,045
|
I want to get a buffer from a numpy array in Python 3.
I have found the following code:
```
$ python3
Python 3.2.3 (default, Sep 25 2013, 18:25:56)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> a = numpy.arange(10)
>>> numpy.getbuffer(a)
```
However it produces the error on the last step:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'getbuffer'
```
What am I doing wrong?
The code works fine for Python 2.
The numpy version I'm using is 1.6.1.
|
2014/02/17
|
[
"https://Stackoverflow.com/questions/21821045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/856504/"
] |
According to [Developer notes on the transition to Python 3](https://github.com/numpy/numpy/blob/master/doc/Py3K.rst.txt#pybuffer-object):
>
> PyBuffer (object)
>
>
> Since there is a native buffer object in Py3, the `memoryview`, the
> `newbuffer` and **`getbuffer` functions are removed from multiarray in Py3**:
> their functionality is taken over by the new memoryview object.
>
>
>
```
>>> import numpy
>>> a = numpy.arange(10)
>>> memoryview(a)
<memory at 0xb60ae094>
>>> m = _
>>> m[0] = 9
>>> a
array([9, 1, 2, 3, 4, 5, 6, 7, 8, 9])
```
|
Numpy's [`arr.tobytes()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tobytes.html) seems to be significantly faster than [`bytes(memoryview(arr))`](https://docs.python.org/dev/library/stdtypes.html#memoryview) in returning a `bytes` object.
So, you may want to have a look at `tobytes()` as well.
*Profiling* for Windows 7 on Intel i7 CPU, CPython v3.5.0, numpy v1.10.1.
(**Edit** Note: same result order on Ubuntu 16.04, Intel i7 CPU, CPython v3.6.5, numpy v1.14.5.)
```
setup = '''import numpy as np; x = np.random.random(n).reshape(n//10, -1)'''
```
results
```
globals: {'n': 100}, tested 1e+06 times
time (s) speedup methods
0 0.163005 6.03x x.tobytes()
1 0.491887 2.00x x.data.tobytes()
2 0.598286 1.64x memoryview(x).tobytes()
3 0.964653 1.02x bytes(x.data)
4 0.982743 bytes(memoryview(x))
globals: {'n': 1000}, tested 1e+06 times
time (s) speedup methods
0 0.378260 3.21x x.tobytes()
1 0.708204 1.71x x.data.tobytes()
2 0.827941 1.47x memoryview(x).tobytes()
3 1.189048 1.02x bytes(x.data)
4 1.213423 bytes(memoryview(x))
globals: {'n': 10000}, tested 1e+06 times
time (s) speedup methods
0 3.393949 1.34x x.tobytes()
1 3.739483 1.22x x.data.tobytes()
2 4.033783 1.13x memoryview(x).tobytes()
3 4.469730 1.02x bytes(x.data)
4 4.543620 bytes(memoryview(x))
```
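A rough sketch of how such a comparison can be reproduced with `timeit` (the array size and repeat count here are arbitrary choices):
```
import timeit

import numpy as np

n = 1000
x = np.random.random(n).reshape(n // 10, -1)

# time each conversion method with shared globals
for stmt in ('x.tobytes()', 'bytes(memoryview(x))'):
    t = timeit.timeit(stmt, globals=globals(), number=100000)
    print('%-22s %.3f s' % (stmt, t))
```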
| 10,378
|
10,830,820
|
I need to back up various file types to GDrive (not just those convertible to GDocs formats) from a Linux server.
What would be the simplest, most elegant way to do that with a python script? Would any of the solutions pertaining to GDocs be applicable?
|
2012/05/31
|
[
"https://Stackoverflow.com/questions/10830820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/303295/"
] |
You can use the Documents List API to write a script that writes to Drive:
<https://developers.google.com/google-apps/documents-list/>
Both the Documents List API and the Drive API interact with the same resources (i.e. same documents and files).
This sample in the Python client library shows how to upload an unconverted file to Drive:
<http://code.google.com/p/gdata-python-client/source/browse/samples/docs/docs_v3_example.py#180>
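(The Documents List API has since been retired; for reference, here is a hedged sketch of the same unconverted upload against the newer Drive v3 API, where `service` is assumed to be an already-authorized `googleapiclient` Drive client:)
```
# Upload a file as-is (no conversion) with the Drive v3 API.
# `service` must be built and authorized beforehand (see the Drive docs).
from googleapiclient.http import MediaFileUpload

media = MediaFileUpload('backup.tar.gz', resumable=True)
created = service.files().create(
    body={'name': 'backup.tar.gz'},
    media_body=media,
    fields='id',
).execute()
print(created['id'])
```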
|
The current documentation for saving a file to google drive using python can be found here:
<https://developers.google.com/drive/v3/web/manage-uploads>
However, the way that the google drive api handles document storage and retrieval does not follow the same architecture as POSIX file systems. As a result, if you wish to preserve the hierarchical architecture of the nested files on your linux file system, you will need to write a lot of custom code so that the parent directories are preserved on google drive.
On top of that, google makes it difficult to gain write access to a normal drive account. Your permission scope must include the following link: <https://www.googleapis.com/auth/drive> and to obtain a token to access a user's normal account, that user must first [join a group](https://groups.google.com/forum/#!forum/risky-access-by-unreviewed-apps) to provide access to non-reviewed apps. And any oauth token that is created has a limited shelf life.
However, if you obtain an access token, the following script should allow you to save any file on your local machine to the same (relative) path on google drive.
```
def migrate(file_path, access_token, drive_space='drive'):
'''
a method to save a posix file architecture to google drive
NOTE: to write to a google drive account using a non-approved app,
the oauth2 grantee account must also join this google group
https://groups.google.com/forum/#!forum/risky-access-by-unreviewed-apps
:param file_path: string with path to local file
:param access_token: string with oauth2 access token grant to write to google drive
:param drive_space: string with name of space to write to (drive, appDataFolder, photos)
:return: string with id of file on google drive
'''
# construct drive client
import httplib2
from googleapiclient import discovery
from oauth2client.client import AccessTokenCredentials
google_credentials = AccessTokenCredentials(access_token, 'my-user-agent/1.0')
google_http = httplib2.Http()
google_http = google_credentials.authorize(google_http)
google_drive = discovery.build('drive', 'v3', http=google_http)
drive_client = google_drive.files()
# prepare file body
from googleapiclient.http import MediaFileUpload
media_body = MediaFileUpload(filename=file_path, resumable=True)
# determine file modified time
import os
from datetime import datetime
modified_epoch = os.path.getmtime(file_path)
modified_time = datetime.utcfromtimestamp(modified_epoch).isoformat()
# determine path segments
path_segments = file_path.split(os.sep)
# construct upload kwargs
create_kwargs = {
'body': {
'name': path_segments.pop(),
'modifiedTime': modified_time
},
'media_body': media_body,
'fields': 'id'
}
# walk through parent directories
parent_id = ''
if path_segments:
# construct query and creation arguments
walk_folders = True
folder_kwargs = {
'body': {
'name': '',
'mimeType' : 'application/vnd.google-apps.folder'
},
'fields': 'id'
}
query_kwargs = {
'spaces': drive_space,
'fields': 'files(id, parents)'
}
while path_segments:
folder_name = path_segments.pop(0)
folder_kwargs['body']['name'] = folder_name
# search for folder id in existing hierarchy
if walk_folders:
walk_query = "name = '%s'" % folder_name
if parent_id:
walk_query += "and '%s' in parents" % parent_id
query_kwargs['q'] = walk_query
response = drive_client.list(**query_kwargs).execute()
file_list = response.get('files', [])
else:
file_list = []
if file_list:
parent_id = file_list[0].get('id')
# or create folder
# https://developers.google.com/drive/v3/web/folder
else:
if not parent_id:
if drive_space == 'appDataFolder':
folder_kwargs['body']['parents'] = [ drive_space ]
else:
                        folder_kwargs['body'].pop('parents', None)  # avoid a KeyError when 'parents' was never set
else:
folder_kwargs['body']['parents'] = [parent_id]
response = drive_client.create(**folder_kwargs).execute()
parent_id = response.get('id')
walk_folders = False
# add parent id to file creation kwargs
if parent_id:
create_kwargs['body']['parents'] = [parent_id]
elif drive_space == 'appDataFolder':
create_kwargs['body']['parents'] = [drive_space]
# send create request
file = drive_client.create(**create_kwargs).execute()
file_id = file.get('id')
return file_id
```
PS. I have modified this script from the `labpack` python module. There is a class called [driveClient](https://collectiveacuity.github.io/labPack/clients/#driveclient) in that module, written by rcj1492, which handles saving, loading, searching and deleting files on google drive in a way that preserves the POSIX file system.
```
from labpack.storage.google.drive import driveClient
```
| 10,379
|
42,852,722
|
I'm iterating through such a feed:
```
{"siri":{"serviceDelivery":{"responseTimestamp":"2017-03-14T18:37:23Z","producerRef":"IVTR_RELAIS","status":"true","estimatedTimetableDelivery":[
{"lineRef":{"value":"C01742"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"SNCF-ACCES:VehicleJourney::UPAL97_20170314:LOC"},"vehicleMode":["RAIL"],"routeRef":{},"publishedLineName":[{"value":"RER A"}],"directionName":[],"originRef":{},"originName":[],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:Q:40918:"},"destinationName":[{"value":"GARE DE CERGY LE HAUT"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{"value":"SNCF-ACCES:Operator::SNCF:"},"productCategoryRef":{},"vehicleJourneyName":[],"journeyNote":[],"firstOrLastJourney":"UNSPECIFIED","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:411321:"},"expectedArrivalTime":"2017-03-14T20:02:00.000Z","expectedDepartureTime":"2017-03-14T20:02:00.000Z","aimedArrivalTime":"2017-03-14T20:02:00.000Z","aimedDepartureTime":"2017-03-14T20:02:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:411368:"},"expectedArrivalTime":"2017-03-14T20:09:00.000Z","expectedDepartureTime":"2017-03-14T20:09:00.000Z","aimedArrivalTime":"2017-03-14T20:09:00.000Z","aimedDepartureTime":"2017-03-14T20:09:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:411352:"},"expectedArrivalTime":"2017-03-14T20:05:00.000Z","expectedDepartureTime":"2017-03-14T20:05:00.000Z","aimedArrivalTime":"2017-03-14T20:05:00.000Z","aimedDepartureTime":"2017-03-14T20:05:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:41528:"},"expectedArrivalTime":"2017-03-14T19:56:00.000Z","expectedDepartureTime":"2017-03-14T19:56:00.000Z","aimedArrivalTime":"2017-03-14T19:56:00.000Z","aimedDepartureTime":"2017-03-14T19:56:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:40918:"},"expectedArrivalTime":"2017-03-14T20:12:00.000Z","aimedArrivalTime":"2017-03-14T20:12:00.000Z","aimedDepartureTime":"2017-03-14T20:12:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:37:12.314Z"}
,{"lineRef":{"value":"C00049"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"004_DEVILLAIRS:VehicleJourney::109173051020957:LOC"},"vehicleMode":[],"routeRef":{},"publishedLineName":[{"value":"42"}],"directionName":[{"value":"Aller"}],"originRef":{},"originName":[],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:BP:12689:"},"destinationName":[{"value":"L'Onde Maison des Arts"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{"value":"004_DEVILLAIRS:Operator::004_DEVILLAIRS_Operator__004_DEVILLAIRS_Company__Devillairs 4_LOC_:"},"productCategoryRef":{},"vehicleJourneyName":[],"journeyNote":[],"firstOrLastJourney":"UNSPECIFIED","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:12690:"},"expectedArrivalTime":"2017-03-14T18:44:26.000Z","expectedDepartureTime":"2017-03-14T18:44:26.000Z","aimedArrivalTime":"2017-03-14T18:43:39.000Z","aimedDepartureTime":"2017-03-14T18:43:39.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12684:"},"expectedArrivalTime":"2017-03-14T18:38:51.000Z","expectedDepartureTime":"2017-03-14T18:38:51.000Z","aimedArrivalTime":"2017-03-14T18:34:51.000Z","aimedDepartureTime":"2017-03-14T18:34:51.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:40538:"},"expectedArrivalTime":"2017-03-14T18:40:53.000Z","expectedDepartureTime":"2017-03-14T18:40:53.000Z","aimedArrivalTime":"2017-03-14T18:37:24.000Z","aimedDepartureTime":"2017-03-14T18:37:24.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12678:"},"expectedArrivalTime":"2017-03-14T18:41:10.000Z","expectedDepartureTime":"2017-03-14T18:41:10.000Z","aimedArrivalTime":"2017-03-14T18:37:57.000Z","aimedDepartureTime":"2017-03-14T18:37:57.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12682:"},"expectedArrivalTime":"2017-03-14T18:40:00.000Z","expectedDepartureTime":"2017-03-14T18:40:00.000Z","aimedArrivalTime":"2017-03-14T18:36:21.000Z","aimedDepartureTime":"2017-03-14T18:36:21.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:41690:"},"expectedArrivalTime":"2017-03-14T18:42:17.000Z","expectedDepartureTime":"2017-03-14T18:42:17.000Z","aimedArrivalTime":"2017-03-14T18:39:00.000Z","aimedDepartureTime":"2017-03-14T18:39:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12743:"},"expectedDepartureTime":"2017-03-14T18:15:00.000Z","aimedDepartureTime":"2017-03-14T18:15:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12680:"},"expectedArrivalTime":"2017-03-14T18:40:24.000Z","expectedDepartureTime":"2017-03-14T18:40:24.000Z","aimedArrivalTime":"2017-03-14T18:36:52.000Z","aimedDepartureTime":"2017-03-14T18:36:52.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12676:"},"expectedArrivalTime":"2017-03-14T18:41:42.000Z","expectedDepartureTime":"2017-03-14T18:41:42.000Z","aimedArrivalTime":"2017-03-
14T18:38:29.000Z","aimedDepartureTime":"2017-03-14T18:38:29.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:35:51.000Z"}
,{"lineRef":{"value":"C01375"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"RATP:VehicleJourney::M5.R.1937.1:LOC"},"vehicleMode":[],"routeRef":{},"publishedLineName":[{"value":"Place d'Italie / Bobigny Pablo Picasso"}],"directionName":[{"value":"Bobigny Pablo Picasso"}],"originRef":{},"originName":[],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:Q:22015:"},"destinationName":[{"value":"Bobigny Pablo Picasso"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{},"productCategoryRef":{},"vehicleJourneyName":[],"journeyNote":[],"firstOrLastJourney":"UNSPECIFIED","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:22003:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22008:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22017:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:21952:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22009:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22016:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22007:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:21903:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22005:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22006:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22004:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22012:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22011:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22013:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:21981:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destina
tionDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22000:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22010:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22002:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22001:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:36:55.890Z"}
,{"lineRef":{"value":"C00774"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"STIVO:VehicleJourney::268437511:LOC"},"vehicleMode":[],"routeRef":{},"publishedLineName":[{"value":"CERGY PREFECTURE-VAUREAL TOUPETS"}],"directionName":[{"value":"R"}],"originRef":{},"originName":[],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:Q:10118:"},"destinationName":[{"value":"Préfecture RER"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{},"productCategoryRef":{},"vehicleJourneyName":[],"journeyNote":[],"firstOrLastJourney":"UNSPECIFIED","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:8729:"},"expectedDepartureTime":"2017-03-14T18:39:31.000Z","aimedDepartureTime":"2017-03-14T18:39:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:8731:"},"expectedDepartureTime":"2017-03-14T18:40:31.000Z","aimedDepartureTime":"2017-03-14T18:40:20.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:8730:"},"expectedDepartureTime":"2017-03-14T18:40:46.000Z","aimedDepartureTime":"2017-03-14T18:39:51.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:36:43.000Z"}
,{"lineRef":{"value":"C00697"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"SRVSAE:VehicleJourney::34661-1:LOC"},"vehicleMode":["BUS"],"routeRef":{},"publishedLineName":[{"value":"H "}],"directionName":[],"originRef":{"value":"STIF:StopPoint:BP:20399:"},"originName":[{"value":"Versailles Rive Gauche"}],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:BP:4122:"},"destinationName":[{"value":"La Celle St Cloud - Gare"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{"value":"SRVSAE:Operator::56 :"},"productCategoryRef":{},"vehicleJourneyName":[{"value":"34661-1"}],"journeyNote":[],"firstOrLastJourney":"OTHER_SERVICE","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:4062:"},"expectedArrivalTime":"2017-03-14T18:39:55.000Z","expectedDepartureTime":"2017-03-14T18:39:55.000Z","aimedArrivalTime":"2017-03-14T18:35:00.000Z","aimedDepartureTime":"2017-03-14T18:35:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:4064:"},"expectedArrivalTime":"2017-03-14T18:38:58.000Z","expectedDepartureTime":"2017-03-14T18:38:58.000Z","aimedArrivalTime":"2017-03-14T18:34:10.000Z","aimedDepartureTime":"2017-03-14T18:34:10.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:4068:"},"expectedArrivalTime":"2017-03-14T18:36:48.000Z","expectedDepartureTime":"2017-03-14T18:36:48.000Z","aimedArrivalTime":"2017-03-14T18:32:00.000Z","aimedDepartureTime":"2017-03-14T18:32:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:33:37.000Z"}
]}}}
```
Using a python script:
```
import datetime
import time
import dateutil.parser
import pytz
import json
import gtfs_realtime_pb2
from traceback import print_exc
EPOCH = datetime.datetime(1970, 1, 1, tzinfo=pytz.utc)
def handle_siri(raw):
siri_data = json.loads(raw.decode('utf-8'))['siri']
msg = gtfs_realtime_pb2.FeedMessage()
msg.header.gtfs_realtime_version = "1.0"
msg.header.incrementality = msg.header.FULL_DATASET
msg.header.timestamp = int(time.time())
# msg.header.timestamp = long(siri_data['serviceDelivery']['responseTimestamp'])
for i, vehicle in enumerate(siri_data['serviceDelivery']['estimatedTimetableDelivery']):
route_id = vehicle['lineRef']['value'][:6].strip()
if len(vehicle['datedVehicleJourneyRef']) > 0:
operator = vehicle['datedVehicleJourneyRef'].split('[:.]')
if operator[0] == "RATP":
sens = operator[4]
if operator[4] == "A":
ent.trip_update.trip.direction_id = 0
if operator[4] == "R":
ent.trip_update.trip.direction_id = 1
if operator[0] != "RATP":
continue
# direction = vehicle['monitoredVehicleJourney']['directionRef'] # Il faudra editer le code pour le definir pour les autres...
# ent.trip_update.trip.direction_id = int(direction) - 1 # Il faudra editer le code pour le definir pour les autres...
if 'estimatedCalls' not in vehicle['monitoredVehicleJourney']:
continue
for call in vehicle['monitoredVehicleJourney']['estimatedCalls']:
stoptime = ent.trip_update.stop_time_update.add()
if 'stopPointRef' in vehicle['monitoredVehicleJourney']['estimatedCall']:
stoptime.stop_id = vehicle['monitoredVehicleJourney']['estimatedCall']['stopPointRef']
arrival_time = (dateutil.parser.parse(call['expectedArrivalTime']) - EPOCH).total_seconds()
stoptime.arrival.time = int(arrival_time)
departure_time = (dateutil.parser.parse(call['expectedDepartureTime']) - EPOCH).total_seconds()
stoptime.departure.time = int(departure_time)
ent = msg.entity.add() # On garde ca ?
ent.id = str(i) #
ent.trip_update.timestamp = vehicle['recordedAtTime'] #
ent.trip_update.trip.route_id = route_id #
# try:
# int(vehicle['MonitoredVehicleJourney']['Delay'])
# except:
# print_exc()
# print vehicle, vehicle['MonitoredVehicleJourney']['Delay']
# continue
# ent.trip_update.trip.start_date = vehicle['MonitoredVehicleJourney']['FramedVehicleJourneyRef']['DataFrameRef']['value'].replace("-", "")
# if 'datedVehicleJourneyRef' in vehicle['monitoredVehicleJourney']['framedVehicleJourneyRef']: # doesn't exist in our feed
# start_time = vehicle['monitoredVehicleJourney']['framedVehicleJourneyRef']['datedVehicleJourneyRef']
# ent.trip_update.trip.start_time = start_time[:2]+":"+start_time[2:]+":00"
#
#
# if 'vehicleRef' in vehicle['monitoredVehicleJourney']: # doesn't exist in our feed
# ent.trip_update.vehicle.label = vehicle['monitoredVehicleJourney']['vehicleRef']['value']
return msg
```
Unfortunately, this loop is not working and returns an error, while it should iterate through each item starting with `{"lineRef"`:
```
File "/home/nicolas/packaging/stif.py", line 24, in handle_siri operator = vehicle['datedVehicleJourneyRef'].split('[:.]') AttributeError: 'dict' object has no attribute 'split'
```
Could you please help me fix this?
Thanks for looking at this.
|
2017/03/17
|
[
"https://Stackoverflow.com/questions/42852722",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5689072/"
] |
See the code snippet
```
public static class MyDownloadTask extends AsyncTask<Void, Integer, Void> {
@Override
protected void onProgressUpdate(Integer... values) {
super.onProgressUpdate(values);
// receive the published update here
// progressBar.setProgress(values[0]);
}
@Override
protected Void doInBackground(Void... params) {
// publish your download progress here
// publishProgress(10);
return null;
}
}
```
|
With this example you can show the download speed in your ProgressDialog:
```
public class AsyncDownload extends AsyncTask<Void, Double, String> {
ProgressDialog progressDialog;
@Override
protected void onPreExecute() {
super.onPreExecute();
progressDialog = new ProgressDialog(MainActivity.this);
progressDialog.setMessage("Speed: " + 0.0);
progressDialog.show();
}
@Override
protected String doInBackground(Void... voids) {
// AsyncDownload
Double speed = 0.0;
// Calculate speed
publishProgress(speed);
return null;
}
@Override
protected void onProgressUpdate(Double... values) {
super.onProgressUpdate(values);
progressDialog.setMessage("Speed " + values[0]);
}
@Override
protected void onPostExecute(String s) {
super.onPostExecute(s);
}
}
```
to calculate download speed you can use this example [Measuring Download Speed Java](https://stackoverflow.com/questions/6322767/measuring-download-speed-java)
| 10,381
|
22,923,983
|
I am embedding python code in my c++ program.
The use of PyFloat\_AsDouble appears to cause a loss of precision: it keeps only about 6 significant digits. My program is very sensitive to precision. Is there a known fix for this?
Here is the relevant C++ code:
```
_ret = PyObject_CallObject(pFunc, pArgs);
vector<double> retVals;
for(size_t i=0; i<PyList_Size(_ret); i++){
    retVals.push_back(PyFloat_AsDouble(PyList_GetItem(_ret, i)));  // push_back instead of writing into an empty vector
}
```
retVals[i] has precision of only 6, while the value returned by the python code is a float that can have a higher precision.
How do I get full precision?
|
2014/04/07
|
[
"https://Stackoverflow.com/questions/22923983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/249560/"
] |
Assuming that the Python object contains floating point values stored to double precision, then your code works as you expect.
Most likely you are simply mis-diagnosing a problem that does not exist. My guess is that you are looking at the values in the debugger which only displays the values to a limited precision. Or you are printing them out to a limited precision.
|
print type(PyList\_GetItem(\_ret, i))
My bet is it will show float.
Edit: in the python code, not in the C++ code.
| 10,382
|
44,576,509
|
I am having a problem like
```
In [5]: x = "this string takes two like {one} and {two}"
In [6]: y = x.format(one="one")
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-6-b3c89fbea4d3> in <module>()
----> 1 y = x.format(one="one")
KeyError: 'two'
```
I have a compound string with many keys that is kept in a config file. 8 different queries all use the same string, except that 1 key takes a different setting for each. I need to be able to substitute one key at a time in that string and save the result for later, like:
```
"this string takes two like one and {two}"
```
How do I substitute one key at a time using `format`?
|
2017/06/15
|
[
"https://Stackoverflow.com/questions/44576509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3282434/"
] |
you can escape the interpolation of `{two}` by doubling the curly brackets:
```
x = "this string takes two like {one} and {{two}}"
y = x.format(one=1)
z = y.format(two=2)
print(z) # this string takes two like 1 and 2
```
---
a different way to go are [template strings](https://docs.python.org/3/library/string.html#template-strings):
```
from string import Template
t = Template('this string takes two like $one and $two')
y = t.safe_substitute(one=1)
print(y) # this string takes two like 1 and $two
z = Template(y).safe_substitute(two=2)
print(z) # this string takes two like 1 and 2
```
([this answer](https://stackoverflow.com/a/44576618/4954037) was before mine for the template strings....)
|
You can replace `{two}` with the literal text `{two}` to enable further replacement later:
```
y = x.format(one="one", two="{two}")
```
This easily extends to multiple replacement passes, but it requires that you account for every remaining key in each pass.
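A later pass then fills in the remaining key (continuing from `y` above):
```
z = y.format(two="two")
print(z)  # this string takes two like one and two
```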
| 10,383
|
32,703,469
|
I am new to Kafka. We are trying to import data from a CSV file into Kafka. We need to import it every day, at which point the previous day's data becomes outdated.
How could I remove all messages under a Kafka topic in Python? Or how could I remove the Kafka topic itself in Python?
Or, I saw someone suggest waiting for the data to expire; how could I set the data expiration time, if that's possible?
Any suggestions will be appreciated!
Thanks
|
2015/09/21
|
[
"https://Stackoverflow.com/questions/32703469",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1028315/"
] |
You cannot delete individual messages in a Kafka topic. You can:
* Set `log.retention.*` properties which is basically the expiration of messages. You can choose either time-based expiration (e. g. keep messages that are six hour old or newer) or space-based expiration (e. g. keep at max 1 GB of messages). See [Broker config](http://kafka.apache.org/documentation.html#brokerconfigs) and search for *retention*. You can set different values for different topics.
* Delete the whole topic. It's a kind of tricky and I don't recommend this way.
* Create a new topic for every day. Something like *my-topic-2015-09-21* (see the sketch at the end of this answer).
But I don't think you need to delete the messages in the topic at all, because your Kafka consumer keeps track of the messages that have already been processed. Thus when you read all of today's messages, the Kafka consumer saves this information and you're going to read just the new messages tomorrow.
Another possible solution could be [Log compaction](http://kafka.apache.org/documentation.html#compaction). But it's more complicated and probably it's not what you need. Basically you can set a key for every message in the Kafka topic. If you send two different messages with the same key, Kafka will keep just the newest message in the topic and it will delete all older messages with the same key. You can think of it as a kind of "key-value store". Every message with the same key just updates a value under the specific key. But hey, you really don't need this, it's just FYI :-).
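If you go the topic-per-day route, here is a hedged sketch using a recent version of the `kafka-python` package (the broker address and partition settings are assumptions):
```
from datetime import date

from kafka.admin import KafkaAdminClient, NewTopic

# create a topic named after today's date, e.g. my-topic-2015-09-21
admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
topic = NewTopic(name='my-topic-%s' % date.today().isoformat(),
                 num_partitions=1, replication_factor=1)
admin.create_topics([topic])
```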
|
The simplest approach is to simply delete the topic. I use this in Python automated test suites, where I want to verify that a specific set of test messages gets sent through Kafka, and don't want to see results from previous test runs.
```
from subprocess import call

def delete_kafka_topic(topic_name):
call(["/usr/bin/kafka-topics", "--zookeeper", "zookeeper-1:2181", "--delete", "--topic", topic_name])
```
| 10,392
|
43,214,036
|
I have a Python script that I'm invoking from C#; the code is provided below. The issue with this process is that if the Python script fails, I have no way to detect that in C# and display the exception. I'm using C#, MVC, and Python. Can you please modify the code below and show me how I can catch the exception thrown when the Python script fails?
```
Process process = new Process();
Stopwatch stopWatch = new Stopwatch();
ProcessStartInfo processStartInfo = new ProcessStartInfo(python);
processStartInfo.UseShellExecute = false;
processStartInfo.RedirectStandardOutput = true;
try
{
process.StartInfo = processStartInfo;
stopWatch.Start();
//Start the process
process.Start();
// Read the standard output of the app we called.
// in order to avoid deadlock we will read output first
// and then wait for process terminate:
StreamReader myStreamReader = process.StandardOutput;
string myString = myStreamReader.ReadLine();
// wait exit signal from the app we called and then close it.
process.WaitForExit();
process.Close();
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
Session["Message"] = "Success";
}
catch (InvalidOperationException ex)
{
throw new System.InvalidOperationException("Bulk Upload Failed. Please contact administrator for further details." + ex.StackTrace);
Session["Message"] = "Failed";
}
```
|
2017/04/04
|
[
"https://Stackoverflow.com/questions/43214036",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4879531/"
] |
Here is the working code. To get errors or exceptions from Python into C#, set the `RedirectStandardError` property to true and then read the standard error stream. The working version of the code is provided below -
```
process.StartInfo = processStartInfo;
processStartInfo.RedirectStandardError = true;
stopWatch.Start();
//Start the process
process.Start();
string standardError = process.StandardError.ReadToEnd();
// wait exit signal from the app we called and then close it.
process.WaitForExit();
process.Close();
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
```
|
You can subscribe to the `ErrorDataReceived` event and read the error message.
```
process.ErrorDataReceived+=process_ErrorDataReceived;
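// note: this event only fires for asynchronous reads, so after process.Start()
// you also need to call process.BeginErrorReadLine() to start them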
```
and here is your event handler
```
void process_ErrorDataReceived(object sender, DataReceivedEventArgs e)
{
Console.WriteLine(e.Data);
}
```
| 10,393
|
37,947,258
|
Newbie in MongoDB/Python/pymongo here. My MongoDB shell code works OK. Here it is:
```
db.meteo.find().forEach( function(myDoc) {
db.test.update({"contract_name" : myDoc.current_observation.observation_location.city},
{ $set: { "temp_c_meteo" : myDoc.current_observation.temp_c ,
}
},
{upsert:false,
multi:true})
})
```
This adds a temp\_c\_meteo field to all documents in my test collection whenever contract\_name (from the test collection) is equal to current\_observation.observation\_location.city (from the meteo collection). And that's what I want!
But I would like this to work the same way in Python. Python 2.7 with pymongo 2.6.3 is installed, but so are Python 2.4.6 (64-bit) and 3.4.0 (64-bit).
Here is a part of the Python script :
```
[...]
saveInDB:
#BE connected to MongoDB client
MDBClient = MongoClient()
db = MDBClient ['test']
collection = db['infosdb']
[...]
meteos = collectionMeteo.find()
for met in meteos:
logging.info('DEBUG - Update with meteo ' + met['current_observation']['observation_location']['city'])
result = collection.update_many( {"contract_name" : met['current_observation']['observation_location']['city']},
{
"$set": {"temp_c_meteo" : met['current_observation']['temp_c']}
})
```
And there is the error i get :
==> *[06/21/2016 10:51:03 AM] [WARNING] : 'Collection' object is not callable. If you meant to call the 'update\_many' method on a 'Collection' object it is failing because no such method exists.*
I read that this problem could be linked to the pymongo release ("The methods save and update are deprecated", which is why I'm trying to use update\_many), but I'm not sure. How can I make this work?
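(For context, a rough sketch of what the older, deprecated pymongo 2.x call looks like for the same update, reusing the variables from the loop above:)
```
result = collection.update(
    {"contract_name": met['current_observation']['observation_location']['city']},
    {"$set": {"temp_c_meteo": met['current_observation']['temp_c']}},
    multi=True, upsert=False
)
```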
Thanks
|
2016/06/21
|
[
"https://Stackoverflow.com/questions/37947258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6494503/"
] |
Try this solution:
```
ArrayList<IonicBond> list = IonicBond.where { typeRs { classCode == "WD996" } || typeIIs { classCode == "WD996" } }.list()
```
|
Detached Criteria is one way to query the GORM. You first create a DetachedCriteria with the name of the domain class for which you want to execute the query. Then you call the method 'build' with a 'where' query or criteria query.
```
def criteria = new DetachedCriteria(IonicBond).build {
or {
typeIIs {
eq("classCode", "WD996")
}
typeRs {
eq("classCode", "WD996")
}
}
}
def result = criteria.list()
```
| 10,395
|
19,252,087
|
Hi all, we are using the Google API, e.g. '<http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s>' % query, via a Python script, but it gets blocked very quickly. Any workaround for this? Thank you.
Below is my current codes.
```
#!/usr/bin/env python
import math,sys
import json
import urllib
def gsearch(searchfor):
query = urllib.urlencode({'q': searchfor})
url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query
search_response = urllib.urlopen(url)
search_results = search_response.read()
results = json.loads(search_results)
data = results['responseData']
return data
args = sys.argv[1:]
m = 45000000000
if len(args) != 2:
print "need two words as arguments"
exit
n0 = int(gsearch(args[0])['cursor']['estimatedResultCount'])
n1 = int(gsearch(args[1])['cursor']['estimatedResultCount'])
n2 = int(gsearch(args[0]+" "+args[1])['cursor']['estimatedResultCount'])
```
|
2013/10/08
|
[
"https://Stackoverflow.com/questions/19252087",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2711681/"
] |
Try this:
```
$(".profileImage").click(function() {
$(this).animate({opacity: "0.0"}).animate({width: 0}).hide(0);
})
```
Animate your opacity to 0 to fade it from view, then animate your width to 0 to regain the space, then hide it to remove it from visibility altogether. Note that if you want to redisplay, you'll have to restore the previous values.
<http://jsfiddle.net/Palpatim/YuFqh/2/>
|
What about using the fade-out animation's completion callback:
```
$(".profileImage").click(function() {
$(this).animate({opacity:0},400, function(){
// sliding code goes here
$(this).animate({width:0},300).hide();
});
});
```
| 10,397
|
67,764,659
|
I'm working with VS Code for Python programming. How can I change the color of error output text in the terminal?
Can you help me?
|
2021/05/30
|
[
"https://Stackoverflow.com/questions/67764659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11847527/"
] |
This may be useful for someone.
For the first and last day of a Persian month:
```
function GetStartEndMonth(currentDate) {
const splitDate = splitGeorgianDateToPersianDateArray(currentDate);
const year = splitDate[0];
const month = splitDate[1];
const lastDayOfPersianMonth = GetLastDayOfPersianMonth(month, year);
//Not Work in some Month !! => moment(currentPersianDate).clone().startOf('month').format('YYYY/MM/DD');
//Not Work at All in persian => moment(currentPersianDate).clone().endof('month').format('YYYY/MM/DD');
const startPersianMonth = year + "/" + month + "/" + 1;
const endPersianMonth = year + "/" + month + "/" + lastDayOfPersianMonth;
let startGeorgianDate = ConvertPersianDateStringToGeorgianDate(startPersianMonth);
let endGeorgianDate = ConvertPersianDateStringToGeorgianDate(endPersianMonth);
endGeorgianDate.setHours(23, 59, 59, 59);
const newMonthArray = [startGeorgianDate, endGeorgianDate];
return newMonthArray;
}
function GetLastDayOfPersianMonth(month, year) {
//calculate whether the year is a leap year
const leapMatch = [1, 5, 9, 13, 17, 22, 26, 30];
const number = year % 33;
const isLeap = leapMatch.includes(number);
if (month <= 6) {
return 31;
}
if (month > 6 && month < 12) {
return 30;
}
if (month == 12 && !isLeap) {
return 29;
}
if (month == 12 && isLeap) {
return 30;
}
}
function splitGeorgianDateToPersianDateArray(georgianDate) {
const persianStringDate = moment(georgianDate).locale('fa').format('YYYY/M/D');
const splitDate = persianStringDate.split("/");
const year = splitDate[0];
const month = splitDate[1];
const day = splitDate[2];
return [year, month, day];
}
```
|
I think you need to set
```
dayGridMonthPersian :{duration: { week: 4 }}
```
| 10,398
|
53,489,173
|
I understand there are many answers on SO already dealing with splitting URLs in Python. BUT, I want to parse the result of a URL request and then use it in a function.
I'm using a curl request in python:
```
r = requests.get('http://www.datasciencetoolkit.org/twofishes?query=New%York')
r.json()
```
Which provides the following:
```
{'interpretations': [{'what': '',
'where': 'new york',
'feature': {'cc': 'US',
'geometry': {'center': {'lat': 40.742185, 'lng': -73.992602},
......
# (lots more, but not needed here)
```
I want to be able to call any city/location, and I want to separate out the `lat` and `lng`. For example, I want to call a function where I can input any city, and it responds with the latitude and longitude. Kind of like [this question](https://stackoverflow.com/questions/51489815/scraping-data-using-rvest-and-a-specific-error) (which uses R).
This is my attempt:
```
import requests
def lat_long(city):
geocode_result = requests.get('http://www.datasciencetoolkit.org/twofishes?query= "city"')
```
How do I parse it so I can just call the function with a city?
|
2018/11/26
|
[
"https://Stackoverflow.com/questions/53489173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9439560/"
] |
Taking your example, I'd suggest using regex and string interpolation. This answer assumes the API returns data the same way every time.
```
import re, requests
def lat_long(city: str) -> tuple:
# replace spaces with escapes
    city = re.sub(r'\s', '%20', city)
res = requests.get(f'http://www.datasciencetoolkit.org/twofishes?query={city}')
data = res.json()
geo = data['interpretations'][0]['feature']['geometry']['center']
return (geo['lat'], geo['lng'])
```
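Usage might look like this (assuming the service responds as in the question):
```
print(lat_long('New York'))  # -> (40.742185, -73.992602)
```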
|
You can loop over the 'interpretations' list looking for the city name, and return the coordinates when you find the correct city.
```
def lat_long(city):
    response = requests.get('http://www.datasciencetoolkit.org/twofishes?query=' + city)
    geocode_result = response.json()  # parse the response body as JSON
    for interpretation in geocode_result["interpretations"]:
        if interpretation["where"] == city.lower():  # 'where' comes back lower-cased
            return interpretation["feature"]["geometry"]["center"]
```
| 10,399
|
29,000,392
|
When I try to use the sorted() function in Python, it only sorts the characters within each item alphabetically; the first 3 outputs are:
```
[u'A', u'a', u'a', u'f', u'g', u'h', u'i', u'n', u'n', u's', u't']
[u'N', u'a', u'e', u'g', u'i', u'i', u'r']
[u'C', u'a', u'e', u'm', u'n', u'o', u'o', u'r']
```
These should be Afghanistan, Nigeria and Cameroon respectively, but instead the characters are only sorted within each name.
Where have I gone wrong in my code?
```
import urllib2
import csv
from bs4 import BeautifulSoup
url = "http://en.wikipedia.org/wiki/List_of_ongoing_armed_conflicts"
soup = BeautifulSoup(urllib2.urlopen(url))
#f= csv.writer(open("test.csv","w"))
#f.writerow(["location"])
def unique(countries):
seen = set()
for country in countries:
l = country.lower()
if l in seen:
continue
seen.add(l)
yield country
for row in soup.select('table.wikitable tr'):
cells = row.find_all('td')
if cells:
for location in cells[3].find_all(text=True):
location = location.split()
for locations in unique(location):
print sorted(locations)
#f.writerow([location])
```
|
2015/03/12
|
[
"https://Stackoverflow.com/questions/29000392",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3092868/"
] |
As far as I'm concerned, the built-in VBE is the best way to go. It has not changed in the last 10 years or even longer, but it is tightly integrated into Excel and does everything you need, especially as a beginner.
|
Some time has passed since you asked, but maybe someone else will find this useful. I looked for the same thing and couldn't find anything good until I found Notepad++.
It makes analyzing the code much easier.
| 10,400
|
31,388,514
|
I have to split a list of characters such that it gets cut whenever it encounters a vowel. For example, given a list like
```
toy = ['b', 'a', 'm', 'b', 'i', 'n', 'o']
```
the output should be
```
[('b', 'a'), ('m', 'b', 'i'), ('n', 'o')]
```
I tried to run 2 loops, one nested inside the other.
```
# usr/bin/env/python
apple = []
consonants = ('k', 'h', 'b', 'n')
vowels = ('i', 'a', 'u')
toy = ('k', 'h', 'u', 'b', 'a', 'n', 'i')
for i in range(len(toy)):
if i == 0:
pass
else:
if toy[i] in vowels:
for k in range(i - 1, len(toy)):
if toy[k - 1] in consonants:
n = toy[i - k:i - 1]
apple.append(n)
break
print apple
```
But this does not let me come out of the loop once the vowel is reached. Using the "bambino" example, it gives me something like `[('b', 'a'), ('b', 'a', 'm', 'b', 'i'), ('b', 'a', 'm', 'b', 'i', 'n', 'o')]`. Can someone please help?
|
2015/07/13
|
[
"https://Stackoverflow.com/questions/31388514",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3116297/"
] |
You seem to be overcomplicating things. A simple solution for this would be -
```
>>> toy = ['b', 'a', 'm', 'b', 'i', 'n', 'o']
>>> vowels = ['a','i','e','o','u']
>>> apples = []
>>> k = 0
>>> for i ,x in enumerate(toy):
... if x in vowels:
... apples.append(tuple(toy[k:i+1]))
... k = i+1
...
>>> apples
[('b', 'a'), ('m', 'b', 'i'), ('n', 'o')]
```
[`enumerate()`](https://docs.python.org/2/library/functions.html#enumerate) function returns the index as well as the current element of the list.
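For instance, a quick illustration of what `enumerate()` yields:
```
>>> list(enumerate(['b', 'a', 'm']))
[(0, 'b'), (1, 'a'), (2, 'm')]
```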
|
You could also use this:
```
#usr/bin/env/python
apple = []
vowels = ('i', 'a', 'u')
toy = ('k', 'h', 'u', 'b', 'a', 'n', 'i')
collector = []
for i in toy:
collector.append(i)
if i in vowels:
apple.append(collector)
collector = []
print apple
```
Result:
```
[['k', 'h', 'u'], ['b', 'a'], ['n', 'i']]
```
| 10,401
|
13,608,029
|
I have 365 2d `numpy` arrays, one for every day of the year, each displaying an image like this:

I have them all stacked in a 3d numpy array. I want to get rid of pixels whose value represents cloud: for each such pixel, I want to search through the previous 7 days or the next 7 days (the previous 7 layers or the next 7 layers) for a value other than cloud, and then replace the cloud value with one of the values observed for that pixel on those other days/layers.
I am new to Python, and am a bit lost.
Any ideas?
Thanks
|
2012/11/28
|
[
"https://Stackoverflow.com/questions/13608029",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1860229/"
] |
You are essentially trying to write a filter for your array.
First you need to write a function that when given an array of values, with the middle one being the element currently examined, will return some computation of those values. In your case the function will expect to take 1-d array and returns the element nearest to the middle index that is not cloud:
```
import numpy as np
from scipy.ndimage.filters import generic_filter
_cloud = -1
def findNearestNonCloud(elements):
middleIndex = len(elements) / 2
if elements[middleIndex] != _cloud:
return elements[middleIndex] # middle value is not cloud
nonCloudIndices, = np.where(elements != _cloud)
if len(nonCloudIndices) == 0:
return elements[middleIndex] # all values were cloud
prevNonCloudIndex = np.where(nonCloudIndices < middleIndex,
nonCloudIndices, -1).max()
nextNonCloudIndex = -np.where(nonCloudIndices > middleIndex,
-nonCloudIndices, 1).min()
# -1 means no non-cloud index
# pick index closest to middle index
if (abs(prevNonCloudIndex - middleIndex)
<= abs(nextNonCloudIndex - middleIndex)):
return elements[prevNonCloudIndex]
else:
return elements[nextNonCloudIndex]
```
Now you need to apply this function to the elements you're interesting. To do this you'll need a mask that denotes which other elements you'll be interested in with regard to a specific element.
```
from scipy.ndimage.filters import generic_filter
# creates 5 days worth of a 3x3 plot of land
input = np.ones((5, 3, 3)) * _cloud
input[0,:,:] = 10 # set first "image" to all be 10s
input[4,0,0] = 12 # uppper left corner of fourth image is set to 12
print "input data\n", input, "\n"
mask = (5, 1, 1)
# mask represents looking at the present day, 2 days in the future and 2 days in
# the past for 5 days in total.
print "result\n", generic_filter(input, findNearestNonCloud, size=mask)
# second and third images should mirror first image,
# except upper left corner of third image should be 12
```
|
I would think that you could do something like:
```
data = somehow_get_your_3d_data() #indexed as [day_of_year,y,x]
for i,dat in enumerate(data):
    weeks2 = data[max(i-7, 0):min(i+8, len(data)), ...]  # the 7 layers before and after day i
new_value = get_new_value(weeks2) #get value from weeks2 here somehow
dat[dat == cloud_value] = new_value
```
| 10,403
|
71,401,616
|
While writing a program to help myself study, I ran into a problem with my program not displaying Chinese characters properly.
The Chinese characters are loaded in from a .JSON file, and are then printed using a python program.
The JSON entries look like this.
```
{
"symbol": "我",
"reading": "wo",
"meaning": "I",
"streak": 0
},
```
The output in the console looks like this:
[VS Code console output](https://i.stack.imgur.com/UwPRi.png)
And once the program has finished and dumps the data back into the JSON file, it looks like this.
```
{
"symbol": "\u00e6\u02c6\u2018",
"reading": "wo",
"meaning": "I",
"streak": 0
}
```
Changing the language for non-Unicode programs to Chinese (Simplified) didn't fix it.
Using chcp 936 didn't fix the issue either.
The program is a .py file that is not hosted online. The IDE is Visual Studio Code.
The Python program is:
```
import json
#Import JSON file as an object
with open('cards.json') as f:
data = json.load(f)
def main():
for card in data['cards']:
        #Store the current kanji, reading and meaning in local variables
currentKanji = card['symbol']
currentReading = card['reading']
currentMeaning = card['meaning']
#Ask the user the meaning of the kanji
inputMeaning = input(f'What is the meaning of {currentKanji}\n')
#Check if The user's answer is correct
if inputMeaning == currentMeaning:
print("Was correct")
else:
print("Was incorrect")
#Ask the User the reading of the kanji
inputReading = input(f'What is the reading of {currentKanji}\n')
#Check if the User's input is correct
if inputReading == currentReading:
print("Was Correct")
else:
print("Was incorrect")
#If both Answers correct, update the streak by one
if (inputMeaning == currentMeaning) and (inputReading == currentReading):
card['streak'] = card['streak'] + 1
print(card['streak'])
#If one of the answers is incorrect, decrease the streak by one
if not (inputMeaning == currentMeaning) or not (inputReading == currentReading):
card['streak'] = card['streak'] - 1
main()
#Reopen the JSON file an write new info into it.
with open('cards.json', 'w') as f:
json.dump(data,f,indent=2)
```
|
2022/03/08
|
[
"https://Stackoverflow.com/questions/71401616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15465144/"
] |
An example creating a precompiled header:
```sh
mkdir -p pch/bits &&
g++ -O3 -std=c++20 -pedantic-errors -o pch/bits/stdc++.h.gch \
/usr/include/c++/11/x86_64-redhat-linux/bits/stdc++.h
```
Check what you got:
```sh
$ file pch/bits/stdc++.h.gch
pch/bits/stdc++.h.gch: GCC precompiled header (version 014) for C++
$ ls -l pch/bits/stdc++.h.gch
-rw-r--r-- 1 ted users 118741428 8 mar 23.27 pch/bits/stdc++.h.gch
```
A program using it:
```cpp
#include <bits/stdc++.h>
int main() {
std::vector<int> foo{1, 2, 3};
for(auto v : foo) std::cout << v << '\n';
}
```
Example compilation (put `-Ipch` first of the `-I` directives):
```sh
$ strace -f g++ -Ipch -O3 -std=c++20 -pedantic-errors -o program program.cpp 2>&1 | grep 'stdc++.h'
[pid 13964] read(3, "#include <bits/stdc++.h>\n\nint ma"..., 122) = 122
[pid 13964] newfstatat(AT_FDCWD, "pch/bits/stdc++.h.gch", {st_mode=S_IFREG|0644, st_size=118741428, ...}, 0) = 0
[pid 13964] openat(AT_FDCWD, "pch/bits/stdc++.h.gch", O_RDONLY|O_NOCTTY) = 4
```
|
Building on Ted's answer, I would actually do something like this (untested):
my\_pch.h:
```
#include <bits/stdc++.h> // might need to specify the full path here
```
And then:
```
g++ -O3 -std=c++20 -pedantic-errors -o pch/bits/my_pch.h.gch my_pch.h
```
And finally, your program would look like this:
```
#include "my_pch.h"
int main() {
// ...
}
```
This means you don't need to put `#include <bits/stdc++.h>` directly in your source files, since that is a bit naughty. It also means you can add other include files to `my_pch.h` if you want them.
I think, also, it wouldn't cost you anything to put, say, `#include <string>` after including `my_pch.h`, and that doing that sort of thing might be wise. If you're ever going to move the code into production you could then recompile it with `my_pch.h` empty.
---
**Edit:** Another thing to consider (which I also can't test) is just to include the things you actually use (`string`, `vector`, whatever) in `my_pch.h`. That will probably pull in `bits/stdc++.h` anyway, when building the precompiled header. Make it a comprehensive list so that you don't need to have to keep adding to it. Then you have portable code and people won't keep beating you up about it.
| 10,405
|
17,504,570
|
So, I'm working on a simple project for an online course: making an image gallery using Python. The task is to create 3 buttons: Next, Previous and Quit. So far the Quit button works, and Next loads a new image, but in a different window. I'm quite new to Python and GUI programming with Tkinter, and this is a big part of the beginners' course.
So far my code looks like this, and everything works. But I need help with HOW to make the Previous and Next buttons: I've used the new() function so far, but it opens the image in a different window. I simply want to display 1 image, then click Next to show the next image, with some simple text.
```
import Image
import ImageTk
import Tkinter
root = Tkinter.Tk();
text = Tkinter.Text(root, width=50, height=15);
myImage = ImageTk.PhotoImage(file='nesta.png');
def new():
wind = Tkinter.Toplevel()
wind.geometry('600x600')
imageFile2 = Image.open("signori.png")
image2 = ImageTk.PhotoImage(imageFile2)
panel2 = Tkinter.Label(wind , image=image2)
panel2.place(relx=0.0, rely=0.0)
wind.mainloop()
master = Tkinter.Tk()
master.geometry('600x600')
B = Tkinter.Button(master, text = 'Previous picture', command = new).pack()
B = Tkinter.Button(master, text = 'Quit', command = quit).pack()
B = Tkinter.Button(master, text = 'Next picture', command = new).pack()
master.mainloop()
```
|
2013/07/06
|
[
"https://Stackoverflow.com/questions/17504570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2475520/"
] |
Change image by setting image item: `Label['image'] = photoimage_obj`
```
import Image
import ImageTk
import Tkinter
import tkMessageBox
image_list = ['1.jpg', '2.jpg', '5.jpg']
text_list = ['apple', 'bird', 'cat']
current = 0
def move(delta):
global current, image_list
if not (0 <= current + delta < len(image_list)):
tkMessageBox.showinfo('End', 'No more image.')
return
current += delta
image = Image.open(image_list[current])
photo = ImageTk.PhotoImage(image)
label['text'] = text_list[current]
label['image'] = photo
label.photo = photo
root = Tkinter.Tk()
label = Tkinter.Label(root, compound=Tkinter.TOP)
label.pack()
frame = Tkinter.Frame(root)
frame.pack()
Tkinter.Button(frame, text='Previous picture', command=lambda: move(-1)).pack(side=Tkinter.LEFT)
Tkinter.Button(frame, text='Next picture', command=lambda: move(+1)).pack(side=Tkinter.LEFT)
Tkinter.Button(frame, text='Quit', command=root.quit).pack(side=Tkinter.LEFT)
move(0)
root.mainloop()
```
|
My UI is not great, but the logic works well; I tested it. You can change the UI. How it works: first we browse for a file, and when we click open it displays the image and also builds a list of the images in that image's folder. I included only '.png' and '.jpg' files; if you want more extensions you can add them in path\_func().
```
from tkinter import *
from PIL import Image, ImageTk
from tkinter import filedialog
import os
class ImageViewer:
def __init__(self, root):
self.root = root
self.root.title("Photo Viewer")
self.root.geometry("1360x750")
self.root.config(bg = "lightblue")
menus = Menu(self.root)
self.root.config(menu = menus)
file_menu = Menu(menus)
menus.add_cascade(label = "File", menu = file_menu)
file_menu.add_command(label = "Open", command = self.open_dialog)
file_menu.add_separator()
file_menu.add_command(label = "Previous", command = self.previous_img)
file_menu.add_command(label = "Next", command = self.next_img)
file_menu.add_separator()
file_menu.add_command(label = "Exit", command = self.root.destroy)
self.label = Label(self.root, text = "Open a image using open menu", font = ("Helvetica", 15), foreground = "#0000FF", background = "lightblue")
self.label.grid(row = 0, column = 0, columnspan = 4)
self.buttons()
def path_func(self, path):
l = []
self.path = path.split('/')
self.path.pop()
self.path = '/'.join([str(x) for x in self.path])
#print(self.path)
for file in os.listdir(self.path):
if file.endswith('.jpg') or file.endswith('.png'):
l.append(file)
#print(l)
def join(file):
os.chdir(self.path)
#print(os.getcwd())
cwd = os.getcwd().replace('\\', '/')
#print(cwd)
f = cwd + '/' + file
#print(f)
return f
global file_list
file_list = list(map(join, l))
#print(file_list)
def open_dialog(self):
global file_name
file_name = filedialog.askopenfilename(initialdir = "C:/Users/elcot/Pictures", title = "Open file")
#print(file_name)
self.view_image(file_name)
self.path_func(file_name)
'''except:
label = Label(self.root, text = "Select a file to open")
label.grid(row = 4, column =1)'''
def view_image(self, filename):
try:
self.label.destroy()
global img
img = Image.open(filename)
img = img.resize((1360, 650))
img = ImageTk.PhotoImage(img)
#print(img)
show_pic = Label(self.root, image = img)
show_pic.grid(row = 1, column = 0, columnspan = 3)
except:
pass
def buttons(self):
open_button = Button(self.root, text = "Browse", command = self.open_dialog, background = "lightblue")
open_button.grid(row = 1, column = 1)
previous_button = Button(self.root, text = "Previous", command = self.previous_img, background = "lightblue", width = 25)
previous_button.grid(row = 3, column = 0, pady = 10)
empty = Label(self.root, text = " ", background = "lightblue")
empty.grid(row = 3, column = 1)
next_button = Button(self.root, text = "Next", command = self.next_img, background = "lightblue", width = 25)
next_button.grid(row = 3, column = 2)
def previous_img(self):
global file_name
#print(file_list)
index = file_list.index(file_name)
#print(index)
curr = file_list[index - 1]
#print(curr)
self.view_image(curr)
file_name = curr
def next_img(self):
global file_name
index = file_list.index(file_name)
#print(index)
if index == len(file_list) - 1:
index = -1
curr = file_list[index + 1]
#print(curr)
self.view_image(curr)
file_name = curr
else:
curr = file_list[index + 1]
#print(curr)
self.view_image(curr)
file_name = curr
if __name__ == "__main__":
root = Tk()
gallery = ImageViewer(root)
root.mainloop()
```
| 10,410
|
64,774,439
|
I'm trying to install some packages, and for some reason I can't for the life of me make it happen. My set up is that I'm using PyCharm on Windows with Conda. I'm having these problems with all the non-standard packages I'd like to install (things like numpy install just fine), but for reference I'll use [this](https://github.com/wmayner/pyphi) package. I'll detail all the methods I've tried:
1. Going `File>Settings>Python Interpreter> Install` and then searching for pyphy - it's not found.
2. Adding the URL in the link above as a repository in PyCharm, and searching again for packages; again not found.
3. Downloading the .tar.gz from the above GitHub, and adding the folder containing that as a repository in PyCharm, then searching for `pyphi` -- again, nothing found.
4. From the terminal in PyCharm, `pip install pyphi`, which gives `ERROR: Could not find a version that satisfies the requirement pyphy (from versions: none)`
5. From the terminal in PyCharm, `conda install -c wmayner pyphi`, which gives a very long error report EDIT: the long error message is [1](https://i.imgur.com/0JpWdRy.png), [2](https://i.imgur.com/ZCMckBv.png), [3](https://i.imgur.com/nOL9EYe.png), [4](https://i.imgur.com/W0QvuaA.png), [5](https://i.imgur.com/P4CTwmY.png). EDIT 2: The error message is now included at the end of this post, as text.
6. With the `.tar.gz` file in the working directory, trying `conda install -c local pyphi-1.2.0.tar.gz`, which gives
```
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- pyphi-1.2.0.tar.gz
```
7. Putting the `.tar.gz` file in the directory that the conda virtual environment is installed in, then directing the terminal there, and trying `python pip install pyphi-1.2.0.tar.gz`.
What am I doing wrong?
ERROR MESSAGE FROM APPROACH 5:
```
(MyPy) C:\Users\Dan Goldwater\Dropbox\Nottingham\GPT NNs\MyPy>conda install -c wmayner pyphi
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: -
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
Examining @/win-64::__cuda==11.0=0: 33%|█████████████████████████████████████████████████████▎ | 1/3 [00:00<00:00, 3.42it/s]/
failed
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "C:\Anaconda\lib\site-packages\conda\cli\install.py", line 265, in install
should_retry_solve=(_should_retry_unfrozen or repodata_fn != repodata_fns[-1]),
File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 117, in solve_for_transaction
should_retry_solve)
File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 158, in solve_for_diff
force_remove, should_retry_solve)
File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 275, in solve_final_state
ssc = self._add_specs(ssc)
File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 696, in _add_specs
raise UnsatisfiableError({})
conda.exceptions.UnsatisfiableError:
Did not find conflicting dependencies. If you would like to know which
packages conflict ensure that you have enabled unsatisfiable hints.
conda config --set unsatisfiable_hints True
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Anaconda\lib\site-packages\conda\exceptions.py", line 1079, in __call__
return func(*args, **kwargs)
File "C:\Anaconda\lib\site-packages\conda\cli\main.py", line 84, in _main
exit_code = do_call(args, p)
File "C:\Anaconda\lib\site-packages\conda\cli\conda_argparse.py", line 82, in do_call
return getattr(module, func_name)(args, parser)
File "C:\Anaconda\lib\site-packages\conda\cli\main_install.py", line 20, in execute
install(args, parser, 'install')
File "C:\Anaconda\lib\site-packages\conda\cli\install.py", line 299, in install
should_retry_solve=(repodata_fn != repodata_fns[-1]),
File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 117, in solve_for_transaction
should_retry_solve)
File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 158, in solve_for_diff
force_remove, should_retry_solve)
File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 275, in solve_final_state
ssc = self._add_specs(ssc)
File "C:\Anaconda\lib\site-packages\conda\core\solve.py", line 694, in _add_specs
ssc.r.find_conflicts(spec_set)
File "C:\Anaconda\lib\site-packages\conda\resolve.py", line 347, in find_conflicts
bad_deps = self.build_conflict_map(specs, specs_to_add, history_specs)
File "C:\Anaconda\lib\site-packages\conda\resolve.py", line 507, in build_conflict_map
root, search_node, dep_graph, num_occurances)
File "C:\Anaconda\lib\site-packages\conda\resolve.py", line 369, in breadth_first_search_for_dep_graph
last_spec = MatchSpec.union((path[-1], target_paths[-1][-1]))[0]
File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 481, in union
return cls.merge(match_specs, union=True)
File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 475, in merge
reduce(lambda x, y: x._merge(y, union), group) if len(group) > 1 else group[0]
File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 475, in <lambda>
reduce(lambda x, y: x._merge(y, union), group) if len(group) > 1 else group[0]
File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 502, in _merge
final = this_component.union(that_component)
File "C:\Anaconda\lib\site-packages\conda\models\match_spec.py", line 764, in union
return '|'.join(options)
TypeError: sequence item 0: expected str instance, Channel found
`$ C:\Anaconda\Scripts\conda-script.py install -c wmayner pyphi`
environment variables:
CIO_TEST=<not set>
CONDA_DEFAULT_ENV=MyPy
CONDA_EXE=C:\Anaconda\condabin\..\Scripts\conda.exe
CONDA_EXES="C:\Anaconda\condabin\..\Scripts\conda.exe"
CONDA_PREFIX=C:\Anaconda\envs\MyPy
CONDA_PROMPT_MODIFIER=(MyPy)
CONDA_ROOT=C:\Anaconda
CONDA_SHLVL=1
CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
HOMEPATH=\Users\Dan Goldwater
MOZ_PLUGIN_PATH=C:\Program Files (x86)\Foxit Software\Foxit Reader\plugins\
NVTOOLSEXT_PATH=C:\Program Files\NVIDIA Corporation\NvToolsExt\
PATH=C:\Anaconda;C:\Anaconda\Library\mingw-w64\bin;C:\Anaconda\Library\usr\
bin;C:\Anaconda\Library\bin;C:\Anaconda\Scripts;C:\Anaconda\bin;C:\Ana
conda\envs\MyPy;C:\Anaconda\envs\MyPy\Library\mingw-w64\bin;C:\Anacond
a\envs\MyPy\Library\usr\bin;C:\Anaconda\envs\MyPy\Library\bin;C:\Anaco
nda\envs\MyPy\Scripts;C:\Anaconda\envs\MyPy\bin;C:\Anaconda\condabin;C
:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin;C:\Program
Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\libnvvp;.;C:\Program
Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\WI
NDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32
\WindowsPowerShell\v1.0;C:\Program Files (x86)\Intel\Intel(R)
Management Engine Components\DAL;C:\Program Files\Intel\Intel(R)
Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R)
Management Engine Components\IPT;C:\Program Files\Intel\Intel(R)
Management Engine Components\IPT;C:\Program Files\MiKTeX
2.9\miktex\bin\x64;C:\Program
Files\MATLAB\R2017a\runtime\win64;C:\Program
Files\MATLAB\R2017a\bin;C:\Program Files\Calibre2;C:\Program
Files\PuTTY;C:\Program Files (x86)\OpenSSH\bin;C:\Program
Files\Git\cmd;C:\Program
Files\Golem;C:\WINDOWS\System32\OpenSSH;C:\Program Files\Microsoft VS
Code\bin;C:\Program Files (x86)\Wolfram
Research\WolframScript;C:\Program Files\NVIDIA Corporation\Nsight
Compute 2020.1.2;C:\Program Files (x86)\NVIDIA
Corporation\PhysX\Common;C:\Program Files\NVIDIA Corporation\NVIDIA Nv
DLISR;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDO
WS\System32\WindowsPowerShell\v1.0;C:\WINDOWS\System32\OpenSSH;C:\Prog
ram Files\dotnet;C:\Users\Dan Goldwater\AppData\Local\Programs\Python\
Python38-32\Scripts;C:\Users\Dan
Goldwater\AppData\Local\Programs\Python\Python38-32;C:\Users\Dan
Goldwater\AppData\Local\Microsoft\WindowsApps;.;C:\Program
Files\Microsoft VS Code\bin;C:\Program Files\Docker
Toolbox;C:\texlive\2018\bin\win32;C:\Users\Dan
Goldwater\AppData\Local\Pandoc;C:\Program Files\JetBrains\PyCharm
Community Edition 2020.1.2\bin;.
PSMODULEPATH=C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\
REQUESTS_CA_BUNDLE=<not set>
SSL_CERT_FILE=<not set>
VBOX_MSI_INSTALL_PATH=C:\Program Files\Oracle\VirtualBox\
active environment : MyPy
active env location : C:\Anaconda\envs\MyPy
shell level : 1
user config file : C:\Users\Dan Goldwater\.condarc
populated config files : C:\Users\Dan Goldwater\.condarc
conda version : 4.8.2
conda-build version : 3.18.11
python version : 3.7.6.final.0
virtual packages : __cuda=11.0
base environment : C:\Anaconda (writable)
channel URLs : https://conda.anaconda.org/wmayner/win-64
https://conda.anaconda.org/wmayner/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : C:\Anaconda\pkgs
C:\Users\Dan Goldwater\.conda\pkgs
C:\Users\Dan Goldwater\AppData\Local\conda\conda\pkgs
envs directories : C:\Anaconda\envs
C:\Users\Dan Goldwater\.conda\envs
C:\Users\Dan Goldwater\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.8.2 requests/2.22.0 CPython/3.7.6 Windows/10 Windows/10.0.19041
administrator : False
netrc file : None
offline mode : False
An unexpected error has occurred. Conda has prepared the above report.
```
|
2020/11/10
|
[
"https://Stackoverflow.com/questions/64774439",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5559681/"
] |
You should do something like [installing a package from git](https://stackoverflow.com/questions/20101834/pip-install-from-git-repo-branch), but for conda (replace pip with conda, and provide a valid URL). This is not a PyPI package, so it is not known to pip by default.
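For example, a sketch using pip inside the active conda environment (the repo URL comes from the question; the branch name is an assumption):

```
pip install git+https://github.com/wmayner/pyphi.git@master
```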
|
In response to @buran's comments above, I updated conda using
`conda update --name base conda`
Then used the recommended install for Windows, with
```
conda install -c wmayner pyphi
```
which now gives the error:
```
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- pyphi -> python[version='>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0']
Your python: python=3.8
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
```
So at least it's clear why it won't install, and I can create another environment with another version of Python to house it.
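For example, a sketch of that workaround (the environment name is arbitrary):

```
conda create -n pyphi-env python=3.7
conda activate pyphi-env
conda install -c wmayner pyphi
```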
Kudos to @buran!
| 10,411
|
41,850,809
|
```
import datetime
from nltk_contrib import timex
now = datetime.date.today()
basedate = timex.Date(now.year, now.month, now.day)
print timex.ground(timex.tag("Hai i would like to go to mumbai 22nd of next month"), basedate)
print str(datetime.date.day)
```
when i am trying to run the above code i am getting the following error
```
File "/usr/local/lib/python2.7/dist-packages/nltk_contrib/timex.py", line 250, in ground
elif re.match(r'last ' + month, timex, re.IGNORECASE):
UnboundLocalError: local variable 'month' referenced before assignment
```
What should I do to rectify this error?
|
2017/01/25
|
[
"https://Stackoverflow.com/questions/41850809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7334656/"
] |
The `timex` module has a bug where a global variable is referenced without assignment in the `ground` function.
To fix the bug, add the following code, which starts at line 171:

```
def ground(tagged_text, base_date):
    # Find all identified timex and put them into a list
    timex_regex = re.compile(r'<TIMEX2>.*?</TIMEX2>', re.DOTALL)
    timex_found = timex_regex.findall(tagged_text)
    timex_found = map(lambda timex: re.sub(r'</?TIMEX2.*?>', '', timex), \
        timex_found)

    # Calculate the new date accordingly
    for timex in timex_found:
        global month  # <--- here is the global reference assignment
```
|
The solution above, adding `month` as a global variable, causes other problems when timex is called multiple times in a row, because variables are not reset unless you import again. This happened for me in a deployed environment on AWS Lambda.
A solution that isn't super pretty but will not cause problems is to simply set the `month` value again in the `ground` function:
```
def ground(tagged_text, base_date):
# Find all identified timex and put them into a list
timex_regex = re.compile(r'<TIMEX2>.*?</TIMEX2>', re.DOTALL)
timex_found = timex_regex.findall(tagged_text)
timex_found = map(lambda timex:re.sub(r'</?TIMEX2.*?>', '', timex), \
timex_found)
# Calculate the new date accordingly
for timex in timex_found:
month = "(january|february|march|april|may|june|july|august|september| \
october|november|december)" # <--- reset month to the value it is set to upon import
```
| 10,412
|
8,399,341
|
I can't find any tutorial for jQuery + web.py.
So I've got a basic question on the POST method.
I've got this jQuery script:
```
<script>
jQuery('#continue').click(function() {
var command = jQuery('#continue').attr('value');
jQuery.ajax({
type: "POST",
data: {signal : command},
});
});
</script>
```
A simple form:
```
<form>
<button type="submit">Cancel</button>
<button type="submit" id="continue" value="next">Continue</button>
</form>
```
and python script:
```
def POST (self):
s = signal
print s
return
```
I expect to see the string "next" in the console, but it doesn't happen.
I have a strong feeling that the selectors somehow do not work. Is there any way to check that?
|
2011/12/06
|
[
"https://Stackoverflow.com/questions/8399341",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1073589/"
] |
You need to use `web.input` in web.py to access POST variables.
look at the docs: <http://webpy.org/docs/0.3/api> (search for "function input")
```
def POST(self):
s = web.input().signal
print s
return
```
|
```
<script>
jQuery('#continue').click(function() {
var command = jQuery('#continue').attr('value');
jQuery.ajax({
type: "POST",
data: {signal : command},
url: "add the url here"
});
});
</script>
```
**Add the URL of the server.**
| 10,413
|
42,237,103
|
I'm writing a Python program to study HTML source code used in different countries. I'm testing in a UNIX shell. The code I have so far works fine, except that I'm getting [HTTP Error 403: Forbidden](https://i.stack.imgur.com/9gzqG.png). Through testing it line by line, I know it has something to do with line 27 (`url3response = urllib2.urlopen(url3)` and `url3Content = url3response.read()`).
Every other URL response works fine except this one. Any ideas???
Here is the text file I'm reading from (top5\_US.txt):
```
http://www.caltech.edu
http://www.stanford.edu
http://www.harvard.edu
http://www.mit.edu
http://www.princeton.edu
```
And here is my code:
```
import urllib2
#Open desired text file (In this case, "top5_US.txt)
text_file = open('top5_US.txt', 'r')
#Read each line of the text file
firstLine = text_file.readline().strip()
secondLine = text_file.readline().strip()
thirdLine = text_file.readline().strip()
fourthLine = text_file.readline().strip()
fifthLine = text_file.readline().strip()
#Turn each line into a URL variable
url1 = firstLine
url2 = secondLine
url3 = thirdLine
url4 = fourthLine
url5 = fifthLine
#Read URL 1, get content , and store it in a variable.
url1response = urllib2.urlopen(url1)
url1Content =url1response.read()
#Read URL 2, get content , and store it in a variable.
url2response = urllib2.urlopen(url2)
url2Content =url2response.read()
#Read URL 3, get content , and store it in a variable.
url3response = urllib2.urlopen(url3)
url3Content =url3response.read()
#Read URL 4, get content , and store it in a variable.
url4response = urllib2.urlopen(url4)
url4Content =url4response.read()
#Read URL 5, get content , and store it in a variable.
url5response = urllib2.urlopen(url5)
url5Content =url5response.read()
text_file.close()
```
|
2017/02/14
|
[
"https://Stackoverflow.com/questions/42237103",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6115937/"
] |
With jQuery you need to also provide the other transformation:
```
//Image Category size change effect
$('.cat-wrap div').hover(function() {
$(this).css('width', '30%');
$(this).children().css('opacity', '1');
$(this).siblings().css("width", "16.5%");
}, function() {
$(this).css('width', '19.2%');
$(this).children().css('opacity', '0');
$(this).siblings().css("width", "19.2%");
});
```
|
For it to revert back, you need to add the handler for the mouseout event. This is simply passing a second callback argument to the `hover()` method.
```
$('.cat-wrap div').hover(function() {
$(this).css('width', '30%');
$(this).children().css('opacity','1');
$(this).siblings().css( "width", "16.5%");
}, function(){
// mouse out function
});
```
| 10,414
|
44,181,879
|
I believe that I have installed virtualenvwrapper incorrectly (the perils of following different tutorials for python setup).
I would like to remove the extension completely from my Mac OSX system but there seems to be no documentation on how to do this.
Does anyone know how to completely reverse the installation? It's wreaking havoc with my attempts to compile Python scripts.
|
2017/05/25
|
[
"https://Stackoverflow.com/questions/44181879",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7442842/"
] |
```
pip uninstall virtualenvwrapper
```
Or
```
sudo pip uninstall virtualenvwrapper
```
worked for me.
|
On Windows, this works great:
```
pip uninstall virtualenvwrapper-win
```
| 10,417
|
38,129,357
|
In the Python difflib library, is the SequenceMatcher class behaving unexpectedly, or am I misreading what the expected behavior is?
Why does the isjunk argument seem to not make any difference in this case?
```
difflib.SequenceMatcher(None, "AA", "A A").ratio() returns 0.8
difflib.SequenceMatcher(lambda x: x in ' ', "AA", "A A").ratio() returns 0.8
```
My understanding is that if space is omitted, the ratio should be 1.
|
2016/06/30
|
[
"https://Stackoverflow.com/questions/38129357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5839052/"
] |
This is happening because the `ratio` function uses total sequences' length while calculating the ratio, **but it doesn't filter elements using `isjunk`**. So, as long as the number of matches in the matching blocks results in the same value (with and without `isjunk`), the ratio measure will be the same.
I assume that sequences are not filtered by `isjunk` because of performance reasons.
```python
def ratio(self):
"""Return a measure of the sequences' similarity (float in [0,1]).
Where T is the total number of elements in both sequences, and
M is the number of matches, this is 2.0*M / T.
"""
matches = sum(triple[-1] for triple in self.get_matching_blocks())
return _calculate_ratio(matches, len(self.a) + len(self.b))
```
`self.a` and `self.b` are the strings (sequences) passed to the SequenceMatcher object ("AA" and "A A" in your example). The `isjunk` function `lambda x: x in ' '` is only used to determine the matching blocks. Your example is quite simple, so the resulting ratio and matching blocks are the same for both calls.
```python
difflib.SequenceMatcher(None, "AA", "A A").get_matching_blocks()
[Match(a=0, b=0, size=1), Match(a=1, b=2, size=1), Match(a=2, b=3, size=0)]
difflib.SequenceMatcher(lambda x: x == ' ', "AA", "A A").get_matching_blocks()
[Match(a=0, b=0, size=1), Match(a=1, b=2, size=1), Match(a=2, b=3, size=0)]
```
*Same matching blocks, the ratio is*: `M = 2, T = 6 => ratio = 2.0 * 2 / 6`
**Now consider the following example**:
```python
difflib.SequenceMatcher(None, "AA ", "A A").get_matching_blocks()
[Match(a=1, b=0, size=2), Match(a=3, b=3, size=0)]
difflib.SequenceMatcher(lambda x: x == ' ', "AA ", "A A").get_matching_blocks()
[Match(a=0, b=0, size=1), Match(a=1, b=2, size=1), Match(a=3, b=3, size=0)]
```
*Now matching blocks are different, but the ratio will be the same because the number of matches is still equal*:
*When `isjunk` is None*: `M = 2, T = 6 => ratio = 2.0 * 2 / 6`
*When `isjunk` is* `lambda x: x == ' '`: `M = 1 + 1, T = 6 => ratio = 2.0 * 2 / 6`
**Finally, a different number of matches:**
```python
difflib.SequenceMatcher(None, "AA ", "A A ").get_matching_blocks()
[Match(a=1, b=0, size=2), Match(a=3, b=4, size=0)]
difflib.SequenceMatcher(lambda x: x == ' ', "AA ", "A A ").get_matching_blocks()
[Match(a=0, b=0, size=1), Match(a=1, b=2, size=2), Match(a=3, b=4, size=0)]
```
*The number of matches is different*
*When `isjunk` is None*: `M = 2, T = 7 => ratio = 2.0 * 2 / 7`
*When `isjunk` is* `lambda x: x == ' '`: `M = 1 + 2, T = 6 => ratio = 2.0 * 3 / 7`
|
You can remove the junk characters from the strings before comparing them:
```
def withoutJunk(input, chars):
return input.translate(str.maketrans('', '', chars))
a = withoutJunk('AA', ' ')
b = withoutJunk('A A', ' ')
difflib.SequenceMatcher(None, a, b).ratio()
# -> 1.0
```
| 10,418
|
58,674,723
|
I have a new MacBook with fresh installs of everything which I upgraded to macOS Catalina. I installed homebrew and then pyenv, and installed Python 3.8.0 using pyenv. All these things seemed to work properly.
However, neither `pyenv local` nor `pyenv global` seem to take effect. Here are all the details of what I'm seeing:
```
thewizard@Special-MacBook-Pro ~ % pyenv versions
system
* 3.8.0 (set by /Usersthewizard/.python-version)
thewizard@Special-MacBook-Pro ~ % python --version
Python 2.7.16
thewizard@Special-MacBook-Pro ~ % pyenv global 3.8.0
thewizard@Special-MacBook-Pro ~ % python --version
Python 2.7.16
thewizard@Special-MacBook-Pro ~ % pyenv local 3.8.0
thewizard@Special-MacBook-Pro ~ % python --version
Python 2.7.16
thewizard@Special-MacBook-Pro ~ % echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/thewizard/.pyenv/bin
thewizard@Special-MacBook-Pro ~ % cat ~/.zshenv
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
if command -v pyenv 1>/dev/null 2>&1; then
eval "$(pyenv init -)"
fi
```
BTW there is no `/bin` in my `.pyenv`; I only added those lines per some other instructions, but I'm planning to remove them because I think they are wrong:
```
thewizard@Special-MacBook-Pro ~ % ls -al ~/.pyenv
total 8
drwxr-xr-x 5 thewizard staff 160 Nov 2 15:03 .
drwxr-xr-x+ 22 thewizard staff 704 Nov 2 15:36 ..
drwxr-xr-x 22 thewizard staff 704 Nov 2 15:03 shims
-rw-r--r-- 1 thewizard staff 6 Nov 2 15:36 version
drwxr-xr-x 3 thewizard staff 96 Nov 2 15:01 versions
```
It's worth noting that Catalina moved to zsh from bash, not sure if that's relevant here.
|
2019/11/02
|
[
"https://Stackoverflow.com/questions/58674723",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/783314/"
] |
I added the following to my ~/.zprofile and got it working.
```
export PYENV_ROOT="$HOME/.pyenv/versions/3.7.3"
export PATH="$PYENV_ROOT/bin:$PATH"
```
|
Catalina (and OS X in general) uses `/etc/zprofile` to set the `$PATH` in advance of what you're specifying within the local dotfiles.
It uses the `path_helper` utility to set the `$PATH`, and I suspect this is overriding the shim injection in your local dotfiles. You can comment out the following lines in `/etc/zprofile`; note that this will get overridden in subsequent OS updates.
```
# if [ -x /usr/libexec/path_helper ]; then
# eval `/usr/libexec/path_helper -s`
# fi
```
Alternately, and less intrusively, you can unset the `GLOBAL_RCS` option (add `unsetopt GLOBAL_RCS`) in your personal `.zshenv` file, which will suppress sourcing all of the system default RC files for zsh and allow the pyenv shims to operate as intended.
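For example, a sketch of a complete `.zshenv` combining that option with the pyenv setup from the question:

```
unsetopt GLOBAL_RCS

export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
if command -v pyenv 1>/dev/null 2>&1; then
  eval "$(pyenv init -)"
fi
```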
| 10,419
|
54,569,512
|
I need to parse a JSON file of about 200MB and, at the end, write the data from the file into a sqlite3 database. I have working Python code, but it takes around 9 minutes to complete the task.
```
@transaction.atomic
def create_database():
with open('file.json') as f:
data = json.load(f)
cve_items = data['CVE_Items']
for i in range(len(cve_items)):
database_object = Data()
for vendor_data in cve_items[i]['cve']['affects']['vendor']['vendor_data']:
database_object.vendor_name = vendor_data['vendor_name']
for description_data in cve_items[i]['cve']['description']['description_data']:
database_object.description = description_data['value']
for product_data in vendor_data['product']['product_data']:
database_object.product_name = product_data['product_name']
database_object.save()
for version_data in product_data['version']['version_data']:
if version_data['version_value'] != '-':
database_object.versions_set.create(version=version_data['version_value'])
```
Is it possible to speed up the process?
|
2019/02/07
|
[
"https://Stackoverflow.com/questions/54569512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9517224/"
] |
```
'ODataManifestModel>EntitySetForBoolean>booleanProperty'
```
A few things:
* your screenshot is probably wrong because you always need the entitySet name, which can be found in the `Entity Sets` "folder", not under `Entity Type`. Although your name looks correct.
* you have to bind one element of the entitySet (array) to the `mode` property, specifying it with its defined key in SEGW -> your entity type needs at least one key field. You **cannot** access oData entitySet elements in an ODataModel with an index
* you need an absolute path if you are referencing the entitySet, meaning after `model>` it must start with `/`. Alternatively, in your controller's init method, after metadata has loaded, bind one element to the whole view to use relative binding in the view (not starting with `/`):

```
var that = this;
this.getOwnerComponent().getModel().metadataLoaded().then(function() {
    that.getView().bindElement({path: "/EntitySetForBoolean('1234')"});
});
```
* the path within the structure uses `/` instead of `>`
Absolute Binding:
```
"ODataManifestModel>/EntitySetForBoolean('1234')/booleanProperty"
```
Or if the element is bound to the view or a parent container object in the view, you can use a relative path:
```
"ODataManifestModel>booleanProperty"
```
|
The **mode** property from ListBase can have one of the following values (**None, SingleSelect, MultiSelect, Delete**) and it is applied to all the list elements.
| 10,426
|
56,793,083
|
I am getting an error and I'm not sure what is causing it.
The error is:
```
Parts[n] = PN
IndexError: list assignment index out of range
```
The code I'm using is below. I'm pretty new to Python and tried to look up similar problems but didn't find anything exactly like this. Any help would be appreciated.
```
import pandas as pd
df = pd.read_excel(r'C:\Users\md77879\Desktop\Test.xls')
Parts = list()
Prices = list()
print("\nEnter 'exit' to end")
PN = input('Enter PN: ')
Parts.append(PN)
Number = (df['Part Number'] == PN)
print(df[Number][['Part Number', 'Price']])
i, n = 0, 0
while PN != ('exit'):
n = n + 1
PN = input(' ')
Number = df['Part Number'] == PN
print(df[Number][['Part Number', 'Price']])
Parts[n] = PN
for i in range(0, n):
print(Parts[i])
```
|
2019/06/27
|
[
"https://Stackoverflow.com/questions/56793083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10560733/"
] |
If the chunks you need to upper are separated with `-` or `.` you may use
```
"Filename to UPPER_SNAKE_CASE": {
"prefix": "usc_",
"body": [
"${TM_FILENAME/\\.component\\.html$|(^|[-.])([^-.]+)/${1:+_}${2:/upcase}/g}"
],
"description": "Convert filename to UPPER_SNAKE_CASE dropping .component.html at the end"
}
```
You may check the [regex workings here](https://regex101.com/r/L4Jsj7/1).
* `\.component\.html$` - matches `.component.html` at the end of the string
* `|` - or
* `(^|[-.])` capture start of string or `-` / `.` into Group 1
* `([^-.]+)` capture any 1+ chars other than `-` and `.` into Group 2.
The `${1:+_}${2:/upcase}` replacement means:
* `${1:+` - if Group 1 is not empty,
* `_` - replace with `_`
* `}` - end of the first group handling
* `${2:/upcase}` - put the uppered Group 2 value back.
|
Here is a pretty simple alternation regex:
```
"upcaseSnake": {
"prefix": "rf1",
"body": [
"${TM_FILENAME_BASE/(\\..*)|(-)|(.)/${2:+_}${3:/upcase}/g}",
"${TM_FILENAME/(\\..*)|(-)|(.)/${2:+_}${3:/upcase}/g}"
],
"description": "upcase and snake the filename"
},
```
Either version works.
`(\\..*)|(-)|(.)` alternation of three capture groups is conceptually simple. The order of the groups is **important**, and it is also what makes the regex so simple.
`(\\..*)` everything after and including the first dot `.` in the filename goes into group 1 which will not be used in the transform.
`(-)` group 2, if there is a group 2, replace it with an underscore `${2:+_}`.
`(.)` group 3, all other characters go into group 3 which will be upcased `${3:/upcase}`.
See [regex101 demo](http://some-fancy-ui.html).
| 10,428
|
74,165,004
|
I have a 2D list, implemented as follows, that shows the number of times each student topped an exam:
```
list = main_record
['student1',1]
['student2',1]
['student2',2]
['student1',5]
['student3',3]
```
I have another list of unique students, as follows:
```
list = students_enrolled
['student1','student2','student3']
```
From these I want to produce a student ranking based on their distinctions, as follows:
```
list = student_ranking
['student1','student3','student2']
```
What built-in functions could be useful here? I could not come up with a proper search query for this. In other words, I need the Python equivalent of the following queries:
```
select max(main_record[1]) where name = student1 >>> result = 5
select max(main_record[1]) where name = student2 >>> result = 2
select max(main_record[1]) where name = student3 >>> result = 3
```
|
2022/10/22
|
[
"https://Stackoverflow.com/questions/74165004",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1028289/"
] |
You can define a `dict` keyed by student name (`studentX`), save the max value for each student key, then sort `students_enrolled` by the max value of each key.
```
from collections import defaultdict
main_record = [['student1',1], ['student2',1], ['student2',2], ['student1',5], ['student3',3]]
students_enrolled = ['student1','student2','student3']
# define a dict defaulting to negative infinity; update it with the max in each iteration
tmp_dct = defaultdict(lambda: float('-inf'))
for lst in main_record:
k, v = lst
tmp_dct[k] = max(tmp_dct[k], v)
print(tmp_dct)
students_enrolled.sort(key = lambda x: tmp_dct[x], reverse=True)
print(students_enrolled)
```
Output:
```
# tmp_dct =>
defaultdict(<function <lambda> at 0x7fd81044b1f0>,
{'student1': 5, 'student2': 2, 'student3': 3})
# students_enrolled after sorting
['student1', 'student3', 'student2']
```
|
If it is a 2D list it should look like this: `l = [["student1", 2], ["student2", 3], ["student3", 4]]`. To print the rows ordered from the highest numeric value in the 2nd column to the lowest, you can use a loop like this:
```
l = [["student1", 2], ["student2", 3], ["student3", 4]]

# collect the values from the 2nd column
numbers = [student[1] for student in l]

# repeatedly find the current maximum, print the matching row,
# then mark it as used so the indices stay aligned with l
for _ in range(len(numbers)):
    highest = max(numbers)
    student_index = numbers.index(highest)
    print(l[student_index], highest)
    numbers[student_index] = float('-inf')
```
| 10,429
|
51,903,617
|
How can a check for list membership be inverted based on a boolean variable?
I am looking for a way to simplify the following code:
```python
# variables: `is_allowed:boolean`, `action:string` and `allowed_actions:list of strings`
if is_allowed:
    if action not in allowed_actions:
        print(f'{action} must be allowed!')
else:
    if action in allowed_actions:
        print(f'{action} must NOT be allowed!')
```
I feel there must be a way to avoid doing the check twice, once for `in` and another time for `not in`, but can't figure out a less verbose way.
|
2018/08/17
|
[
"https://Stackoverflow.com/questions/51903617",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/191246/"
] |
Compare the result of the test to `is_allowed`. Then use `is_allowed` to put together the correct error message.
```
if (action in allowed_actions) != is_allowed:
print(action, "must" if is_allowed else "must NOT", "be allowed!")
```
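For instance, a quick check of the combined condition (the sample actions and list are my own, not from the question):

```
allowed_actions = ['read', 'write']

for is_allowed in (True, False):
    for action in ('read', 'delete'):
        if (action in allowed_actions) != is_allowed:
            print(action, "must" if is_allowed else "must NOT", "be allowed!")
```

This prints "delete must be allowed!" and "read must NOT be allowed!", the two mismatch cases.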
|
Given the way your specific code is structured, I think the only improvement you can make is to just store `action in allowed_actions` in a variable:
```
present = action in allowed_actions
if is_allowed:
    if not present:
        print(f'{action} must be allowed!')
else:
    if present:
        print(f'{action} must NOT be allowed!')
```
| 10,430
|
18,033,700
|
I generate a lot of messages to send to clients (push notifications using Pushwoosh). I collect messages for a period of time and then send them as a batch. I need advice on what is best to use for the queue: a Python list (I am afraid of storing a lot of messages in memory and losing them if the server restarts), Redis, or MySQL?
|
2013/08/03
|
[
"https://Stackoverflow.com/questions/18033700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1800871/"
] |
Redis can persist the data it holds in memory to your hard drive, so you don't have to worry about losing information. You can also add a key expiration to the data saved in memory, so you can remove old messages.
Have a look here :
<http://redis.io/topics/persistence>
And here :
<http://redis.io/commands/expire>
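For example, a minimal sketch with the `redis-py` client (the queue key, host, and TTL are assumptions, not from the answer):

```
import redis

r = redis.Redis(host='localhost', port=6379)

r.rpush('pending_messages', 'message payload')  # enqueue a message
r.expire('pending_messages', 3600)              # drop the whole queue after an hour
batch = r.lrange('pending_messages', 0, -1)     # read the collected bucket
```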
|
I don't know which of MySQL or Redis is best for processing message queues, since I don't know Redis.
But I can tell you that MySQL is *not* designed for that purpose.
You should take a look at dedicated tools such as [RabbitMQ](http://www.rabbitmq.com/) that will probably serve your purpose better. Here is a basic tutorial (incl. Python): <http://www.rabbitmq.com/tutorials/tutorial-one-python.html>
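For illustration, a minimal publisher sketch with the `pika` client, following the linked tutorial (host and queue name are assumptions):

```
import pika

# open a connection and declare the queue (created if it does not exist)
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='push_messages')

# publish one message to the default exchange, routed to the queue
channel.basic_publish(exchange='', routing_key='push_messages', body='hello')
connection.close()
```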
| 10,431
|
34,939,762
|
I have a function called `prepared_db` in submodule `db.db_1`:
```
from spam import db
submodule_name = "db_1"
func_name = "prepare_db"
func = ...
```
How can I get the function by the submodule name and function name in the context above?
**UPDATE**:
In response to @histrio's answer: I can verify that his code works for the `os` module, but it does not work in this case. To recreate the example:
```
$ mkdir -p spam/db
$ cat > spam/db/db_1.py
def prepare_db():
print('prepare_db func')
$ touch spam/db/__init__.py
$ PYTHONPATH=$PYTHONPATH:spam
```
Now, you can do the import normally:
```
>>> from spam.db.db_1 import prepare_db
>>> prepare_db()
prepare_db func
```
But if I do this dynamically, I get this error:
```
>>> getattr(getattr(db, submodule_name), func_name)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-9-1b6aa1216551> in <module>()
----> 1 getattr(getattr(db, submodule_name), func_name)
AttributeError: module 'spam.db.db_1' has no attribute 'prepared_db'
```
|
2016/01/22
|
[
"https://Stackoverflow.com/questions/34939762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/489564/"
] |
It's simple: you can treat a module as an object.
```
import os
submodule_name = "path"
func_name = "exists"
submodule = getattr(os, submodule_name)
function = getattr(submodule, func_name)
function('/home') # True
```
or [just for fun, don't do that]
```
fn = reduce(getattr, ('sub1', 'sub2', 'sub3', 'fn'), module)  # on Python 3, reduce lives in functools
```
**UPDATE**
```
import importlib
submodule = importlib.import_module('.'+submodule_name, module.__name__)
function = getattr(submodule, func_name)
```
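Applied to the layout from the question, a sketch:

```
import importlib
from spam import db

submodule_name = "db_1"
func_name = "prepare_db"

submodule = importlib.import_module('.' + submodule_name, db.__name__)
func = getattr(submodule, func_name)
func()  # prints 'prepare_db func'
```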
|
I think I figured it out: you need to add the function to the `__all__` variable in the `__init__.py` file. That would be something along these lines:
```
from .db_1 import prepare_db
__all__ = ['prepare_db']
```
After that, it should work just fine.
| 10,432
|
54,833,296
|
I am using Spyder with Python 2.7 and I changed the syntax coloring to the Spyder black theme, but I really want the whole application to look fully black, so WITHOUT the white windows.
Can someone explain how to change this?
[Python example of how i want it to be](https://i.stack.imgur.com/MO17A.png)
|
2019/02/22
|
[
"https://Stackoverflow.com/questions/54833296",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11103122/"
] |
If you can't wait for Spyder 4 - this is what does it for **Spyder 3.3.2 in Windows, using Anaconda3**.
1. Exit Spyder
2. Open command prompt or Anaconda prompt
3. Run `pip install qdarkstyle` and exit the prompt
4. Go to ...\Anaconda3\Lib\site-packages\spyder\utils and open
*qhelpers.py*
5. Add `import qdarkstyle` to the top of that file
6. Replace the `qapplication` function definition with below code (only two added lines)
7. Save and close the file
8. Open Spyder and enjoy your dark theme
```
def qapplication(translate=True, test_time=3):
"""
Return QApplication instance
Creates it if it doesn't already exist
test_time: Time to maintain open the application when testing. It's given
in seconds
"""
if running_in_mac_app():
SpyderApplication = MacApplication
else:
SpyderApplication = QApplication
app = SpyderApplication.instance()
if app is None:
# Set Application name for Gnome 3
# https://groups.google.com/forum/#!topic/pyside/24qxvwfrRDs
app = SpyderApplication(['Spyder'])
# Set application name for KDE (See issue 2207)
app.setApplicationName('Spyder')
app.setStyleSheet(qdarkstyle.load_stylesheet_pyqt5())
if translate:
install_translator(app)
test_ci = os.environ.get('TEST_CI_WIDGETS', None)
if test_ci is not None:
timer_shutdown = QTimer(app)
timer_shutdown.timeout.connect(app.quit)
timer_shutdown.start(test_time*1000)
return app
```
|
(*Spyder maintainer here*) This functionality will be available in Spyder **4**, to be released later in 2019. For now there's nothing you can do to get what you want with Spyder's current version, sorry.
| 10,433
|
51,839,083
|
I am using `macOS`.
I used the following command:
```
gcloud beta functions deploy start --runtime python37
--trigger-http --memory 2048MB --timeout 540s
```
But while deploying `google cloud functions` I got this error:
```
(gcloud.beta.functions.deploy) OperationError: code=3, message=Build failed: USER ERROR:
pip_download_wheels had stderr output:
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-wheel-8b_hn74q/PyWavelets/
error: pip_download_wheels returned code: 1
```
I added `scikit-image` to my `requirements.txt`, which was not there before. The code deployed successfully when `scikit-image` was not in `requirements.txt`.
Any ideas?
|
2018/08/14
|
[
"https://Stackoverflow.com/questions/51839083",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10223950/"
] |
Do you have a Pipfile in your directory? I was able to replicate this same error when I tried to deploy a GCF containing a Pipfile but no accompanying Pipfile.lock. To fix, either remove Pipfile and just include requirements.txt, or generate Pipfile.lock:
`$ pipenv install` without the --skip-lock flag
While the current documentation does not state this, I have found that you can deploy to GCF without a requirements.txt file at all. Just include Pipfile and Pipfile.lock.
To recap, acceptable dependency files to deploy a beta GCF:
* requirements.txt
* Pipfile + Pipfile.lock
* requirements.txt + Pipfile + Pipfile.lock
|
According to the Google Cloud Functions [documentation](https://cloud.google.com/appengine/docs/standard/python3/runtime#dependencies), it only supports installing dependencies from a `requirements.txt` file.
Also, the files `Pipfile`/`Pipfile.lock` must not be present in the root directory.
| 10,438
|
1,611,625
|
I only just noticed this feature today!
```
s={1,2,3} #Set initialisation
t={x for x in s if x!=3} #Set comprehension
t=={1,2}
```
What version is it in? I also noticed that it has set comprehension. Was this added in the same version?
**Resources**
* [Sets in Python 2.4 Docs](http://docs.python.org/library/stdtypes.html#set-types-set-frozenset)
* [What's new in Python 3.0](http://docs.python.org/dev/3.0/whatsnew/3.0.html)
|
2009/10/23
|
[
"https://Stackoverflow.com/questions/1611625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165495/"
] |
The `sets` module was added in Python 2.3, but the built-in set type was added to the language in 2.4, with essentially the same interface. (As of 2.6, the `sets` module has been deprecated.)
So you can use sets as far back as 2.3, as long as you
```
import sets
```
But you will get a `DeprecationWarning` if you try that import in 2.6
Set comprehensions, and the set literal syntax -- that is, being able to say
```
a = { 1, 2, 3 }
```
are new in Python 3.0. To be very specific, both set literals and set comprehensions were present in Python 3.0a1, the first public release of Python 3.0, from 2007. [Python 3 release notes](http://www.python.org/download/releases/3.0/NEWS.txt)
The comprehensions and literals were later implemented in 2.7. [3.x Python features incorporated into 2.7](https://docs.python.org/3/whatsnew/2.7.html#python-3-1-features)
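For example, both forms from the question run unchanged on 2.7 and 3.x:

```
s = {1, 2, 3}                 # set literal
t = {x for x in s if x != 3}  # set comprehension
assert t == {1, 2}
```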
|
Well, testing it:
```
>>> s = {1, 2, 3}
File "<stdin>", line 1
s = {1, 2, 3}
^
SyntaxError: invalid syntax
```
I'm running 2.5, so I would assume that this syntax was added sometime in 2.6 (Update: actually added in 3.0, but Ian beat me). I should probably be upgrading sometime soon. I'm glad they added a syntax for it - I'm rather tired of `set([1, 2, 3])`.
Set comprehensions have probably been around since sets were first created. The Python documentation site isn't very clear, but I wouldn't imagine sets would be too useful without iterators.
| 10,439
|
12,698,646
|
How can I create a list of methods in Python to be applied to an object?
Given some arbitrary class:
```
class someClass:
def __init__(self, s):
self.size = s
def shrink(self):
self.size -= 1
def grow(self):
self.size += 1
def invert(self):
self.size = -self.size
```
I want to be able to write an iterable object like this list:
```
instructions = [shrink, grow, shrink, shrink, grow, invert]
```
To be run through a for-loop later:
```
elephant = someClass(90)
sizeList = []
for ins in instructions:
elephant.ins()
sizeList.append(elephant.size)
```
I've done something similar with functions before. Not being able to do this with methods would require me to rewrite an intimidating amount of code...
|
2012/10/02
|
[
"https://Stackoverflow.com/questions/12698646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1175133/"
] |
You could create a list of the method *names* then use [`getattr()`](https://docs.python.org/3/library/functions.html#getattr) to access the methods:
```
instructions = ["shrink", "grow", "shrink"]
for i in instructions:
getattr(elephant, i)()
```
|
Possibly naïvely:
```
for ins in instructions:
getattr(elephant, ins)()
```
Gotchas include that `ins` must be a string and that it's probably wise to validate both that `ins` is what you really want to call and that `getattr(elephant, ins)` is a callable.
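A minimal sketch of that validation (reusing the names from the question):

```
for ins in instructions:
    method = getattr(elephant, ins, None)
    if callable(method):
        method()
    else:
        raise AttributeError("unknown instruction: %s" % ins)
```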
| 10,441
|
33,225,888
|
I'm new to Python. I had a difficult time understanding why the output would be 2 for the problem below. Can someone explain it to me in very basic terms?
```
a = [1, 2, 3, 4, 0]
b = [3, 0, 2, 4, 1]
c = [3, 2, 4, 1, 5]
print c[a[a[4]]]
```
|
2015/10/20
|
[
"https://Stackoverflow.com/questions/33225888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5464930/"
] |
Maybe it helps to understand it when split into 3 steps:
```
inner_one = a[4] # a[4] = 0
inner_two = a[inner_one] # a[0] = 1
result = c[inner_two] # c[1] = 2
```
|
Python lists are 0-indexed. So your first call, `a[4]`, returns `0`, then `a[0]` returns `1`, and finally `c[1]` returns `2`.
| 10,446
|
38,025,218
|
I am running Python 2.7 and Django 1.8.
[I have this exact issue.](https://stackoverflow.com/questions/24983777/cant-add-a-new-field-in-migration-column-does-not-exist)
The answer, posted as a comment is: `What I did is completely remake the db, erase the migration history and folders.`
I am very uncertain about deleting and creating the database.
I am running a PostgreSQL database. If I drop/delete the database and then run the migrations, will the database be rebuilt from the migrations? I don't want to delete the database and then be stuck in a worse situation.
|
2016/06/25
|
[
"https://Stackoverflow.com/questions/38025218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1261774/"
] |
You can create a separate interface for your static needs:
```
interface IPerson {
name: string;
getName(): string;
}
class Person implements IPerson {
public name: string;
constructor(name: string) {
this.name = name;
}
public getName() {
return this.name;
}
public static create(name: string) { // method added for demonstration purposes
return new Person(name);
}
}
```
Static interface:
```
interface IPersonStatic {
new(name: string); // constructor
create(name: string): IPerson; // static method
}
let p: IPersonStatic = Person;
```
Also, you can use `typeof` to determine the type:
```
let p2: typeof Person = Person; // same as 'let p2 = Person;'
let p3: typeof Person = AnotherPerson;
```
|
I added `IPersonConstructor` to your example. The rest is identical; just included for clarity.
`new (arg1: typeOfArg1, ...): TypeOfInstance;` describes a class, since it can be invoked with `new` and will return an instance of the class.
```
interface IPerson {
name: string;
getName(): string;
}
class Person implements IPerson {
public name: string;
constructor(name: string) {
this.name = name;
}
public getName() {
return this.name;
}
}
interface IPersonConstructor {
// When invoked with `new` and passed a string, returns an instance of `Person`
new (name: string): Person;
prototype: Person;
}
```
| 10,447
|
17,134,897
|
I'm using the popular Python script (<http://code.google.com/p/edim-mobile/source/browse/trunk/ios/IncrementalLocalization/localize.py>) to localize my storyboards in iOS 5.
I only made some changes in the storyboard and got this error:
>
> Please file a bug at <http://bugreport.apple.com> with this warning
> message and any useful information you can provide.
> com.apple.ibtool.errors
> description The strings
> file "MainStoryboard.strings" could not be applied.
> recovery-suggestion Missing object referenced
> from oid-keyed mapping. Object ID ztT-UO-myJ
> underlying-errors
>
> description
> The strings file "MainStoryboard.strings" could not be applied.
> recovery-suggestion
> Missing object referenced from oid-keyed mapping. Object ID ztT-UO-myJ
> Traceback (most recent call last): File "./localize.py", line 105, in
> raise Exception("\n" + errorDescription) Exception:
>
>
> **\* Error while creating the 'Project/en.lproj/MainStoryboard.storyboard' file\***
>
>
> **\* Error while creating the 'Project/es.lproj/MainStoryboard.storyboard' file\***
>
>
> **\* Error while creating the 'Project/fr.lproj/MainStoryboard.storyboard' file\***
>
>
> **\* Error while creating the 'Project/it.lproj/MainStoryboard.storyboard' file\***
>
>
> Showing first 200 notices only Command /bin/sh failed with exit code 1
>
>
>
I can't find a solution.
Maik
|
2013/06/16
|
[
"https://Stackoverflow.com/questions/17134897",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
The second query is an example of an implicit cross join (aka Cartesian join) - every record from users will be joined to every record from roles with `id=5`, since all these combinations will have the `where` clause evaluate as true.
|
A join will be required to have correct data returned
```
SELECT u.username
FROM users u
JOIN roles r
ON u.roleid = r.id
WHERE r.id = 5;
```
I think it is better to use an explicit join with `ON` to determine which columns carry the relationship, rather than expressing the relationship in the `WHERE` clause.
| 10,449
|
63,336,300
|
My Tensorflow model makes heavy use of data preprocessing that should be done on the CPU to leave the GPU open for training.
```
top - 09:57:54 up 16:23, 1 user, load average: 3,67, 1,57, 0,67
Tasks: 400 total, 1 running, 399 sleeping, 0 stopped, 0 zombie
%Cpu(s): 19,1 us, 2,8 sy, 0,0 ni, 78,1 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
MiB Mem : 32049,7 total, 314,6 free, 5162,9 used, 26572,2 buff/cache
MiB Swap: 6779,0 total, 6556,0 free, 223,0 used. 25716,1 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17604 joro 20 0 22,1g 2,3g 704896 S 331,2 7,2 4:39.33 python
```
This is what top shows me. I would like to make this Python process use at least 90% of the available CPU across all cores. How can this be achieved?
GPU utilization is better, at around 90%, even though I don't know why it is not at 100%.
```
Mon Aug 10 10:00:13 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:01:00.0 On | N/A |
| 35% 41C P2 90W / 260W | 10515MiB / 11016MiB | 11% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1128 G /usr/lib/xorg/Xorg 102MiB |
| 0 1648 G /usr/lib/xorg/Xorg 380MiB |
| 0 1848 G /usr/bin/gnome-shell 279MiB |
| 0 10633 G ...uest-channel-token=1206236727 266MiB |
| 0 13794 G /usr/lib/firefox/firefox 6MiB |
| 0 17604 C python 9457MiB |
+-----------------------------------------------------------------------------+
```
All I found was a solution for TensorFlow 1.x:
```
sess = tf.Session(config=tf.ConfigProto(
intra_op_parallelism_threads=NUM_THREADS))
```
I have an Intel 9900k and a RTX 2080 Ti and use Ubuntu 20.04
Edit: When I add the following code at the top, it uses one core at 100%:
```
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)
```
But increasing this number to 16 again only utilizes all cores at ~30%.
|
2020/08/10
|
[
"https://Stackoverflow.com/questions/63336300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9280994/"
] |
Just setting `set_intra_op_parallelism_threads` and `set_inter_op_parallelism_threads` wasn't working for me. In case someone else is in the same place: after a lot of struggle with the same issue, the piece of code below worked for me to limit the CPU usage of TensorFlow to below 500%:
```
import os
import tensorflow as tf
num_threads = 5
os.environ["OMP_NUM_THREADS"] = "5"
os.environ["TF_NUM_INTRAOP_THREADS"] = "5"
os.environ["TF_NUM_INTEROP_THREADS"] = "5"
tf.config.threading.set_inter_op_parallelism_threads(
num_threads
)
tf.config.threading.set_intra_op_parallelism_threads(
num_threads
)
tf.config.set_soft_device_placement(True)
```
|
There can be many causes for this; I solved it for me the following way:
Set
```
tf.config.threading.set_intra_op_parallelism_threads(<your_physical_core_count>)
tf.config.threading.set_inter_op_parallelism_threads(<your_physical_core_count>)
```
both to your *physical* core count. You do not want hyperthreading for highly vectorized operations, as you cannot benefit from parallelized operations when there aren't any execution gaps.
>
> "With a high level of vectorization, the number of execution gaps is
> very small and there is possibly insufficient opportunity to make up
> any penalty due to increased contention in HT."
>
>
>
From: Saini et al., NASA Advanced Supercomputing Division, 2011: *The Impact of Hyper-Threading on Processor Resource Utilization in Production Applications*.
EDIT: I am no longer sure whether one of the two has to be 1, but one definitely needs to be set to the physical core count.
| 10,452
|
1,544,535
|
I have to synchronize two different LDAP servers with different schemas. To make my life easier I'm searching for an object mapper for python like SQLobject/SQLAlchemy, but for LDAP.
I found the following packages via pypi and google that might provide such functionality:
* **pumpkin 0.1.0-beta1**:
Pumpkin is LDAP ORM (without R) for python.
* **afpy.ldap 0.3**:
This module provide an easy way to deal with ldap stuff in python.
* **bda.ldap 1.3.1**:
LDAP convenience library.
* **Python LDAP Object Mapper**:
Provides an ORM-like (Django, Storm, SQLAlchemy, et al.) layer for LDAP in Python.
* **ldapdict 1.4**:
Python package for connecting to LDAP, returning results as dictionary like classes. Results are cached.
Which of these packages could you recommend? Or should I better use something different?
|
2009/10/09
|
[
"https://Stackoverflow.com/questions/1544535",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/179014/"
] |
If I were you I would use either python-ldap or ldaptor. python-ldap is a wrapper around OpenLDAP, so you may have problems using it on Windows unless you are able to build it from source.
ldaptor is pure Python, so you avoid that problem. Also, there is a very well written and graphical description of ldaptor on its website, so you should be able to tell whether or not it will do the job you need just by reading through this web page:
<http://eagain.net/talks/ldaptor/>
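For orientation, a minimal `python-ldap` sketch (server URI, bind DN, and base DN are placeholders):

```
import ldap

# connect and authenticate
conn = ldap.initialize('ldap://ldap.example.com')
conn.simple_bind_s('cn=admin,dc=example,dc=com', 'secret')

# search the whole subtree for person entries
results = conn.search_s('dc=example,dc=com', ldap.SCOPE_SUBTREE, '(objectClass=person)')
conn.unbind_s()
```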
|
Giving links to the projects in question would help a lot.
Being the developer of [Python LDAP Object Mapper](https://launchpad.net/python-ldap-om), I can tell that it is quite dead at the moment. If you (or anybody else) is up for taking it over, you're welcome :)
| 10,453
|
45,321,425
|
I am using the Python transitions module ([link](http://github.com/pytransitions/transitions)) to create a finite state machine.
How do I run this finite state machine forever?
Basically what I want is an FSM model that can stay "idle" when there is no more event to trigger.
For example, in example.py:
```
states = ['A', 'B', 'C']
transitions = [A -> B -> C]  # pseudocode

if __name__ == '__main__':
    machine = Machine(states=states, transitions=transitions, initial='A')
    print(machine.state)
```
If I run this machine in a Python program, it will go into state 'A', print the current state, and then the program will exit immediately.
So my question is how can I keep it running forever when there is nothing to trigger a transition? Shall I implement a loop or is there any other way to do so?
|
2017/07/26
|
[
"https://Stackoverflow.com/questions/45321425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8368519/"
] |
@BaptisteM, instead of using AsyncTask for this you can use a Handler and a Runnable, like this:
```
private void startColorTimer() {
mColorHandler.postDelayed(ColorRunnable, interval);
timerOn = true;
}
private Handler mColorHandler = new Handler();
private Runnable ColorRunnable = new Runnable() {
@Override
public void run() {
coloredTextView.setTextColor(getRanColor());
if (timerOn) {
mColorHandler.postDelayed(this, interval);
}
}
};
```
|
The question had been asked before; I started to answer it, but a moderator closed it, so I put my example here.
This example randomly changes the color of a TextView every 1000 ms.
The interval can be changed.
```
package com.your.package;
import android.graphics.Color;
import android.os.AsyncTask;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.TextView;
import java.util.Random;
public class MainActivity extends AppCompatActivity {
TextView coloredTextView;
long interval = 1000; //in ms
boolean timerOn = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
coloredTextView = (TextView) findViewById(R.id.colorTextView);
startColorTimer();
}
private void startColorTimer(){
new ColorTimer().execute();
timerOn=true;
}
private void stopColorTimer(){
timerOn=false;
}
private class ColorTimer extends AsyncTask<Void, Void, Void> {
public ColorTimer() {
timerOn=true;
}
@Override
protected Void doInBackground(Void... params) {
try {
Thread.sleep(interval);
} catch (InterruptedException e) {
e.printStackTrace();
}
return null;
}
@Override
protected void onPostExecute(Void aVoid) {
super.onPostExecute(aVoid);
coloredTextView.setTextColor(getRanColor());
if(timerOn) new ColorTimer().execute();
}
private int getRanColor() {
Random rand = new Random();
return Color.argb(255, rand.nextInt(256), rand.nextInt(256), rand.nextInt(256));
}
@Override
protected void finalize() throws Throwable {
super.finalize();
timerOn=false;
}
}
}
```
activity_main.xml:
```
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context="fr.rifft.bacasable.MainActivity">
<TextView
android:id="@+id/colorTextView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginLeft="8dp"
android:layout_marginTop="8dp"
android:text="Colored Text View"
android:textSize="30sp"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintTop_toTopOf="parent" />
</android.support.constraint.ConstraintLayout>
```
I hope it could help.
| 10,455
|
44,753,426
|
I have a .csv file with two columns of interest, 'latitude' and 'longitude', with populated values.
I would like to return the [latitude, longitude] pair of each row from the two columns as lists:
[10.222, 20.445]
[10.2555, 20.119] ... and so forth for each row of my csv...
The problem with

```
import pandas

colnames = ['latitude', 'longitude']
data = pandas.read_csv('path_name.csv', names=colnames)

latitude = data.latitude.tolist()
longitude = data.longitude.tolist()
```

is that it creates two separate lists, one of all the latitude values and one of all the longitude values.
How can I create a list for each latitude, longitude pair in Python?
|
2017/06/26
|
[
"https://Stackoverflow.com/questions/44753426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8124002/"
] |
The most basic way:
```
import csv

with open('filename.txt', 'r') as csvfile:
    spamreader = csv.reader(csvfile)
    for row in spamreader:
        print(row)
```
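If you want the [latitude, longitude] pairs rather than raw rows, a sketch along the same lines (assuming the file has exactly those two columns, as in the question):

```
import csv

pairs = []
with open('filename.txt', 'r') as csvfile:
    reader = csv.DictReader(csvfile, fieldnames=['latitude', 'longitude'])
    for row in reader:
        # Convert the string fields to floats and keep them as a pair.
        pairs.append([float(row['latitude']), float(row['longitude'])])
print(pairs)
```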
|
From what I understand, you want many two-element lists, each holding a lat and a long, but what you are getting is two lists: one of all the latitudes and one of all the longitudes. You can loop over the length of those lists, take the element at that position from each, and collect the pairs in their own list:
```
pairs = []
for x in range(0, len(latitude)):
    pairs.append([latitude[x], longitude[x]])
print(pairs)
```
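Alternatively, `zip` builds the pairs without an explicit index, and pandas can do it in one step; a minimal sketch assuming the same `latitude`/`longitude` lists and `data` frame as above:

```
# Pair corresponding entries from the two lists.
pairs = [[lat, lon] for lat, lon in zip(latitude, longitude)]

# Or straight from the DataFrame, skipping the intermediate lists:
pairs = data[['latitude', 'longitude']].values.tolist()
print(pairs)
```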
| 10,464
|
65,511,540
|
I am new to this world and I am starting to take my first steps in python. I am trying to extract in a single list the indices of certain values of my list (those that are greater than 10). When using append I get the following error and I don't understand where the error is.
```py
dbs = [0, 1, 0, 0, 0, 0, 1, 0, 1, 23, 1, 0, 1, 1, 0, 0, 0,
1, 1, 0, 20, 1, 1, 15, 1, 0, 0, 0, 40, 15, 0, 0]
exceed2 = []
for d, i in enumerate(dbs):
if i > 10:
exceed2.append= (d,i)
print(exceed2)
```
|
2020/12/30
|
[
"https://Stackoverflow.com/questions/65511540",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14914879/"
] |
You probably mean to write
```
for i, d in enumerate(dbs):
if d > 10:
exceed2.append(i)
print(exceed2)
```
A few fixes here:
* `exceed2.append = (d, i)` doesn't call anything; it tries to *assign* to the method, which fails. You should just write `exceed2.append(...)`.
* `enumerate()` yields (index, value) pairs. You should be checking `d > 10`, since that's the value (per your description of the task), and putting only `i`, the index, into the `exceed2` list. (I switched the `i` and `d` variables so that `i` stands for "index", which is more conventional.)
* `append(d, i)` wouldn't work anyway, as `append` takes one argument. If you want to append both the value *and* the index, use `.append((d, i))`, which appends a tuple of both to the list.
* you probably don't want to print `exceed2` every time the condition is hit, when you could just print it once at the end.
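For completeness (this is not from the original answer, just an equivalent one-liner), the whole loop collapses into a list comprehension:

```
dbs = [0, 1, 23, 0, 20]  # shortened sample data
exceed2 = [i for i, d in enumerate(dbs) if d > 10]
print(exceed2)  # prints [2, 4]
```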
|
Welcome to this world :D
The problem is that `.append` is a method you have to call with parentheses (not assign to with `=`), and it takes exactly one argument, which it appends to the very end of the list.
Try this instead (note that `d` here is the index and `i` is the value, so it is `d` that belongs in the result):
```
dbs = [0, 1, 0, 0, 0, 0, 1, 0, 1, 23, 1, 0, 1, 1, 0, 0, 0,
       1, 1, 0, 20, 1, 1, 15, 1, 0, 0, 0, 40, 15, 0, 0]
exceed2 = []
for d, i in enumerate(dbs):
    if i > 10:
        exceed2.append(d)  # d is the index of the offending value
print(exceed2)
```
| 10,465
|
14,320,758
|
Is there a way to run an arbitrary method whenever a new thread is started in Python (2.7)? My goal is to use [setproctitle](http://pypi.python.org/pypi/setproctitle) to set an appropriate title for each spawned thread.
|
2013/01/14
|
[
"https://Stackoverflow.com/questions/14320758",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/186971/"
] |
Just inherit from threading.Thread and use this class instead of Thread - as long as you have control over the Threads.
```
import threading

class MyThread(threading.Thread):
    def __init__(self, callable_, *args, **kwargs):
        super(MyThread, self).__init__(*args, **kwargs)
        self._call_on_start = callable_

    def start(self):
        # Note: this runs in the *spawning* thread, just before the
        # new thread actually starts.
        self._call_on_start()
        super(MyThread, self).start()
```
Just as a coarse sketch.
**Edit**
From the comments the need arose to kind of "inject" the new behavior into an existing application. Let's assume you have a script that itself imports other libraries. These libraries use the `threading` module:
Before importing any other modules, first execute this:
```
import threading
import time

class MyThread(threading.Thread):
    _call_on_start = None

    def __init__(self, callable_=None, *args, **kwargs):
        super(MyThread, self).__init__(*args, **kwargs)
        if callable_ is not None:
            self._call_on_start = callable_

    def start(self):
        if self._call_on_start is not None:
            self._call_on_start()  # actually call the hook
        super(MyThread, self).start()

def set_thread_title():
    print "Set thread title"

# staticmethod() stops Python 2 from wrapping the function as an
# unbound method when it is looked up through an instance.
MyThread._call_on_start = staticmethod(set_thread_title)
threading.Thread = MyThread

def calculate_something():
    time.sleep(5)
    print sum(range(1000))

t = threading.Thread(target=calculate_something)
t.start()
time.sleep(2)
t.join()
```
As subsequent imports only do a lookup in `sys.modules`, all other libraries using this should be using our new class now. I regard this as a hack, and it might have strange side effects. But at least it's worth a try.
Please note: `threading.Thread` is not the only way to implement concurrency in python, there are other options like `multiprocessing` etc.. These will be unaffected here.
**Edit 2**
I just took a look at the library you cited and it's all about processes, not Threads! So, just do a `:%s/threading/multiprocessing/g` and `:%s/Thread/Process/g` and things should be fine.
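Concretely, after that substitution a per-process title might be set like this; a rough sketch assuming the setproctitle package is installed, with an arbitrary title string:

```
import multiprocessing
from setproctitle import setproctitle

class MyProcess(multiprocessing.Process):
    def run(self):
        # run() executes in the child process, so the title applies there.
        setproctitle("myapp: %s" % self.name)
        super(MyProcess, self).run()

def work():
    print "working"

if __name__ == '__main__':
    p = MyProcess(target=work, name="worker-1")
    p.start()
    p.join()
```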
|
Use `threading.setprofile`. You give it your callback and Python will invoke it every time a new thread starts.
Documentation [here](https://docs.python.org/2/library/threading.html).
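As a minimal sketch (the callback name is arbitrary): the hook is installed in every thread started through the `threading` module, so it can do its one-time work on the first profiling event and then remove itself:

```
import sys
import threading

def on_thread_start(frame, event, arg):
    # Runs inside the new thread on its first profiling event.
    print "starting:", threading.current_thread().name
    sys.setprofile(None)  # drop the hook for this thread afterwards

threading.setprofile(on_thread_start)

def work():
    pass

t = threading.Thread(target=work, name="worker-1")
t.start()
t.join()
```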
| 10,466
|
40,879,394
|
I am new to Python and PyDev. I have the TensorFlow source and am able to run the example files using python3 /pathtoexamplefile.py. I want to step through the word2vec_basic.py code inside PyDev. The debugger keeps throwing:
File "/Users/me/workspace/tensorflow/tensorflow/python/__init__.py", line 45, in <module>
from tensorflow.python import pywrap_tensorflow
ImportError: cannot import name 'pywrap_tensorflow'
I think it has something to do with the working directory. I am able to run `python3 -c 'import tensorflow'` from my home directory. But once I enter /Users/me/workspace/tensorflow, the command throws the same error, referencing the same line 45.
Can someone help me through this part? Thank you.
[](https://i.stack.imgur.com/ii5yg.png)
|
2016/11/30
|
[
"https://Stackoverflow.com/questions/40879394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1058511/"
] |
```
SELECT StockNo
FROM sales
GROUP BY StockNo
HAVING SUM(CASE WHEN DATE_FORMAT(Date, '%Y-%m') = '2016-11' THEN 1 ELSE 0 END) > 0
```
If you also want to retrieve the full records for those matching stock numbers in the above query, you can just add a join:
```
SELECT s1.*
FROM sales s1
INNER JOIN
(
SELECT StockNo
FROM sales
GROUP BY StockNo
HAVING SUM(CASE WHEN DATE_FORMAT(Date, '%Y-%m') = '2016-11' THEN 1 ELSE 0 END) > 0
) s2
ON s1.StockNo = s2.StockNo
```
**Demo here:**
[SQLFiddle](http://sqlfiddle.com/#!9/8a539/3)
|
Thank you very much Tim for pointing me in the right direction. Your answer was close but it still only returned records from the current month and in the end I used the following query:
```
SELECT s1.* FROM `sales` s1
INNER JOIN
(
SELECT * FROM `sales` GROUP BY `StockNo` HAVING COUNT(`StockNo`) > 1
AND SUM(CASE WHEN DATE_FORMAT(`Date`, '%Y-%m')='2016-11' THEN 1 ELSE 0 END) > 0
) s2
ON s1.StockNo=s2.StockNo
```
This one had been eluding me for some time.
| 10,467
|