Dataset schema: qid (int64, 46k to 74.7M), question (string, length 54 to 37.8k), date (string, length 10), metadata (list, length 3), response_j (string, length 17 to 26k), response_k (string, length 26 to 26k).
19,512,457
What can I do to optimize this function and make it more Pythonic?

```
def flatten_rows_to_file(filename, rows):
    f = open(filename, 'a+')
    temp_ls = list()
    for i, row in enumerate(rows):
        temp_ls.append("%(id)s\t%(price)s\t%(site_id)s\t%(rating)s\t%(shop_id)s\n" % row)
        if i and i % 100000 == 0:
            f.writelines(temp_ls)
            temp_ls = []
    f.writelines(temp_ls)
    f.close()
```
2013/10/22
[ "https://Stackoverflow.com/questions/19512457", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2000477/" ]
A few things that come to mind immediately:

1. Use a `with` statement, rather than manually closing your file.
2. Pass a generator expression to `f.writelines` rather than building up a 100000 row list over and over (let the standard library handle how much, if any, it buffers the output).
3. Or, better yet, use the `csv` module to handle writing your tab-separated output.

Here's a quick stab at some improved code:

```
from csv import DictWriter

def flatten_rows_to_file(filename, rows):
    with open(filename, 'ab') as f:
        writer = DictWriter(f, ['id', 'price', 'site_id', 'rating', 'shop_id'],
                            delimiter='\t')
        writer.writerows(rows)
```

Note that if you're using Python 3, you need slightly different code for opening the file. Use mode `'a'` rather than `'ab'` and add the keyword argument `newline=""`. You didn't need the `+` in the mode you were using (you are only writing, not both writing and reading). If the values in your `rows` argument may have extra keys beyond the ones you were writing, you'll need to pass some extra arguments to the `DictWriter` constructor as well.
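For completeness, here is a sketch of the Python 3 variant described above. The sample row and output file are made up for illustration; `extrasaction='ignore'` and `lineterminator='\n'` are assumptions chosen to tolerate extra keys and to match the original function's `\n` line endings.

```python
import csv
import os
import tempfile

def flatten_rows_to_file(filename, rows):
    # Python 3: text mode 'a' plus newline="", as noted above.
    # extrasaction='ignore' drops any extra keys in the row dicts;
    # lineterminator='\n' matches the original function's output.
    fields = ['id', 'price', 'site_id', 'rating', 'shop_id']
    with open(filename, 'a', newline='') as f:
        writer = csv.DictWriter(f, fields, delimiter='\t',
                                extrasaction='ignore', lineterminator='\n')
        writer.writerows(rows)

# Illustrative usage with a throwaway file:
path = os.path.join(tempfile.mkdtemp(), 'out.tsv')
flatten_rows_to_file(path, [
    {'id': 1, 'price': '9.99', 'site_id': 2, 'rating': 5, 'shop_id': 7, 'extra': 'x'},
])
with open(path) as f:
    print(repr(f.read()))  # -> '1\t9.99\t2\t5\t7\n'
```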
It is generally a good idea to use the `with` statement to make sure the file is closed properly. Also, unless I'm mistaken, there should be no need to manually buffer the lines. You can just as well specify a buffer size when opening the file, determining [how often the file is flushed](https://stackoverflow.com/q/3167494/1639625).

```
def flatten_rows_to_file(filename, rows, buffsize=100000):
    with open(filename, 'a+', buffsize) as f:
        for row in rows:
            f.write("%(id)s\t%(price)s\t%(site_id)s\t%(rating)s\t%(shop_id)s\n" % row)
```
36,380,144
I'm new here and new to coding in Node.js. My question is about how to use the brackets in the documentation. I'm working on a Steam trade bot and I need to understand how to use options and callbacks. Let me give you an example: [![enter image description here](https://i.stack.imgur.com/VMV5q.png)](https://i.stack.imgur.com/VMV5q.png) For example, if I wanted to accept a trade offer, would it look like this?

```
makeOffer.partnerAccountId: 'mysteamid';
makeOffer(accessToken[, itemsFromThem]
```

I really can't understand it. I've never been a pro at any programming language; I've worked a little with Python, and it was more understandable than this. Please help; if I can understand a little, I can solve it. Thanks, and sorry for my bad English.
2016/04/03
[ "https://Stackoverflow.com/questions/36380144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The brackets are documentation notation indicating that those parameters are optional, and can be omitted from any given invocation. They are not indicative of syntax you should be using in your program. Both of these styles should work, given that documentation, the callback being optional.

```
makeOffer({ ... });

makeOffer({ ... }, function (...) { ... });
```

*Dots indicate more code - in this case an object definition, function parameters, and the function body.*

Some other examples of this type of documentation notation: MDN Array: [`concat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat), [`slice`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice), [`reduce`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce)
~~`makeOffer(accessToken[, itemsFromThem])` is not JavaScript syntax. It's just common notation that's used to indicate that the function can take any number of arguments, like so:~~

```
makeOffer(accessToken, something, somethingElse);
makeOffer(accessToken, something, secondThing, thirdThing);
makeOffer(accessToken);
```

Check the documentation for more details on how that library behaves.
36,380,144
I'm new here and new to coding in Node.js. My question is about how to use the brackets in the documentation. I'm working on a Steam trade bot and I need to understand how to use options and callbacks. Let me give you an example: [![enter image description here](https://i.stack.imgur.com/VMV5q.png)](https://i.stack.imgur.com/VMV5q.png) For example, if I wanted to accept a trade offer, would it look like this?

```
makeOffer.partnerAccountId: 'mysteamid';
makeOffer(accessToken[, itemsFromThem]
```

I really can't understand it. I've never been a pro at any programming language; I've worked a little with Python, and it was more understandable than this. Please help; if I can understand a little, I can solve it. Thanks, and sorry for my bad English.
2016/04/03
[ "https://Stackoverflow.com/questions/36380144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The brackets are documentation notation indicating that those parameters are optional, and can be omitted from any given invocation. They are not indicative of syntax you should be using in your program. Both of these styles should work, given that documentation, the callback being optional.

```
makeOffer({ ... });

makeOffer({ ... }, function (...) { ... });
```

*Dots indicate more code - in this case an object definition, function parameters, and the function body.*

Some other examples of this type of documentation notation: MDN Array: [`concat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat), [`slice`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice), [`reduce`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce)
I don't have the full code of what you've done, but I'll give the necessary part of it. Look here:

```
var SteamTradeOffers = require('steam-tradeoffers');
var offers = new SteamTradeOffers();

// look at the API: it needs 2 arguments
// 1. an offer object, consisting of params like this:
var offer = {
    partnerAccountId: 'steam id goes here',
    accessToken: 'access token goes here',
    itemsFromThem: [{
        appid: 440,
        contextid: 2,
        amount: 1,
        assetid: "1627590398"
    }],
    itemsFromMe: [{
        appid: 440,
        contextid: 2,
        amount: 1,
        assetid: "1627590399"
    }],
    message: "Hello! Checkout what I'm offering You ;)"
};

offers.makeOffer(offer, function(err, result) {
    // 2. a callback function that will handle the result of makeOffer
    console.log(result);
});
```
36,380,144
I'm new here and new to coding in Node.js. My question is about how to use the brackets in the documentation. I'm working on a Steam trade bot and I need to understand how to use options and callbacks. Let me give you an example: [![enter image description here](https://i.stack.imgur.com/VMV5q.png)](https://i.stack.imgur.com/VMV5q.png) For example, if I wanted to accept a trade offer, would it look like this?

```
makeOffer.partnerAccountId: 'mysteamid';
makeOffer(accessToken[, itemsFromThem]
```

I really can't understand it. I've never been a pro at any programming language; I've worked a little with Python, and it was more understandable than this. Please help; if I can understand a little, I can solve it. Thanks, and sorry for my bad English.
2016/04/03
[ "https://Stackoverflow.com/questions/36380144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
~~`makeOffer(accessToken[, itemsFromThem])` is not JavaScript syntax. It's just common notation that's used to indicate that the function can take any number of arguments, like so:~~

```
makeOffer(accessToken, something, somethingElse);
makeOffer(accessToken, something, secondThing, thirdThing);
makeOffer(accessToken);
```

Check the documentation for more details on how that library behaves.
I don't have the full code of what you've done, but I'll give the necessary part of it. Look here:

```
var SteamTradeOffers = require('steam-tradeoffers');
var offers = new SteamTradeOffers();

// look at the API: it needs 2 arguments
// 1. an offer object, consisting of params like this:
var offer = {
    partnerAccountId: 'steam id goes here',
    accessToken: 'access token goes here',
    itemsFromThem: [{
        appid: 440,
        contextid: 2,
        amount: 1,
        assetid: "1627590398"
    }],
    itemsFromMe: [{
        appid: 440,
        contextid: 2,
        amount: 1,
        assetid: "1627590399"
    }],
    message: "Hello! Checkout what I'm offering You ;)"
};

offers.makeOffer(offer, function(err, result) {
    // 2. a callback function that will handle the result of makeOffer
    console.log(result);
});
```
62,919,733
I have an ascii file containing 2 columns as following;

```
id value
1  15.1
1  12.1
1  13.5
2  12.4
2  12.5
3  10.1
3  10.2
3  10.5
4  15.1
4  11.2
4  11.5
4  11.7
5  12.5
5  12.2
```

I want to estimate the average value of column "value" for each id (i.e. group by id). Is it possible to do that in python using numpy or pandas?
2020/07/15
[ "https://Stackoverflow.com/questions/62919733", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5834711/" ]
If you don't know how to read the file, there are several methods as you can see [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html) that you could use, so you can try one of them, e.g. `pd.read_csv()`. Once you have read the file, you could try this using pandas functions such as `pd.DataFrame.groupby` and `pd.Series.mean()`:

```
df.groupby('id').mean()

# if df['id'] is the index, try this:
# df.reset_index().groupby('id').mean()
```

Output:

```
        value
id
1   13.566667
2   12.450000
3   10.266667
4   12.375000
5   12.350000
```
```
import pandas as pd

filename = "data.txt"
df = pd.read_fwf(filename)
df.groupby(['id']).mean()
```

Output

```
        value
id
1   13.566667
2   12.450000
3   10.266667
4   12.375000
5   12.350000
```
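The question asks about numpy as well as pandas, so here is a sketch of the same group-by mean with numpy alone. The arrays are hard-coded from the question's data for illustration; the trick of dividing a weighted `bincount` by a plain `bincount` over the `return_inverse` indices is one common idiom, not the only way.

```python
import numpy as np

# Data from the question, entered directly as arrays.
ids = np.array([1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5])
values = np.array([15.1, 12.1, 13.5, 12.4, 12.5, 10.1, 10.2,
                   10.5, 15.1, 11.2, 11.5, 11.7, 12.5, 12.2])

# Group mean = per-group sum / per-group count. np.unique gives the
# distinct ids plus, for each element, the index of its group; bincount
# over those indices computes sums (with weights) and counts (without).
uniq, inv = np.unique(ids, return_inverse=True)
means = np.bincount(inv, weights=values) / np.bincount(inv)
for i, m in zip(uniq, means):
    print(i, round(m, 6))
```

This reproduces the per-id means shown in the pandas output above.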
15,316,886
I'm new to python and would like to take this

```
K = [['d','o','o','r'],
     ['s','t','o','p']]
```

to print:

```
door, stop
```
2013/03/09
[ "https://Stackoverflow.com/questions/15316886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2152618/" ]
How about: `', '.join(''.join(x) for x in K)`
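For a runnable illustration of that one-liner, using the variable name from the question:

```python
K = [['d', 'o', 'o', 'r'],
     ['s', 't', 'o', 'p']]

# The inner join glues each character list into a word; the outer
# join separates the words with ", ".
result = ', '.join(''.join(x) for x in K)
print(result)  # -> door, stop
```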
Using `str.join()` and `map()`:

```
In [26]: K=[['d','o','o','r'], ['s','t','o','p']]

In [27]: ", ".join(map("".join, K))
Out[27]: 'door, stop'
```

> S.join(iterable) -> string
>
> Return a string which is the concatenation of the strings in the iterable. The separator between elements is S.
30,331,919
Can someone explain why the following occurs? My use case is that I have a python list whose elements are all numpy ndarray objects and I need to search through the list to find the index of a particular ndarray obj. Simplest Example:

```
>>> import numpy as np
>>> a,b = np.arange(0,5), np.arange(1,6)
>>> a
array([0, 1, 2, 3, 4])
>>> b
array([1, 2, 3, 4, 5])
>>> l = list()
>>> l.append(a)
>>> l.append(b)
>>> l.index(a)
0
>>> l.index(b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

Why can `l` find the index of `a`, but not `b`?
2015/05/19
[ "https://Stackoverflow.com/questions/30331919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2383529/" ]
Applying the idea in <https://stackoverflow.com/a/17703076/901925> (see the Related sidebar),

```
[np.array_equal(b, x) for x in l].index(True)
```

should be more reliable. It ensures a correct array-to-array comparison. Or use `[id(b) == id(x) for x in l].index(True)` if you want to ensure it compares identities.
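A quick runnable check of the first approach, using the arrays from the question:

```python
import numpy as np

a, b = np.arange(0, 5), np.arange(1, 6)
l = [a, b]

# Elementwise-safe search: build a list of booleans with
# np.array_equal, then find the position of the first True.
idx = [np.array_equal(b, x) for x in l].index(True)
print(idx)  # -> 1
```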
The idea is to convert the numpy arrays to lists and transform the problem into finding a list in another list:

```py
def find_array(list_of_numpy_array, target_numpy_array):
    # Convert every array to a plain list, then use list.index()
    out = [x.tolist() for x in list_of_numpy_array].index(target_numpy_array.tolist())
    return out
```
53,723,025
I have anaconda installed, and I use Anaconda Prompt to install python packages. But I'm unable to install RASA-NLU using conda prompt. Please let me know the command for the same. I have used the below command:

```
conda install rasa_nlu
```

Error:

```
PackagesNotFoundError: The following packages are not available from current channels:

  - rasa_nlu
```
2018/12/11
[ "https://Stackoverflow.com/questions/53723025", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7905329/" ]
Turns out the framework search path (under Project->Target->Build Settings) was indeed the culprit. Removing my custom overrides solved the issue. Interestingly, if I remember correctly, I had added them, because Xcode wasn't able to find my frameworks... See also <https://forums.developer.apple.com/message/328635#328779>
Set "Always Search User Paths" to NO and the problem will be solved.
68,299,665
I am new to using the webdataset library from pytorch. I have created .tar files of a sample dataset present locally in my system using webdataset.TarWriter(). The .tar file creation seems to be successful, as I could extract them separately on the Windows platform and verify the same dataset files. Now, I create `train_dataset = wds.Dataset(url)` where url is the local file path of the .tar files. After this, I perform the following operations:

```
train_loader = torch.utils.data.DataLoader(train_dataset, num_workers=0, batch_size=10)
sample = next(iter(train_loader))
print(sample)
```

It results in an error like this: [![enter image description here](https://i.stack.imgur.com/8uaz7.png)](https://i.stack.imgur.com/8uaz7.png) The same code works fine if I use a web url, for example `"http://storage.googleapis.com/nvdata-openimages/openimages-train-000000.tar"` mentioned in the webdataset documentation: `https://reposhub.com/python/deep-learning/tmbdev-webdataset.html` I couldn't understand the error so far. Any idea on how to solve this problem?
2021/07/08
[ "https://Stackoverflow.com/questions/68299665", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2562870/" ]
I have had the same error since yesterday, and I finally found the culprit. [WebDataset/tariterators.py](https://github.com/webdataset/webdataset/blob/master/webdataset/tariterators.py) makes use of [WebDataset/gopen.py](https://github.com/webdataset/webdataset/blob/master/webdataset/gopen.py). In gopen.py, `urllib.parse.urlparse` is called to parse the url to be opened; in your case the url is `D:/PhD/...`.

```
gopen_schemes = dict(
    __default__=gopen_error,
    pipe=gopen_pipe,
    http=gopen_curl,
    https=gopen_curl,
    sftp=gopen_curl,
    ftps=gopen_curl,
    scp=gopen_curl,
)


def gopen(url, mode="rb", bufsize=8192, **kw):
    """Open the URL.

    This uses the `gopen_schemes` dispatch table to dispatch based
    on scheme.

    Support for the following schemes is built-in: pipe, file,
    http, https, sftp, ftps, scp.

    When no scheme is given the url is treated as a file.

    You can use the OPEN_VERBOSE argument to get info about
    files being opened.

    :param url: the source URL
    :param mode: the mode ("rb", "r")
    :param bufsize: the buffer size
    """
    global fallback_gopen
    verbose = int(os.environ.get("GOPEN_VERBOSE", 0))
    if verbose:
        print("GOPEN", url, info, file=sys.stderr)
    assert mode in ["rb", "wb"], mode
    if url == "-":
        if mode == "rb":
            return sys.stdin.buffer
        elif mode == "wb":
            return sys.stdout.buffer
        else:
            raise ValueError(f"unknown mode {mode}")
    pr = urlparse(url)
    if pr.scheme == "":
        bufsize = int(os.environ.get("GOPEN_BUFFER", -1))
        return open(url, mode, buffering=bufsize)
    if pr.scheme == "file":
        bufsize = int(os.environ.get("GOPEN_BUFFER", -1))
        return open(pr.path, mode, buffering=bufsize)
    handler = gopen_schemes["__default__"]
    handler = gopen_schemes.get(pr.scheme, handler)
    return handler(url, mode, bufsize, **kw)
```

As you can see in the dictionary, the `__default__` function is `gopen_error`. This is the function returning the error you are seeing. `pr = urlparse(url)` on your url will generate a parse result where the scheme (`pr.scheme`) is `'d'`, because your disk is named D.
However, it should be `'file'` for the function to work as intended. Since it is not equal to `'file'` or any of the other schemes in the dictionary (http, https, sftp, etc.), the default function will be used, which returns the error. I circumvented this issue by adding `d=gopen_file` to the `gopen_schemes` dictionary. I hope this helps you for now. I will address this issue on the WebDataset GitHub page as well and keep this page updated if I get a more practical update. Good luck!
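The scheme mix-up is easy to reproduce with the standard library alone, no webdataset needed. This sketch just shows why a bare Windows path dispatches to the wrong handler (the path is the one from the other answer, used here for illustration):

```python
from urllib.parse import urlparse

# A bare Windows path: the drive letter parses as a URL scheme,
# so gopen's dispatch table falls through to the error handler.
print(urlparse("D:/Dataset/00000.tar").scheme)        # -> d

# Prefixing "file:" gives the dispatcher a scheme it recognizes.
print(urlparse("file:D:/Dataset/00000.tar").scheme)   # -> file
```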
Add "file:" in front of the local file path, like this:

```
from itertools import islice
import webdataset as wds
import os
import tqdm

path = "file:D:/Dataset/00000.tar"
dataset = wds.WebDataset(path)

for sample in islice(dataset, 0, 3):
    for key, value in sample.items():
        print(key, repr(value)[:50])
    print()
```
66,178,922
I am trying to run Django tests on GitLab CI but am getting this error. Last week it was working perfectly, but suddenly I am getting this error during the test run:

> django.db.utils.OperationalError: could not connect to server: Connection refused
> Is the server running on host "database" (172.19.0.3) and accepting
> TCP/IP connections on port 5432?

**My gitlab-ci file is like this**

```
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

test:
  stage: test
  image: tiangolo/docker-with-compose
  script:
    - docker-compose -f docker-compose.yml build
    - docker-compose run app python3 manage.py test
```

my docker-compose is like this:

```
version: '3'

volumes:
  postgresql_data:

services:
  database:
    image: postgres:12-alpine
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -e \"SHOW DATABASES;\""]
      interval: 5s
      timeout: 5s
      retries: 5
    ports:
      - "5432"
    restart: on-failure

  app:
    container_name: proj
    hostname: proj
    build:
      context: .
      dockerfile: Dockerfile
    image: sampleproject
    command: >
      bash -c "
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      gunicorn sampleproject.wsgi:application -c ./gunicorn.py
      "
    env_file: .env
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/app
    depends_on:
      - database
      - redis
```

So why is it refusing the connection? I don't have any idea, and it was working last week.
2021/02/12
[ "https://Stackoverflow.com/questions/66178922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15200334/" ]
Unsure if it would help in your case, but I was getting the same issue with docker-compose. What solved it for me was explicitly specifying the hostname for postgres.

```
services:
  database:
    image: postgres:12-alpine
    hostname: database
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    ...
```
Could you do a `docker container ls` and check whether the container name of the database is, in fact, "database"? You've skipped setting the `container_name` for that container, and it may be that docker isn't creating it with the default name of the service, i.e. "database", so the DNS isn't able to find it under that name in the network.
66,178,922
I am trying to run Django tests on GitLab CI but am getting this error. Last week it was working perfectly, but suddenly I am getting this error during the test run:

> django.db.utils.OperationalError: could not connect to server: Connection refused
> Is the server running on host "database" (172.19.0.3) and accepting
> TCP/IP connections on port 5432?

**My gitlab-ci file is like this**

```
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

test:
  stage: test
  image: tiangolo/docker-with-compose
  script:
    - docker-compose -f docker-compose.yml build
    - docker-compose run app python3 manage.py test
```

my docker-compose is like this:

```
version: '3'

volumes:
  postgresql_data:

services:
  database:
    image: postgres:12-alpine
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -e \"SHOW DATABASES;\""]
      interval: 5s
      timeout: 5s
      retries: 5
    ports:
      - "5432"
    restart: on-failure

  app:
    container_name: proj
    hostname: proj
    build:
      context: .
      dockerfile: Dockerfile
    image: sampleproject
    command: >
      bash -c "
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      gunicorn sampleproject.wsgi:application -c ./gunicorn.py
      "
    env_file: .env
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/app
    depends_on:
      - database
      - redis
```

So why is it refusing the connection? I don't have any idea, and it was working last week.
2021/02/12
[ "https://Stackoverflow.com/questions/66178922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15200334/" ]
Unsure if it would help in your case, but I was getting the same issue with docker-compose. What solved it for me was explicitly specifying the hostname for postgres.

```
services:
  database:
    image: postgres:12-alpine
    hostname: database
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    ...
```
Reboot the server. I encounter similar errors on Mac and Linux from time to time when I run more than one container that uses postgres.
44,130,318
env: linux + python3 + tornado

When I run client.py, it reports that the connection was refused, for several ports. Using 127.0.0.1:7233, the server does not give any response, but the client reports that the connection is refused. Who can tell me why?

server.py

```
# coding:utf8
import socket
import time
import threading


# accept conn
def get_hart(host, port):
    global clien_list
    print('begin get_hart')
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen(5)
    print(clien_list)
    while True:
        try:
            clien, address = s.accept()
            try:
                clien_data = clien.recv(1024).decode('utf8')
                if clien_data == str(0):
                    clien_id = clien_reg()
                    clien.send(str(clien_id))
                    print(clien_list)
                else:
                    clien_list[int(clien_data)]['time'] = time.time()
                    # print clien_data
            except:
                print('send fail!')
            clien.close()
        except:
            print("accept fail!")
            continue


# client reg
def clien_reg():
    global clien_list
    tim = str(time.time() / 100).split('.')
    id = int(tim[1])
    clien_list[id] = {"time": time.time(), "state": 0}
    return id


# client dict
def check_hart(clien_list, delay, lost_time):
    while True:
        for id in clien_list:
            if abs(clien_list[id]['time'] - time.time()) > lost_time:
                clien_list[id]['state'] = 0
                del clien_list[id]  # del offline client
                break  # err
            else:
                clien_list[id]['state'] = 1
        print(clien_list)
        time.sleep(delay)


if __name__ == '__main__':
    host = '127.0.0.1'
    port = 7233
    global clien_list  # Dict: client info
    clien_list = {}
    lost_time = 30  # timeout
    print('begin threading')
    try:
        threading.Thread(target=get_hart, args=(host, port,), name='getHart')
        threading.Thread(target=check_hart, args=(clien_list, 5, lost_time,))
    except Exception as e:
        print("thread error!" + e)
    while 1:
        pass
    print('e')
```

client.py

```
# coding:utf8
import socket
import time


# sent pkg
def send_hart(host, port, delay):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print(host, port)
    global clien_id
    try:
        s.connect((host, port))
        s.send(clien_id)
        if clien_id == 0:
            try:
                to_clien_id = s.recv(1024)
                clien_id = to_clien_id
            except:
                print('send fail!')
        print(to_clien_id)  # test id
        time.sleep(delay)
    except Exception as e:
        print('connect fail!:' + e)
        time.sleep(delay)


if __name__ == '__main__':
    host = '127.0.0.1'
    port = 7233
    global clien_id
    clien_id = 0  # client reg id
    while True:
        send_hart(host, port, 5)
```

Error

```
Traceback (most recent call last):
  File "/home/ubuntu/XPlan/socket_test/client.py", line 13, in send_hart
    s.connect((host, port))
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/XPlan/socket_test/client.py", line 34, in <module>
    send_hart(host, port, 5)
  File "/home/ubuntu/XPlan/socket_test/client.py", line 24, in send_hart
    print('connect fail!:'+e)
TypeError: Can't convert 'ConnectionRefusedError' object to str implicitly

Process finished with exit code 1
```
2017/05/23
[ "https://Stackoverflow.com/questions/44130318", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5472490/" ]
Are the `featured_value_id` array values unique inside the array? If not, does it make a difference if you give the planner a little hand by making them unique?

```
select distinct c.id
from dematerialized_products
cross join lateral (
    select distinct id
    from unnest(feature_value_ids) u (id)
) c
```
You didn't show execution plans, but obviously the time is spent sorting the values to eliminate doubles. If `EXPLAIN (ANALYZE)` shows that the sort is performed using temporary files, you can improve the performance by raising `work_mem` so that the sort can be performed in memory. You will still experience a performance hit with `DISTINCT`.
74,428,888
`polars.LazyFrame.var` will return the variance value for each column in a table, as below:

```
>>> df = pl.DataFrame({"a": [1, 2, 3, 4], "b": [1, 2, 1, 1], "c": [1, 1, 1, 1]}).lazy()
>>> df.collect()
shape: (4, 3)
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”
β”‚ a   ┆ b   ┆ c   β”‚
β”‚ --- ┆ --- ┆ --- β”‚
β”‚ i64 ┆ i64 ┆ i64 β”‚
β•žβ•β•β•β•β•β•ͺ═════β•ͺ═════║
β”‚ 1   ┆ 1   ┆ 1   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 2   ┆ 2   ┆ 1   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 3   ┆ 1   ┆ 1   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 4   ┆ 1   ┆ 1   β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
>>> df.var().collect()
shape: (1, 3)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”
β”‚ a        ┆ b    ┆ c   β”‚
β”‚ ---      ┆ ---  ┆ --- β”‚
β”‚ f64      ┆ f64  ┆ f64 β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═════║
β”‚ 1.666667 ┆ 0.25 ┆ 0.0 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
```

I wish to select columns with value > 0 from the LazyFrame but couldn't find the solution.
I can iterate over columns in a polars DataFrame and then filter columns by condition, as below:

```
>>> data.var()
shape: (1, 3)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”
β”‚ a        ┆ b    ┆ c   β”‚
β”‚ ---      ┆ ---  ┆ --- β”‚
β”‚ f64      ┆ f64  ┆ f64 β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═════║
β”‚ 1.666667 ┆ 0.25 ┆ 0.0 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
>>> cols = pl.select([s for s in data.var() if (s > 0).all()]).columns
>>> cols
['a', 'b']
>>> data.select(cols)
shape: (4, 2)
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”
β”‚ a   ┆ b   β”‚
β”‚ --- ┆ --- β”‚
β”‚ i64 ┆ i64 β”‚
β•žβ•β•β•β•β•β•ͺ═════║
β”‚ 1   ┆ 1   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 2   ┆ 2   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 3   ┆ 1   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 4   ┆ 1   β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
```

But it doesn't work in a LazyFrame:

```
>>> data = data.lazy()
>>> data
<polars.internals.lazyframe.frame.LazyFrame object at 0x7f0e3d9966a0>
>>> cols = pl.select([s for s in data.var() if (s > 0).all()]).columns
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
  File "/home/jasmine/miniconda3/envs/jupyternb/lib/python3.9/site-packages/polars/internals/lazyframe/frame.py", line 421, in __getitem__
    raise TypeError(
TypeError: 'LazyFrame' object is not subscriptable (aside from slicing). Use 'select()' or 'filter()' instead.
```

The reason for doing this in a LazyFrame is that we want to maximize performance. Any advice would be much appreciated. Thanks!
2022/11/14
[ "https://Stackoverflow.com/questions/74428888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/20497072/" ]
polars doesn't know what the variance is until after it is calculated, but that is the same time that it displays the results, so there's no way to filter the columns reported and also have it be more performant than just displaying all the columns, at least with respect to the polars calculation. It could be that python/jupyter takes longer to display more results than fewer. With that said, you could do something like this:

```
df.var().melt().filter(pl.col('value') > 0).collect()
```

which gives you what you want in one line, but it's a different shape. You could also do something like this:

```
dfvar = df.var()
dfvar.select(
    dfvar.melt()
         .filter(pl.col('value') > 0)
         .select('variable')
         .collect()
         .to_series()
         .to_list()
).collect()
```
Building on the answer from @dean MacGregor, we:

* do the var calculation
* melt
* apply the filter
* extract the `variable` column with column names
* pass it as a list to `select`

```py
df.select(
    (
        df.var().melt().filter(pl.col('value') > 0).collect()
        ["variable"]
    )
    .to_list()
).collect()
```

```
shape: (4, 2)
β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”
β”‚ a   ┆ b   β”‚
β”‚ --- ┆ --- β”‚
β”‚ i64 ┆ i64 β”‚
β•žβ•β•β•β•β•β•ͺ═════║
β”‚ 1   ┆ 1   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 2   ┆ 2   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 3   ┆ 1   β”‚
β”œβ•Œβ•Œβ•Œβ•Œβ•Œβ”Όβ•Œβ•Œβ•Œβ•Œβ•Œβ”€
β”‚ 4   ┆ 1   β”‚
β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜
```
16,748,592
I'm trying to build a simple Scapy script which manually manages the 3-way handshake, makes an HTTP GET request (by sending a single packet) to a web server, and manually manages response packets (I need to manually send ACK packets for the response). Here is the beginning of the script:

```
#!/usr/bin/python

import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)

# Import scapy
from scapy.all import *

# beginning of 3-way-handshake
ip = IP(dst="www.website.org")
TCP_SYN = TCP(sport=1500, dport=80, flags="S", seq=100,
              options=[('NOP', None), ('MSS', 1448)])
TCP_SYNACK = sr1(ip/TCP_SYN)

my_ack = TCP_SYNACK.seq + 1
TCP_ACK = TCP(sport=1500, dport=80, flags="A", seq=101, ack=my_ack)
send(ip/TCP_ACK)

TCP_PUSH = TCP(sport=1500, dport=80, flags="PA", seq=102, ack=my_ack)
send(ip/TCP_PUSH)
# end of 3-way-handshake

# beginning of GET request
getString = 'GET / HTTP/1.1\r\n\r\n'
request = ip / TCP(dport=80, sport=1500, seq=103, ack=my_ack + 1, flags='A') / getString
# help needed from here...
```

The script above completes the 3-way handshake and prepares the HTTP GET request. Then, I have to send the request through:

```
send(request)
```

Now, after the sending, I should obtain/manually read the response. My purpose is indeed to read the response by manually sending the ACK packets of the response (this is the main focus of the script). How can I do that? Is the `send(_)` function appropriate, or is it better to use something like `response = sr1(request)` (but in this case, I suppose ACKs are automatically sent)?
2013/05/25
[ "https://Stackoverflow.com/questions/16748592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1194426/" ]
Use sr(), read the data in from every ans that is received (you'll need to parse, as you go, the Content-Length or the chunk lengths, depending on whether you're using transfer chunking), then send ACKs in response to the *last* element in the ans list, then repeat until the end criteria are satisfied (don't forget to include a global timeout; I use eight passes without getting any replies, which is 2 seconds). Make sure sr is `sr(req, multi=True, timeout=0.25)` or thereabouts.

Occasionally, as you loop around, there will be periods where no packets turn up; that's why HTTP has all those length specs. If the timeout is too high then you might find the server gets impatient waiting for the ACKs and starts re-sending. Note that when you're using transfer chunking you can also attempt to cheat and just read the `0\r\n\r\n`, but then if that's genuinely in the data stream... yeah. Don't rely on the TCP PSH, however tempting that may look after a few Wireshark sessions with trivial use-cases.

Obviously the end result isn't a completely viable user-agent, but it's close enough for most purposes. Don't forget to send Keep-Alive headers, RST on exceptions, and politely close down the TCP session if everything works. Remember that RFC 2616 isn't 176 pages long just for giggles.
Did you try: ``` response = sr1(request) response.show2() ``` Also try using a different sniffer like wireshark or tcpdump, as netcat has a few bugs.
16,748,592
I'm trying to build a simple Scapy script which manually manages 3-way-handshake, makes an HTTP GET request (by sending a single packet) to a web server and manually manages response packets (I need to manually send ACK packets for the response). Here is the beginning of the script: ``` #!/usr/bin/python import logging logging.getLogger("scapy.runtime").setLevel(logging.ERROR) # Import scapy from scapy.all import * # beginning of 3-way-handshake ip=IP(dst="www.website.org") TCP_SYN=TCP(sport=1500, dport=80, flags="S", seq=100, options=[('NOP', None), ('MSS', 1448)]) TCP_SYNACK=sr1(ip/TCP_SYN) my_ack = TCP_SYNACK.seq + 1 TCP_ACK=TCP(sport=1500, dport=80, flags="A", seq=101, ack=my_ack) send(ip/TCP_ACK) TCP_PUSH=TCP(sport=1500, dport=80, flags="PA", seq=102, ack=my_ack) send(ip/TCP_PUSH) # end of 3-way-handshake # beginning of GET request getString = 'GET / HTTP/1.1\r\n\r\n' request = ip / TCP(dport=80, sport=1500, seq=103, ack=my_ack + 1, flags='A') / getString # help needed from here... ``` The script above completes the 3-way-handshake and prepares the HTTP GET request. Then, I've to send the request through: ``` send(request) ``` Now, after the sending, I should obtain/manually read the response. My purpose is indeed to read the response by manually sending the ACK packets of the response (this is the main focus of the script). How can I do that? Is the `send(_)` function appropriate, or it's better to use something like `response = sr1(request)` (but in this case, I suppose ACK are automatically sent)?
2013/05/25
[ "https://Stackoverflow.com/questions/16748592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1194426/" ]
In ``` TCP_PUSH=TCP(sport=1500, dport=80, flags="PA", seq=102, ack=my_ack) send(ip/TCP_PUSH) ``` the `seq` should be 101, not 102.
Did you try: ``` response = sr1(request) response.show2() ``` Also try using a different sniffer like wireshark or tcpdump, as netcat has a few bugs.
16,748,592
I'm trying to build a simple Scapy script which manually manages 3-way-handshake, makes an HTTP GET request (by sending a single packet) to a web server and manually manages response packets (I need to manually send ACK packets for the response). Here is the beginning of the script: ``` #!/usr/bin/python import logging logging.getLogger("scapy.runtime").setLevel(logging.ERROR) # Import scapy from scapy.all import * # beginning of 3-way-handshake ip=IP(dst="www.website.org") TCP_SYN=TCP(sport=1500, dport=80, flags="S", seq=100, options=[('NOP', None), ('MSS', 1448)]) TCP_SYNACK=sr1(ip/TCP_SYN) my_ack = TCP_SYNACK.seq + 1 TCP_ACK=TCP(sport=1500, dport=80, flags="A", seq=101, ack=my_ack) send(ip/TCP_ACK) TCP_PUSH=TCP(sport=1500, dport=80, flags="PA", seq=102, ack=my_ack) send(ip/TCP_PUSH) # end of 3-way-handshake # beginning of GET request getString = 'GET / HTTP/1.1\r\n\r\n' request = ip / TCP(dport=80, sport=1500, seq=103, ack=my_ack + 1, flags='A') / getString # help needed from here... ``` The script above completes the 3-way-handshake and prepares the HTTP GET request. Then, I've to send the request through: ``` send(request) ``` Now, after the sending, I should obtain/manually read the response. My purpose is indeed to read the response by manually sending the ACK packets of the response (this is the main focus of the script). How can I do that? Is the `send(_)` function appropriate, or it's better to use something like `response = sr1(request)` (but in this case, I suppose ACK are automatically sent)?
2013/05/25
[ "https://Stackoverflow.com/questions/16748592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1194426/" ]
Use sr(), read the data in from every ans that is received (you'll need to parse, as you go, the Content-Length or the chunk lengths depending on whether you're using transfer chunking) then send ACKs in response to the *last* element in the ans list, then repeat until the end criteria is satisfied (don't forget to include a global timeout, I use eight passes without getting any replies which is 2 seconds). Make sure sr is sr(req, multi=True, timeout=0.25) or thereabouts. Occasionally as you loop around there will be periods where no packets turn up, that's why HTTP has all those length specs. If the timeout is too high then you might find the server gets impatient waiting for the ACKs and starts re-sending. Note that when you're using transfer chunking you can also attempt to cheat and just read the 0\r\n\r\n but then if that's genuinely in the data stream... yeah. Don't rely on the TCP PSH, however tempting that may look after a few Wireshark sessions with trivial use-cases. Obviously the end result isn't a completely viable user-agent, but it's close enough for most purposes. Don't forget to send Keep-Alive headers, RST on exceptions and politely close down the TCP session if everything works. Remember that RFC 2616 isn't 176 pages long just for giggles.
In ``` TCP_PUSH=TCP(sport=1500, dport=80, flags="PA", seq=102, ack=my_ack) send(ip/TCP_PUSH) ``` the `seq` should be 101, not 102.
57,327,185
I'm using Telethon in Python to send automatic replies in a Telegram group. I want to report spam or an abusive account automatically via Telethon. I read the Telethon documentation and googled it, but I can't find any example. If this can be done, please provide an example with sample code.
2019/08/02
[ "https://Stackoverflow.com/questions/57327185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5704891/" ]
`display: flex;` This CSS property lays out all child elements inline. ``` <div style="display:flex; flex-direction: row;"> <div>1</div> <div>2</div> </div> ``` This is an example of inline children. For more info, please check this link: [CSS Flexbox Layout](https://www.w3schools.com/css/css3_flexbox.asp)
Can you describe what you mean by "when the "Add Dates" button is pressed I want another line be available."? In any case, when trying to inline two objects, you should put the style="display: inline" property onto every element you want displayed inline. Thus, try adding the style="display: inline" property on the element. Hope this helps.
57,327,185
I'm using Telethon in Python to send automatic replies in a Telegram group. I want to report spam or an abusive account automatically via Telethon. I read the Telethon documentation and googled it, but I can't find any example. If this can be done, please provide an example with sample code.
2019/08/02
[ "https://Stackoverflow.com/questions/57327185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5704891/" ]
the default layout should be fine for what you're asking. A div is a block element by default, meaning each one will be placed on a new line. Your exact code + iterating the .dates div: [stackblitz demo](https://stackblitz.com/edit/angular-nqkr2f?file=app/sidenav-overview-example.html) [![image of default layout](https://i.stack.imgur.com/D0RaS.png)](https://i.stack.imgur.com/D0RaS.png)
Can you describe what you mean by "when the "Add Dates" button is pressed I want another line be available."? In any case, when trying to inline two objects, you should put the style="display: inline" property onto every element you want displayed inline. Thus, try adding the style="display: inline" property on the element. Hope this helps.
57,327,185
I'm using Telethon in Python to send automatic replies in a Telegram group. I want to report spam or an abusive account automatically via Telethon. I read the Telethon documentation and googled it, but I can't find any example. If this can be done, please provide an example with sample code.
2019/08/02
[ "https://Stackoverflow.com/questions/57327185", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5704891/" ]
`display: flex;` This CSS property lays out all child elements inline. ``` <div style="display:flex; flex-direction: row;"> <div>1</div> <div>2</div> </div> ``` This is an example of inline children. For more info, please check this link: [CSS Flexbox Layout](https://www.w3schools.com/css/css3_flexbox.asp)
the default layout should be fine for what you're asking. A div is a block element by default, meaning each one will be placed on a new line. Your exact code + iterating the .dates div: [stackblitz demo](https://stackblitz.com/edit/angular-nqkr2f?file=app/sidenav-overview-example.html) [![image of default layout](https://i.stack.imgur.com/D0RaS.png)](https://i.stack.imgur.com/D0RaS.png)
66,510,970
I've tried relentlessly for about 1-2hrs to get this piece of code to work. I need to add the role to a user, simple enough right? This code, searches for the role but cannot find it, is this because I am sending it from a channel which that role doesn't have access to? I need help please. Edit 1: removed quotations around the ID ``` @bot.command() async def addrole(ctx, user: discord.User): #Add the customer role to the user role_id = 810264985258164255 guild = ctx.guild role = discord.utils.get(guild.roles, id=810264985258164255) await user.add_roles(role) ``` ``` 2021-03-06T21:12:51.811235+00:00 app[worker.1]: Your bot is ready. 2021-03-06T21:14:09.010057+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T21:14:09.012477+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012553+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T21:14:09.012554+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T21:14:09.012592+00:00 app[worker.1]: File "bot.py", line 92, in addrole 2021-03-06T21:14:09.012593+00:00 app[worker.1]: await user.add_roles(user, role) 2021-03-06T21:14:09.012621+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T21:14:09.012621+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T21:14:09.012651+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 243, in request 2021-03-06T21:14:09.012652+00:00 app[worker.1]: raise NotFound(r, data) 2021-03-06T21:14:09.012699+00:00 app[worker.1]: discord.errors.NotFound: 404 Not Found (error code: 10011): Unknown Role 2021-03-06T21:14:09.012735+00:00 app[worker.1]: 2021-03-06T21:14:09.012735+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T21:14:09.012736+00:00 
app[worker.1]: 2021-03-06T21:14:09.012772+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012832+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T21:14:09.012833+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T21:14:09.012863+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T21:14:09.012863+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T21:14:09.012890+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T21:14:09.012891+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T21:14:09.012932+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: NotFound: 404 Not Found (error code: 10011): Unknown Role ``` Edit 2: removed user from add\_role ``` 2021-03-06T22:22:45.062172+00:00 app[worker.1]: Your bot is ready. 
2021-03-06T22:22:52.026199+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T22:22:52.028082+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028089+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T22:22:52.028089+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T22:22:52.028092+00:00 app[worker.1]: File "bot.py", line 94, in addrole 2021-03-06T22:22:52.028100+00:00 app[worker.1]: await user.add_roles(role) 2021-03-06T22:22:52.028104+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T22:22:52.028108+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T22:22:52.028111+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 241, in request 2021-03-06T22:22:52.028111+00:00 app[worker.1]: raise Forbidden(r, data) 2021-03-06T22:22:52.028152+00:00 app[worker.1]: discord.errors.Forbidden: 403 Forbidden (error code: 50013): Missing Permissions 2021-03-06T22:22:52.028160+00:00 app[worker.1]: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: 2021-03-06T22:22:52.028164+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028223+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T22:22:52.028224+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T22:22:52.028225+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File 
"/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T22:22:52.028226+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T22:22:52.028228+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: Forbidden: 403 Forbidden (error code: 50013): Missing Permissions ```
2021/03/06
[ "https://Stackoverflow.com/questions/66510970", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15250234/" ]
Use `in_array` to check whether the last character is in an array: ``` if (in_array(substr($var, -1, 1), ['s', 'z'])) ```
Or maybe: ``` in_array(array_pop(str_split($var)), ['s', 'z']) ``` (`explode` does not accept an empty delimiter, so `str_split` it is) not really much more readable or elegant, but what do I know? :)
66,510,970
I've tried relentlessly for about 1-2hrs to get this piece of code to work. I need to add the role to a user, simple enough right? This code, searches for the role but cannot find it, is this because I am sending it from a channel which that role doesn't have access to? I need help please. Edit 1: removed quotations around the ID ``` @bot.command() async def addrole(ctx, user: discord.User): #Add the customer role to the user role_id = 810264985258164255 guild = ctx.guild role = discord.utils.get(guild.roles, id=810264985258164255) await user.add_roles(role) ``` ``` 2021-03-06T21:12:51.811235+00:00 app[worker.1]: Your bot is ready. 2021-03-06T21:14:09.010057+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T21:14:09.012477+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012553+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T21:14:09.012554+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T21:14:09.012592+00:00 app[worker.1]: File "bot.py", line 92, in addrole 2021-03-06T21:14:09.012593+00:00 app[worker.1]: await user.add_roles(user, role) 2021-03-06T21:14:09.012621+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T21:14:09.012621+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T21:14:09.012651+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 243, in request 2021-03-06T21:14:09.012652+00:00 app[worker.1]: raise NotFound(r, data) 2021-03-06T21:14:09.012699+00:00 app[worker.1]: discord.errors.NotFound: 404 Not Found (error code: 10011): Unknown Role 2021-03-06T21:14:09.012735+00:00 app[worker.1]: 2021-03-06T21:14:09.012735+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T21:14:09.012736+00:00 
app[worker.1]: 2021-03-06T21:14:09.012772+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012832+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T21:14:09.012833+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T21:14:09.012863+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T21:14:09.012863+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T21:14:09.012890+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T21:14:09.012891+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T21:14:09.012932+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: NotFound: 404 Not Found (error code: 10011): Unknown Role ``` Edit 2: removed user from add\_role ``` 2021-03-06T22:22:45.062172+00:00 app[worker.1]: Your bot is ready. 
2021-03-06T22:22:52.026199+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T22:22:52.028082+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028089+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T22:22:52.028089+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T22:22:52.028092+00:00 app[worker.1]: File "bot.py", line 94, in addrole 2021-03-06T22:22:52.028100+00:00 app[worker.1]: await user.add_roles(role) 2021-03-06T22:22:52.028104+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T22:22:52.028108+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T22:22:52.028111+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 241, in request 2021-03-06T22:22:52.028111+00:00 app[worker.1]: raise Forbidden(r, data) 2021-03-06T22:22:52.028152+00:00 app[worker.1]: discord.errors.Forbidden: 403 Forbidden (error code: 50013): Missing Permissions 2021-03-06T22:22:52.028160+00:00 app[worker.1]: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: 2021-03-06T22:22:52.028164+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028223+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T22:22:52.028224+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T22:22:52.028225+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File 
"/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T22:22:52.028226+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T22:22:52.028228+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: Forbidden: 403 Forbidden (error code: 50013): Missing Permissions ```
2021/03/06
[ "https://Stackoverflow.com/questions/66510970", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15250234/" ]
Use `in_array` to check whether the last character is in an array: ``` if (in_array(substr($var, -1, 1), ['s', 'z'])) ```
Tired of code that's too easy to understand? Regular expressions to the rescue: ``` preg_match('/[sz]$/', $var) ```
66,510,970
I've tried relentlessly for about 1-2hrs to get this piece of code to work. I need to add the role to a user, simple enough right? This code, searches for the role but cannot find it, is this because I am sending it from a channel which that role doesn't have access to? I need help please. Edit 1: removed quotations around the ID ``` @bot.command() async def addrole(ctx, user: discord.User): #Add the customer role to the user role_id = 810264985258164255 guild = ctx.guild role = discord.utils.get(guild.roles, id=810264985258164255) await user.add_roles(role) ``` ``` 2021-03-06T21:12:51.811235+00:00 app[worker.1]: Your bot is ready. 2021-03-06T21:14:09.010057+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T21:14:09.012477+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012553+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T21:14:09.012554+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T21:14:09.012592+00:00 app[worker.1]: File "bot.py", line 92, in addrole 2021-03-06T21:14:09.012593+00:00 app[worker.1]: await user.add_roles(user, role) 2021-03-06T21:14:09.012621+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T21:14:09.012621+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T21:14:09.012651+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 243, in request 2021-03-06T21:14:09.012652+00:00 app[worker.1]: raise NotFound(r, data) 2021-03-06T21:14:09.012699+00:00 app[worker.1]: discord.errors.NotFound: 404 Not Found (error code: 10011): Unknown Role 2021-03-06T21:14:09.012735+00:00 app[worker.1]: 2021-03-06T21:14:09.012735+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T21:14:09.012736+00:00 
app[worker.1]: 2021-03-06T21:14:09.012772+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012832+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T21:14:09.012833+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T21:14:09.012863+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T21:14:09.012863+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T21:14:09.012890+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T21:14:09.012891+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T21:14:09.012932+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: NotFound: 404 Not Found (error code: 10011): Unknown Role ``` Edit 2: removed user from add\_role ``` 2021-03-06T22:22:45.062172+00:00 app[worker.1]: Your bot is ready. 
2021-03-06T22:22:52.026199+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T22:22:52.028082+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028089+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T22:22:52.028089+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T22:22:52.028092+00:00 app[worker.1]: File "bot.py", line 94, in addrole 2021-03-06T22:22:52.028100+00:00 app[worker.1]: await user.add_roles(role) 2021-03-06T22:22:52.028104+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T22:22:52.028108+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T22:22:52.028111+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 241, in request 2021-03-06T22:22:52.028111+00:00 app[worker.1]: raise Forbidden(r, data) 2021-03-06T22:22:52.028152+00:00 app[worker.1]: discord.errors.Forbidden: 403 Forbidden (error code: 50013): Missing Permissions 2021-03-06T22:22:52.028160+00:00 app[worker.1]: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: 2021-03-06T22:22:52.028164+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028223+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T22:22:52.028224+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T22:22:52.028225+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File 
"/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T22:22:52.028226+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T22:22:52.028228+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: Forbidden: 403 Forbidden (error code: 50013): Missing Permissions ```
2021/03/06
[ "https://Stackoverflow.com/questions/66510970", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15250234/" ]
Use `in_array` to check whether the last character is in an array: ``` if (in_array(substr($var, -1, 1), ['s', 'z'])) ```
If you are able to use PHP 8, then you can use this simple solution available since PHP 8: <https://www.php.net/manual/en/function.str-ends-with.php> ``` <?php $text = 'foods'; if (str_ends_with($text, 's') || str_ends_with($text, 'z')) { ... } ```
66,510,970
I've tried relentlessly for about 1-2hrs to get this piece of code to work. I need to add the role to a user, simple enough right? This code, searches for the role but cannot find it, is this because I am sending it from a channel which that role doesn't have access to? I need help please. Edit 1: removed quotations around the ID ``` @bot.command() async def addrole(ctx, user: discord.User): #Add the customer role to the user role_id = 810264985258164255 guild = ctx.guild role = discord.utils.get(guild.roles, id=810264985258164255) await user.add_roles(role) ``` ``` 2021-03-06T21:12:51.811235+00:00 app[worker.1]: Your bot is ready. 2021-03-06T21:14:09.010057+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T21:14:09.012477+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012553+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T21:14:09.012554+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T21:14:09.012592+00:00 app[worker.1]: File "bot.py", line 92, in addrole 2021-03-06T21:14:09.012593+00:00 app[worker.1]: await user.add_roles(user, role) 2021-03-06T21:14:09.012621+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T21:14:09.012621+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T21:14:09.012651+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 243, in request 2021-03-06T21:14:09.012652+00:00 app[worker.1]: raise NotFound(r, data) 2021-03-06T21:14:09.012699+00:00 app[worker.1]: discord.errors.NotFound: 404 Not Found (error code: 10011): Unknown Role 2021-03-06T21:14:09.012735+00:00 app[worker.1]: 2021-03-06T21:14:09.012735+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T21:14:09.012736+00:00 
app[worker.1]: 2021-03-06T21:14:09.012772+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012832+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T21:14:09.012833+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T21:14:09.012863+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T21:14:09.012863+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T21:14:09.012890+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T21:14:09.012891+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T21:14:09.012932+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: NotFound: 404 Not Found (error code: 10011): Unknown Role ``` Edit 2: removed user from add\_role ``` 2021-03-06T22:22:45.062172+00:00 app[worker.1]: Your bot is ready. 
2021-03-06T22:22:52.026199+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T22:22:52.028082+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028089+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T22:22:52.028089+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T22:22:52.028092+00:00 app[worker.1]: File "bot.py", line 94, in addrole 2021-03-06T22:22:52.028100+00:00 app[worker.1]: await user.add_roles(role) 2021-03-06T22:22:52.028104+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T22:22:52.028108+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T22:22:52.028111+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 241, in request 2021-03-06T22:22:52.028111+00:00 app[worker.1]: raise Forbidden(r, data) 2021-03-06T22:22:52.028152+00:00 app[worker.1]: discord.errors.Forbidden: 403 Forbidden (error code: 50013): Missing Permissions 2021-03-06T22:22:52.028160+00:00 app[worker.1]: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: 2021-03-06T22:22:52.028164+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028223+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T22:22:52.028224+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T22:22:52.028225+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File 
"/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T22:22:52.028226+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T22:22:52.028228+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: Forbidden: 403 Forbidden (error code: 50013): Missing Permissions ```
2021/03/06
[ "https://Stackoverflow.com/questions/66510970", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15250234/" ]
Or maybe: ``` in_array(array_pop(str_split($var)), ['s', 'z']) ``` (`explode` does not accept an empty delimiter, so `str_split` it is) not really much more readable or elegant, but what do I know? :)
Tired of code that's too easy to understand? Regular expressions to the rescue: ``` preg_match('/[sz]$/', $var) ```
66,510,970
I've tried relentlessly for about 1-2hrs to get this piece of code to work. I need to add the role to a user, simple enough right? This code, searches for the role but cannot find it, is this because I am sending it from a channel which that role doesn't have access to? I need help please. Edit 1: removed quotations around the ID ``` @bot.command() async def addrole(ctx, user: discord.User): #Add the customer role to the user role_id = 810264985258164255 guild = ctx.guild role = discord.utils.get(guild.roles, id=810264985258164255) await user.add_roles(role) ``` ``` 2021-03-06T21:12:51.811235+00:00 app[worker.1]: Your bot is ready. 2021-03-06T21:14:09.010057+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T21:14:09.012477+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012553+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T21:14:09.012554+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T21:14:09.012592+00:00 app[worker.1]: File "bot.py", line 92, in addrole 2021-03-06T21:14:09.012593+00:00 app[worker.1]: await user.add_roles(user, role) 2021-03-06T21:14:09.012621+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T21:14:09.012621+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T21:14:09.012651+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 243, in request 2021-03-06T21:14:09.012652+00:00 app[worker.1]: raise NotFound(r, data) 2021-03-06T21:14:09.012699+00:00 app[worker.1]: discord.errors.NotFound: 404 Not Found (error code: 10011): Unknown Role 2021-03-06T21:14:09.012735+00:00 app[worker.1]: 2021-03-06T21:14:09.012735+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T21:14:09.012736+00:00 
app[worker.1]: 2021-03-06T21:14:09.012772+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T21:14:09.012832+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T21:14:09.012833+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T21:14:09.012863+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T21:14:09.012863+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T21:14:09.012890+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T21:14:09.012891+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T21:14:09.012932+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: NotFound: 404 Not Found (error code: 10011): Unknown Role ``` Edit 2: removed user from add\_role ``` 2021-03-06T22:22:45.062172+00:00 app[worker.1]: Your bot is ready. 
2021-03-06T22:22:52.026199+00:00 app[worker.1]: Ignoring exception in command addrole: 2021-03-06T22:22:52.028082+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028089+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 85, in wrapped 2021-03-06T22:22:52.028089+00:00 app[worker.1]: ret = await coro(*args, **kwargs) 2021-03-06T22:22:52.028092+00:00 app[worker.1]: File "bot.py", line 94, in addrole 2021-03-06T22:22:52.028100+00:00 app[worker.1]: await user.add_roles(role) 2021-03-06T22:22:52.028104+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/member.py", line 673, in add_roles 2021-03-06T22:22:52.028108+00:00 app[worker.1]: await req(guild_id, user_id, role.id, reason=reason) 2021-03-06T22:22:52.028111+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/http.py", line 241, in request 2021-03-06T22:22:52.028111+00:00 app[worker.1]: raise Forbidden(r, data) 2021-03-06T22:22:52.028152+00:00 app[worker.1]: discord.errors.Forbidden: 403 Forbidden (error code: 50013): Missing Permissions 2021-03-06T22:22:52.028160+00:00 app[worker.1]: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: The above exception was the direct cause of the following exception: 2021-03-06T22:22:52.028161+00:00 app[worker.1]: 2021-03-06T22:22:52.028164+00:00 app[worker.1]: Traceback (most recent call last): 2021-03-06T22:22:52.028223+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/bot.py", line 935, in invoke 2021-03-06T22:22:52.028224+00:00 app[worker.1]: await ctx.command.invoke(ctx) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 863, in invoke 2021-03-06T22:22:52.028225+00:00 app[worker.1]: await injected(*ctx.args, **ctx.kwargs) 2021-03-06T22:22:52.028225+00:00 app[worker.1]: File 
"/app/.heroku/python/lib/python3.6/site-packages/discord/ext/commands/core.py", line 94, in wrapped 2021-03-06T22:22:52.028226+00:00 app[worker.1]: raise CommandInvokeError(exc) from exc 2021-03-06T22:22:52.028228+00:00 app[worker.1]: discord.ext.commands.errors.CommandInvokeError: Command raised an exception: Forbidden: 403 Forbidden (error code: 50013): Missing Permissions ```
2021/03/06
[ "https://Stackoverflow.com/questions/66510970", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15250234/" ]
Or maybe: ``` in_array(substr($var, -1), ['s', 'z']) ``` not really much more readable nor elegant, but what do I know? :)
If you are able to use PHP 8, then you should use this simple built-in solution. <https://www.php.net/manual/en/function.str-ends-with.php> ``` <?php $text = 'foods'; if (str_ends_with($text, 's') || str_ends_with($text, 'z')) { ... } ```
49,510,815
I am trying to create a class that returns the class name together with the attribute. This needs to work both with instance attributes and class attributes. ``` class TestClass: obj1 = 'hi' ``` I.e. I want the following (note: both with and without class instantiation): ``` >>> TestClass.obj1 ('TestClass', 'hi') >>> TestClass().obj1 ('TestClass', 'hi') ``` A similar effect is obtained when using the Enum package in Python, but if I inherit from Enum, I cannot create an `__init__` function, which I want to do as well. If I use Enum I would get: ``` from enum import Enum class TestClass2(Enum): obj1 = 'hi' >>> TestClass2.obj1 <TestClass2.obj1: 'hi'> ``` I've already tried overriding the `__getattribute__` magic method in a meta class as suggested here: [How can I override class attribute access in python](https://stackoverflow.com/questions/9820314/how-can-i-override-class-attribute-access-in-python). However, this breaks the `__dir__` magic method, which then won't return anything, and furthermore it seems to return the name of the meta class, rather than the child class. Example below: ```py class BooType(type): def __getattribute__(self, attr): if attr == '__class__': return super().__getattribute__(attr) else: return self.__class__.__name__, attr class Boo(metaclass=BooType): asd = 'hi' >>> print(Boo.asd) ('BooType', 'asd') >>> print(dir(Boo)) AttributeError: 'tuple' object has no attribute 'keys' ``` I have also tried overriding the `__setattr__` magic method, but this seems to only affect instance attributes, and not class attributes. I should state that I am looking for a general solution, not something where I need to write a `@property` or `@classmethod` function or something similar for each attribute.
2018/03/27
[ "https://Stackoverflow.com/questions/49510815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9490769/" ]
I got help from a colleague for defining meta classes, and came up with the following solution ``` class MyMeta(type): def __new__(mcs, name, bases, dct): c = super(MyMeta, mcs).__new__(mcs, name, bases, dct) c._member_names = [] for key, value in c.__dict__.items(): if type(value) is str and not key.startswith("__"): c._member_names.append(key) setattr(c, key, (c.__name__, value)) return c def __dir__(cls): return cls._member_names class TestClass(metaclass=MyMeta): a = 'hi' b = 'hi again' print(TestClass.a) # ('TestClass', 'hi') print(TestClass.b) # ('TestClass', 'hi again') print(dir(TestClass)) # ['a', 'b'] ```
Way 1 ----- You can use [`classmethod`](https://www.programiz.com/python-programming/methods/built-in/classmethod) decorator to define methods callable at the whole class: ``` class TestClass: _obj1 = 'hi' @classmethod def obj1(cls): return cls.__name__, cls._obj1 class TestSubClass(TestClass): pass print(TestClass.obj1()) # ('TestClass', 'hi') print(TestSubClass.obj1()) # ('TestSubClass', 'hi') ``` Way 2 ----- Maybe you should use [`property`](https://www.programiz.com/python-programming/property) decorator so the disered output will be accessible by instances of a certain class instead of the class itself: ``` class TestClass: _obj1 = 'hi' @property def obj1(self): return self.__class__.__name__, self._obj1 class TestSubClass(TestClass): pass a = TestClass() b = TestSubClass() print(a.obj1) # ('TestClass', 'hi') print(b.obj1) # ('TestSubClass', 'hi') ```
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
Using `re`: ``` import re tupl = ('zone1', 'pcomp110007') ", ".join(map(lambda x: " ".join(re.findall('([A-Z]+)([0-9]+)', x.upper())[0]), tupl)) #'ZONE 1, PCOMP 110007' ```
A recursive solution: ``` def non_regex_split(s, i=0): if len(s) == i: return s try: return '%s %d' % (s[:i], int(s[i:])) except ValueError: return non_regex_split(s, i+1) ', '.join(non_regex_split(s).upper() for s in tags) ```
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
My thoughts: Keep `sep` a normal function like it is in your original code for readability / maintenance, but also leverage `re` as suggested in Abdou's answer. ``` import re tags = ('zone1', 'pcomp110007') def sep(astr): alpha, num = re.match('([^\d]+)([\d]+)', astr).groups() return '{} {}'.format(alpha.upper(), num) print(', '.join(map(sep, tags))) ``` Edit: Note that if you prefer, I think it would also be reasonable to just return: ``` return alpha.upper() + ' ' + num ``` Or older style string formatting: ``` return '%s %s' %(alpha.upper(), num) ``` Whatever you're most comfortable with.
Here is my stab: ``` >>> for i, ch in enumerate(tags[1][::-1]): ... if ch.isalpha(): ... n = len(tags[1]) - i ... print(tags[1][:n], tags[1][n:]) ... break ... pcomp 110007 ``` I walk back through the string to find the first alpha, then split and print each side (alpha, numeric). I left out the conversion to upper case, which is easy to add
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
Regex does help. This should always work: ``` import re tags = ('zone1', 'pcomp110007') def sep(s): word = re.split(r'\d', s)[0] return word.upper() + " " + s[len(word):] print(', '.join(map(sep, tags))) ```
A recursive solution: ``` def non_regex_split(s, i=0): if len(s) == i: return s try: return '%s %d' % (s[:i], int(s[i:])) except ValueError: return non_regex_split(s, i+1) ', '.join(non_regex_split(s).upper() for s in tags) ```
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
Regex does help. This should always work: ``` import re tags = ('zone1', 'pcomp110007') def sep(s): word = re.split(r'\d', s)[0] return word.upper() + " " + s[len(word):] print(', '.join(map(sep, tags))) ```
Thank you all for the answers. I timed a couple; here are the results and the timing script: ``` setup = ''' import re tags = ('zone1', 'pcomp110007') def sepListComp(astr): res = ''.join([x.upper() for x in astr if x.isalpha()]), ''.join([x for x in astr if x.isnumeric()]) return '{} {}'.format(*res) def sepFilter(astr): res = ''.join(filter(str.isalpha, astr.upper())), ''.join(filter(str.isdigit, astr.upper())) return '{} {}'.format(*res) def sepRe1(astr): alpha, num = re.match('([^\d]+)([\d]+)', astr).groups() return '{} {}'.format(alpha.upper(), num) def sepRe2(s): word = re.split(r'\d', s)[0] return word.upper() + " " + s[len(word):] def Recursive(s,i=0): if len(s) == i: return s try: return '%s %d' %(s[:i], int(s[i:])) except: return Recursive(s,i+1) ''' from timeit import timeit print('sepListComp:', timeit(stmt='", ".join(map(sepListComp, tags))', setup=setup, number=100000)) print('sepFilter:', timeit(stmt='", ".join(map(sepFilter, tags))', setup=setup, number=100000)) print('sepRe1:', timeit(stmt='", ".join(map(sepRe1, tags))', setup=setup, number=100000)) print('sepRe2:', timeit(stmt='", ".join(map(sepRe2, tags))', setup=setup, number=100000)) print('sepRecursive:', timeit(stmt='", ".join(Recursive(s).upper() for s in tags)', setup=setup, number=100000)) ``` * *sepListComp*: 1.0487s * *sepFilter*: 1.1690s * *sepRe1*: 0.8751s * *sepRe2*: 0.8332s * *sepRecursive*: 3.4539s So `regex` wins. The results vary a lot though. Any comments or suggestions on the timing are greatly appreciated.
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
I think your way is pythonic enough. If you want to make it "more functional style", then you can use this one: ``` sep = lambda s: " ".join((''.join(filter(str.isalpha, s)).upper(), ''.join(filter(str.isdigit, s)))) print(', '.join(map(sep, tags))) ``` Updated: in Python 3 `filter` returns an iterator rather than a string, hence the `''.join` calls; in Python 2, where `filter` on a string returns a string, you could call `upper` on the `filter` result directly.
A recursive solution: ``` def non_regex_split(s, i=0): if len(s) == i: return s try: return '%s %d' % (s[:i], int(s[i:])) except ValueError: return non_regex_split(s, i+1) ', '.join(non_regex_split(s).upper() for s in tags) ```
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
Using `re`: ``` import re tupl = ('zone1', 'pcomp110007') ", ".join(map(lambda x: " ".join(re.findall('([A-Z]+)([0-9]+)', x.upper())[0]), tupl)) #'ZONE 1, PCOMP 110007' ```
Thank you all for the answers. I timed a couple; here are the results and the timing script: ``` setup = ''' import re tags = ('zone1', 'pcomp110007') def sepListComp(astr): res = ''.join([x.upper() for x in astr if x.isalpha()]), ''.join([x for x in astr if x.isnumeric()]) return '{} {}'.format(*res) def sepFilter(astr): res = ''.join(filter(str.isalpha, astr.upper())), ''.join(filter(str.isdigit, astr.upper())) return '{} {}'.format(*res) def sepRe1(astr): alpha, num = re.match('([^\d]+)([\d]+)', astr).groups() return '{} {}'.format(alpha.upper(), num) def sepRe2(s): word = re.split(r'\d', s)[0] return word.upper() + " " + s[len(word):] def Recursive(s,i=0): if len(s) == i: return s try: return '%s %d' %(s[:i], int(s[i:])) except: return Recursive(s,i+1) ''' from timeit import timeit print('sepListComp:', timeit(stmt='", ".join(map(sepListComp, tags))', setup=setup, number=100000)) print('sepFilter:', timeit(stmt='", ".join(map(sepFilter, tags))', setup=setup, number=100000)) print('sepRe1:', timeit(stmt='", ".join(map(sepRe1, tags))', setup=setup, number=100000)) print('sepRe2:', timeit(stmt='", ".join(map(sepRe2, tags))', setup=setup, number=100000)) print('sepRecursive:', timeit(stmt='", ".join(Recursive(s).upper() for s in tags)', setup=setup, number=100000)) ``` * *sepListComp*: 1.0487s * *sepFilter*: 1.1690s * *sepRe1*: 0.8751s * *sepRe2*: 0.8332s * *sepRecursive*: 3.4539s So `regex` wins. The results vary a lot though. Any comments or suggestions on the timing are greatly appreciated.
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
My thoughts: Keep `sep` a normal function like it is in your original code for readability / maintenance, but also leverage `re` as suggested in Abdou's answer. ``` import re tags = ('zone1', 'pcomp110007') def sep(astr): alpha, num = re.match('([^\d]+)([\d]+)', astr).groups() return '{} {}'.format(alpha.upper(), num) print(', '.join(map(sep, tags))) ``` Edit: Note that if you prefer, I think it would also be reasonable to just return: ``` return alpha.upper() + ' ' + num ``` Or older style string formatting: ``` return '%s %s' %(alpha.upper(), num) ``` Whatever you're most comfortable with.
I think your way is pythonic enough. If you want to make it "more functional style", then you can use this one: ``` sep = lambda s: " ".join((''.join(filter(str.isalpha, s)).upper(), ''.join(filter(str.isdigit, s)))) print(', '.join(map(sep, tags))) ``` Updated: in Python 3 `filter` returns an iterator rather than a string, hence the `''.join` calls; in Python 2, where `filter` on a string returns a string, you could call `upper` on the `filter` result directly.
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
My thoughts: Keep `sep` a normal function like it is in your original code for readability / maintenance, but also leverage `re` as suggested in Abdou's answer. ``` import re tags = ('zone1', 'pcomp110007') def sep(astr): alpha, num = re.match('([^\d]+)([\d]+)', astr).groups() return '{} {}'.format(alpha.upper(), num) print(', '.join(map(sep, tags))) ``` Edit: Note that if you prefer, I think it would also be reasonable to just return: ``` return alpha.upper() + ' ' + num ``` Or older style string formatting: ``` return '%s %s' %(alpha.upper(), num) ``` Whatever you're most comfortable with.
Using `re`: ``` import re tupl = ('zone1', 'pcomp110007') ", ".join(map(lambda x: " ".join(re.findall('([A-Z]+)([0-9]+)', x.upper())[0]), tupl)) #'ZONE 1, PCOMP 110007' ```
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
Regex does help. This should always work: ``` import re tags = ('zone1', 'pcomp110007') def sep(s): word = re.split(r'\d', s)[0] return word.upper() + " " + s[len(word):] print(', '.join(map(sep, tags))) ```
Here is my stab: ``` >>> for i, ch in enumerate(tags[1][::-1]): ... if ch.isalpha(): ... n = len(tags[1]) - i ... print(tags[1][:n], tags[1][n:]) ... break ... pcomp 110007 ``` I walk back through the string to find the first alpha, then split and print each side (alpha, numeric). I left out the conversion to upper case, which is easy to add
40,615,604
I am looking for a nice, efficient and pythonic way to go from something like this: `('zone1', 'pcomp110007')` to this: `'ZONE 1, PCOMP 110007'` without the use of `regex` if possible (unless it makes a big difference, that is). So turn every letter into uppercase, put a space between letters and numbers, and join with a comma. What I wrote is the following: ``` tags = ('zone1', 'pcomp110007') def sep(astr): chars = ''.join([x.upper() for x in astr if x.isalpha()]) nums = ''.join([x for x in astr if x.isnumeric()]) return chars + ' ' + nums print(', '.join(map(sep, tags))) ``` This does produce the desired result, but it looks a bit too much for the task. **The tuple might vary in length, but the numbers are always going to be at the end of each string.**
2016/11/15
[ "https://Stackoverflow.com/questions/40615604", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6162307/" ]
I think your way is pythonic enough. If you want to make it "more functional style", then you can use this one: ``` sep = lambda s: " ".join((''.join(filter(str.isalpha, s)).upper(), ''.join(filter(str.isdigit, s)))) print(', '.join(map(sep, tags))) ``` Updated: in Python 3 `filter` returns an iterator rather than a string, hence the `''.join` calls; in Python 2, where `filter` on a string returns a string, you could call `upper` on the `filter` result directly.
Thank you all for the answers. I timed a couple; here are the results and the timing script: ``` setup = ''' import re tags = ('zone1', 'pcomp110007') def sepListComp(astr): res = ''.join([x.upper() for x in astr if x.isalpha()]), ''.join([x for x in astr if x.isnumeric()]) return '{} {}'.format(*res) def sepFilter(astr): res = ''.join(filter(str.isalpha, astr.upper())), ''.join(filter(str.isdigit, astr.upper())) return '{} {}'.format(*res) def sepRe1(astr): alpha, num = re.match('([^\d]+)([\d]+)', astr).groups() return '{} {}'.format(alpha.upper(), num) def sepRe2(s): word = re.split(r'\d', s)[0] return word.upper() + " " + s[len(word):] def Recursive(s,i=0): if len(s) == i: return s try: return '%s %d' %(s[:i], int(s[i:])) except: return Recursive(s,i+1) ''' from timeit import timeit print('sepListComp:', timeit(stmt='", ".join(map(sepListComp, tags))', setup=setup, number=100000)) print('sepFilter:', timeit(stmt='", ".join(map(sepFilter, tags))', setup=setup, number=100000)) print('sepRe1:', timeit(stmt='", ".join(map(sepRe1, tags))', setup=setup, number=100000)) print('sepRe2:', timeit(stmt='", ".join(map(sepRe2, tags))', setup=setup, number=100000)) print('sepRecursive:', timeit(stmt='", ".join(Recursive(s).upper() for s in tags)', setup=setup, number=100000)) ``` * *sepListComp*: 1.0487s * *sepFilter*: 1.1690s * *sepRe1*: 0.8751s * *sepRe2*: 0.8332s * *sepRecursive*: 3.4539s So `regex` wins. The results vary a lot though. Any comments or suggestions on the timing are greatly appreciated.
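One possible refinement of the benchmark above (this snippet is my own illustration, not taken from the answers): `timeit.repeat` runs the whole measurement several times, and taking the minimum of the runs gives a less noisy per-call estimate than a single `timeit` call, since background load only ever adds time.

```python
from timeit import repeat

setup = "s = 'pcomp110007'"
stmt = "''.join(filter(str.isalpha, s)).upper()"

# repeat() performs the full timing run several times; the minimum is
# the least-noisy estimate of the cost.
best = min(repeat(stmt, setup=setup, number=100000, repeat=5))
print('best of 5 runs: %.4fs' % best)
```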
23,407,824
I have been building a website with Django for months. So now I thought it's time to test it with some friends. While deploying it to an Ubuntu 14.04 64bit LTS (same as the development environment) I found a strange error: ``` OSError at /accounts/edit/ [Errno 21] Is a directory: '/var/www/media/' ``` I also tried a different path, `BASE_PATH + "/media"`, which was OK in my development environment. I am running the default Django development server (as it's a test!). It would be really awesome if someone could correct me and teach me what's wrong here. Thanks. Edit: (Traceback) ``` Environment: Request Method: POST Request URL: http://playbox.asia:8000/accounts/edit/ Django Version: 1.6.4 Python Version: 2.7.6 Installed Applications: ('django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'south', 'easy_pjax', 'bootstrap3', 'djangoratings', 'taggit', 'imagekit', 'playbox', 'accounts', 'musics', 'playlist') Installed Middleware: ('django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware') Traceback: File "/root/playbox/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 114. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/root/playbox/venv/local/lib/python2.7/site-packages/django/contrib/auth/decorators.py" in _wrapped_view 22. return view_func(request, *args, **kwargs) File "/root/playbox/accounts/views.py" in edit 189. os.remove(settings.MEDIA_ROOT + "/" + profile.avatar.name) Exception Type: OSError at /accounts/edit/ Exception Value: [Errno 21] Is a directory: '/var/www/media/' ```
2014/05/01
[ "https://Stackoverflow.com/questions/23407824", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2665252/" ]
Looks like `profile.avatar.name` evaluates to a **blank string**, so only the directory path is being passed to `os.remove`, and it **can't remove a directory**, which **raises OSError**. See here: <https://docs.python.org/2/library/os.html#os.remove> You can rectify this error by **applying a conditional**, as follows: ``` if profile.avatar.name: os.remove(settings.MEDIA_ROOT + "/" + profile.avatar.name) ```
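A quick way to reproduce the failure mode described above with a throwaway directory (this demo is my own sketch, not part of the view code):

```python
import os
import tempfile

d = tempfile.mkdtemp()      # stands in for MEDIA_ROOT
try:
    os.remove(d)            # fails: d is a directory, not a file
    removed = True
except OSError:
    removed = False

print(removed)  # False: os.remove refuses to delete a directory
os.rmdir(d)     # clean up with the directory-aware call instead
```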
This just happened to me for another reason. Answering here in case it is not the situation above. I had a file named "admin" in my static folder, which matched the directory "admin" where Django's admin app stores its static files. When running `collectstatic`, the two would conflict. I had to remove my "admin" file (or rename it) so it didn't clash. In general, make sure your files don't clash with any static file provided by another app.
73,401,346
In the following code I am having a problem: the log file should get generated each day with a timestamp in its name. Right now the first file is named **dashboard\_asset\_logs**, and subsequent log files come out as **dashboard\_asset\_logs.2022\_08\_18.log, dashboard\_asset\_logs.2022\_08\_19.log**. How can the first filename also have a name like **dashboard\_asset\_logs\_2022\_08\_17.log**, and how can subsequent files have the **.** replaced by **\_**, going from **dashboard\_asset\_logs.2022\_08\_18.log** to **dashboard\_asset\_logs\_2022\_08\_18.log**? Here is the logger.py code with TimedRotatingFileHandler ``` import logging import os from logging.handlers import TimedRotatingFileHandler class Logger(): loggers = {} logger = None def getlogger(self, name): # file_formatter = logging.Formatter('%(asctime)s~%(levelname)s~%(message)s~module:%(module)s~function:%( # module)s') file_formatter = logging.Formatter('%(asctime)s %(levelname)s [%(module)s : %(funcName)s] : %(' 'message)s') console_formatter = logging.Formatter('%(levelname)s -- %(message)s') print(os.getcwd()) # file_name = "src/logs/"+name+".log" # file_handler = logging.FileHandler(file_name) # # file_handler.setLevel(logging.DEBUG) # file_handler.setFormatter(file_formatter) # console_handler = logging.StreamHandler() # console_handler.setLevel(logging.DEBUG) # console_handler.setFormatter(console_formatter) logger = logging.getLogger(name) # logger.addHandler(file_handler) # logger.addHandler(console_handler) logger.setLevel(logging.DEBUG) file_name = "src/logs/dashboard_asset_logs" handler = TimedRotatingFileHandler( file_name, when="midnight", interval=1) handler.suffix = "%Y_%m_%d.log" handler.setLevel(logging.DEBUG) handler.setFormatter(file_formatter) logger.addHandler(handler) logger.propagate = False self.loggers[name] = logger return logger def getCommonLogger(self): if self.loggers.get("dashboard_asset_logs"): return self.loggers.get("dashboard_asset_logs") else:
self.logger = self.getlogger("dashboard_asset_logs") return self.logger ``` Here is the app\_log code ``` from src.main import logger import time logger_obj = logger.Logger() logger = logger_obj.getCommonLogger() def log_trial(): while True: time.sleep(10) logger.info("----------Log - start ---------------------") if __name__ == '__main__': log_trial() ``` I run the code by using **python -m src.main.app\_log**
2022/08/18
[ "https://Stackoverflow.com/questions/73401346", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8838167/" ]
In short, you will have to write your own `TimedRotatingFileHandler` implementation to make it happen. The more important question is: why do you need it?

A file name without a date is the file holding the current day's log, and this log is incomplete during the day. When the log is rotated at midnight, the old file (the one without the date) is renamed and the date part is added. Just after that, a new empty file without the date part in the name is created. This behavior of log files is very common in most systems, because it makes it easy to see which file is the current one and which are the archived ones (those with dates). Adding a date to the name of the current file would break this convention, and can lead to problems when the files are read by people who don't know the idea behind this file-naming convention.
You might be able to use the `namer` property of the handler, which you can set to a callable to customise naming. See [the documentation](https://docs.python.org/3/library/logging.handlers.html#logging.handlers.BaseRotatingHandler.namer) for it.
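A minimal sketch of that approach, reusing the handler setup from the question (`rename_rotated` is a made-up helper name). Note that `namer` only affects the rotated (archived) files; the active file keeps its plain name:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

def rename_rotated(default_name):
    # default_name looks like ".../dashboard_asset_logs.2022_08_18.log";
    # swap the "." the handler inserts before the suffix for a "_"
    return default_name.replace("dashboard_asset_logs.", "dashboard_asset_logs_")

handler = TimedRotatingFileHandler("dashboard_asset_logs", when="midnight", interval=1)
handler.suffix = "%Y_%m_%d.log"
handler.namer = rename_rotated  # called with the default rotated filename at rollover
```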
27,656,401
I've got a rather small flask application which I run using:

```
$ python wsgi.py
```

When editing files, the server reloads on each file save. This reload takes up to 10 seconds. That's the System section from my VirtualBox:

```
Base Memory: 2048Mb
Processors: 4
Acceleration: VT-x/AMD-V, Nested Paging, PAE/NX
```

How can I speed it up, or where do I look for issues?
2014/12/26
[ "https://Stackoverflow.com/questions/27656401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2743105/" ]
Your problem might be the virtualenv being synced too.

I stumbled upon the same problem, and the issue was that VirtualBox's default synchronization implementation is very, very slow when dealing with too many files in the mounted directory. Upon investigating, I found:

```
$ cd my-project
$ tree | tail -n 1
220 directories, 2390 files
```

That looks like too many files for a simple flask project, right? So, as it turns out, I was putting my virtualenv directory inside my project directory too, meaning everything got synced.

```
$ cd my-project/env
$ tree | tail -n 1
203 directories, 2313 files

$ cd my-project
$ rm -Rf my-project/env
$ tree | tail -n 1
17 directories, 77 files
```

Now it looks much more manageable and indeed much faster. Sure, we still need to store the virtualenv somewhere, but it actually makes much more sense to create it somewhere *inside* the guest machine, and not mounted against the host - especially if you consider that the host and the guest could be different OS's anyway.

Hope this helps.
Try changing the file system to NFS. I had this problem; switching to NFS fixed it.

```
config.vm.synced_folder ".", "/vagrant", type: "nfs"
```

[ENABLING NFS SYNCED FOLDERS](http://docs.vagrantup.com/v2/synced-folders/nfs.html)
19,077,580
I use python 2.7.5. I have got some files in the directory/sub directory. A sample of `file1` is given below:

```
Title file name
path1 /path/to/file
options path2=/path/to/file1,/path/to/file2,/path/to/file3,/path/to/file4
some_vale1
some_vale2
some_value3=abcdefg
some_value4=/path/to/value
some_value5
```

I would like to insert the text `/root/directory` in the text file. The final outcome I would like to have is as follows:

```
Title file name
path1 /root/directory/path/tofile
path2=/root/directory/path/to/file1,/root/directory/path/to/file2,/root/directory/path/to/file3,/root/directory/path/to/file4
options
some_vale1
some_vale2
some_value3=abcdefg
some_value4=/path/to/value
some_value5
```

The names `path, options and path2` are the same in all files. The files in the directory/subdirectory need to be modified with the same outcome as above. I tried to use `re.sub` to find and replace the string. However, I never got the output I wanted.
2013/09/29
[ "https://Stackoverflow.com/questions/19077580", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2221360/" ]
This one-liner does the entire transformation:

```
str = re.sub(r'(options) (\S+)', r'\2\n \1', str.replace('/path/', '/root/directory/path/'))
```

See a [live demo](http://codepad.org/I5IpoYcW) of this code
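A small runnable illustration of that one-liner (the input string here is a made-up stand-in for the relevant line of the file):

```python
import re

s = "options path2=/path/to/file1,/path/to/file2"
# first prefix every /path/ with /root/directory, then swap the
# "options" token and the path token, splitting them onto two lines
out = re.sub(r'(options) (\S+)', r'\2\n \1',
             s.replace('/path/', '/root/directory/path/'))
```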
You can try this:

```
result = re.sub(r'([ \t =,])/', replace_text, text, 1)
```

The last `1` is to indicate the first match only, so that only the first path is substituted.

By the way, I think that you want to conserve the space/tab or comma, right? Make replace\_text like this:

```
replace_text = r'\1/root/directory/'
```
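For what it's worth, a hedged mini-example of that first-match-only behaviour (the sample string is invented):

```python
import re

text = "path1 /path/to/file and also /path/to/other"
replace_text = r'\1/root/directory/'
# the trailing 1 limits re.sub to the first match, so the second
# /path/to/other is left untouched
result = re.sub(r'([ \t =,])/', replace_text, text, 1)
```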
19,077,580
I use python 2.7.5. I have got some files in the directory/sub directory. Sample of the `file1` is given below ``` Title file name path1 /path/to/file options path2=/path/to/file1,/path/to/file2,/path/to/file3,/path/to/file4 some_vale1 some_vale2 some_value3=abcdefg some_value4=/path/to/value some_value5 ``` I would like to insert the text `/root/directory` in the text file. Final outcome i would like to have is as followes:- ``` Title file name path1 /root/directory/path/tofile path2=/root/directory/path/to/file1,/root/directory/path/to/file2,/root/directory/path/to/file3,/root/directory/path/to/file4 options some_vale1 some_vale2 some_value3=abcdefg some_value4=/path/to/value some_value5 ``` The names `path, options and path2` are same in all files. The files in the directory/subdirectory required to be modified with the same outcome as above. I tried to use the `re.sub` to find and replace the string. However I never got the output i wanted.
2013/09/29
[ "https://Stackoverflow.com/questions/19077580", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2221360/" ]
Ok. Got the answer from Bohemian and Jerry. Got it working with the combined code:

```
str = re.sub(r'(options) (\S+)', r'\2\n \1', re.sub(r'([ \t =,])/', replace_text, text))
```
You can try this:

```
result = re.sub(r'([ \t =,])/', replace_text, text, 1)
```

The last `1` is to indicate the first match only, so that only the first path is substituted.

By the way, I think that you want to conserve the space/tab or comma, right? Make replace\_text like this:

```
replace_text = r'\1/root/directory/'
```
30,265,557
I have a matrix with x rows (i.e. the number of draws) and y columns (the number of observations). They represent a distribution of y forecasts. Now I would like to make sort of a 'heat map' of the draws. That is, I want to plot a 'confidence interval' (not really a confidence interval, but just all the values with shading in between), but as a 'heat map' (an example of a [heat map](http://climate.ncas.ac.uk/~andy/python/pictures/ex13.png) ). That means, that if for instance a lot of draws for observation y=y\* were around 1 but there was also a draw of 5 for that same observation, that then the area of the confidence interval around 1 is darker (but the whole are between 1 and 5 is still shaded). To be totally clear: I like for instance the plot in the answer [here](https://stackoverflow.com/questions/16463325/shading-confidence-intervals-manually-with-ggplot2), but then I would want the grey confidence interval to instead be colored as intensities (i.e. some areas are darker). Could someone please tell me how I could achieve that? Thanks in advance. **Edit:** As per request: example data. Example of the first 20 values of the first column (i.e. y[1:20,1]): ``` [1] 0.032067416 -0.064797792 0.035022338 0.016347263 0.034373065 0.024793101 -0.002514447 0.091411355 -0.064263536 -0.026808208 [11] 0.125831185 -0.039428744 0.017156454 -0.061574540 -0.074207109 -0.029171227 0.018906181 0.092816957 0.028899699 -0.004535961 ```
2015/05/15
[ "https://Stackoverflow.com/questions/30265557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2151205/" ]
So, the hard part of this is transforming your data into the right shape, which is why it's nice to share something that really looks like your data, not just a single column.

Let's say your data is a matrix with 10,000 rows and 10 columns. I'll just use a uniform distribution so it will be a boring plot at the end:

```
n = 10000
k = 10
mat = matrix(runif(n * k), nrow = n)
```

Next, we'll calculate quantiles for each column using `apply`, transpose, and make it a data frame:

```
dat = as.data.frame(t(apply(mat, MARGIN = 2, FUN = quantile,
                            probs = seq(.1, 0.9, 0.1))))
```

Add an `x` variable (since we transposed, each x value corresponds to a column in the original data):

```
dat$x = 1:nrow(dat)
```

We now need to get it into a "long" form, grouped by the min and max values for a certain deviation group around the median, and of course get rid of the pesky percent signs introduced by `quantile`:

```
library(dplyr)
library(tidyr)

dat_long = gather(dat, "quantile", value = "y", -x) %>%
    mutate(quantile = as.numeric(gsub("%", "", quantile)),
           group = abs(50 - quantile))

dat_ribbon = dat_long %>% filter(quantile < 50) %>%
    mutate(ymin = y) %>%
    select(x, ymin, group) %>%
    left_join(dat_long %>% filter(quantile > 50) %>%
                  mutate(ymax = y) %>%
                  select(x, ymax, group))

dat_median = filter(dat_long, quantile == 50)
```

And finally we can plot. We'll plot a transparent ribbon for each "group", that is 10%-90% interval, 20%-80% interval, ... 40%-60% interval, and then a single line at the median (50%). Using transparency, the middle will be darker as it has more ribbons overlapping on top of it. This doesn't go from the minimum to the maximum, but it will if you set the `probs` in the `quantile` call to go from 0 to 1 instead of .1 to .9.

```
library(ggplot2)

ggplot(dat_ribbon, aes(x = x)) +
    geom_ribbon(aes(ymin = ymin, ymax = ymax, group = group), alpha = 0.2) +
    geom_line(aes(y = y), data = dat_median, color = "white")
```

![enter image description here](https://i.stack.imgur.com/j7dhf.png)

Worth noting that this is *not* a conventional heatmap. A heatmap usually implies that you have 3 variables, x, y, and z (color), where there is a z-value for every x-y pair. Here you have two variables, x and y, with y depending on x.
That is not a lot to go on, but I would probably start with the `hexbin` or `hexbinplot` package. Several alternatives are presented in this SO post. [Formatting and manipulating a plot from the R package "hexbin"](https://stackoverflow.com/questions/15504983/formatting-and-manipulating-a-plot-from-the-r-package-hexbin)
38,729,374
My `.profile` defines a function:

```
myps () {
    ps -aef|egrep "a|b"|egrep -v "c\-"
}
```

I'd like to execute it from my python script:

```
import subprocess
subprocess.call("ssh user@box \"$(typeset -f); myps\"", shell=True)
```

Getting an error back:

```
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
```

Escaping `;` results in:

```
bash: ;: command not found
```
2016/08/02
[ "https://Stackoverflow.com/questions/38729374", "https://Stackoverflow.com", "https://Stackoverflow.com/users/359862/" ]
```
script='''
. ~/.profile   # load local function definitions so typeset -f can emit them

ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''

import subprocess
subprocess.call(['ksh', '-c', script])   # no shell=True
```

---

There are a few pertinent items here:

* The dotfile defining this function needs to be locally invoked *before* you run `typeset -f` to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (any specified by the `ENV` environment variable is an exception). In the given example, this is served by the `. ~/.profile` command within the script.
* The shell needs to be one supporting `typeset`, so it has to be `bash` or `ksh`, not `sh` (as used by `shell=True` by default), which may be provided by `ash` or `dash`, lacking this feature. In the given example, this is served by passing `['ksh', '-c']` as the first two arguments of the argv array.
* `typeset` needs to be run locally, so it can't be in an argv position other than the first with `shell=True`. (To provide an example: `subprocess.Popen(['''printf '%s\n' "$@"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True)` evaluates only `printf '%s\n' "$@"` as a shell script; `This is just literal data!` and `$(touch /tmp/this-is-not-executed)` are passed as literal data, so no file named `/tmp/this-is-not-executed` is created). In the given example, this is mooted by *not using* `shell=True`.
* Explicitly invoking `ksh -s` (or `bash -s`, as appropriate) ensures that the shell evaluating your function definitions matches the shell you *wrote* those functions against, rather than passing them to `sh -c`, as would happen otherwise. In the given example, this is served by `ssh user@box ksh -s` inside the script.
The original command was not interpreting the `;` before `myps` properly. Using `sh -c` fixes that, but... (please see Charles Duffy's comments below).

Using a combination of single/double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command (provided the functions in `.profile` are actually accessible in the shell started by the subprocess.Popen object):

```
subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
```

An alternative (less safe) method would be to use `sh -c` for the subshell command:

```
subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
```

This seemingly returned the same result:

```
subprocess.call('ssh user@box "sh -c typeset -f; myps"', shell=True)
```

There are definitely alternative methods for accomplishing these types of tasks; however, this might give you an idea of what the issue was with the original command.
38,729,374
My `.profile` defines a function:

```
myps () {
    ps -aef|egrep "a|b"|egrep -v "c\-"
}
```

I'd like to execute it from my python script:

```
import subprocess
subprocess.call("ssh user@box \"$(typeset -f); myps\"", shell=True)
```

Getting an error back:

```
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
```

Escaping `;` results in:

```
bash: ;: command not found
```
2016/08/02
[ "https://Stackoverflow.com/questions/38729374", "https://Stackoverflow.com", "https://Stackoverflow.com/users/359862/" ]
I ended up using this:

```
import subprocess
import sys
import re

HOST = "user@" + box
COMMAND = 'my long command with many many flags in single quotes'

ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
```
The original command was not interpreting the `;` before `myps` properly. Using `sh -c` fixes that, but... (please see Charles Duffy's comments below).

Using a combination of single/double quotes sometimes makes the syntax easier to read and less prone to mistakes. With that in mind, a safe way to run the command (provided the functions in `.profile` are actually accessible in the shell started by the subprocess.Popen object):

```
subprocess.call('ssh user@box "$(typeset -f); myps"', shell=True)
```

An alternative (less safe) method would be to use `sh -c` for the subshell command:

```
subprocess.call('ssh user@box "sh -c $(echo typeset -f); myps"', shell=True)
# myps is treated as a command
```

This seemingly returned the same result:

```
subprocess.call('ssh user@box "sh -c typeset -f; myps"', shell=True)
```

There are definitely alternative methods for accomplishing these types of tasks; however, this might give you an idea of what the issue was with the original command.
38,729,374
My `.profile` defines a function:

```
myps () {
    ps -aef|egrep "a|b"|egrep -v "c\-"
}
```

I'd like to execute it from my python script:

```
import subprocess
subprocess.call("ssh user@box \"$(typeset -f); myps\"", shell=True)
```

Getting an error back:

```
bash: -c: line 0: syntax error near unexpected token `;'
bash: -c: line 0: `; myps'
```

Escaping `;` results in:

```
bash: ;: command not found
```
2016/08/02
[ "https://Stackoverflow.com/questions/38729374", "https://Stackoverflow.com", "https://Stackoverflow.com/users/359862/" ]
I ended up using this:

```
import subprocess
import sys
import re

HOST = "user@" + box
COMMAND = 'my long command with many many flags in single quotes'

ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
```
```
script='''
. ~/.profile   # load local function definitions so typeset -f can emit them

ssh user@box ksh -s <<EOF
$(typeset -f)
myps
EOF
'''

import subprocess
subprocess.call(['ksh', '-c', script])   # no shell=True
```

---

There are a few pertinent items here:

* The dotfile defining this function needs to be locally invoked *before* you run `typeset -f` to dump the function's definition over the wire. By default, a noninteractive shell does not run the majority of dotfiles (any specified by the `ENV` environment variable is an exception). In the given example, this is served by the `. ~/.profile` command within the script.
* The shell needs to be one supporting `typeset`, so it has to be `bash` or `ksh`, not `sh` (as used by `shell=True` by default), which may be provided by `ash` or `dash`, lacking this feature. In the given example, this is served by passing `['ksh', '-c']` as the first two arguments of the argv array.
* `typeset` needs to be run locally, so it can't be in an argv position other than the first with `shell=True`. (To provide an example: `subprocess.Popen(['''printf '%s\n' "$@"''', 'This is just literal data!', '$(touch /tmp/this-is-not-executed)'], shell=True)` evaluates only `printf '%s\n' "$@"` as a shell script; `This is just literal data!` and `$(touch /tmp/this-is-not-executed)` are passed as literal data, so no file named `/tmp/this-is-not-executed` is created). In the given example, this is mooted by *not using* `shell=True`.
* Explicitly invoking `ksh -s` (or `bash -s`, as appropriate) ensures that the shell evaluating your function definitions matches the shell you *wrote* those functions against, rather than passing them to `sh -c`, as would happen otherwise. In the given example, this is served by `ssh user@box ksh -s` inside the script.
21,083,746
How do you cache a paginated Django queryset, specifically in a ListView?

I noticed one query was taking a long time to run, so I'm attempting to cache it. The queryset is huge (over 100k records), so I'm attempting to only cache paginated subsections of it. I can't cache the entire view or template because there are sections that are user/session specific and need to change constantly.

ListView has a couple standard methods for retrieving the queryset, `get_queryset()`, which returns the non-paginated data, and `paginate_queryset()`, which filters it by the current page.

I first tried caching the query in `get_queryset()`, but quickly realized calling `cache.set(my_query_key, super(MyView, self).get_queryset())` was causing the entire query to be serialized. So then I tried overriding `paginate_queryset()` like:

```
import time
from functools import partial
from django.core.cache import cache
from django.views.generic import ListView

class MyView(ListView):

    ...

    def paginate_queryset(self, queryset, page_size):
        cache_key = 'myview-queryset-%s-%s' % (self.page, page_size)
        print 'paginate_queryset.cache_key:',cache_key
        t0 = time.time()
        ret = cache.get(cache_key)
        if ret is None:
            print 're-caching'
            ret = super(MyView, self).paginate_queryset(queryset, page_size)
            cache.set(cache_key, ret, 60*60)
        td = time.time() - t0
        print 'paginate_queryset.time.seconds:',td
        (paginator, page, object_list, other_pages) = ret
        print 'total objects:',len(object_list)
        return ret
```

However, this takes almost a minute to run, even though only 10 objects are retrieved, and every request shows "re-caching", implying nothing is being saved to cache.

My `settings.CACHE` looks like:

```
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
```

and `service memcached status` shows memcached is running and `tail -f /var/log/memcached.log` shows absolutely nothing. What am I doing wrong?

What is the proper way to cache a paginated query so that the entire queryset isn't retrieved?

Edit: I think there may be a bug in either memcached or the Python wrapper. Django appears to support two different memcached backends, one using python-memcached and one using pylibmc. The python-memcached backend seems to silently hide the error caching the `paginate_queryset()` value. When I switched to the pylibmc backend, I got an explicit error message "error 10 from memcached\_set: SERVER ERROR" tracing back to django/core/cache/backends/memcached.py in set, line 78.
2014/01/13
[ "https://Stackoverflow.com/questions/21083746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247542/" ]
The problem turned out to be a combination of factors. Mainly, the result returned by the `paginate_queryset()` contains a reference to the unlimited queryset, meaning it's essentially uncachable. When I called `cache.set(mykey, (paginator, page, object_list, other_pages))`, it was trying to serialize thousands of records instead of just the `page_size` number of records I was expecting, causing the cached item to exceed memcached's limits and fail. The other factor was the horrible default error reporting in the memcached/python-memcached, which silently hides all errors and turns cache.set() into a nop if anything goes wrong, making it very time-consuming to track down the problem. I fixed this by essentially rewriting `paginate_queryset()` to ditch Django's builtin paginator functionality altogether and calculate the queryset myself with: ``` object_list = queryset[page_size*(page-1):page_size*(page-1)+page_size] ``` and then caching **that** `object_list`.
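The slice arithmetic in that fix is easy to sanity-check with a plain list standing in for the queryset (the numbers here are invented):

```python
page_size = 10
page = 3  # 1-based page number

data = list(range(100))  # stand-in for the full queryset
object_list = data[page_size*(page-1):page_size*(page-1)+page_size]
# only these page_size items would be serialized into the cache,
# not the whole 100k-record queryset
```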
I wanted to paginate my infinite scrolling view on my home page and this is the solution I came up with. It's a mix of Django CCBVs and the author's initial solution. The response times, however, didn't improve as much as I would've hoped for, but that's probably because I am testing it on my local with just 6 posts and 2 users haha.

```
# Import
from django.core.cache import cache
from django.core.paginator import InvalidPage
from django.views.generic.list import ListView
from django.http import Http404

class MyListView(ListView):
    template_name = 'MY TEMPLATE NAME'
    model = MY POST MODEL
    paginate_by = 10

    def paginate_queryset(self, queryset, page_size):
        """Paginate the queryset"""
        paginator = self.get_paginator(
            queryset, page_size, orphans=self.get_paginate_orphans(),
            allow_empty_first_page=self.get_allow_empty())
        page_kwarg = self.page_kwarg
        page = self.kwargs.get(page_kwarg) or self.request.GET.get(page_kwarg) or 1
        try:
            page_number = int(page)
        except ValueError:
            if page == 'last':
                page_number = paginator.num_pages
            else:
                raise Http404(_("Page is not 'last', nor can it be converted to an int."))
        try:
            page = paginator.page(page_number)
            cache_key = 'mylistview-%s-%s' % (page_number, page_size)
            retreive_cache = cache.get(cache_key)
            if retreive_cache is None:
                print('re-caching')
                retreive_cache = super(MyListView, self).paginate_queryset(queryset, page_size)
                # Caching for 1 day
                cache.set(cache_key, retreive_cache, 86400)
            return retreive_cache
        except InvalidPage as e:
            raise Http404(_('Invalid page (%(page_number)s): %(message)s') % {
                'page_number': page_number,
                'message': str(e)
            })
```
21,083,746
How do you cache a paginated Django queryset, specifically in a ListView?

I noticed one query was taking a long time to run, so I'm attempting to cache it. The queryset is huge (over 100k records), so I'm attempting to only cache paginated subsections of it. I can't cache the entire view or template because there are sections that are user/session specific and need to change constantly.

ListView has a couple standard methods for retrieving the queryset, `get_queryset()`, which returns the non-paginated data, and `paginate_queryset()`, which filters it by the current page.

I first tried caching the query in `get_queryset()`, but quickly realized calling `cache.set(my_query_key, super(MyView, self).get_queryset())` was causing the entire query to be serialized. So then I tried overriding `paginate_queryset()` like:

```
import time
from functools import partial
from django.core.cache import cache
from django.views.generic import ListView

class MyView(ListView):

    ...

    def paginate_queryset(self, queryset, page_size):
        cache_key = 'myview-queryset-%s-%s' % (self.page, page_size)
        print 'paginate_queryset.cache_key:',cache_key
        t0 = time.time()
        ret = cache.get(cache_key)
        if ret is None:
            print 're-caching'
            ret = super(MyView, self).paginate_queryset(queryset, page_size)
            cache.set(cache_key, ret, 60*60)
        td = time.time() - t0
        print 'paginate_queryset.time.seconds:',td
        (paginator, page, object_list, other_pages) = ret
        print 'total objects:',len(object_list)
        return ret
```

However, this takes almost a minute to run, even though only 10 objects are retrieved, and every request shows "re-caching", implying nothing is being saved to cache.

My `settings.CACHE` looks like:

```
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
```

and `service memcached status` shows memcached is running and `tail -f /var/log/memcached.log` shows absolutely nothing. What am I doing wrong?

What is the proper way to cache a paginated query so that the entire queryset isn't retrieved?

Edit: I think there may be a bug in either memcached or the Python wrapper. Django appears to support two different memcached backends, one using python-memcached and one using pylibmc. The python-memcached backend seems to silently hide the error caching the `paginate_queryset()` value. When I switched to the pylibmc backend, I got an explicit error message "error 10 from memcached\_set: SERVER ERROR" tracing back to django/core/cache/backends/memcached.py in set, line 78.
2014/01/13
[ "https://Stackoverflow.com/questions/21083746", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247542/" ]
You can extend the `Paginator` to support caching by a provided `cache_key`.

A blog post about usage and implementation of such a `CachedPaginator` can be found [here](http://toastdriven.com/blog/2008/nov/07/cachedpaginator/). The source code is posted at [djangosnippets.org](http://www.djangosnippets.org/snippets/1173/) (here is a [web-archive link](https://web.archive.org/web/20150927100427/https://djangosnippets.org/snippets/1173/) because the original is not working).

However, I will post a slightly modified example from the original version, which can not only cache objects per page, but the total count too (sometimes even the count can be an expensive operation).

```
from django.core.cache import cache
from django.utils.functional import cached_property
from django.core.paginator import Paginator, Page, PageNotAnInteger

class CachedPaginator(Paginator):
    """A paginator that caches the results on a page by page basis."""
    def __init__(self, object_list, per_page, orphans=0, allow_empty_first_page=True, cache_key=None, cache_timeout=300):
        super(CachedPaginator, self).__init__(object_list, per_page, orphans, allow_empty_first_page)
        self.cache_key = cache_key
        self.cache_timeout = cache_timeout

    @cached_property
    def count(self):
        """
            The original django.core.paginator.count attribute in Django1.8
            is not writable and can't be set manually, but we would like
            to override it when loading data from cache (instead of recalculating it).
            So we make it writable via @cached_property.
        """
        return super(CachedPaginator, self).count

    def set_count(self, count):
        """
            Override the paginator.count value (to prevent recalculation)
            and clear num_pages and page_range, whose values depend on it.
        """
        self.count = count
        # if somehow we have stored .num_pages or .page_range (which are cached properties)
        # this can lead to wrong page calculations (because they depend on paginator.count value)
        # so we clear their values to force recalculations on next calls
        try:
            del self.num_pages
        except AttributeError:
            pass
        try:
            del self.page_range
        except AttributeError:
            pass

    @cached_property
    def num_pages(self):
        """This is not writable in Django1.8. We want to make it writable"""
        return super(CachedPaginator, self).num_pages

    @cached_property
    def page_range(self):
        """This is not writable in Django1.8. We want to make it writable"""
        return super(CachedPaginator, self).page_range

    def page(self, number):
        """
        Returns a Page object for the given 1-based page number.

        This will attempt to pull the results out of the cache first, based on
        the requested page number. If not found in the cache,
        it will pull a fresh list and then cache that result + the total result count.
        """
        if self.cache_key is None:
            return super(CachedPaginator, self).page(number)

        # In order to prevent counting the queryset
        # we only validate that the provided number is an integer
        # The rest of the validation will happen when we fetch fresh data.
        # So if the number is invalid, no cache will be set.
        # number = self.validate_number(number)
        try:
            number = int(number)
        except (TypeError, ValueError):
            raise PageNotAnInteger('That page number is not an integer')

        page_cache_key = "%s:%s:%s" % (self.cache_key, self.per_page, number)
        page_data = cache.get(page_cache_key)

        if page_data is None:
            page = super(CachedPaginator, self).page(number)
            # cache not only the objects, but the total count too.
            page_data = (page.object_list, self.count)
            cache.set(page_cache_key, page_data, self.cache_timeout)
        else:
            cached_object_list, cached_total_count = page_data
            self.set_count(cached_total_count)
            page = Page(cached_object_list, number, self)

        return page
```
I wanted to paginate my infinite scrolling view on my home page and this is the solution I came up with. It's a mix of Django CCBVs and the author's initial solution. The response times, however, didn't improve as much as I would've hoped for, but that's probably because I am testing it on my local with just 6 posts and 2 users haha.

```
# Import
from django.core.cache import cache
from django.core.paginator import InvalidPage
from django.views.generic.list import ListView
from django.http import Http404

class MyListView(ListView):
    template_name = 'MY TEMPLATE NAME'
    model = MY POST MODEL
    paginate_by = 10

    def paginate_queryset(self, queryset, page_size):
        """Paginate the queryset"""
        paginator = self.get_paginator(
            queryset, page_size, orphans=self.get_paginate_orphans(),
            allow_empty_first_page=self.get_allow_empty())
        page_kwarg = self.page_kwarg
        page = self.kwargs.get(page_kwarg) or self.request.GET.get(page_kwarg) or 1
        try:
            page_number = int(page)
        except ValueError:
            if page == 'last':
                page_number = paginator.num_pages
            else:
                raise Http404(_("Page is not 'last', nor can it be converted to an int."))
        try:
            page = paginator.page(page_number)
            cache_key = 'mylistview-%s-%s' % (page_number, page_size)
            retreive_cache = cache.get(cache_key)
            if retreive_cache is None:
                print('re-caching')
                retreive_cache = super(MyListView, self).paginate_queryset(queryset, page_size)
                # Caching for 1 day
                cache.set(cache_key, retreive_cache, 86400)
            return retreive_cache
        except InvalidPage as e:
            raise Http404(_('Invalid page (%(page_number)s): %(message)s') % {
                'page_number': page_number,
                'message': str(e)
            })
```
35,795,663
**[What I want]** is to find the only one smallest positive real root of quartic function **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e** **[Existing Method]** My equation is for collision prediction, the maximum degree is quartic function as **f(x)** = **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e** and **a,b,c,d,e** coef can be positive/negative/zero (**real float value**). So my function **f(x)** can be quartic, cubic, or quadratic depending on a, b, c ,d ,e input coefficient. Currently, I use NumPy to find roots as below. ```py import numpy root_output = numpy.roots([a, b, c ,d ,e]) ``` The "**root\_output**" from the NumPy module can be all possible real/complex roots depending on the input coefficient. So I have to look at "**root\_output**" one by one, and check which root is the smallest real positive value (root>0?) **[The Problem]** My program needs to execute **numpy.roots([a, b, c, d, e])** many times, so many times of executing numpy.roots is too slow for my project. and (a, b, c ,d ,e) value is always changed every time when executing numpy.roots My attempt is to run the code on Raspberry Pi2. Below is an example of processing time. * Running many many times of numpy.roots on PC: **1.2 seconds** * Running many many times of numpy.roots on Raspberry Pi2: **17 seconds** Could you please guide me on how to find the smallest positive real root in the **fastest solution**? Using scipy.optimize or implement some algorithm to speed up finding root or any advice from you will be great. Thank you. 
**[Solution]** * **Quadratic function**: we only need the real positive roots (please beware of division by zero when a is 0) ```py import math def SolvQuadratic(a, b ,c): d = (b**2) - (4*a*c) if d < 0: return [] if d > 0: square_root_d = math.sqrt(d) t1 = (-b + square_root_d) / (2 * a) t2 = (-b - square_root_d) / (2 * a) if t1 > 0: if t2 > 0: if t1 < t2: return [t1, t2] return [t2, t1] return [t1] elif t2 > 0: return [t2] else: return [] else: t = -b / (2*a) if t > 0: return [t] return [] ``` * **Quartic Function** For the quartic function, you can use the pure Python/Numba version from **the answer by @B.M. below**. I also add a Cython version based on @B.M.'s code. You can save the code below as a .pyx file and compile it to get about a 2x speedup over pure Python (please be aware of rounding issues). ```py import cmath cdef extern from "complex.h": double complex cexp(double complex) cdef double complex J=cexp(2j*cmath.pi/3) cdef double complex Jc=1/J cdef Cardano(double a, double b, double c, double d): cdef double z0 cdef double a2, b2 cdef double p ,q, D cdef double complex r cdef double complex u, v, w cdef double w0, w1, w2 cdef double complex r1, r2, r3 z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v = v*Jc elif w2<w1 : v = v*Jc else: v = v*J r1 = u+v-z0 r2 = u*J+v*Jc-z0 r3 = u*Jc+v*J-z0 return r1, r2, r3 cdef Roots_2(double a, double complex b, double complex c): cdef double complex bp cdef double complex delta cdef double complex r1, r2 bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a r2=-r1-b/a return r1, r2 def SolveQuartic(double a, double b, double c, double d, double e): "Ferrari's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" cdef double z0 cdef double a2, b2, c2, d2 cdef double p, q, r cdef double A, B, C, D
cdef double complex y0, y1, y2 cdef double complex a0, b0 cdef double complex r0, r1, r2, r3 z0=b/4.0/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3.0*b2/(8*a2)+c/a q = b*b2/8.0/a/a2 - 1.0/2*b*c/a2 + d/a r = -3.0/256*b2*b2/a2/a2 + c*b2/a2/a/16 - b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8.0 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) ``` **[Problem of Ferrari's method]** We're facing the problem when the coefficients of quartic equation is [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] the output from numpy.roots and ferrari methods is entirely different (numpy.roots is correct output). ```py import numpy as np import cmath J=cmath.exp(2j*cmath.pi/3) Jc=1/J def ferrari(a,b,c,d,e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" z0=b/4/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3*b2/(8*a2)+c/a q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) #~ @jit(nopython=True) def Cardano(a,b,c,d): z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v*=Jc elif w2<w1 : v*=Jc else: v*=J return u+v-z0, u*J+v*Jc-z0, u*Jc+v*J-z0 #~ @jit(nopython=True) def Roots_2(a,b,c): bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a 
r2=-r1-b/a return r1,r2 coef = [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] print("Coefficient A, B, C, D, E", coef) print("") print("numpy roots: ", np.roots(coef)) print("") print("ferrari python ", ferrari(*coef)) ```
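A systematic way to catch discrepancies like the one with the coefficients above is to compare a custom solver against `numpy.roots` on sorted roots. The helper below is illustrative, not part of either solution:

```python
import numpy as np

def same_roots(coeffs, candidate_roots, tol=1e-6):
    # sort_complex orders by real part, then imaginary part,
    # so matching root sets line up element-wise
    ref = np.sort_complex(np.roots(coeffs))
    got = np.sort_complex(np.asarray(candidate_roots, dtype=complex))
    return ref.shape == got.shape and np.allclose(ref, got, atol=tol)

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
print(same_roots([1, -6, 11, -6], [1, 2, 3]))   # True
print(same_roots([1, -6, 11, -6], [1, 2, 4]))   # False
```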
2016/03/04
[ "https://Stackoverflow.com/questions/35795663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6017946/" ]
Another answer: do it with analytic methods ([Ferrari](https://fr.wikipedia.org/wiki/M%C3%A9thode_de_Ferrari), [Cardan](https://fr.wikipedia.org/wiki/M%C3%A9thode_de_Cardan#Formules_de_Cardan)), and speed up the code with just-in-time compilation ([Numba](http://numba.pydata.org)). Let's see the improvement first: ``` In [2]: P=poly1d([1,2,3,4],True) In [3]: roots(P) Out[3]: array([ 4., 3., 2., 1.]) In [4]: %timeit roots(P) 1000 loops, best of 3: 465 µs per loop In [5]: ferrari(*P.coeffs) Out[5]: ((1+0j), (2-0j), (3+0j), (4-0j)) In [5]: %timeit ferrari(*P.coeffs) #pure python without jit 10000 loops, best of 3: 116 µs per loop In [6]: %timeit ferrari(*P.coeffs) # with numba.jit 100000 loops, best of 3: 13 µs per loop ``` Then the (ugly) code, for order 4: ``` from numpy import exp, pi, sqrt # assumed imports from numba import jit @jit(nopython=True) def ferrari(a,b,c,d,e): "resolution of P=ax^4+bx^3+cx^2+dx+e=0" "CN all coeffs real." "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" z0=b/4/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3*b2/(8*a2)+c/a q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a "Second find X so P2=AX^3+BX^2+CX+D=0" A=8 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=cardan(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0.real)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=roots2(1,a0,y0+b0) r2,r3=roots2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) ``` for order 3: ``` J=exp(2j*pi/3) Jc=1/J @jit(nopython=True) def cardan(a,b,c,d): z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v*=Jc elif w2<w1 : v*=Jc else: v*=J return u+v-z0, u*J+v*Jc-z0,u*Jc+v*J-z0 ``` for order 2: ``` @jit(nopython=True) def roots2(a,b,c): bp=b/2 delta=bp*bp-a*c u1=(-bp-delta**.5)/a u2=-u1-b/a return u1,u2 ``` This probably needs further testing, but it is efficient.
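Independent of NumPy, any closed-form solver like this can be sanity-checked by plugging the returned roots back into the polynomial with Horner's rule and inspecting the residual. This helper is a sketch, not part of the answer's code:

```python
def poly_eval(coeffs, x):
    # Horner's rule; works for real or complex x
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

def max_residual(coeffs, roots):
    # largest |P(root)| over the returned roots; near zero means the roots are good
    return max(abs(poly_eval(coeffs, r)) for r in roots)

# roots 1, 2, 3 of x^3 - 6x^2 + 11x - 6 give a zero residual
print(max_residual([1, -6, 11, -6], [1, 2, 3]))  # 0
```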
The NumPy solution to do that without a Python loop is: ``` p=array([a,b,c,d,e]) r=roots(p) r[(r.imag==0) & (r.real>=0) ].real.min() ``` (In practice real roots can come back with a tiny imaginary part, so `abs(r.imag) < tol` is safer than `r.imag == 0`.) `scipy.optimize` methods will be slower, unless you don't need precision: ``` In [586]: %timeit r=roots(p);r[(r.imag==0) & (r.real>=0) ].real.min() 1000 loops, best of 3: 334 µs per loop In [587]: %timeit newton(poly1d(p),10,tol=1e-8) 1000 loops, best of 3: 555 µs per loop In [588]: %timeit newton(poly1d(p),10,tol=1) 10000 loops, best of 3: 177 µs per loop ``` And then you still have to find the min... **EDIT** For a 2x speedup, do yourself what `roots` does internally: ``` In [638]: b=zeros((4,4),float);b[1:,:-1]=eye(3) In [639]: c=b.copy();c[0]=-(p/p[0])[1:];eig(c)[0] Out[639]: array([-7.40849430+0.j , 5.77969794+0.j , -0.18560182+3.48995646j, -0.18560182-3.48995646j]) In [640]: roots(p) Out[640]: array([-7.40849430+0.j , 5.77969794+0.j , -0.18560182+3.48995646j, -0.18560182-3.48995646j]) In [641]: %timeit roots(p) 1000 loops, best of 3: 365 µs per loop In [642]: %timeit c=b.copy();c[0]=-(p/p[0])[1:];eig(c) 10000 loops, best of 3: 181 µs per loop ```
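The "do what roots does yourself" trick in the EDIT builds the companion matrix of the polynomial by hand and takes its eigenvalues. Spelled out as a function (written from the snippet above, so treat it as a sketch):

```python
import numpy as np

def companion_roots(p):
    p = np.asarray(p, dtype=float)
    n = len(p) - 1                      # polynomial degree
    A = np.zeros((n, n))
    A[1:, :-1] = np.eye(n - 1)          # sub-diagonal identity block
    A[0, :] = -p[1:] / p[0]             # normalized coefficients in the top row
    return np.linalg.eigvals(A)         # eigenvalues of the companion matrix == roots

# x^2 - 3x + 2 has roots 1 and 2
print(np.sort(companion_roots([1, -3, 2]).real))
```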
35,795,663
**[What I want]** is to find the single smallest positive real root of the quartic function **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**. **[Existing Method]** My equation is for collision prediction; the highest degree is a quartic, **f(x)** = **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**, and the **a, b, c, d, e** coefficients can be positive/negative/zero (**real float values**). So my function **f(x)** can be quartic, cubic, or quadratic depending on the a, b, c, d, e input coefficients. Currently, I use NumPy to find the roots as below. ```py import numpy root_output = numpy.roots([a, b, c, d, e]) ``` The "**root\_output**" from the NumPy module can contain all possible real/complex roots, depending on the input coefficients. So I have to look at "**root\_output**" one by one and check which root is the smallest positive real value (root > 0). **[The Problem]** My program needs to execute **numpy.roots([a, b, c, d, e])** many times, and this is too slow for my project; the (a, b, c, d, e) values change on every call to numpy.roots. I run the code on a Raspberry Pi 2. Below is an example of the processing time. * Running numpy.roots many times on a PC: **1.2 seconds** * Running numpy.roots many times on a Raspberry Pi 2: **17 seconds** Could you please guide me to the **fastest solution** for finding the smallest positive real root? Using scipy.optimize, implementing some algorithm to speed up root finding, or any other advice would be great. Thank you.
**[Solution]** * **Quadratic function** only need real positive roots (please be aware of division by zero) ```py def SolvQuadratic(a, b ,c): d = (b**2) - (4*a*c) if d < 0: return [] if d > 0: square_root_d = math.sqrt(d) t1 = (-b + square_root_d) / (2 * a) t2 = (-b - square_root_d) / (2 * a) if t1 > 0: if t2 > 0: if t1 < t2: return [t1, t2] return [t2, t1] return [t1] elif t2 > 0: return [t2] else: return [] else: t = -b / (2*a) if t > 0: return [t] return [] ``` * **Quartic Function** for quartic function, you can use pure python/numba version as **the below answer from @B.M.**. I also add another cython version from @B.M's code. You can use the below code as .pyx file and then compile it to get about 2x faster than pure python (please be aware of rounding issues). ```py import cmath cdef extern from "complex.h": double complex cexp(double complex) cdef double complex J=cexp(2j*cmath.pi/3) cdef double complex Jc=1/J cdef Cardano(double a, double b, double c, double d): cdef double z0 cdef double a2, b2 cdef double p ,q, D cdef double complex r cdef double complex u, v, w cdef double w0, w1, w2 cdef double complex r1, r2, r3 z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v = v*Jc elif w2<w1 : v = v*Jc else: v = v*J r1 = u+v-z0 r2 = u*J+v*Jc-z0 r3 = u*Jc+v*J-z0 return r1, r2, r3 cdef Roots_2(double a, double complex b, double complex c): cdef double complex bp cdef double complex delta cdef double complex r1, r2 bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a r2=-r1-b/a return r1, r2 def SolveQuartic(double a, double b, double c, double d, double e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" cdef double z0 cdef double a2, b2, c2, d2 cdef double p, q, r cdef double A, B, C, D 
cdef double complex y0, y1, y2 cdef double complex a0, b0 cdef double complex r0, r1, r2, r3 z0=b/4.0/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3.0*b2/(8*a2)+c/a q = b*b2/8.0/a/a2 - 1.0/2*b*c/a2 + d/a r = -3.0/256*b2*b2/a2/a2 + c*b2/a2/a/16 - b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8.0 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) ``` **[Problem of Ferrari's method]** We're facing the problem when the coefficients of quartic equation is [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] the output from numpy.roots and ferrari methods is entirely different (numpy.roots is correct output). ```py import numpy as np import cmath J=cmath.exp(2j*cmath.pi/3) Jc=1/J def ferrari(a,b,c,d,e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" z0=b/4/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3*b2/(8*a2)+c/a q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) #~ @jit(nopython=True) def Cardano(a,b,c,d): z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v*=Jc elif w2<w1 : v*=Jc else: v*=J return u+v-z0, u*J+v*Jc-z0, u*Jc+v*J-z0 #~ @jit(nopython=True) def Roots_2(a,b,c): bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a 
r2=-r1-b/a return r1,r2 coef = [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] print("Coefficient A, B, C, D, E", coef) print("") print("numpy roots: ", np.roots(coef)) print("") print("ferrari python ", ferrari(*coef)) ```
2016/03/04
[ "https://Stackoverflow.com/questions/35795663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6017946/" ]
The NumPy solution to do that without a Python loop is: ``` p=array([a,b,c,d,e]) r=roots(p) r[(r.imag==0) & (r.real>=0) ].real.min() ``` (In practice real roots can come back with a tiny imaginary part, so `abs(r.imag) < tol` is safer than `r.imag == 0`.) `scipy.optimize` methods will be slower, unless you don't need precision: ``` In [586]: %timeit r=roots(p);r[(r.imag==0) & (r.real>=0) ].real.min() 1000 loops, best of 3: 334 µs per loop In [587]: %timeit newton(poly1d(p),10,tol=1e-8) 1000 loops, best of 3: 555 µs per loop In [588]: %timeit newton(poly1d(p),10,tol=1) 10000 loops, best of 3: 177 µs per loop ``` And then you still have to find the min... **EDIT** For a 2x speedup, do yourself what `roots` does internally: ``` In [638]: b=zeros((4,4),float);b[1:,:-1]=eye(3) In [639]: c=b.copy();c[0]=-(p/p[0])[1:];eig(c)[0] Out[639]: array([-7.40849430+0.j , 5.77969794+0.j , -0.18560182+3.48995646j, -0.18560182-3.48995646j]) In [640]: roots(p) Out[640]: array([-7.40849430+0.j , 5.77969794+0.j , -0.18560182+3.48995646j, -0.18560182-3.48995646j]) In [641]: %timeit roots(p) 1000 loops, best of 3: 365 µs per loop In [642]: %timeit c=b.copy();c[0]=-(p/p[0])[1:];eig(c) 10000 loops, best of 3: 181 µs per loop ```
In SymPy the `real_roots` function will return the sorted roots of a polynomial and you can loop through them and break on the first positive one: ``` for do in range(100): p = Poly.from_list([randint(-100,100) for i in range(5)], x) for i in real_roots(p): if i.is_positive: print(i) break else: print('no positive root', p) ``` So a function for this might be ``` def small_pos_root(a,b,c,d,e): from sympy.abc import x from sympy import Poly, real_roots for i in real_roots(Poly.from_list([a,b,c,d,e], x)): if i.is_positive: return i.n() ``` This will return None if there is no positive real root, otherwise it will return a numerical value of the first positive root.
35,795,663
**[What I want]** is to find the single smallest positive real root of the quartic function **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**. **[Existing Method]** My equation is for collision prediction; the highest degree is a quartic, **f(x)** = **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**, and the **a, b, c, d, e** coefficients can be positive/negative/zero (**real float values**). So my function **f(x)** can be quartic, cubic, or quadratic depending on the a, b, c, d, e input coefficients. Currently, I use NumPy to find the roots as below. ```py import numpy root_output = numpy.roots([a, b, c, d, e]) ``` The "**root\_output**" from the NumPy module can contain all possible real/complex roots, depending on the input coefficients. So I have to look at "**root\_output**" one by one and check which root is the smallest positive real value (root > 0). **[The Problem]** My program needs to execute **numpy.roots([a, b, c, d, e])** many times, and this is too slow for my project; the (a, b, c, d, e) values change on every call to numpy.roots. I run the code on a Raspberry Pi 2. Below is an example of the processing time. * Running numpy.roots many times on a PC: **1.2 seconds** * Running numpy.roots many times on a Raspberry Pi 2: **17 seconds** Could you please guide me to the **fastest solution** for finding the smallest positive real root? Using scipy.optimize, implementing some algorithm to speed up root finding, or any other advice would be great. Thank you.
**[Solution]** * **Quadratic function** only need real positive roots (please be aware of division by zero) ```py def SolvQuadratic(a, b ,c): d = (b**2) - (4*a*c) if d < 0: return [] if d > 0: square_root_d = math.sqrt(d) t1 = (-b + square_root_d) / (2 * a) t2 = (-b - square_root_d) / (2 * a) if t1 > 0: if t2 > 0: if t1 < t2: return [t1, t2] return [t2, t1] return [t1] elif t2 > 0: return [t2] else: return [] else: t = -b / (2*a) if t > 0: return [t] return [] ``` * **Quartic Function** for quartic function, you can use pure python/numba version as **the below answer from @B.M.**. I also add another cython version from @B.M's code. You can use the below code as .pyx file and then compile it to get about 2x faster than pure python (please be aware of rounding issues). ```py import cmath cdef extern from "complex.h": double complex cexp(double complex) cdef double complex J=cexp(2j*cmath.pi/3) cdef double complex Jc=1/J cdef Cardano(double a, double b, double c, double d): cdef double z0 cdef double a2, b2 cdef double p ,q, D cdef double complex r cdef double complex u, v, w cdef double w0, w1, w2 cdef double complex r1, r2, r3 z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v = v*Jc elif w2<w1 : v = v*Jc else: v = v*J r1 = u+v-z0 r2 = u*J+v*Jc-z0 r3 = u*Jc+v*J-z0 return r1, r2, r3 cdef Roots_2(double a, double complex b, double complex c): cdef double complex bp cdef double complex delta cdef double complex r1, r2 bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a r2=-r1-b/a return r1, r2 def SolveQuartic(double a, double b, double c, double d, double e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" cdef double z0 cdef double a2, b2, c2, d2 cdef double p, q, r cdef double A, B, C, D 
cdef double complex y0, y1, y2 cdef double complex a0, b0 cdef double complex r0, r1, r2, r3 z0=b/4.0/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3.0*b2/(8*a2)+c/a q = b*b2/8.0/a/a2 - 1.0/2*b*c/a2 + d/a r = -3.0/256*b2*b2/a2/a2 + c*b2/a2/a/16 - b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8.0 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) ``` **[Problem of Ferrari's method]** We're facing the problem when the coefficients of quartic equation is [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] the output from numpy.roots and ferrari methods is entirely different (numpy.roots is correct output). ```py import numpy as np import cmath J=cmath.exp(2j*cmath.pi/3) Jc=1/J def ferrari(a,b,c,d,e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" z0=b/4/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3*b2/(8*a2)+c/a q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) #~ @jit(nopython=True) def Cardano(a,b,c,d): z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v*=Jc elif w2<w1 : v*=Jc else: v*=J return u+v-z0, u*J+v*Jc-z0, u*Jc+v*J-z0 #~ @jit(nopython=True) def Roots_2(a,b,c): bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a 
r2=-r1-b/a return r1,r2 coef = [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] print("Coefficient A, B, C, D, E", coef) print("") print("numpy roots: ", np.roots(coef)) print("") print("ferrari python ", ferrari(*coef)) ```
2016/03/04
[ "https://Stackoverflow.com/questions/35795663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6017946/" ]
Another answer: do it with analytic methods ([Ferrari](https://fr.wikipedia.org/wiki/M%C3%A9thode_de_Ferrari), [Cardan](https://fr.wikipedia.org/wiki/M%C3%A9thode_de_Cardan#Formules_de_Cardan)), and speed up the code with just-in-time compilation ([Numba](http://numba.pydata.org)). Let's see the improvement first: ``` In [2]: P=poly1d([1,2,3,4],True) In [3]: roots(P) Out[3]: array([ 4., 3., 2., 1.]) In [4]: %timeit roots(P) 1000 loops, best of 3: 465 µs per loop In [5]: ferrari(*P.coeffs) Out[5]: ((1+0j), (2-0j), (3+0j), (4-0j)) In [5]: %timeit ferrari(*P.coeffs) #pure python without jit 10000 loops, best of 3: 116 µs per loop In [6]: %timeit ferrari(*P.coeffs) # with numba.jit 100000 loops, best of 3: 13 µs per loop ``` Then the (ugly) code, for order 4: ``` from numpy import exp, pi, sqrt # assumed imports from numba import jit @jit(nopython=True) def ferrari(a,b,c,d,e): "resolution of P=ax^4+bx^3+cx^2+dx+e=0" "CN all coeffs real." "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" z0=b/4/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3*b2/(8*a2)+c/a q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a "Second find X so P2=AX^3+BX^2+CX+D=0" A=8 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=cardan(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0.real)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=roots2(1,a0,y0+b0) r2,r3=roots2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) ``` for order 3: ``` J=exp(2j*pi/3) Jc=1/J @jit(nopython=True) def cardan(a,b,c,d): z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v*=Jc elif w2<w1 : v*=Jc else: v*=J return u+v-z0, u*J+v*Jc-z0,u*Jc+v*J-z0 ``` for order 2: ``` @jit(nopython=True) def roots2(a,b,c): bp=b/2 delta=bp*bp-a*c u1=(-bp-delta**.5)/a u2=-u1-b/a return u1,u2 ``` This probably needs further testing, but it is efficient.
If polynomial coefficients are known ahead of time, you can speed up by vectorizing the computation in `roots` (given Numpy >= 1.10 or so): ``` import numpy as np def roots_vec(p): p = np.atleast_1d(p) n = p.shape[-1] A = np.zeros(p.shape[:1] + (n-1, n-1), float) A[...,1:,:-1] = np.eye(n-2) A[...,0,:] = -p[...,1:]/p[...,None,0] return np.linalg.eigvals(A) def roots_loop(p): r = [] for pp in p: r.append(np.roots(pp)) return r p = np.random.rand(2000, 4) # 2000 polynomials of 4th order assert np.allclose(roots_vec(p), roots_loop(p)) In [35]: %timeit roots_vec(p) 100 loops, best of 3: 4.49 ms per loop In [36]: %timeit roots_loop(p) 10 loops, best of 3: 81.9 ms per loop ``` It might not beat analytical solution + Numba, but allows higher orders.
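To tie this back to the original goal (smallest positive real root per polynomial), the batched roots array can be reduced row-wise without any Python loop. `smallest_pos_real` is an illustrative follow-up, not part of the answer:

```python
import numpy as np

def smallest_pos_real(r, tol=1e-9):
    # r: (n_polys, degree) array of complex roots, one row per polynomial
    real = np.where(np.abs(r.imag) < tol, r.real, np.inf)  # mask out complex roots
    real = np.where(real > tol, real, np.inf)              # mask out non-positive roots
    m = real.min(axis=1)
    return np.where(np.isfinite(m), m, np.nan)             # NaN if no positive real root

roots = np.array([[3, 2, 1], [1j, -1j, -5]], dtype=complex)
print(smallest_pos_real(roots))  # first row -> 1.0, second row -> NaN
```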
35,795,663
**[What I want]** is to find the single smallest positive real root of the quartic function **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**. **[Existing Method]** My equation is for collision prediction; the highest degree is a quartic, **f(x)** = **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**, and the **a, b, c, d, e** coefficients can be positive/negative/zero (**real float values**). So my function **f(x)** can be quartic, cubic, or quadratic depending on the a, b, c, d, e input coefficients. Currently, I use NumPy to find the roots as below. ```py import numpy root_output = numpy.roots([a, b, c, d, e]) ``` The "**root\_output**" from the NumPy module can contain all possible real/complex roots, depending on the input coefficients. So I have to look at "**root\_output**" one by one and check which root is the smallest positive real value (root > 0). **[The Problem]** My program needs to execute **numpy.roots([a, b, c, d, e])** many times, and this is too slow for my project; the (a, b, c, d, e) values change on every call to numpy.roots. I run the code on a Raspberry Pi 2. Below is an example of the processing time. * Running numpy.roots many times on a PC: **1.2 seconds** * Running numpy.roots many times on a Raspberry Pi 2: **17 seconds** Could you please guide me to the **fastest solution** for finding the smallest positive real root? Using scipy.optimize, implementing some algorithm to speed up root finding, or any other advice would be great. Thank you.
**[Solution]** * **Quadratic function** only need real positive roots (please be aware of division by zero) ```py def SolvQuadratic(a, b ,c): d = (b**2) - (4*a*c) if d < 0: return [] if d > 0: square_root_d = math.sqrt(d) t1 = (-b + square_root_d) / (2 * a) t2 = (-b - square_root_d) / (2 * a) if t1 > 0: if t2 > 0: if t1 < t2: return [t1, t2] return [t2, t1] return [t1] elif t2 > 0: return [t2] else: return [] else: t = -b / (2*a) if t > 0: return [t] return [] ``` * **Quartic Function** for quartic function, you can use pure python/numba version as **the below answer from @B.M.**. I also add another cython version from @B.M's code. You can use the below code as .pyx file and then compile it to get about 2x faster than pure python (please be aware of rounding issues). ```py import cmath cdef extern from "complex.h": double complex cexp(double complex) cdef double complex J=cexp(2j*cmath.pi/3) cdef double complex Jc=1/J cdef Cardano(double a, double b, double c, double d): cdef double z0 cdef double a2, b2 cdef double p ,q, D cdef double complex r cdef double complex u, v, w cdef double w0, w1, w2 cdef double complex r1, r2, r3 z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v = v*Jc elif w2<w1 : v = v*Jc else: v = v*J r1 = u+v-z0 r2 = u*J+v*Jc-z0 r3 = u*Jc+v*J-z0 return r1, r2, r3 cdef Roots_2(double a, double complex b, double complex c): cdef double complex bp cdef double complex delta cdef double complex r1, r2 bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a r2=-r1-b/a return r1, r2 def SolveQuartic(double a, double b, double c, double d, double e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" cdef double z0 cdef double a2, b2, c2, d2 cdef double p, q, r cdef double A, B, C, D 
cdef double complex y0, y1, y2 cdef double complex a0, b0 cdef double complex r0, r1, r2, r3 z0=b/4.0/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3.0*b2/(8*a2)+c/a q = b*b2/8.0/a/a2 - 1.0/2*b*c/a2 + d/a r = -3.0/256*b2*b2/a2/a2 + c*b2/a2/a/16 - b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8.0 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) ``` **[Problem of Ferrari's method]** We're facing the problem when the coefficients of quartic equation is [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] the output from numpy.roots and ferrari methods is entirely different (numpy.roots is correct output). ```py import numpy as np import cmath J=cmath.exp(2j*cmath.pi/3) Jc=1/J def ferrari(a,b,c,d,e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" z0=b/4/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3*b2/(8*a2)+c/a q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) #~ @jit(nopython=True) def Cardano(a,b,c,d): z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v*=Jc elif w2<w1 : v*=Jc else: v*=J return u+v-z0, u*J+v*Jc-z0, u*Jc+v*J-z0 #~ @jit(nopython=True) def Roots_2(a,b,c): bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a 
r2=-r1-b/a return r1,r2 coef = [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] print("Coefficient A, B, C, D, E", coef) print("") print("numpy roots: ", np.roots(coef)) print("") print("ferrari python ", ferrari(*coef)) ```
2016/03/04
[ "https://Stackoverflow.com/questions/35795663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6017946/" ]
Another answer: do it with analytic methods ([Ferrari](https://fr.wikipedia.org/wiki/M%C3%A9thode_de_Ferrari), [Cardan](https://fr.wikipedia.org/wiki/M%C3%A9thode_de_Cardan#Formules_de_Cardan)), and speed up the code with just-in-time compilation ([Numba](http://numba.pydata.org)). Let's see the improvement first: ``` In [2]: P=poly1d([1,2,3,4],True) In [3]: roots(P) Out[3]: array([ 4., 3., 2., 1.]) In [4]: %timeit roots(P) 1000 loops, best of 3: 465 µs per loop In [5]: ferrari(*P.coeffs) Out[5]: ((1+0j), (2-0j), (3+0j), (4-0j)) In [5]: %timeit ferrari(*P.coeffs) #pure python without jit 10000 loops, best of 3: 116 µs per loop In [6]: %timeit ferrari(*P.coeffs) # with numba.jit 100000 loops, best of 3: 13 µs per loop ``` Then the (ugly) code, for order 4: ``` from numpy import exp, pi, sqrt # assumed imports from numba import jit @jit(nopython=True) def ferrari(a,b,c,d,e): "resolution of P=ax^4+bx^3+cx^2+dx+e=0" "CN all coeffs real." "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" z0=b/4/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3*b2/(8*a2)+c/a q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a "Second find X so P2=AX^3+BX^2+CX+D=0" A=8 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=cardan(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0.real)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=roots2(1,a0,y0+b0) r2,r3=roots2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) ``` for order 3: ``` J=exp(2j*pi/3) Jc=1/J @jit(nopython=True) def cardan(a,b,c,d): z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v*=Jc elif w2<w1 : v*=Jc else: v*=J return u+v-z0, u*J+v*Jc-z0,u*Jc+v*J-z0 ``` for order 2: ``` @jit(nopython=True) def roots2(a,b,c): bp=b/2 delta=bp*bp-a*c u1=(-bp-delta**.5)/a u2=-u1-b/a return u1,u2 ``` This probably needs further testing, but it is efficient.
In SymPy the `real_roots` function will return the sorted roots of a polynomial and you can loop through them and break on the first positive one: ``` for do in range(100): p = Poly.from_list([randint(-100,100) for i in range(5)], x) for i in real_roots(p): if i.is_positive: print(i) break else: print('no positive root', p) ``` So a function for this might be ``` def small_pos_root(a,b,c,d,e): from sympy.abc import x from sympy import Poly, real_roots for i in real_roots(Poly.from_list([a,b,c,d,e], x)): if i.is_positive: return i.n() ``` This will return None if there is no positive real root, otherwise it will return a numerical value of the first positive root.
35,795,663
**[What I want]** is to find the single smallest positive real root of the quartic function **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e** **[Existing Method]** My equation is for collision prediction; the maximum degree is a quartic function **f(x)** = **a**x^4 + **b**x^3 + **c**x^2 + **d**x + **e**, and the **a, b, c, d, e** coefficients can be positive/negative/zero (**real float values**). So my function **f(x)** can be quartic, cubic, or quadratic depending on the a, b, c, d, e input coefficients. Currently, I use NumPy to find roots as below. ```py import numpy root_output = numpy.roots([a, b, c ,d ,e]) ``` The "**root_output**" from the NumPy module can contain any mix of real/complex roots depending on the input coefficients. So I have to look at "**root_output**" one by one, and check which root is the smallest real positive value (root > 0). **[The Problem]** My program needs to execute **numpy.roots([a, b, c, d, e])** many times, and this is too slow for my project; the (a, b, c, d, e) values change every time numpy.roots is executed. I run the code on a Raspberry Pi 2. Below is an example of processing time. * Running numpy.roots many times on a PC: **1.2 seconds** * Running numpy.roots many times on a Raspberry Pi 2: **17 seconds** Could you please guide me on how to find the smallest positive real root in the **fastest way**? Using scipy.optimize, implementing some algorithm to speed up root finding, or any other advice would be great. Thank you. 
**[Solution]** * **Quadratic function** only need real positive roots (please be aware of division by zero) ```py def SolvQuadratic(a, b ,c): d = (b**2) - (4*a*c) if d < 0: return [] if d > 0: square_root_d = math.sqrt(d) t1 = (-b + square_root_d) / (2 * a) t2 = (-b - square_root_d) / (2 * a) if t1 > 0: if t2 > 0: if t1 < t2: return [t1, t2] return [t2, t1] return [t1] elif t2 > 0: return [t2] else: return [] else: t = -b / (2*a) if t > 0: return [t] return [] ``` * **Quartic Function** for quartic function, you can use pure python/numba version as **the below answer from @B.M.**. I also add another cython version from @B.M's code. You can use the below code as .pyx file and then compile it to get about 2x faster than pure python (please be aware of rounding issues). ```py import cmath cdef extern from "complex.h": double complex cexp(double complex) cdef double complex J=cexp(2j*cmath.pi/3) cdef double complex Jc=1/J cdef Cardano(double a, double b, double c, double d): cdef double z0 cdef double a2, b2 cdef double p ,q, D cdef double complex r cdef double complex u, v, w cdef double w0, w1, w2 cdef double complex r1, r2, r3 z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v = v*Jc elif w2<w1 : v = v*Jc else: v = v*J r1 = u+v-z0 r2 = u*J+v*Jc-z0 r3 = u*Jc+v*J-z0 return r1, r2, r3 cdef Roots_2(double a, double complex b, double complex c): cdef double complex bp cdef double complex delta cdef double complex r1, r2 bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a r2=-r1-b/a return r1, r2 def SolveQuartic(double a, double b, double c, double d, double e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" cdef double z0 cdef double a2, b2, c2, d2 cdef double p, q, r cdef double A, B, C, D 
cdef double complex y0, y1, y2 cdef double complex a0, b0 cdef double complex r0, r1, r2, r3 z0=b/4.0/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3.0*b2/(8*a2)+c/a q = b*b2/8.0/a/a2 - 1.0/2*b*c/a2 + d/a r = -3.0/256*b2*b2/a2/a2 + c*b2/a2/a/16 - b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8.0 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) ``` **[Problem of Ferrari's method]** We're facing the problem when the coefficients of quartic equation is [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] the output from numpy.roots and ferrari methods is entirely different (numpy.roots is correct output). ```py import numpy as np import cmath J=cmath.exp(2j*cmath.pi/3) Jc=1/J def ferrari(a,b,c,d,e): "Ferrarai's Method" "resolution of P=ax^4+bx^3+cx^2+dx+e=0, coeffs reals" "First shift : x= z-b/4/a => P=z^4+pz^2+qz+r" z0=b/4/a a2,b2,c2,d2 = a*a,b*b,c*c,d*d p = -3*b2/(8*a2)+c/a q = b*b2/8/a/a2 - 1/2*b*c/a2 + d/a r = -3/256*b2*b2/a2/a2 +c*b2/a2/a/16-b*d/a2/4+e/a "Second find y so P2=Ay^3+By^2+Cy+D=0" A=8 B=-4*p C=-8*r D=4*r*p-q*q y0,y1,y2=Cardano(A,B,C,D) if abs(y1.imag)<abs(y0.imag): y0=y1 if abs(y2.imag)<abs(y0.imag): y0=y2 a0=(-p+2*y0)**.5 if a0==0 : b0=y0**2-r else : b0=-q/2/a0 r0,r1=Roots_2(1,a0,y0+b0) r2,r3=Roots_2(1,-a0,y0-b0) return (r0-z0,r1-z0,r2-z0,r3-z0) #~ @jit(nopython=True) def Cardano(a,b,c,d): z0=b/3/a a2,b2 = a*a,b*b p=-b2/3/a2 +c/a q=(b/27*(2*b2/a2-9*c/a)+d)/a D=-4*p*p*p-27*q*q r=cmath.sqrt(-D/27+0j) u=((-q-r)/2)**0.33333333333333333333333 v=((-q+r)/2)**0.33333333333333333333333 w=u*v w0=abs(w+p/3) w1=abs(w*J+p/3) w2=abs(w*Jc+p/3) if w0<w1: if w2<w0 : v*=Jc elif w2<w1 : v*=Jc else: v*=J return u+v-z0, u*J+v*Jc-z0, u*Jc+v*J-z0 #~ @jit(nopython=True) def Roots_2(a,b,c): bp=b/2 delta=bp*bp-a*c r1=(-bp-delta**.5)/a 
r2=-r1-b/a return r1,r2 coef = [0.00614656, -0.0933333333333, 0.527664995846, -1.31617928376, 1.21906444869] print("Coefficient A, B, C, D, E", coef) print("") print("numpy roots: ", np.roots(coef)) print("") print("ferrari python ", ferrari(*coef)) ```
2016/03/04
[ "https://Stackoverflow.com/questions/35795663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6017946/" ]
If polynomial coefficients are known ahead of time, you can speed up by vectorizing the computation in `roots` (given Numpy >= 1.10 or so): ``` import numpy as np def roots_vec(p): p = np.atleast_1d(p) n = p.shape[-1] A = np.zeros(p.shape[:1] + (n-1, n-1), float) A[...,1:,:-1] = np.eye(n-2) A[...,0,:] = -p[...,1:]/p[...,None,0] return np.linalg.eigvals(A) def roots_loop(p): r = [] for pp in p: r.append(np.roots(pp)) return r p = np.random.rand(2000, 4) # 2000 polynomials of 4th order assert np.allclose(roots_vec(p), roots_loop(p)) In [35]: %timeit roots_vec(p) 100 loops, best of 3: 4.49 ms per loop In [36]: %timeit roots_loop(p) 10 loops, best of 3: 81.9 ms per loop ``` It might not beat analytical solution + Numba, but allows higher orders.
In SymPy the `real_roots` function will return the sorted roots of a polynomial and you can loop through them and break on the first positive one: ``` from random import randint from sympy import Poly, real_roots from sympy.abc import x for do in range(100): p = Poly.from_list([randint(-100,100) for i in range(5)], x) for i in real_roots(p): if i.is_positive: print(i) break else: print('no positive root', p) ``` So a function for this might be ``` def small_pos_root(a,b,c,d,e): from sympy.abc import x from sympy import Poly, real_roots for i in real_roots(Poly.from_list([a,b,c,d,e], x)): if i.is_positive: return i.n() ``` This will return None if there is no positive real root, otherwise it will return a numerical value of the first positive root.
68,400,475
I am trying to display a 2d list as CSV like structure in a new tkinter window here is my code ``` import tkinter as tk import requests import pandas as pd import numpy as np from tkinter import messagebox from finta import TA from math import exp import xlsxwriter from tkinter import * from tkinter.ttk import * import yfinance as yf import csv from operator import itemgetter def Createtable(root,rows): # code for creating table newWindow = Toplevel(root) newWindow.title("New Window") # sets the geometry of toplevel newWindow.geometry("600x600") for i in range(len(rows)): for j in range(3): self.e = Entry(root, width=20, fg='blue', font=('Arial',16,'bold')) self.e.grid(row=i, column=j) self.e.insert(END, rows[i][j]) Label(newWindow, text ="Results").pack() ``` This is my code for display the 2d list in a new window, here `rows` is the 2d list This is how I am calling the function ``` def clicked(): selected_stocks=[] converted_tickers=[] selected = box.curselection() for idx in selected: selected_stocks.append(box.get(idx)) for i in selected_stocks: converted_tickers.append(name_to_ticker[i]) rows=compute(converted_tickers) Createtable(app,rows) btn = Button(app, text = 'Submit',command = clicked) btn.pack(side = 'bottom') ``` `rows` works, I printed it out seperately and confirmed. 
When I execute the program this is the error I receive ``` Exception in Tkinter callback Traceback (most recent call last): File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/tkinter/__init__.py", line 1883, in __call__ return self.func(*args) File "pair.py", line 144, in clicked Createtable(app,rows) File "pair.py", line 31, in Createtable self.e = Entry(root, width=20, fg='blue', File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/tkinter/ttk.py", line 669, in __init__ Widget.__init__(self, master, widget or "ttk::entry", kw) File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/tkinter/ttk.py", line 557, in __init__ tkinter.Widget.__init__(self, master, widgetname, kw=kw) File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/tkinter/__init__.py", line 2567, in __init__ self.tk.call( _tkinter.TclError: unknown option "-fg" ``` I looked up the error and it says that this happens when you are using both old and new version of tkinter. I am not able to understand this as I am not using `tk` and `ttk` seperately anywhere in the `Createtable` function. How can I solve this issue to display the 2d list on the new window? 
For a reference I am attaching a screenshot here [![enter image description here](https://i.stack.imgur.com/QhoAA.png)](https://i.stack.imgur.com/QhoAA.png) UPDATE: Updated code as per comment 1)I commented out `from tkinter.ttk import *` 2)I also changed the `Createtable` function into a class like this ``` class Table: def __init__(self,root,rows): # code for creating table newWindow = Toplevel(root) newWindow.title("New Window") # sets the geometry of toplevel newWindow.geometry("600x600") for i in range(len(rows)): for j in range(3): self.e=Entry(root, width=20, fg='blue', font=('Arial',16,'bold')) self.e.grid(row=i, column=j) self.e.insert(END, rows[i][j]) Label(newWindow, text ="Results").pack() ``` Now, two errors are happening 1.The 2d list is showing on top of the main window instead of the new window. 2.After removing `ttk` the words on top of the button have become white for some reason. You can see in the picture attached below ( compare with the old picture, you will see "Submit" has become white) How do I solve these? 
Attaching picture for reference [![enter image description here](https://i.stack.imgur.com/Jpjvp.png)](https://i.stack.imgur.com/Jpjvp.png) UPDATE 2: minimum reproducible code ``` import tkinter as tk import requests import pandas as pd import numpy as np from tkinter import messagebox from finta import TA from math import exp import xlsxwriter from tkinter import * import tkinter.ttk as ttk import yfinance as yf import csv from operator import itemgetter class Table: def __init__(self,root,rows): # code for creating table newWindow = Toplevel(root) newWindow.title("New Window") # sets the geometry of toplevel newWindow.geometry("600x600") for i in range(len(rows)): for j in range(3): self.e=Entry(newWindow) self.e.grid(row=i, column=j) self.e.insert(END, rows[i][j]) Label(newWindow, text ="Results").pack() #######################LIST TO DISPLAY############## final_sorted_list=['Allahabad Bank', 'Andhra Bank', 'Axis Bank Limited','Bank of Baroda','Bank of India Limited'] ########TKINTER############################## app = Tk() app.title('Test') app.geometry("500x800") def clicked(): rows=[['Stock 1', 'Stock 2', 'Value'], ['AXISBANK.NS', 'MAHABANK.NS', 81.10000000000001], ['AXISBANK.NS', 'BANKINDIA.NS', 82.3], ['BANKBARODA.NS', 'MAHABANK.NS', 84.8], ['MAHABANK.NS', 'CANBK.NS', 85.5], ['BANKBARODA.NS', 'BANKINDIA.NS', 90.4], ['BANKINDIA.NS', 'CANBK.NS', 90.9], ['AXISBANK.NS', 'CANBK.NS', 91.5], ['AXISBANK.NS', 'BANKBARODA.NS', 93.30000000000001], ['BANKINDIA.NS', 'MAHABANK.NS', 95.8], ['BANKBARODA.NS', 'CANBK.NS', 97.6]] Table(app,rows) print("Finished") box = Listbox(app, selectmode=MULTIPLE, height=4) for val in final_sorted_list: box.insert(tk.END, val) box.pack(padx=5,pady=8,side=LEFT,fill=BOTH,expand=True) scrollbar = Scrollbar(app) scrollbar.pack(side = RIGHT, fill = BOTH) box.config(yscrollcommand = scrollbar.set) # scrollbar.config(command = box.yview) btn = ttk.Button(app, text = 'Submit',command = clicked) btn.pack(side = 'bottom') exit_button = 
ttk.Button(app, text='Close', command=app.destroy) exit_button.pack(side='top') clear_button = ttk.Button(app, text='Clear Selection',command=lambda: box.selection_clear(0, 'end')) clear_button.pack(side='top') app.mainloop() ``` If you run this, you get a frontend like in the picture above. You don't need to select anything as a sample result already has been hard coded in `rows`. The data in `rows` needs to be displayed in another window as a table. To recreate the error (new window not popping up) - you can just click on the "Submit" button, you don't need to select anything from the list.
2021/07/15
[ "https://Stackoverflow.com/questions/68400475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15530206/" ]
The problem in update 2 is that you are using pack and grid in the same window. When I click submit the new window comes up (I couldn't reproduce that problem) but the word "results" is missing. This is because you are trying to use pack and grid on the same window. To fix this, show "results" using grid like this: ``` class Table: def __init__(self,root,rows): # code for creating table newWindow = Toplevel(root) newWindow.title("New Window") # sets the geometry of toplevel newWindow.geometry("600x600") Label(newWindow, text ="Results").grid(row = 0, column = 0, columnspan = 3) for i in range(len(rows)): for j in range(3): self.e=Entry(newWindow) self.e.grid(row=i+1, column=j) self.e.insert(END, rows[i][j]) ``` Because the "results" label now occupies row 0, I've had to change `self.e.grid` to use `row=i+1` so the entries display properly. I also set the `columnspan` of the "results" label to 3 so it will be centered.
I've organized your imports. This should remove naming conflicts. Note: You will have to declare `tkinter` objects with `tk.` and `ttk` objects with `ttk.` This should also help to remove naming conflicts. ```py import tkinter as tk from tkinter import ttk from tkinter import messagebox import requests import csv import pandas as pd import numpy as np import xlsxwriter import yfinance as yf from finta import TA from math import exp from operator import itemgetter ```
61,529,832
I have a python script called script.py that has two optional arguments (-a, -b) and has a single positional argument that either accepts a file or uses stdin. I want to make it so that a flag/file cannot be used multiple times. Thus, something like this shouldn't be allowed ``` ./script.py -a 5 -a 7 test.txt ./script.py -a 5 test1.txt test2.txt ``` Is there a way I can do this? I've been combing through argparse (<https://docs.python.org/3/library/argparse.html>) but can't seem to find anything that specifically meets my needs. I thought maybe I could use nargs=1 in the add\_argument(..) function and potentially check the length of the list generated. Any help would be much appreciated.
2020/04/30
[ "https://Stackoverflow.com/questions/61529832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12912748/" ]
@ajrwhite adding attachments had one trick, you need to use 'Alias' from mactypes to convert a string/path object to a mactypes path. I'm not sure why but it works. here's a working example which creates messages with recipients and can add attachments: ``` from appscript import app, k from mactypes import Alias from pathlib import Path def create_message_with_attachment(): subject = 'This is an important email!' body = 'Just kidding its not.' to_recip = ['myboss@mycompany.com', 'theguyih8@mycompany.com'] msg = Message(subject=subject, body=body, to_recip=to_recip) # attach file p = Path('path/to/myfile.pdf') msg.add_attachment(p) msg.show() class Outlook(object): def __init__(self): self.client = app('Microsoft Outlook') class Message(object): def __init__(self, parent=None, subject='', body='', to_recip=[], cc_recip=[], show_=True): if parent is None: parent = Outlook() client = parent.client self.msg = client.make( new=k.outgoing_message, with_properties={k.subject: subject, k.content: body}) self.add_recipients(emails=to_recip, type_='to') self.add_recipients(emails=cc_recip, type_='cc') if show_: self.show() def show(self): self.msg.open() self.msg.activate() def add_attachment(self, p): # p is a Path() obj, could also pass string p = Alias(str(p)) # convert string/path obj to POSIX/mactypes path attach = self.msg.make(new=k.attachment, with_properties={k.file: p}) def add_recipients(self, emails, type_='to'): if not isinstance(emails, list): emails = [emails] for email in emails: self.add_recipient(email=email, type_=type_) def add_recipient(self, email, type_='to'): msg = self.msg if type_ == 'to': recipient = k.to_recipient elif type_ == 'cc': recipient = k.cc_recipient msg.make(new=recipient, with_properties={k.email_address: {k.address: email}}) ```
Figured it out using [py-appscript](http://appscript.sourceforge.net/py-appscript/doc_3x/appscript-manual/03_quicktutorial.html) ``` pip install appscript ``` ``` from appscript import app, k outlook = app('Microsoft Outlook') msg = outlook.make( new=k.outgoing_message, with_properties={ k.subject: 'Test Email', k.plain_text_content: 'Test email body'}) msg.make( new=k.recipient, with_properties={ k.email_address: { k.name: 'Fake Person', k.address: 'fakeperson@gmail.com'}}) msg.open() msg.activate() ``` Also very useful to download py-appscript's ASDictionary and ASTranslate tools to convert examples of AppleScript to the python version.
61,529,832
I have a python script called script.py that has two optional arguments (-a, -b) and has a single positional argument that either accepts a file or uses stdin. I want to make it so that a flag/file cannot be used multiple times. Thus, something like this shouldn't be allowed ``` ./script.py -a 5 -a 7 test.txt ./script.py -a 5 test1.txt test2.txt ``` Is there a way I can do this? I've been combing through argparse (<https://docs.python.org/3/library/argparse.html>) but can't seem to find anything that specifically meets my needs. I thought maybe I could use nargs=1 in the add\_argument(..) function and potentially check the length of the list generated. Any help would be much appreciated.
2020/04/30
[ "https://Stackoverflow.com/questions/61529832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12912748/" ]
Figured it out using [py-appscript](http://appscript.sourceforge.net/py-appscript/doc_3x/appscript-manual/03_quicktutorial.html) ``` pip install appscript ``` ``` from appscript import app, k outlook = app('Microsoft Outlook') msg = outlook.make( new=k.outgoing_message, with_properties={ k.subject: 'Test Email', k.plain_text_content: 'Test email body'}) msg.make( new=k.recipient, with_properties={ k.email_address: { k.name: 'Fake Person', k.address: 'fakeperson@gmail.com'}}) msg.open() msg.activate() ``` Also very useful to download py-appscript's ASDictionary and ASTranslate tools to convert examples of AppleScript to the python version.
I have been trying to edit Jayme’s code to send an email to several recipients, and I finally figured that I just had to repeat the following lines: ``` msg.make( new=k.recipient, with_properties={ k.email_address: { k.name: 'Fake Person1', k.address: 'fakeperson1@gmail.com'}}) msg.make( new=k.recipient, with_properties={ k.email_address: { k.name: 'Fake Person2', k.address: 'fakeperson2@gmail.com'}}) ```
61,529,832
I have a python script called script.py that has two optional arguments (-a, -b) and has a single positional argument that either accepts a file or uses stdin. I want to make it so that a flag/file cannot be used multiple times. Thus, something like this shouldn't be allowed ``` ./script.py -a 5 -a 7 test.txt ./script.py -a 5 test1.txt test2.txt ``` Is there a way I can do this? I've been combing through argparse (<https://docs.python.org/3/library/argparse.html>) but can't seem to find anything that specifically meets my needs. I thought maybe I could use nargs=1 in the add\_argument(..) function and potentially check the length of the list generated. Any help would be much appreciated.
2020/04/30
[ "https://Stackoverflow.com/questions/61529832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12912748/" ]
@ajrwhite adding attachments had one trick, you need to use 'Alias' from mactypes to convert a string/path object to a mactypes path. I'm not sure why but it works. here's a working example which creates messages with recipients and can add attachments: ``` from appscript import app, k from mactypes import Alias from pathlib import Path def create_message_with_attachment(): subject = 'This is an important email!' body = 'Just kidding its not.' to_recip = ['myboss@mycompany.com', 'theguyih8@mycompany.com'] msg = Message(subject=subject, body=body, to_recip=to_recip) # attach file p = Path('path/to/myfile.pdf') msg.add_attachment(p) msg.show() class Outlook(object): def __init__(self): self.client = app('Microsoft Outlook') class Message(object): def __init__(self, parent=None, subject='', body='', to_recip=[], cc_recip=[], show_=True): if parent is None: parent = Outlook() client = parent.client self.msg = client.make( new=k.outgoing_message, with_properties={k.subject: subject, k.content: body}) self.add_recipients(emails=to_recip, type_='to') self.add_recipients(emails=cc_recip, type_='cc') if show_: self.show() def show(self): self.msg.open() self.msg.activate() def add_attachment(self, p): # p is a Path() obj, could also pass string p = Alias(str(p)) # convert string/path obj to POSIX/mactypes path attach = self.msg.make(new=k.attachment, with_properties={k.file: p}) def add_recipients(self, emails, type_='to'): if not isinstance(emails, list): emails = [emails] for email in emails: self.add_recipient(email=email, type_=type_) def add_recipient(self, email, type_='to'): msg = self.msg if type_ == 'to': recipient = k.to_recipient elif type_ == 'cc': recipient = k.cc_recipient msg.make(new=recipient, with_properties={k.email_address: {k.address: email}}) ```
I have been trying to edit Jayme’s code to send an email to several recipients, and I finally figured that I just had to repeat the following lines: ``` msg.make( new=k.recipient, with_properties={ k.email_address: { k.name: 'Fake Person1', k.address: 'fakeperson1@gmail.com'}}) msg.make( new=k.recipient, with_properties={ k.email_address: { k.name: 'Fake Person2', k.address: 'fakeperson2@gmail.com'}}) ```
22,699,040
I'm trying to pull the parsable-cite info from this [webpage](http://deepbills.cato.org/api/1/bill?congress=113&billnumber=499&billtype=s&billversion=is) using python. For example, for the page listed I would pull pl/111/148 and pl/111/152. My current regex is listed below, but it seems to return everything after parsable cite. It's probably something simple, but I'm relatively new to regexes. Thanks in advance. ``` re.findall(r'^parsable-cite=.*>$',page) ```
2014/03/27
[ "https://Stackoverflow.com/questions/22699040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1153018/" ]
I highly recommend using this regex, which will capture what you want: ``` re.findall(r'parsable-cite=\\\"(.*?)\\\"\>',page) ``` Explanation: ``` parsable-cite= matches the characters parsable-cite= literally (case sensitive) \\ matches the character \ literally \" matches the character " literally 1st Capturing group (.*?) .*? matches any character (except newline) Quantifier: Between zero and unlimited times, as few times as possible, expanding as needed \\ matches the character \ literally \" matches the character " literally \> matches the character > literally ``` Using **?** is the key ;) hope this helps.
Make your regex lazy: ``` re.findall(r'^parsable-cite=.*?>$',page) ^ ``` Or use a negated class (preferable): ``` re.findall(r'^parsable-cite=[^>]*>$',page) ``` `.*` is greedy by default and will try to match as much as possible before concluding a match. [regex101 demo](http://regex101.com/r/pZ7fW1) If you want to get the parts you need only, you can use capture groups: ``` re.findall(r'^parsable-cite=([^>]*)>$',page) ``` [regex101 demo](http://regex101.com/r/nV6cW6) --- Though, from the layout of your webpage, it doesn't seem that you need the anchors (`^` and `$`) (unless the newlines were somehow removed on the site...)
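To make the greedy-versus-bounded difference concrete, here is a quick check against a made-up line in the same shape as the page's markup (the sample string is invented for illustration, not taken from the actual page):

```python
import re

# Hypothetical sample mimicking the markup described in the question.
page = 'parsable-cite="pl/111/148"> filler text parsable-cite="pl/111/152">'

# Greedy .* runs to the end of the string, then backtracks only to the
# last '>', so the whole string collapses into a single match.
greedy = re.findall(r'parsable-cite=.*>', page)

# The negated class [^>]* cannot cross a '>', so each cite matches
# separately; the capture group returns just the quoted value.
negated = re.findall(r'parsable-cite=([^>]*)>', page)

print(len(greedy))   # 1
print(negated)       # ['"pl/111/148"', '"pl/111/152"']
```

This is why the original pattern appeared to return "everything after parsable-cite": the greedy quantifier swallowed both cites in one match.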
22,699,040
I'm trying to pull the parsable-cite info from this [webpage](http://deepbills.cato.org/api/1/bill?congress=113&billnumber=499&billtype=s&billversion=is) using python. For example, for the page listed I would pull pl/111/148 and pl/111/152. My current regex is listed below, but it seems to return everything after parsable cite. It's probably something simple, but I'm relatively new to regexes. Thanks in advance. ``` re.findall(r'^parsable-cite=.*>$',page) ```
2014/03/27
[ "https://Stackoverflow.com/questions/22699040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1153018/" ]
I highly recommend using this regex, which will capture what you want: ``` re.findall(r'parsable-cite=\\\"(.*?)\\\"\>',page) ``` Explanation: ``` parsable-cite= matches the characters parsable-cite= literally (case sensitive) \\ matches the character \ literally \" matches the character " literally 1st Capturing group (.*?) .*? matches any character (except newline) Quantifier: Between zero and unlimited times, as few times as possible, expanding as needed \\ matches the character \ literally \" matches the character " literally \> matches the character > literally ``` Using **?** is the key ;) hope this helps.
The `.*` you have there is "greedy", meaning it will match as much as it can, including any number of `>` characters and whatever comes after them. If what you really want is "everything up to the next `>`" then you should say `[^>]*>` instead, meaning "any number of non-`>` characters, then a `>`".
22,699,040
I'm trying to pull the parsable-cite info from this [webpage](http://deepbills.cato.org/api/1/bill?congress=113&billnumber=499&billtype=s&billversion=is) using python. For example, for the page listed I would pull pl/111/148 and pl/111/152. My current regex is listed below, but it seems to return everything after parsable cite. It's probably something simple, but I'm relatively new to regexes. Thanks in advance. ``` re.findall(r'^parsable-cite=.*>$',page) ```
2014/03/27
[ "https://Stackoverflow.com/questions/22699040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1153018/" ]
I highly recommend using this regex, which will capture what you want: ``` re.findall(r'parsable-cite=\\\"(.*?)\\\"\>',page) ``` Explanation: ``` parsable-cite= matches the characters parsable-cite= literally (case sensitive) \\ matches the character \ literally \" matches the character " literally 1st Capturing group (.*?) .*? matches any character (except newline) Quantifier: Between zero and unlimited times, as few times as possible, expanding as needed \\ matches the character \ literally \" matches the character " literally \> matches the character > literally ``` Using **?** is the key ;) hope this helps.
maybe something like this: ``` (?<=parsable-cite=\\\")\w{2}\/\d{3}\/\d{3} ``` <http://regex101.com/r/kE9uE3>
22,699,040
I'm trying to pull the parsable-cite info from this [webpage](http://deepbills.cato.org/api/1/bill?congress=113&billnumber=499&billtype=s&billversion=is) using python. For example, for the page listed I would pull pl/111/148 and pl/111/152. My current regex is listed below, but it seems to return everything after parsable cite. It's probably something simple, but I'm relatively new to regexes. Thanks in advance. ``` re.findall(r'^parsable-cite=.*>$',page) ```
2014/03/27
[ "https://Stackoverflow.com/questions/22699040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1153018/" ]
I highly recommend using this regex, which will capture what you want: ``` re.findall(r'parsable-cite=\\\"(.*?)\\\"\>',page) ``` Explanation: ``` parsable-cite= matches the characters parsable-cite= literally (case sensitive) \\ matches the character \ literally \" matches the character " literally 1st Capturing group (.*?) .*? matches any character (except newline) Quantifier: Between zero and unlimited times, as few times as possible, expanding as needed \\ matches the character \ literally \" matches the character " literally \> matches the character > literally ``` Using **?** is the key ;) hope this helps.
Though this is a JSON string with HTML embedded inside, you can still use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/documentation.html) for this purpose: ``` soup = BeautifulSoup(htmls); tags = soup.findAll("external-xref", {"parsable-cite":re.compile("")}) for t in tags: print t['parsable-cite'] ```
22,699,040
I'm trying to pull the parsable-cite info from this [webpage](http://deepbills.cato.org/api/1/bill?congress=113&billnumber=499&billtype=s&billversion=is) using python. For example, for the page listed I would pull pl/111/148 and pl/111/152. My current regex is listed below, but it seems to return everything after parsable cite. It's probably something simple, but I'm relatively new to regexes. Thanks in advance. ``` re.findall(r'^parsable-cite=.*>$',page) ```
2014/03/27
[ "https://Stackoverflow.com/questions/22699040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1153018/" ]
I highly recommend using this regex, which will capture what you want: ``` re.findall(r'parsable-cite=\\\"(.*?)\\\"\>',page) ``` Explanation: ``` parsable-cite= matches the characters parsable-cite= literally (case sensitive) \\ matches the character \ literally \" matches the character " literally 1st Capturing group (.*?) .*? matches any character (except newline) Quantifier: Between zero and unlimited times, as few times as possible, expanding as needed \\ matches the character \ literally \" matches the character " literally \> matches the character > literally ``` Using **?** is the key ;) hope this helps.
This might work if it's between `\"` delimiters ``` # \bparsable-cite\s*=\s*\"((?s:(?!\").)*)\" \b parsable-cite \s* = \s* \" ( # (1 start) (?s: (?! \" ) . )* ) # (1 end) \" ``` Or, just ``` # (?s)\bparsable-cite\s*=\s*\"(.*?)\" (?s) \b parsable-cite \s* = \s* \" ( .*? ) # (1) \" ```
22,699,040
I'm trying to pull the parsable-cite info from this [webpage](http://deepbills.cato.org/api/1/bill?congress=113&billnumber=499&billtype=s&billversion=is) using python. For example, for the page listed I would pull pl/111/148 and pl/111/152. My current regex is listed below, but it seems to return everything after parsable cite. It's probably something simple, but I'm relatively new to regexes. Thanks in advance. ``` re.findall(r'^parsable-cite=.*>$',page) ```
2014/03/27
[ "https://Stackoverflow.com/questions/22699040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1153018/" ]
I highly recommend using this regex, which will capture what you want: ``` re.findall(r'parsable-cite=\\\"(.*?)\\\"\>',page) ``` Explanation: ``` parsable-cite= matches the characters parsable-cite= literally (case sensitive) \\ matches the character \ literally \" matches the character " literally 1st Capturing group (.*?) .*? matches any character (except newline) Quantifier: Between zero and unlimited times, as few times as possible, expanding as needed \\ matches the character \ literally \" matches the character " literally \> matches the character > literally ``` Using **?** is the key ;) hope this helps.
If you think it will be very similar each time: ``` re.findall(r"pl/\d+/\d+", page) ```
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
You might be interested in the DWARF library from [pydevtools](http://code.google.com/p/pydevtools/): ``` >>> from bintools.dwarf import DWARF >>> dwarf = DWARF('test/test') >>> dwarf.get_loc_by_addr(0x8048475) ('/home/emilmont/Workspace/dbg/test/main.c', 36, 0) ```
I don't know of any, but if all else fails you could use [ctypes](http://docs.python.org/lib/module-ctypes.html) to directly use libdwarf, libelf or libbfd.
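As a sketch of the ctypes route (not a full DWARF reader): the snippet below loads libelf if it is present and calls its `elf_version` entry point. `EV_CURRENT = 1` is taken from the libelf C header, and the code simply reports when the library is absent rather than assuming it is installed:

```python
import ctypes
import ctypes.util

# find_library searches the standard locations for the shared object
# and returns None when libelf is not installed on this machine.
libelf_path = ctypes.util.find_library("elf")

version = None
if libelf_path is not None:
    libelf = ctypes.CDLL(libelf_path)
    # elf_version(EV_CURRENT) must be the first libelf call; EV_CURRENT
    # is 1 in <libelf.h>, and a return of 0 (EV_NONE) signals a mismatch.
    libelf.elf_version.argtypes = [ctypes.c_uint]
    libelf.elf_version.restype = ctypes.c_uint
    version = libelf.elf_version(1)

print(libelf_path, version)
```

From here you would declare `argtypes`/`restype` for each libelf/libdwarf function you need, which is workable but tedious compared with a native Python library.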
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
Please check [pyelftools](https://github.com/eliben/pyelftools) - a new pure Python library meant to do this.
I don't know of any, but if all else fails you could use [ctypes](http://docs.python.org/lib/module-ctypes.html) to directly use libdwarf, libelf or libbfd.
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
You might be interested in the DWARF library from [pydevtools](http://code.google.com/p/pydevtools/): ``` >>> from bintools.dwarf import DWARF >>> dwarf = DWARF('test/test') >>> dwarf.get_loc_by_addr(0x8048475) ('/home/emilmont/Workspace/dbg/test/main.c', 36, 0) ```
You should give [Construct](http://construct.wikispaces.com/) a try. It is very useful to parse binary data into python objects. There is even an example for the [ELF32](http://sebulbasvn.googlecode.com/svn/trunk/construct/formats/executable/elf32.py) file format.
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
Please check [pyelftools](https://github.com/eliben/pyelftools) - a new pure Python library meant to do this.
You should give [Construct](http://construct.wikispaces.com/) a try. It is very useful to parse binary data into python objects. There is even an example for the [ELF32](http://sebulbasvn.googlecode.com/svn/trunk/construct/formats/executable/elf32.py) file format.
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
You might be interested in the DWARF library from [pydevtools](http://code.google.com/p/pydevtools/): ``` >>> from bintools.dwarf import DWARF >>> dwarf = DWARF('test/test') >>> dwarf.get_loc_by_addr(0x8048475) ('/home/emilmont/Workspace/dbg/test/main.c', 36, 0) ```
I've been developing a DWARF parser using [Construct](http://construct.wikispaces.com/). Currently fairly rough, and parsing is slow. But I thought I should at least let you know. It may suit your needs, with a bit of work. I've got the code in Mercurial, hosted at bitbucket: * <http://bitbucket.org/cmcqueen1975/pythondwarf/> * <http://bitbucket.org/cmcqueen1975/construct/> (necessary modifications to Construct library) [Construct](http://construct.wikispaces.com/) is a very interesting library. DWARF is a complex format (as I'm discovering) and pushes Construct to its limits I think.
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
Please check [pyelftools](https://github.com/eliben/pyelftools) - a new pure Python library meant to do this.
I've been developing a DWARF parser using [Construct](http://construct.wikispaces.com/). Currently fairly rough, and parsing is slow. But I thought I should at least let you know. It may suit your needs, with a bit of work. I've got the code in Mercurial, hosted at bitbucket: * <http://bitbucket.org/cmcqueen1975/pythondwarf/> * <http://bitbucket.org/cmcqueen1975/construct/> (necessary modifications to Construct library) [Construct](http://construct.wikispaces.com/) is a very interesting library. DWARF is a complex format (as I'm discovering) and pushes Construct to its limits I think.
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
You might be interested in the DWARF library from [pydevtools](http://code.google.com/p/pydevtools/): ``` >>> from bintools.dwarf import DWARF >>> dwarf = DWARF('test/test') >>> dwarf.get_loc_by_addr(0x8048475) ('/home/emilmont/Workspace/dbg/test/main.c', 36, 0) ```
[hachoir](http://bitbucket.org/haypo/hachoir/wiki/Home) is another library for parsing binary data.
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
Please check [pyelftools](https://github.com/eliben/pyelftools) - a new pure Python library meant to do this.
[hachoir](http://bitbucket.org/haypo/hachoir/wiki/Home) is another library for parsing binary data.
45,954
I'm working on a utility which needs to resolve hex addresses to a symbolic function name and source code line number within a binary. The utility will run on Linux on x86, though the binaries it analyzes will be for a MIPS-based embedded system. The MIPS binaries are in ELF format, using DWARF for the symbolic debugging information. I'm currently planning to fork objdump, passing in a list of hex addresses and parsing the output to get function names and source line numbers. I have compiled an objdump with support for MIPS binaries, and it is working. I'd prefer to have a package allowing me to look things up natively from the Python code without forking another process. I can find no mention of libdwarf, libelf, or libbfd on python.org, nor any mention of python on dwarfstd.org. Is there a suitable module available somewhere?
2008/09/05
[ "https://Stackoverflow.com/questions/45954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4761/" ]
Please check [pyelftools](https://github.com/eliben/pyelftools) - a new pure Python library meant to do this.
You might be interested in the DWARF library from [pydevtools](http://code.google.com/p/pydevtools/): ``` >>> from bintools.dwarf import DWARF >>> dwarf = DWARF('test/test') >>> dwarf.get_loc_by_addr(0x8048475) ('/home/emilmont/Workspace/dbg/test/main.c', 36, 0) ```
45,107,439
This is more a question out of interest. I observed the following behaviour and would like to know why / how this happens (tried in Python 2.7.3 & Python 3.4.1):

```
ph = {1:0, 2:0}
d1 = {'a':ph, 'b':ph}
d2 = {'a':{1:0,2:0}, 'b':{1:0,2:0}}
>>> d1
{'a': {1: 0, 2: 0}, 'b': {1: 0, 2: 0}}
```

so d1 and d2 are the same. However, when using del or replacing values, this happens:

```
>>> del d1['a'][1]
>>> d1
{'a': {2: 0}, 'b': {2: 0}}
>>> del d2['a'][1]
>>> d2
{'a': {2: 0}, 'b': {1: 0, 2: 0}}
```

The behaviour for d2 is as expected, but the nested dictionaries in d1 (ph) seem to work differently. Is it because all variables in Python are really references? Is there a workaround if I can't specify the placeholder dictionary for every instance? Thank you!
2017/07/14
[ "https://Stackoverflow.com/questions/45107439", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8308726/" ]
Try using [pythontutor](http://www.pythontutor.com). If you put your code into it you will see this: [![enter image description here](https://i.stack.imgur.com/z3sr1.png)](https://i.stack.imgur.com/z3sr1.png) You can see that as you suspected the dictionaries in `d1` are references to the same object, `ph`. This means that when you `del d1['a'][1]` it really deletes a key in `ph` so you get this: [![enter image description here](https://i.stack.imgur.com/iRL3u.png)](https://i.stack.imgur.com/iRL3u.png) To work around this you need to initialise `d1` with *copies* of the `ph` dictionary, as discussed in other answers here...
Both keys in `d1` point to the same reference, so when you delete from `d1` it affects `ph` whether you reach it by `d1['a']` or `d1['b']` - you're getting to the same place. `d2`, on the other hand, instantiates two separate objects, so you're only affecting one of those in your example. Your workaround would be [`copy.deepcopy()`](https://docs.python.org/2/library/copy.html) for your second reference in `d1`:

```
import copy
d1 = {'a': ph, 'b': copy.deepcopy(ph)}
```

(You don't really need deepcopy in this example - `ph.copy()` would be sufficient - but if `ph` were any more complex you would - see the above linked docs.)
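A minimal runnable check of the aliasing, and of the `copy.deepcopy` workaround, sketched with the question's `ph` placeholder:

```python
import copy

ph = {1: 0, 2: 0}

# Both values are the *same* dict object, so a deletion reached through
# 'a' is also visible through 'b':
aliased = {'a': ph, 'b': ph}
del aliased['a'][1]
print(aliased)  # {'a': {2: 0}, 'b': {2: 0}}

# copy.deepcopy duplicates the nested dict, so the two values are
# independent objects:
ph2 = {1: 0, 2: 0}
independent = {'a': ph2, 'b': copy.deepcopy(ph2)}
del independent['a'][1]
print(independent)  # {'a': {2: 0}, 'b': {1: 0, 2: 0}}
```

`aliased['a'] is aliased['b']` is `True` in the first case, which is the whole story: two keys, one dictionary.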
69,687,722
I want to know how I can get **1 minute** gold price data **for a specific date and time interval** (such as a 1-hour interval on 18th October: 2021-10-18 09:30:00 to 2021-10-18 10:30:00) from yfinance or any other source in Python. My code is:

```
gold = yf.download(tickers="GC=F", period="5d", interval="1m")
```

It seems it's only possible to set a **period**, while I want to set **specific date and time intervals**. Thanks
2021/10/23
[ "https://Stackoverflow.com/questions/69687722", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4688178/" ]
Your call to `yfinance` returns a Pandas `DataFrame` with `datetime` as the index. We can use this to filter the dataframe to only entries between our `start` and `end` times. ``` import yfinance as yf from datetime import datetime gold = yf.download(tickers="GC=F", period="5d", interval="1m") start = datetime(2021, 10, 18, 9, 30, 0) end = datetime(2021, 10, 18, 10, 30, 0) filtered = gold[start: end] ``` Outputs ``` Open High ... Adj Close Volume Datetime ... 2021-10-18 09:30:00-04:00 1770.099976 1770.099976 ... 1767.599976 1035 2021-10-18 09:31:00-04:00 1767.900024 1769.099976 ... 1768.500000 467 2021-10-18 09:32:00-04:00 1768.599976 1769.300049 ... 1769.199951 428 2021-10-18 09:33:00-04:00 1769.300049 1770.199951 ... 1769.099976 750 2021-10-18 09:34:00-04:00 1769.199951 1769.300049 ... 1767.800049 549 ... ... ... ... ... ... 2021-10-18 10:26:00-04:00 1770.300049 1770.500000 ... 1769.900024 147 2021-10-18 10:27:00-04:00 1769.800049 1769.800049 ... 1769.400024 349 2021-10-18 10:28:00-04:00 1769.400024 1770.400024 ... 1770.199951 258 2021-10-18 10:29:00-04:00 1770.300049 1771.000000 ... 1770.099976 382 2021-10-18 10:30:00-04:00 1770.300049 1771.000000 ... 1770.900024 180 [61 rows x 6 columns] ```
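For a quick check of the slicing idea without a network call, here is a sketch over synthetic minute bars (fake prices; only the `DatetimeIndex` slicing behaviour is the point):

```python
import pandas as pd

# Synthetic one-minute bars standing in for the real yfinance download;
# the Close values are fake, only the indexing behaviour matters here.
index = pd.date_range("2021-10-18 09:00", periods=120, freq="min")
prices = pd.DataFrame({"Close": range(120)}, index=index)

# .loc on a DatetimeIndex slices between start and end, inclusive at
# both ends; datetime objects or parseable strings both work.
window = prices.loc["2021-10-18 09:30":"2021-10-18 10:30"]
print(len(window))  # 61 rows: 09:30 through 10:30 inclusive
```

Note that, unlike ordinary positional slicing, label-based datetime slicing includes the end point, which is why a one-hour window of minute bars has 61 rows.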
Edit 2021-10-25
===============

To clarify my answer, the question was:

> I want to set specific date and time intervals.

Everything needed is in the code's documentation: `start` and `end` can be a date string or a **datetime**.

```
start: str
    Download start date string (YYYY-MM-DD) or _datetime.
    Default is 1900-01-01
```

Example code:

> Note: something is wrong with timezones. I tried to pass the correct timezone with start and end, but the library didn't handle it correctly, so I ended up converting manually.

```py
import pandas as pd
import yfinance as yf
import pendulum

pd.options.display.max_rows = 10  # To decrease printouts

start = pendulum.parse('2021-10-18 09:30').add(hours=7)  # My tz is UTC+03:00, the original TZ is UTC-04:00, so add 7 hours to my local time
end = pendulum.parse('2021-10-18 10:30').add(hours=7)    # Same

print(start)
print(yf.download(tickers="GC=F", interval="1m", start=start, end=end))
```

Result - and you can pass whatever datetime ranges you want:

```
2021-10-18T16:30:00+00:00
[*********************100%***********************]  1 of 1 completed
                                  Open         High          Low        Close  \
Datetime
2021-10-18 09:30:00-04:00  1770.099976  1770.099976  1767.400024  1767.800049
2021-10-18 09:31:00-04:00  1767.900024  1769.099976  1767.800049  1768.500000
2021-10-18 09:32:00-04:00  1768.599976  1769.300049  1768.199951  1769.199951
2021-10-18 09:33:00-04:00  1769.300049  1770.199951  1768.900024  1769.099976
2021-10-18 09:34:00-04:00  1769.199951  1769.300049  1767.599976  1767.800049
...                                ...          ...          ...          ...
2021-10-18 10:25:00-04:00  1769.900024  1770.400024  1769.800049  1770.300049
2021-10-18 10:26:00-04:00  1770.300049  1770.500000  1769.900024  1769.900024
2021-10-18 10:27:00-04:00  1769.800049  1769.800049  1769.099976  1769.400024
2021-10-18 10:28:00-04:00  1769.400024  1770.400024  1769.400024  1770.199951
2021-10-18 10:29:00-04:00  1770.300049  1771.000000  1769.900024  1770.099976

                             Adj Close  Volume
Datetime
2021-10-18 09:30:00-04:00  1767.800049       0
2021-10-18 09:31:00-04:00  1768.500000     459
2021-10-18 09:32:00-04:00  1769.199951     428
2021-10-18 09:33:00-04:00  1769.099976     750
2021-10-18 09:34:00-04:00  1767.800049     549
...                                ...     ...
2021-10-18 10:25:00-04:00  1770.300049     134
2021-10-18 10:26:00-04:00  1769.900024     147
2021-10-18 10:27:00-04:00  1769.400024     349
2021-10-18 10:28:00-04:00  1770.199951     258
2021-10-18 10:29:00-04:00  1770.099976     382

[60 rows x 6 columns]
```

PS: with `start` and `end` you are not limited to the last 7 days, but there is still a limit of the last 30 days:

```
1 Failed download:
- GC=F: 1m data not available for startTime=1631980800 and endTime=1631998800. The requested range must be within the last 30 days.
```

Original
========

This library lacks documentation, but it is Python and as a result somewhat self-documenting. Read the definition of the download function here <https://github.com/ranaroussi/yfinance/blob/6654a41a8d5c0c9e869a9b9acb3e143786c765c7/yfinance/multi.py#L32>

PS: this function has `start=` and `end=` parameters that I hope will help you.
59,819,633
I want to log in on Instagram using Selenium with Python. I tried to find elements by name, tag and CSS selector, but Selenium doesn't find any element (I think). I also tried to switch to the iframe, but nothing. This is the error:

> Traceback (most recent call last):
> File "C:/Users/anton/Desktop/Instabot/chrome instagram bot/main.py", line 8, in
> my_driver.sign_in(username=USERNAME, password=PASSWORD)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\chrome_driver_cli.py", line 39, in sign_in
> username_input = self.driver.find_element_by_name("username")
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 496, in find_element_by_name
> return self.find_element(by=By.NAME, value=name)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find_element
> 'value': value})['value']
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
> self.error_handler.check_response(response)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
> raise exception_class(message, screen, stacktrace)
> selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[name="username"]"}
> (Session info: chrome=79.0.3945.130)

This is the code:

```
from selenium import webdriver
from selenium.common.exceptions import SessionNotCreatedException
from selenium.webdriver.common.keys import Keys

supported_versions_path = [
    "..\\chrome driver\\CHROME 80\\chromedriver.exe",
    "..\\chrome driver\\CHROME 79\\chromedriver.exe",
    "..\\chrome driver\\CHROME 78\\chromedriver.exe"
]

instagram_link = "https://www.instagram.com/accounts/login/?source=auth_switcher"


class ChromeDriver:
    def __init__(self):
        self.driver = self.__startup()

    def __startup(self):
        self.driver = None
        for current_checking_version in supported_versions_path:
            if self.driver is None:
                try:
                    self.driver = webdriver.Chrome(current_checking_version)
                except SessionNotCreatedException:
                    self.driver = None
        return self.driver

    def sign_in(self, username, password):
        self.driver.get(instagram_link)
        frame = self.driver.find_element_by_tag_name("iframe")
        self.driver.switch_to.frame(frame)
        username_input = self.driver.find_element_by_name("username")
        # password_input = self.driver.find_element_by_name('password')
        # username_input.send_keys(username)
        # password_input.send_keys(password)
        # password_input.send_keys(Keys.ENTER)

    def get_page(self, url):
        self.driver.get(url)

    def quit(self):
        self.driver.quit()
```

Can you help me?
2020/01/20
[ "https://Stackoverflow.com/questions/59819633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12507302/" ]
You are trying to return an object directly, and this is not possible.

```
function Main() {
  // create an array that holds objects
  const people = [
    { firstName: 'Fathi', lastName: 'Noor', age: 27, colors: ['red', 'blue'], bodyAttributes: { weight: 64, height: 171 } },
    { firstName: 'Hanafi', lastName: 'Noor', age: 26, colors: ['white', 'black'], bodyAttributes: { weight: 62, height: 172 } }
  ]

  const filteredPeople = people.filter(person => person.age == 27)

  return (
    <main>
      <div>
        {filteredPeople.map(person => {
          return (
            <div key={person.lastName}>
              <p>First name: {person.firstName}</p>
              <p>Last name: {person.lastName}</p>
              <p>Age: {person.age}</p>
              <p>Colors: {person.colors.map(color => (<span key={color}>{`${color}, `}</span>))}</p>
              <p>
                <span>Body Attributes: </span>
                <span>Weight: {person.bodyAttributes.weight}</span>
                <span>Height: {person.bodyAttributes.height}</span>
              </p>
            </div>
          )
        })}
      </div>
    </main>
  )
}
```
You have to `map` the array. See this [doc](https://reactjs.org/docs/lists-and-keys.html). For each person, you decide how to display each field; here I used `<p>` tags, but you could display them in a table or in a custom card. Your choice! Remember that inside `{}` brackets go JavaScript expressions, while inside `()` goes JSX, very similar to `HTML` tags as you can see! Read the documents and follow a tutorial, it'll help a lot!

```
{people.filter(person => person.age == 27).map(person => {
  return (
    <div key={person.lastName}>
      <p>First name: {person.firstName}</p>
      <p>Last name: {person.lastName}</p>
      <p>Age: {person.age}</p>
      <p>Colors: {person.colors.map(color => (<span key={color}>{color + " "}</span>))}</p>
      <p>
        <span>Body Attributes: </span>
        <span>Weight: {person.bodyAttributes.weight}</span>
        <span>Height: {person.bodyAttributes.height}</span>
      </p>
    </div>
  )
})}
```
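The `filter`/`map` pipeline is plain JavaScript and can be exercised outside React. A small sketch with the same sample people, shaping each match into a display string as a stand-in for JSX:

```javascript
const people = [
  { firstName: 'Fathi',  lastName: 'Noor', age: 27, colors: ['red', 'blue'] },
  { firstName: 'Hanafi', lastName: 'Noor', age: 26, colors: ['white', 'black'] },
];

// filter keeps only the people whose age matches; map then shapes each
// survivor into whatever the UI needs (here, a plain display string).
const lines = people
  .filter(person => person.age === 27)
  .map(person => `${person.firstName} ${person.lastName} (${person.colors.join(', ')})`);

console.log(lines);  // [ 'Fathi Noor (red, blue)' ]
```

Once the data side works on its own, the React part is only a matter of returning JSX from the `map` callback instead of a string.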
59,819,633
I want to log in on Instagram using Selenium with Python. I tried to find elements by name, tag and CSS selector, but Selenium doesn't find any element (I think). I also tried to switch to the iframe, but nothing. This is the error:

> Traceback (most recent call last):
> File "C:/Users/anton/Desktop/Instabot/chrome instagram bot/main.py", line 8, in
> my_driver.sign_in(username=USERNAME, password=PASSWORD)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\chrome_driver_cli.py", line 39, in sign_in
> username_input = self.driver.find_element_by_name("username")
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 496, in find_element_by_name
> return self.find_element(by=By.NAME, value=name)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find_element
> 'value': value})['value']
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
> self.error_handler.check_response(response)
> File "C:\Users\anton\Desktop\Instabot\chrome instagram bot\venv\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
> raise exception_class(message, screen, stacktrace)
> selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[name="username"]"}
> (Session info: chrome=79.0.3945.130)

This is the code:

```
from selenium import webdriver
from selenium.common.exceptions import SessionNotCreatedException
from selenium.webdriver.common.keys import Keys

supported_versions_path = [
    "..\\chrome driver\\CHROME 80\\chromedriver.exe",
    "..\\chrome driver\\CHROME 79\\chromedriver.exe",
    "..\\chrome driver\\CHROME 78\\chromedriver.exe"
]

instagram_link = "https://www.instagram.com/accounts/login/?source=auth_switcher"


class ChromeDriver:
    def __init__(self):
        self.driver = self.__startup()

    def __startup(self):
        self.driver = None
        for current_checking_version in supported_versions_path:
            if self.driver is None:
                try:
                    self.driver = webdriver.Chrome(current_checking_version)
                except SessionNotCreatedException:
                    self.driver = None
        return self.driver

    def sign_in(self, username, password):
        self.driver.get(instagram_link)
        frame = self.driver.find_element_by_tag_name("iframe")
        self.driver.switch_to.frame(frame)
        username_input = self.driver.find_element_by_name("username")
        # password_input = self.driver.find_element_by_name('password')
        # username_input.send_keys(username)
        # password_input.send_keys(password)
        # password_input.send_keys(Keys.ENTER)

    def get_page(self, url):
        self.driver.get(url)

    def quit(self):
        self.driver.quit()
```

Can you help me?
2020/01/20
[ "https://Stackoverflow.com/questions/59819633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12507302/" ]
You are trying to return an object directly, and this is not possible.

```
function Main() {
  // create an array that holds objects
  const people = [
    { firstName: 'Fathi', lastName: 'Noor', age: 27, colors: ['red', 'blue'], bodyAttributes: { weight: 64, height: 171 } },
    { firstName: 'Hanafi', lastName: 'Noor', age: 26, colors: ['white', 'black'], bodyAttributes: { weight: 62, height: 172 } }
  ]

  const filteredPeople = people.filter(person => person.age == 27)

  return (
    <main>
      <div>
        {filteredPeople.map(person => {
          return (
            <div key={person.lastName}>
              <p>First name: {person.firstName}</p>
              <p>Last name: {person.lastName}</p>
              <p>Age: {person.age}</p>
              <p>Colors: {person.colors.map(color => (<span key={color}>{`${color}, `}</span>))}</p>
              <p>
                <span>Body Attributes: </span>
                <span>Weight: {person.bodyAttributes.weight}</span>
                <span>Height: {person.bodyAttributes.height}</span>
              </p>
            </div>
          )
        })}
      </div>
    </main>
  )
}
```
```
function Main() {
  // create an array that holds objects
  const people = [
    {
      firstName: 'Fathi',
      lastName: 'Noor',
      age: 27,
      colors: ['red', 'blue'],
      bodyAttributes: { weight: 64, height: 171 }
    },
    {
      firstName: 'Hanafi',
      lastName: 'Noor',
      age: 26,
      colors: ['white', 'black'],
      bodyAttributes: { weight: 62, height: 172 }
    }
  ]

  return (
    <main>
      <div>
        {/* display each person whose age equals 27, using filter with an ES6 arrow */}
        <table>
          <tr>
            <th>firstName</th>
            <th>lastName</th>
            <th>age</th>
            <th>colors</th>
            <th>bodyAttributes</th>
          </tr>
          {people.filter(person => person.age == 27).map(item => {
            return (
              <tr key={item.lastName}>
                <td>{item.firstName}</td>
                <td>{item.lastName}</td>
                <td>{item.age}</td>
                <td>
                  <ul>
                    {item.colors.map(color => {
                      return <li key={color}>{color}</li>
                    })}
                  </ul>
                </td>
                <td>
                  <ul>
                    <li>W: {item.bodyAttributes.weight}</li>
                    <li>H: {item.bodyAttributes.height}</li>
                  </ul>
                </td>
              </tr>
            )
          })}
        </table>
      </div>
    </main>
  )
}
```
50,091,373
I am new to using Pandas on Windows and I'm not sure what I am doing wrong here. My data is located at 'C:\Users\me\data\lending\_club\loan.csv' ``` path = 'C:\\Users\\me\\data\\lending_club\\loan.csv' pd.read_csv(path) ``` And I get this error: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-107-b5792b17a3c3> in <module>() 1 path = 'C:\\Users\\me\\data\\lending_club\\loan.csv' ----> 2 pd.read_csv(path) C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision) 707 skip_blank_lines=skip_blank_lines) 708 --> 709 return _read(filepath_or_buffer, kwds) 710 711 parser_f.__name__ = name C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 447 448 # Create the parser. 
--> 449 parser = TextFileReader(filepath_or_buffer, **kwds) 450 451 if chunksize or iterator: C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds) 816 self.options['has_index_names'] = kwds['has_index_names'] 817 --> 818 self._make_engine(self.engine) 819 820 def close(self): C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine) 1047 def _make_engine(self, engine='c'): 1048 if engine == 'c': -> 1049 self._engine = CParserWrapper(self.f, **self.options) 1050 else: 1051 if engine == 'python': C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds) 1693 kwds['allow_leading_cols'] = self.index_col is not False 1694 -> 1695 self._reader = parsers.TextReader(src, **kwds) 1696 1697 # XXX pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source() FileNotFoundError: File b'C:\\Users\\me\\data\\lending_club\\loan.csv' does not exist ``` **EDIT:** I re-installed Anaconda and the error went away. Not sure exactly what was going on but could potentially have been related to the initial install being global vs. user specific in my second install. Thanks for the help everybody!
2018/04/29
[ "https://Stackoverflow.com/questions/50091373", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3971910/" ]
Just use forward slashes (`'/'`) instead of backslashes (`'\'`): ``` path = 'C:/Users/me/data/lending_club/loan.csv' ```
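As a quick illustration (the drive letter and folder names are placeholders, not taken from the question), the forward-slash form, the doubled-backslash form and a raw string all denote the same Windows path:

```python
import ntpath  # Windows path semantics, importable on any OS

forward = 'C:/Users/me/data/lending_club/loan.csv'
escaped = 'C:\\Users\\me\\data\\lending_club\\loan.csv'
raw = r'C:\Users\me\data\lending_club\loan.csv'

# ntpath.normpath collapses all three spellings to one canonical form.
print(ntpath.normpath(forward) == ntpath.normpath(escaped) == ntpath.normpath(raw))  # True
```

Forward slashes and raw strings are safer because a single backslash followed by a letter like `n` or `t` would be interpreted as an escape sequence and silently corrupt the path.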
By default, Python resolves relative paths against the current working directory. If you want to access files from another folder, try this: ``` import sys sys.path.insert(0, 'C:/Users/myFolder') ``` Those lines allow your script to access another folder. Be careful: use slashes /, not backslashes \
51,947,506
I have been trying to install a module for Python 3.6 through pip. I've read these posts from Stack Overflow and from the Python website, which seemed promising, but they didn't work for me. [Install a module using pip for specific python version](https://stackoverflow.com/questions/10919569/install-a-module-using-pip-for-specific-python-version) [python website](https://docs.python.org/3.4/installing/index.html) I've added the Python 3.6 main folder, Scripts and Lib to PATH and I've tried these commands. But somehow they refer to the Anaconda installation. ``` C:\Program Files (x86)\Python36-32\Scripts> pip3 install xlrd C:\Program Files (x86)\Python36-32\Scripts> pip install xlrd C:\Program Files (x86)\Python36-32\Scripts> pip3.6 install xlrd C:\Program Files (x86)\Python36-32\Scripts> py -3.6 -m pip install xlrd C:\Program Files (x86)\Python36-32\Scripts> py -3 -m pip install xlrd ``` But they all give the same answer. ``` Requirement already satisfied: xlrd in c:\programdata\anaconda3\lib\site-packages (1.1.0) ```
2018/08/21
[ "https://Stackoverflow.com/questions/51947506", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10088290/" ]
To install a package for a specific Python installation, you need the package installer shipped with that installation. In your case, `pip` resolves to an Anaconda installation; use `pip.exe` or `easy_install.exe` from this Python 3.6 installation's `Scripts` directory instead.
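A reliable way to see which interpreter a pip command will actually target (a general diagnostic, not specific to this machine) is to ask Python itself and then invoke pip through that exact interpreter:

```python
import os
import sys

# sys.executable is the full path of the interpreter running this script.
# Invoking "<sys.executable> -m pip install xlrd" installs into exactly this
# installation, never into whichever pip happens to come first on PATH.
print(sys.executable)
print(os.path.exists(sys.executable))  # True
```

So running `"C:\Program Files (x86)\Python36-32\python.exe" -m pip install xlrd` bypasses the Anaconda `pip` entirely.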
First, uninstall all the versions you have! Then go to <https://www.python.org/downloads/>, select the required version's MSI file, and run it as administrator!
51,947,506
I have been trying to install a module for Python 3.6 through pip. I've read these posts from Stack Overflow and from the Python website, which seemed promising, but they didn't work for me. [Install a module using pip for specific python version](https://stackoverflow.com/questions/10919569/install-a-module-using-pip-for-specific-python-version) [python website](https://docs.python.org/3.4/installing/index.html) I've added the Python 3.6 main folder, Scripts and Lib to PATH and I've tried these commands. But somehow they refer to the Anaconda installation. ``` C:\Program Files (x86)\Python36-32\Scripts> pip3 install xlrd C:\Program Files (x86)\Python36-32\Scripts> pip install xlrd C:\Program Files (x86)\Python36-32\Scripts> pip3.6 install xlrd C:\Program Files (x86)\Python36-32\Scripts> py -3.6 -m pip install xlrd C:\Program Files (x86)\Python36-32\Scripts> py -3 -m pip install xlrd ``` But they all give the same answer. ``` Requirement already satisfied: xlrd in c:\programdata\anaconda3\lib\site-packages (1.1.0) ```
2018/08/21
[ "https://Stackoverflow.com/questions/51947506", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10088290/" ]
To install a package for a specific Python installation, you need the package installer shipped with that installation. In your case, `pip` resolves to an Anaconda installation; use `pip.exe` or `easy_install.exe` from this Python 3.6 installation's `Scripts` directory instead.
I think that because my Python was 3.6 and so was my Anaconda distribution, Anaconda was automatically taking over all commands. I installed Python 3.7 alongside Anaconda's 3.6 and it worked fine, as mentioned on the Python website.
42,776,294
I need to extract a list of IP addresses and port numbers, as well as other information, from the following HTML table. I am currently using Python 2.7 with lxml, but have no idea how to find the proper path to these elements. Here is the address of the table: [link to table](https://hidester.com/proxylist/)
2017/03/14
[ "https://Stackoverflow.com/questions/42776294", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1624681/" ]
Looking at the examples at [github](https://github.com/kivy/kivy/search?utf8=%E2%9C%93&q=background_color "github"), it seems the values aren't 0-255 RGB like you might expect, but 0.0-1.0: ``` bubble.background_color = (1, 0, 0, .5) #50% translucent red background_color: .8, .8, 0, 1 ``` etc. You'll probably need something like ``` background_color: .2, .1, .73, 1 ```
Delete `background_normal: ''` and just use `background_color: (R, G, B, A)`. Pick a color in this [tool](https://developer.mozilla.org/es/docs/Web/CSS/CSS_Colors/Herramienta_para_seleccionar_color), then divide R, G and B by 255; A is the alpha (transparency) channel, usually 1. For example, if you choose (255, 79, 25), write (1.0, 0.31, 0.1, 1) instead.
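Since Kivy colors live on a 0.0-1.0 scale, the conversion from the usual 0-255 components is a division by 255; a minimal helper (the function name is mine, not part of Kivy):

```python
def rgb255_to_unit(r, g, b, a=1.0):
    """Convert 0-255 RGB components to Kivy's 0.0-1.0 color scale."""
    return (r / 255.0, g / 255.0, b / 255.0, a)

print(rgb255_to_unit(255, 0, 0))    # (1.0, 0.0, 0.0, 1.0) -- opaque red
print(rgb255_to_unit(255, 79, 25))  # about (1.0, 0.31, 0.098, 1.0)
```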
21,821,045
I want to get a buffer from a numpy array in Python 3. I have found the following code: ``` $ python3 Python 3.2.3 (default, Sep 25 2013, 18:25:56) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> a = numpy.arange(10) >>> numpy.getbuffer(a) ``` However, it produces the following error on the last step: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'module' object has no attribute 'getbuffer' ``` What am I doing wrong? The code works fine for Python 2. The numpy version I'm using is 1.6.1.
2014/02/17
[ "https://Stackoverflow.com/questions/21821045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856504/" ]
According to [Developer notes on the transition to Python 3](https://github.com/numpy/numpy/blob/master/doc/Py3K.rst.txt#pybuffer-object): > > PyBuffer (object) > > > Since there is a native buffer object in Py3, the `memoryview`, the > `newbuffer` and **`getbuffer` functions are removed from multiarray in Py3**: > their functionality is taken over by the new memoryview object. > > > ``` >>> import numpy >>> a = numpy.arange(10) >>> memoryview(a) <memory at 0xb60ae094> >>> m = _ >>> m[0] = 9 >>> a array([9, 1, 2, 3, 4, 5, 6, 7, 8, 9]) ```
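`memoryview` works with any object that supports the buffer protocol, so the same behaviour can be sketched without numpy using the standard-library `array` module:

```python
from array import array

a = array('q', range(10))  # stand-in for numpy.arange(10); same buffer protocol
m = memoryview(a)          # zero-copy view, the Py3 replacement for getbuffer
m[0] = 9                   # writes through to the underlying array
print(a[0])                # 9
print(len(bytes(m)))       # 80 -- ten 8-byte signed integers
```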
Numpy's [`arr.tobytes()`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tobytes.html) seems to be significantly faster than [`bytes(memoryview(arr))`](https://docs.python.org/dev/library/stdtypes.html#memoryview) in returning a `bytes` object. So, you may want to have a look at `tobytes()` as well. *Profiling* for Windows 7 on Intel i7 CPU, CPython v3.5.0, numpy v1.10.1. (**Edit** Note: same result order on Ubuntu 16.04, Intel i7 CPU, CPython v3.6.5, numpy v1.14.5.) ``` setup = '''import numpy as np; x = np.random.random(n).reshape(n//10, -1)''' ``` results ``` globals: {'n': 100}, tested 1e+06 times time (s) speedup methods 0 0.163005 6.03x x.tobytes() 1 0.491887 2.00x x.data.tobytes() 2 0.598286 1.64x memoryview(x).tobytes() 3 0.964653 1.02x bytes(x.data) 4 0.982743 bytes(memoryview(x)) globals: {'n': 1000}, tested 1e+06 times time (s) speedup methods 0 0.378260 3.21x x.tobytes() 1 0.708204 1.71x x.data.tobytes() 2 0.827941 1.47x memoryview(x).tobytes() 3 1.189048 1.02x bytes(x.data) 4 1.213423 bytes(memoryview(x)) globals: {'n': 10000}, tested 1e+06 times time (s) speedup methods 0 3.393949 1.34x x.tobytes() 1 3.739483 1.22x x.data.tobytes() 2 4.033783 1.13x memoryview(x).tobytes() 3 4.469730 1.02x bytes(x.data) 4 4.543620 bytes(memoryview(x)) ```
10,830,820
I need to back up various file types to GDrive (not just those convertible to GDocs formats) from a Linux server. What would be the simplest, most elegant way to do that with a Python script? Would any of the solutions pertaining to GDocs be applicable?
2012/05/31
[ "https://Stackoverflow.com/questions/10830820", "https://Stackoverflow.com", "https://Stackoverflow.com/users/303295/" ]
You can use the Documents List API to write a script that writes to Drive: <https://developers.google.com/google-apps/documents-list/> Both the Documents List API and the Drive API interact with the same resources (i.e. same documents and files). This sample in the Python client library shows how to upload an unconverted file to Drive: <http://code.google.com/p/gdata-python-client/source/browse/samples/docs/docs_v3_example.py#180>
The current documentation for saving a file to Google Drive using Python can be found here: <https://developers.google.com/drive/v3/web/manage-uploads> However, the way the Google Drive API handles document storage and retrieval does not follow the same architecture as POSIX file systems. As a result, if you wish to preserve the hierarchical architecture of the nested files on your Linux file system, you will need to write a lot of custom code so that the parent directories are preserved on Google Drive. On top of that, Google makes it difficult to gain write access to a normal Drive account. Your permission scope must include the following link: <https://www.googleapis.com/auth/drive> and to obtain a token to access a user's normal account, that user must first [join a group](https://groups.google.com/forum/#!forum/risky-access-by-unreviewed-apps) to provide access to non-reviewed apps. Furthermore, any OAuth token that is created has a limited shelf life. However, if you obtain an access token, the following script should allow you to save any file on your local machine to the same (relative) path on Google Drive.
``` def migrate(file_path, access_token, drive_space='drive'): ''' a method to save a posix file architecture to google drive NOTE: to write to a google drive account using a non-approved app, the oauth2 grantee account must also join this google group https://groups.google.com/forum/#!forum/risky-access-by-unreviewed-apps :param file_path: string with path to local file :param access_token: string with oauth2 access token grant to write to google drive :param drive_space: string with name of space to write to (drive, appDataFolder, photos) :return: string with id of file on google drive ''' # construct drive client import httplib2 from googleapiclient import discovery from oauth2client.client import AccessTokenCredentials google_credentials = AccessTokenCredentials(access_token, 'my-user-agent/1.0') google_http = httplib2.Http() google_http = google_credentials.authorize(google_http) google_drive = discovery.build('drive', 'v3', http=google_http) drive_client = google_drive.files() # prepare file body from googleapiclient.http import MediaFileUpload media_body = MediaFileUpload(filename=file_path, resumable=True) # determine file modified time import os from datetime import datetime modified_epoch = os.path.getmtime(file_path) modified_time = datetime.utcfromtimestamp(modified_epoch).isoformat() # determine path segments path_segments = file_path.split(os.sep) # construct upload kwargs create_kwargs = { 'body': { 'name': path_segments.pop(), 'modifiedTime': modified_time }, 'media_body': media_body, 'fields': 'id' } # walk through parent directories parent_id = '' if path_segments: # construct query and creation arguments walk_folders = True folder_kwargs = { 'body': { 'name': '', 'mimeType' : 'application/vnd.google-apps.folder' }, 'fields': 'id' } query_kwargs = { 'spaces': drive_space, 'fields': 'files(id, parents)' } while path_segments: folder_name = path_segments.pop(0) folder_kwargs['body']['name'] = folder_name # search for folder id in existing hierarchy 
if walk_folders: walk_query = "name = '%s'" % folder_name if parent_id: walk_query += "and '%s' in parents" % parent_id query_kwargs['q'] = walk_query response = drive_client.list(**query_kwargs).execute() file_list = response.get('files', []) else: file_list = [] if file_list: parent_id = file_list[0].get('id') # or create folder # https://developers.google.com/drive/v3/web/folder else: if not parent_id: if drive_space == 'appDataFolder': folder_kwargs['body']['parents'] = [ drive_space ] else: del folder_kwargs['body']['parents'] else: folder_kwargs['body']['parents'] = [parent_id] response = drive_client.create(**folder_kwargs).execute() parent_id = response.get('id') walk_folders = False # add parent id to file creation kwargs if parent_id: create_kwargs['body']['parents'] = [parent_id] elif drive_space == 'appDataFolder': create_kwargs['body']['parents'] = [drive_space] # send create request file = drive_client.create(**create_kwargs).execute() file_id = file.get('id') return file_id ``` PS. I have modified this script from the `labpack` python module. There is class called [driveClient](https://collectiveacuity.github.io/labPack/clients/#driveclient) in that module written by rcj1492 which handles saving, loading, searching and deleting files on google drive in a way that preserves the POSIX file system. ``` from labpack.storage.google.drive import driveClient ```
10,830,820
I need to back up various file types to GDrive (not just those convertible to GDocs formats) from a Linux server. What would be the simplest, most elegant way to do that with a Python script? Would any of the solutions pertaining to GDocs be applicable?
2012/05/31
[ "https://Stackoverflow.com/questions/10830820", "https://Stackoverflow.com", "https://Stackoverflow.com/users/303295/" ]
You can use the Documents List API to write a script that writes to Drive: <https://developers.google.com/google-apps/documents-list/> Both the Documents List API and the Drive API interact with the same resources (i.e. same documents and files). This sample in the Python client library shows how to upload an unconverted file to Drive: <http://code.google.com/p/gdata-python-client/source/browse/samples/docs/docs_v3_example.py#180>
I found that [PyDrive](https://pypi.python.org/pypi/PyDrive) handles the Drive API elegantly, and it also has great [documentation](http://pythonhosted.org/PyDrive/quickstart.html) (especially walking the user through the authentication part). **EDIT:** Combine that with the material on [Automating pydrive verification process](https://stackoverflow.com/questions/24419188/automating-pydrive-verification-process) and [Pydrive google drive automate authentication](https://stackoverflow.com/questions/46978784/pydrive-google-drive-automate-authentication), and that makes for some great documentation to get things going. Hope it helps those who are confused about where to start.
42,852,722
I'm iterating through such a feed: ``` {"siri":{"serviceDelivery":{"responseTimestamp":"2017-03-14T18:37:23Z","producerRef":"IVTR_RELAIS","status":"true","estimatedTimetableDelivery":[ {"lineRef":{"value":"C01742"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"SNCF-ACCES:VehicleJourney::UPAL97_20170314:LOC"},"vehicleMode":["RAIL"],"routeRef":{},"publishedLineName":[{"value":"RER A"}],"directionName":[],"originRef":{},"originName":[],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:Q:40918:"},"destinationName":[{"value":"GARE DE CERGY LE HAUT"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{"value":"SNCF-ACCES:Operator::SNCF:"},"productCategoryRef":{},"vehicleJourneyName":[],"journeyNote":[],"firstOrLastJourney":"UNSPECIFIED","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:411321:"},"expectedArrivalTime":"2017-03-14T20:02:00.000Z","expectedDepartureTime":"2017-03-14T20:02:00.000Z","aimedArrivalTime":"2017-03-14T20:02:00.000Z","aimedDepartureTime":"2017-03-14T20:02:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:411368:"},"expectedArrivalTime":"2017-03-14T20:09:00.000Z","expectedDepartureTime":"2017-03-14T20:09:00.000Z","aimedArrivalTime":"2017-03-14T20:09:00.000Z","aimedDepartureTime":"2017-03-14T20:09:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:411352:"},"expectedArrivalTime":"2017-03-14T20:05:00.000Z","expectedDepartureTime":"2017-03-14T20:05:00.000Z","aimedArrivalTime":"2017-03-14T20:05:00.000Z","aimedDepartureTime":"2017-03-14T20:05:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:41528:"},"expectedArrivalTime":"2017-03-14T19:56:0
0.000Z","expectedDepartureTime":"2017-03-14T19:56:00.000Z","aimedArrivalTime":"2017-03-14T19:56:00.000Z","aimedDepartureTime":"2017-03-14T19:56:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:40918:"},"expectedArrivalTime":"2017-03-14T20:12:00.000Z","aimedArrivalTime":"2017-03-14T20:12:00.000Z","aimedDepartureTime":"2017-03-14T20:12:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:37:12.314Z"} ,{"lineRef":{"value":"C00049"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"004_DEVILLAIRS:VehicleJourney::109173051020957:LOC"},"vehicleMode":[],"routeRef":{},"publishedLineName":[{"value":"42"}],"directionName":[{"value":"Aller"}],"originRef":{},"originName":[],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:BP:12689:"},"destinationName":[{"value":"L'Onde Maison des Arts"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{"value":"004_DEVILLAIRS:Operator::004_DEVILLAIRS_Operator__004_DEVILLAIRS_Company__Devillairs 
4_LOC_:"},"productCategoryRef":{},"vehicleJourneyName":[],"journeyNote":[],"firstOrLastJourney":"UNSPECIFIED","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:12690:"},"expectedArrivalTime":"2017-03-14T18:44:26.000Z","expectedDepartureTime":"2017-03-14T18:44:26.000Z","aimedArrivalTime":"2017-03-14T18:43:39.000Z","aimedDepartureTime":"2017-03-14T18:43:39.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12684:"},"expectedArrivalTime":"2017-03-14T18:38:51.000Z","expectedDepartureTime":"2017-03-14T18:38:51.000Z","aimedArrivalTime":"2017-03-14T18:34:51.000Z","aimedDepartureTime":"2017-03-14T18:34:51.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:40538:"},"expectedArrivalTime":"2017-03-14T18:40:53.000Z","expectedDepartureTime":"2017-03-14T18:40:53.000Z","aimedArrivalTime":"2017-03-14T18:37:24.000Z","aimedDepartureTime":"2017-03-14T18:37:24.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12678:"},"expectedArrivalTime":"2017-03-14T18:41:10.000Z","expectedDepartureTime":"2017-03-14T18:41:10.000Z","aimedArrivalTime":"2017-03-14T18:37:57.000Z","aimedDepartureTime":"2017-03-14T18:37:57.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12682:"},"expectedArrivalTime":"2017-03-14T18:40:00.000Z","expectedDepartureTime":"2017-03-14T18:40:00.000Z","aimedArrivalTime":"2017-03-14T18:36:21.000Z","aimedDepartureTime":"2017-03-14T18:36:21.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:41690:"},"expectedArrivalTime":"2017-03-14T18:42:17.000Z","expectedDepartureTime":"2017-03-14T18:42:17.000Z",
"aimedArrivalTime":"2017-03-14T18:39:00.000Z","aimedDepartureTime":"2017-03-14T18:39:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12743:"},"expectedDepartureTime":"2017-03-14T18:15:00.000Z","aimedDepartureTime":"2017-03-14T18:15:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12680:"},"expectedArrivalTime":"2017-03-14T18:40:24.000Z","expectedDepartureTime":"2017-03-14T18:40:24.000Z","aimedArrivalTime":"2017-03-14T18:36:52.000Z","aimedDepartureTime":"2017-03-14T18:36:52.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:12676:"},"expectedArrivalTime":"2017-03-14T18:41:42.000Z","expectedDepartureTime":"2017-03-14T18:41:42.000Z","aimedArrivalTime":"2017-03-14T18:38:29.000Z","aimedDepartureTime":"2017-03-14T18:38:29.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:35:51.000Z"} ,{"lineRef":{"value":"C01375"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"RATP:VehicleJourney::M5.R.1937.1:LOC"},"vehicleMode":[],"routeRef":{},"publishedLineName":[{"value":"Place d'Italie / Bobigny Pablo Picasso"}],"directionName":[{"value":"Bobigny Pablo Picasso"}],"originRef":{},"originName":[],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:Q:22015:"},"destinationName":[{"value":"Bobigny Pablo 
Picasso"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{},"productCategoryRef":{},"vehicleJourneyName":[],"journeyNote":[],"firstOrLastJourney":"UNSPECIFIED","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:22003:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22008:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22017:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:21952:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22009:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22016:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22007:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:21903:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22005:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"S
TIF:StopPoint:Q:22006:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22004:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22012:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22011:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22013:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:21981:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22000:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22010:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22002:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:22001:"},"expectedDepartureTime":"2017-03-14T18:37:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:36:55.890Z"} 
,{"lineRef":{"value":"C00774"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"STIVO:VehicleJourney::268437511:LOC"},"vehicleMode":[],"routeRef":{},"publishedLineName":[{"value":"CERGY PREFECTURE-VAUREAL TOUPETS"}],"directionName":[{"value":"R"}],"originRef":{},"originName":[],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:Q:10118:"},"destinationName":[{"value":"PrΓ©fecture RER"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{},"productCategoryRef":{},"vehicleJourneyName":[],"journeyNote":[],"firstOrLastJourney":"UNSPECIFIED","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:8729:"},"expectedDepartureTime":"2017-03-14T18:39:31.000Z","aimedDepartureTime":"2017-03-14T18:39:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:8731:"},"expectedDepartureTime":"2017-03-14T18:40:31.000Z","aimedDepartureTime":"2017-03-14T18:40:20.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:8730:"},"expectedDepartureTime":"2017-03-14T18:40:46.000Z","aimedDepartureTime":"2017-03-14T18:39:51.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:36:43.000Z"} ,{"lineRef":{"value":"C00697"},"directionRef":{"value":""},"datedVehicleJourneyRef":{"value":"SRVSAE:VehicleJourney::34661-1:LOC"},"vehicleMode":["BUS"],"routeRef":{},"publishedLineName":[{"value":"H "}],"directionName":[],"originRef":{"value":"STIF:StopPoint:BP:20399:"},"originName":[{"value":"Versailles Rive Gauche"}],"originShortName":[],"destinationDisplayAtOrigin":[],"via":[],"destinationRef":{"value":"STIF:StopPoint:BP:4122:"},"destinationName":[{"value":"La Celle St Cloud - 
Gare"}],"destinationShortName":[],"originDisplayAtDestination":[],"operatorRef":{"value":"SRVSAE:Operator::56 :"},"productCategoryRef":{},"vehicleJourneyName":[{"value":"34661-1"}],"journeyNote":[],"firstOrLastJourney":"OTHER_SERVICE","additionalVehicleJourneyRef":[],"estimatedCalls":{"estimatedCall":[{"stopPointRef":{"value":"STIF:StopPoint:Q:4062:"},"expectedArrivalTime":"2017-03-14T18:39:55.000Z","expectedDepartureTime":"2017-03-14T18:39:55.000Z","aimedArrivalTime":"2017-03-14T18:35:00.000Z","aimedDepartureTime":"2017-03-14T18:35:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:4064:"},"expectedArrivalTime":"2017-03-14T18:38:58.000Z","expectedDepartureTime":"2017-03-14T18:38:58.000Z","aimedArrivalTime":"2017-03-14T18:34:10.000Z","aimedDepartureTime":"2017-03-14T18:34:10.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]},{"stopPointRef":{"value":"STIF:StopPoint:Q:4068:"},"expectedArrivalTime":"2017-03-14T18:36:48.000Z","expectedDepartureTime":"2017-03-14T18:36:48.000Z","aimedArrivalTime":"2017-03-14T18:32:00.000Z","aimedDepartureTime":"2017-03-14T18:32:00.000Z","stopPointName":[],"originDisplay":[],"destinationDisplay":[],"arrivalOperatorRefs":[]}]},"recordedAtTime":"2017-03-14T18:33:37.000Z"} ]}}} ``` Using a python script: ``` import datetime import time import dateutil.parser import pytz import json import gtfs_realtime_pb2 from traceback import print_exc EPOCH = datetime.datetime(1970, 1, 1, tzinfo=pytz.utc) def handle_siri(raw): siri_data = json.loads(raw.decode('utf-8'))['siri'] msg = gtfs_realtime_pb2.FeedMessage() msg.header.gtfs_realtime_version = "1.0" msg.header.incrementality = msg.header.FULL_DATASET msg.header.timestamp = int(time.time()) # msg.header.timestamp = long(siri_data['serviceDelivery']['responseTimestamp']) for i, vehicle in enumerate(siri_data['serviceDelivery']['estimatedTimetableDelivery']): route_id = 
vehicle['lineRef']['value'][:6].strip() if len(vehicle['datedVehicleJourneyRef']) > 0: operator = vehicle['datedVehicleJourneyRef'].split('[:.]') if operator[0] == "RATP": sens = operator[4] if operator[4] == "A": ent.trip_update.trip.direction_id = 0 if operator[4] == "R": ent.trip_update.trip.direction_id = 1 if operator[0] != "RATP": continue # direction = vehicle['monitoredVehicleJourney']['directionRef'] # Il faudra editer le code pour le definir pour les autres... # ent.trip_update.trip.direction_id = int(direction) - 1 # Il faudra editer le code pour le definir pour les autres... if 'estimatedCalls' not in vehicle['monitoredVehicleJourney']: continue for call in vehicle['monitoredVehicleJourney']['estimatedCalls']: stoptime = ent.trip_update.stop_time_update.add() if 'stopPointRef' in vehicle['monitoredVehicleJourney']['estimatedCall']: stoptime.stop_id = vehicle['monitoredVehicleJourney']['estimatedCall']['stopPointRef'] arrival_time = (dateutil.parser.parse(call['expectedArrivalTime']) - EPOCH).total_seconds() stoptime.arrival.time = int(arrival_time) departure_time = (dateutil.parser.parse(call['expectedDepartureTime']) - EPOCH).total_seconds() stoptime.departure.time = int(departure_time) ent = msg.entity.add() # On garde ca ? 
ent.id = str(i) # ent.trip_update.timestamp = vehicle['recordedAtTime'] # ent.trip_update.trip.route_id = route_id # # try: # int(vehicle['MonitoredVehicleJourney']['Delay']) # except: # print_exc() # print vehicle, vehicle['MonitoredVehicleJourney']['Delay'] # continue # ent.trip_update.trip.start_date = vehicle['MonitoredVehicleJourney']['FramedVehicleJourneyRef']['DataFrameRef']['value'].replace("-", "") # if 'datedVehicleJourneyRef' in vehicle['monitoredVehicleJourney']['framedVehicleJourneyRef']: # doesn't exist in our feed # start_time = vehicle['monitoredVehicleJourney']['framedVehicleJourneyRef']['datedVehicleJourneyRef'] # ent.trip_update.trip.start_time = start_time[:2]+":"+start_time[2:]+":00" # # # if 'vehicleRef' in vehicle['monitoredVehicleJourney']: # doesn't exist in our feed # ent.trip_update.vehicle.label = vehicle['monitoredVehicleJourney']['vehicleRef']['value'] return msg ``` Unfortunately, This loop is not working, returning an error, while it should iterate through each item starting with `{"lineRef"`: ``` File "/home/nicolas/packaging/stif.py", line 24, in handle_siri operator = vehicle['datedVehicleJourneyRef'].split('[:.]') AttributeError: 'dict' object has no attribute 'split' ``` Could you please help me fix this? Thanks for looking at this.
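For reference, the failure reduces to a two-line reproduction: the JSON field is a dict, so `.split()` must be called on its `'value'` string, and splitting on several delimiters needs `re.split` rather than `str.split` (this diagnosis is inferred from the traceback, not stated in the original post):

```python
import re

# Minimal stand-in for one entry of the feed shown above.
vehicle = {'datedVehicleJourneyRef':
           {'value': 'SNCF-ACCES:VehicleJourney::UPAL97_20170314:LOC'}}

ref = vehicle['datedVehicleJourneyRef']['value']  # a str, so .split is available
parts = re.split('[:.]', ref)                     # str.split('[:.]') would treat the pattern literally
print(parts[0])  # 'SNCF-ACCES'
```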
2017/03/17
[ "https://Stackoverflow.com/questions/42852722", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5689072/" ]
See the code snippet ``` public static class MyDownloadTask extends AsyncTask<Void, Integer, Void> { @Override protected void onProgressUpdate(Integer... values) { super.onProgressUpdate(values); // receive the published update here // progressBar.setProgress(values[0]); } @Override protected Void doInBackground(Void... params) { // publish your download progress here // publishProgress(10); return null; } } ```
With this example you can show the download speed in your progress dialog: ``` public class AsyncDownload extends AsyncTask<Void, Double, String> { ProgressDialog progressDialog; @Override protected void onPreExecute() { super.onPreExecute(); progressDialog = new ProgressDialog(MainActivity.this); progressDialog.setMessage("Speed: " + 0.0); progressDialog.show(); } @Override protected String doInBackground(Void... voids) { // AsyncDownload Double speed = 0.0; // Calculate speed publishProgress(speed); return null; } @Override protected void onProgressUpdate(Double... values) { super.onProgressUpdate(values); progressDialog.setMessage("Speed " + values[0]); } @Override protected void onPostExecute(String s) { super.onPostExecute(s); } } ``` To calculate the download speed you can use this example: [Measuring Download Speed Java](https://stackoverflow.com/questions/6322767/measuring-download-speed-java)
22,923,983
I am embedding Python code in my C++ program. The use of `PyFloat_AsDouble` is causing loss of precision. It keeps only up to 6 precision digits. My program is very sensitive to precision. Is there a known fix for this? Here is the relevant C++ code: ``` _ret = PyObject_CallObject(pFunc, pArgs); vector<double> retVals; for(size_t i=0; i<PyList_Size(_ret); i++){ retVals.push_back(PyFloat_AsDouble(PyList_GetItem(_ret, i))); } ``` retVals has precision of only 6, while the value returned by the Python code is a float that can have a higher precision. How to get full precision?
2014/04/07
[ "https://Stackoverflow.com/questions/22923983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/249560/" ]
Assuming that the Python object contains floating point values stored to double precision, then your code works as you expect. Most likely you are simply mis-diagnosing a problem that does not exist. My guess is that you are looking at the values in the debugger which only displays the values to a limited precision. Or you are printing them out to a limited precision.
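The mis-diagnosis described in this answer is easy to reproduce in pure Python: fixed-point formatting defaults to 6 decimal places, while `repr` shows the full double. A minimal sketch:

```python
import math

x = math.pi  # a double with ~15-17 significant digits

# Fixed-point formatting defaults to 6 decimal places,
# which can look like "loss of precision":
print("%f" % x)             # 3.141593
print("{:f}".format(x))     # 3.141593

# The underlying value still has full double precision:
print(repr(x))              # 3.141592653589793
print("{:.17g}".format(x))  # 3.1415926535897931
```

The same thing happens on the C++ side: `std::cout` prints doubles with 6 significant digits by default, so the value stored in the `vector<double>` is fine even though a naive print suggests otherwise.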
Try `print type(PyList_GetItem(_ret, i))`. My bet is it will show `float`. Edit: do this in the Python code, not in the C++ code.
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
you can escape the interpolation of `{two}` by doubling the curly brackets: ``` x = "this string takes two like {one} and {{two}}" y = x.format(one=1) z = y.format(two=2) print(z) # this string takes two like 1 and 2 ``` --- a different way to go are [template strings](https://docs.python.org/3/library/string.html#template-strings): ``` from string import Template t = Template('this string takes two like $one and $two') y = t.safe_substitute(one=1) print(y) # this string takes two like 1 and $two z = Template(y).safe_substitute(two=2) print(z) # this string takes two like 1 and 2 ``` ([this answer](https://stackoverflow.com/a/44576618/4954037) was before mine for the template strings....)
You can pass the placeholder itself as the replacement value, so that `{two}` survives the first `format` call and can be filled in later: ``` y = x.format(one="one", two="{two}") ``` This easily extends to multiple replacement passes, but it requires that you supply all of the keys in each iteration.
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
If placeholders in your string don't have any format specifications, in Python 3 you can use `str.format_map` and provide a mapping, returning the field name for missing fields: ``` class Default(dict): def __missing__(self, key): return '{' + key + '}' ``` ``` In [6]: x = "this string takes two like {one} and {two}" In [7]: x.format_map(Default(one=1)) Out[7]: 'this string takes two like 1 and {two}' ``` If you do have format specifications, you'll have to subclass `string.Formatter` and override some methods, or switch to a different formatting method, like `string.Template`.
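For the format-specification case mentioned at the end of this answer, here is one possible sketch of the `string.Formatter` subclass approach (Python 3; the `PartialFormatter` and `_Unfilled` names are my own, and this handles only simple named fields, not attribute/index access or `!r`-style conversions):

```python
from string import Formatter


class _Unfilled:
    """Marker for a field that has no value yet (helper name is made up)."""
    def __init__(self, name):
        self.name = name


class PartialFormatter(Formatter):
    """Leave unknown simple {field} placeholders, including their specs, intact."""

    def get_value(self, key, args, kwargs):
        # Named field with no value supplied: return a marker instead of raising.
        if isinstance(key, str) and key not in kwargs:
            return _Unfilled(key)
        return super().get_value(key, args, kwargs)

    def format_field(self, value, format_spec):
        # Re-emit the placeholder verbatim, keeping its format spec.
        if isinstance(value, _Unfilled):
            spec = ":" + format_spec if format_spec else ""
            return "{" + value.name + spec + "}"
        return super().format_field(value, format_spec)


f = PartialFormatter()
print(f.format("{one:>5} and {two:>5}", one=1))
# '    1 and {two:>5}'
```

The key point is that the untouched placeholder keeps its `:>5` spec, so a later `format` pass can still apply it.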
you can escape the interpolation of `{two}` by doubling the curly brackets: ``` x = "this string takes two like {one} and {{two}}" y = x.format(one=1) z = y.format(two=2) print(z) # this string takes two like 1 and 2 ``` --- a different way to go are [template strings](https://docs.python.org/3/library/string.html#template-strings): ``` from string import Template t = Template('this string takes two like $one and $two') y = t.safe_substitute(one=1) print(y) # this string takes two like 1 and $two z = Template(y).safe_substitute(two=2) print(z) # this string takes two like 1 and 2 ``` ([this answer](https://stackoverflow.com/a/44576618/4954037) was before mine for the template strings....)
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
you can escape the interpolation of `{two}` by doubling the curly brackets: ``` x = "this string takes two like {one} and {{two}}" y = x.format(one=1) z = y.format(two=2) print(z) # this string takes two like 1 and 2 ``` --- a different way to go are [template strings](https://docs.python.org/3/library/string.html#template-strings): ``` from string import Template t = Template('this string takes two like $one and $two') y = t.safe_substitute(one=1) print(y) # this string takes two like 1 and $two z = Template(y).safe_substitute(two=2) print(z) # this string takes two like 1 and 2 ``` ([this answer](https://stackoverflow.com/a/44576618/4954037) was before mine for the template strings....)
All great answers; I will start using this `Template` package soon. I'm very disappointed in the default behavior here, and I don't understand why a string template requires passing all the keys each time: if there are 3 keys, I can't see a logical reason why you can't pass just 1 or 2 (but I also don't know how compilers work). Solved by using `%s` for the items I'm immediately substituting in the config file, and `{key}` for the keys I replace later upon execution of the flask server: ``` In [1]: issue = "Python3 string {item} are somewhat defective: %s" In [2]: preformatted_issue = issue % 'true' In [3]: preformatted_issue Out[3]: 'Python3 string {item} are somewhat defective: true' In [4]: result = preformatted_issue.format(item='templates') In [5]: result Out[5]: 'Python3 string templates are somewhat defective: true' ```
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
I think [`string.Template`](https://docs.python.org/2/library/string.html#string.Template) does what you want: ``` from string import Template s = "this string takes two like $one and $two" s = Template(s).safe_substitute(one=1) print(s) # this string takes two like 1 and $two s = Template(s).safe_substitute(two=2) print(s) # this string takes two like 1 and 2 ```
You can pass the placeholder itself as the replacement value, so that `{two}` survives the first `format` call and can be filled in later: ``` y = x.format(one="one", two="{two}") ``` This easily extends to multiple replacement passes, but it requires that you supply all of the keys in each iteration.
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
If placeholders in your string don't have any format specifications, in Python 3 you can use `str.format_map` and provide a mapping, returning the field name for missing fields: ``` class Default(dict): def __missing__(self, key): return '{' + key + '}' ``` ``` In [6]: x = "this string takes two like {one} and {two}" In [7]: x.format_map(Default(one=1)) Out[7]: 'this string takes two like 1 and {two}' ``` If you do have format specifications, you'll have to subclass `string.Formatter` and override some methods, or switch to a different formatting method, like `string.Template`.
You can pass the placeholder itself as the replacement value, so that `{two}` survives the first `format` call and can be filled in later: ``` y = x.format(one="one", two="{two}") ``` This easily extends to multiple replacement passes, but it requires that you supply all of the keys in each iteration.
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
You can pass the placeholder itself as the replacement value, so that `{two}` survives the first `format` call and can be filled in later: ``` y = x.format(one="one", two="{two}") ``` This easily extends to multiple replacement passes, but it requires that you supply all of the keys in each iteration.
All great answers; I will start using this `Template` package soon. I'm very disappointed in the default behavior here, and I don't understand why a string template requires passing all the keys each time: if there are 3 keys, I can't see a logical reason why you can't pass just 1 or 2 (but I also don't know how compilers work). Solved by using `%s` for the items I'm immediately substituting in the config file, and `{key}` for the keys I replace later upon execution of the flask server: ``` In [1]: issue = "Python3 string {item} are somewhat defective: %s" In [2]: preformatted_issue = issue % 'true' In [3]: preformatted_issue Out[3]: 'Python3 string {item} are somewhat defective: true' In [4]: result = preformatted_issue.format(item='templates') In [5]: result Out[5]: 'Python3 string templates are somewhat defective: true' ```
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
If placeholders in your string don't have any format specifications, in Python 3 you can use `str.format_map` and provide a mapping, returning the field name for missing fields: ``` class Default(dict): def __missing__(self, key): return '{' + key + '}' ``` ``` In [6]: x = "this string takes two like {one} and {two}" In [7]: x.format_map(Default(one=1)) Out[7]: 'this string takes two like 1 and {two}' ``` If you do have format specifications, you'll have to subclass `string.Formatter` and override some methods, or switch to a different formatting method, like `string.Template`.
I think [`string.Template`](https://docs.python.org/2/library/string.html#string.Template) does what you want: ``` from string import Template s = "this string takes two like $one and $two" s = Template(s).safe_substitute(one=1) print(s) # this string takes two like 1 and $two s = Template(s).safe_substitute(two=2) print(s) # this string takes two like 1 and 2 ```
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
I think [`string.Template`](https://docs.python.org/2/library/string.html#string.Template) does what you want: ``` from string import Template s = "this string takes two like $one and $two" s = Template(s).safe_substitute(one=1) print(s) # this string takes two like 1 and $two s = Template(s).safe_substitute(two=2) print(s) # this string takes two like 1 and 2 ```
All great answers; I will start using this `Template` package soon. I'm very disappointed in the default behavior here, and I don't understand why a string template requires passing all the keys each time: if there are 3 keys, I can't see a logical reason why you can't pass just 1 or 2 (but I also don't know how compilers work). Solved by using `%s` for the items I'm immediately substituting in the config file, and `{key}` for the keys I replace later upon execution of the flask server: ``` In [1]: issue = "Python3 string {item} are somewhat defective: %s" In [2]: preformatted_issue = issue % 'true' In [3]: preformatted_issue Out[3]: 'Python3 string {item} are somewhat defective: true' In [4]: result = preformatted_issue.format(item='templates') In [5]: result Out[5]: 'Python3 string templates are somewhat defective: true' ```
44,576,509
I am having a problem like ``` In [5]: x = "this string takes two like {one} and {two}" In [6]: y = x.format(one="one") --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-b3c89fbea4d3> in <module>() ----> 1 y = x.format(one="one") KeyError: 'two' ``` I have a compound string with many keys that gets kept in a config file. For 8 different queries, they all use the same string, except 1 key is a different setting. I need to be able to substitute a key in that file to save the strings for later like: ``` "this string takes two like one and {two}" ``` How do I substitute one key at a time using `format`?
2017/06/15
[ "https://Stackoverflow.com/questions/44576509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3282434/" ]
If placeholders in your string don't have any format specifications, in Python 3 you can use `str.format_map` and provide a mapping, returning the field name for missing fields: ``` class Default(dict): def __missing__(self, key): return '{' + key + '}' ``` ``` In [6]: x = "this string takes two like {one} and {two}" In [7]: x.format_map(Default(one=1)) Out[7]: 'this string takes two like 1 and {two}' ``` If you do have format specifications, you'll have to subclass `string.Formatter` and override some methods, or switch to a different formatting method, like `string.Template`.
All great answers; I will start using this `Template` package soon. I'm very disappointed in the default behavior here, and I don't understand why a string template requires passing all the keys each time: if there are 3 keys, I can't see a logical reason why you can't pass just 1 or 2 (but I also don't know how compilers work). Solved by using `%s` for the items I'm immediately substituting in the config file, and `{key}` for the keys I replace later upon execution of the flask server: ``` In [1]: issue = "Python3 string {item} are somewhat defective: %s" In [2]: preformatted_issue = issue % 'true' In [3]: preformatted_issue Out[3]: 'Python3 string {item} are somewhat defective: true' In [4]: result = preformatted_issue.format(item='templates') In [5]: result Out[5]: 'Python3 string templates are somewhat defective: true' ```
32,703,469
I am new to Kafka. We are trying to import data from a CSV file into Kafka. We need to import every day, and meanwhile the previous day's data becomes outdated. How could I remove all messages under a Kafka topic in Python? Or how could I remove the Kafka topic in Python? Or, I saw someone suggest waiting for the data to expire; how could I set the data expiration time, if that's possible? Any suggestions will be appreciated! Thanks
2015/09/21
[ "https://Stackoverflow.com/questions/32703469", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1028315/" ]
You cannot delete messages in a Kafka topic. You can: * Set `log.retention.*` properties, which basically control the expiration of messages. You can choose either time-based expiration (e.g. keep messages that are six hours old or newer) or space-based expiration (e.g. keep at most 1 GB of messages). See [Broker config](http://kafka.apache.org/documentation.html#brokerconfigs) and search for *retention*. You can set different values for different topics. * Delete the whole topic. It's kind of tricky and I don't recommend this way. * Create a new topic for every day. Something like *my-topic-2015-09-21*. But I don't think you need to delete the messages in the topic at all, because your Kafka consumer keeps track of the messages that have already been processed. Thus when you read all of today's messages, the Kafka consumer saves this information and you're going to read just the new messages tomorrow. Another possible solution could be [Log compaction](http://kafka.apache.org/documentation.html#compaction), but it's more complicated and probably not what you need. Basically you can set a key for every message in the Kafka topic. If you send two different messages with the same key, Kafka will keep just the newest message in the topic and delete all older messages with the same key. You can think of it as a kind of "key-value store": every message with the same key just updates the value under that specific key. But hey, you really don't need this, it's just FYI :-).
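The "key-value store" view of log compaction can be sketched in a few lines of plain Python. This is purely a conceptual model of what compaction leaves behind, not how Kafka implements it, and the `compacted` function name is mine:

```python
def compacted(log):
    """Conceptual model of log compaction: keep only the latest value per key.

    `log` is a sequence of (key, value) messages in the order they were sent.
    """
    latest = {}
    for key, value in log:
        latest[key] = value  # a newer message overwrites the older one
    return latest


log = [("user1", "v1"), ("user2", "v1"), ("user1", "v2")]
print(compacted(log))  # {'user1': 'v2', 'user2': 'v1'}
```

After compaction, only the newest `("user1", "v2")` survives for that key, which is exactly the "update a value under a specific key" behavior described above.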
The simplest approach is to simply delete the topic. I use this in Python automated test suites, where I want to verify a specific set of test messages gets sent through Kafka, and don't want to see results from previous test runs: ``` from subprocess import call def delete_kafka_topic(topic_name): call(["/usr/bin/kafka-topics", "--zookeeper", "zookeeper-1:2181", "--delete", "--topic", topic_name]) ```
43,214,036
I have a Python script that I'm invoking from C#; the code is provided below. The issue with this process is that if the Python script fails, I'm not able to detect that in C# and display the exception. I'm using C#, MVC, and Python. Can you please modify the code below and show me how I can catch the exception thrown when the Python script fails? ``` Process process = new Process(); Stopwatch stopWatch = new Stopwatch(); ProcessStartInfo processStartInfo = new ProcessStartInfo(python); processStartInfo.UseShellExecute = false; processStartInfo.RedirectStandardOutput = true; try { process.StartInfo = processStartInfo; stopWatch.Start(); //Start the process process.Start(); // Read the standard output of the app we called. // in order to avoid deadlock we will read output first // and then wait for process terminate: StreamReader myStreamReader = process.StandardOutput; string myString = myStreamReader.ReadLine(); // wait exit signal from the app we called and then close it. process.WaitForExit(); process.Close(); stopWatch.Stop(); TimeSpan ts = stopWatch.Elapsed; Session["Message"] = "Success"; } catch (InvalidOperationException ex) { throw new System.InvalidOperationException("Bulk Upload Failed. Please contact administrator for further details." + ex.StackTrace); Session["Message"] = "Failed"; } ```
2017/04/04
[ "https://Stackoverflow.com/questions/43214036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4879531/" ]
Here is the working code. To get any error or exception from Python into C#, set the `RedirectStandardError` property to `true` and then read the standard error stream. The working version of the code is provided below: ``` process.StartInfo = processStartInfo; processStartInfo.RedirectStandardError = true; stopWatch.Start(); //Start the process process.Start(); string standardError = process.StandardError.ReadToEnd(); // wait exit signal from the app we called and then close it. process.WaitForExit(); process.Close(); stopWatch.Stop(); TimeSpan ts = stopWatch.Elapsed; ```
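The same redirect-and-read-stderr pattern can be mirrored in Python itself, which is handy for checking what a failing script actually writes to standard error before wiring it up to C#. A small sketch (the inline `-c` script below is just a stand-in for the real script):

```python
import subprocess
import sys

# Stand-in for the real script: it raises, so the Python interpreter
# prints a traceback to stderr and exits with a non-zero status code.
result = subprocess.run(
    [sys.executable, "-c", "raise ValueError('bulk upload failed')"],
    capture_output=True,  # redirect both stdout and stderr (Python 3.7+)
    text=True,
)

print(result.returncode)              # non-zero because the script raised
print("ValueError" in result.stderr)  # the full traceback is on stderr
```

Reading the captured `stderr` here corresponds exactly to reading `process.StandardError` in the C# code above: the traceback only appears once standard error is redirected and read.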
You can subscribe to the `ErrorDataReceived` event and read the error message: ``` process.ErrorDataReceived += process_ErrorDataReceived; ``` and here is your event handler: ``` void process_ErrorDataReceived(object sender, DataReceivedEventArgs e) { Console.WriteLine(e.Data); } ``` Note that for the event to fire you also need to set `processStartInfo.RedirectStandardError = true;` and call `process.BeginErrorReadLine();` after starting the process.
43,214,036
I have a Python script that I'm invoking from C#; the code is provided below. The issue with this process is that if the Python script fails, I'm not able to detect that in C# and display the exception. I'm using C#, MVC, and Python. Can you please modify the code below and show me how I can catch the exception thrown when the Python script fails? ``` Process process = new Process(); Stopwatch stopWatch = new Stopwatch(); ProcessStartInfo processStartInfo = new ProcessStartInfo(python); processStartInfo.UseShellExecute = false; processStartInfo.RedirectStandardOutput = true; try { process.StartInfo = processStartInfo; stopWatch.Start(); //Start the process process.Start(); // Read the standard output of the app we called. // in order to avoid deadlock we will read output first // and then wait for process terminate: StreamReader myStreamReader = process.StandardOutput; string myString = myStreamReader.ReadLine(); // wait exit signal from the app we called and then close it. process.WaitForExit(); process.Close(); stopWatch.Stop(); TimeSpan ts = stopWatch.Elapsed; Session["Message"] = "Success"; } catch (InvalidOperationException ex) { throw new System.InvalidOperationException("Bulk Upload Failed. Please contact administrator for further details." + ex.StackTrace); Session["Message"] = "Failed"; } ```
2017/04/04
[ "https://Stackoverflow.com/questions/43214036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4879531/" ]
Here is the working code. To get any error or exception from Python into C#, set the `RedirectStandardError` property to `true` and then read the standard error stream. The working version of the code is provided below: ``` process.StartInfo = processStartInfo; processStartInfo.RedirectStandardError = true; stopWatch.Start(); //Start the process process.Start(); string standardError = process.StandardError.ReadToEnd(); // wait exit signal from the app we called and then close it. process.WaitForExit(); process.Close(); stopWatch.Stop(); TimeSpan ts = stopWatch.Elapsed; ```
You can get exceptions passed from Python to C# when embedding CPython in C# with pythonnet: <https://github.com/pythonnet/pythonnet/blob/master/README.md>
37,947,258
I'm a newbie with MongoDB/Python/PyMongo. My MongoDB client code works OK. Here it is: ``` db.meteo.find().forEach( function(myDoc) { db.test.update({"contract_name" : myDoc.current_observation.observation_location.city}, { $set: { "temp_c_meteo" : myDoc.current_observation.temp_c , } }, {upsert:false, multi:true}) }) ``` This adds a temp_c_meteo column to all documents in my test collection when contract_name (from the test collection) is equal to current_observation.observation_location.city (from the meteo collection). And that's what I want! But I would like this to work the same way in Python. Python 2.7 with pymongo 2.6.3 is installed, but also Python 2.4.6 (64-bit) and 3.4.0 (64-bit). Here is a part of the Python script: ``` [...] saveInDB: #BE connected to MongoDB client MDBClient = MongoClient() db = MDBClient ['test'] collection = db['infosdb'] [...] meteos = collectionMeteo.find() for met in meteos: logging.info('DEBUG - Update with meteo ' + met['current_observation']['observation_location']['city']) result = collection.update_many( {"contract_name" : met['current_observation']['observation_location']['city']}, { "$set": {"temp_c_meteo" : met['current_observation']['temp_c']} }) ``` And here is the error I get: ==> *[06/21/2016 10:51:03 AM] [WARNING] : 'Collection' object is not callable. If you meant to call the 'update_many' method on a 'Collection' object it is failing because no such method exists.* I read that this problem could be linked to the pymongo release ("The methods save and update are deprecated"; that's why I try to use update_many), but I'm not sure how I can make this work. Thanks
2016/06/21
[ "https://Stackoverflow.com/questions/37947258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6494503/" ]
Try this solution: ``` ArrayList<IonicBond> list = IonicBond.where { typeRs { classCode == "WD996" } || typeIIs { classCode == "WD996" } }.list() ```
Detached Criteria is one way to query the GORM. You first create a DetachedCriteria with the name of the domain class for which you want to execute the query. Then you call the method 'build' with a 'where' query or criteria query. ``` def criteria = new DetachedCriteria(IonicBond).build { or { typeIIs { eq("classCode", "WD996") } typeRs { eq("classCode", "WD996") } } } def result = criteria.list() ```
37,947,258
I'm a newbie with MongoDB/Python/PyMongo. My MongoDB client code works OK. Here it is: ``` db.meteo.find().forEach( function(myDoc) { db.test.update({"contract_name" : myDoc.current_observation.observation_location.city}, { $set: { "temp_c_meteo" : myDoc.current_observation.temp_c , } }, {upsert:false, multi:true}) }) ``` This adds a temp_c_meteo column to all documents in my test collection when contract_name (from the test collection) is equal to current_observation.observation_location.city (from the meteo collection). And that's what I want! But I would like this to work the same way in Python. Python 2.7 with pymongo 2.6.3 is installed, but also Python 2.4.6 (64-bit) and 3.4.0 (64-bit). Here is a part of the Python script: ``` [...] saveInDB: #BE connected to MongoDB client MDBClient = MongoClient() db = MDBClient ['test'] collection = db['infosdb'] [...] meteos = collectionMeteo.find() for met in meteos: logging.info('DEBUG - Update with meteo ' + met['current_observation']['observation_location']['city']) result = collection.update_many( {"contract_name" : met['current_observation']['observation_location']['city']}, { "$set": {"temp_c_meteo" : met['current_observation']['temp_c']} }) ``` And here is the error I get: ==> *[06/21/2016 10:51:03 AM] [WARNING] : 'Collection' object is not callable. If you meant to call the 'update_many' method on a 'Collection' object it is failing because no such method exists.* I read that this problem could be linked to the pymongo release ("The methods save and update are deprecated"; that's why I try to use update_many), but I'm not sure how I can make this work. Thanks
2016/06/21
[ "https://Stackoverflow.com/questions/37947258", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6494503/" ]
I found out that `createCriteria()` uses `INNER JOIN` by default, meaning it will really not include `IonicBond` instances that have only `typeIIs` children, or `typeRs`, only both. The solution is to change it to a `LEFT JOIN` using `createAlias()`: ``` import org.hibernate.Criteria ... ArrayList<IonicBond> list = IonicBond.createCriteria().list { createAlias('typeIIs', 'typeIIs', Criteria.LEFT_JOIN) createAlias('typeRs', 'typeRs', Criteria.LEFT_JOIN) or { eq("typeIIs.classCode", "WD996") eq("typeRs.classCode", "WD996") } } ``` > > Note that Hibernate will not use `LEFT JOIN` as default join, even if you set all of its child to `constraints.nullable: true`. You must state it manually. > > >
Detached Criteria is one way to query the GORM. You first create a DetachedCriteria with the name of the domain class for which you want to execute the query. Then you call the method 'build' with a 'where' query or criteria query. ``` def criteria = new DetachedCriteria(IonicBond).build { or { typeIIs { eq("classCode", "WD996") } typeRs { eq("classCode", "WD996") } } } def result = criteria.list() ```
19,252,087
Hi all, we are using the Google API, e.g. this one: '<http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s>' % query, via a Python script, but it gets blocked very quickly. Is there any workaround for this? Thank you. Below is my current code. ``` #!/usr/bin/env python import math,sys import json import urllib def gsearch(searchfor): query = urllib.urlencode({'q': searchfor}) url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s' % query search_response = urllib.urlopen(url) search_results = search_response.read() results = json.loads(search_results) data = results['responseData'] return data args = sys.argv[1:] m = 45000000000 if len(args) != 2: print "need two words as arguments" exit n0 = int(gsearch(args[0])['cursor']['estimatedResultCount']) n1 = int(gsearch(args[1])['cursor']['estimatedResultCount']) n2 = int(gsearch(args[0]+" "+args[1])['cursor']['estimatedResultCount']) ```
2013/10/08
[ "https://Stackoverflow.com/questions/19252087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2711681/" ]
Try this: ``` $(".profileImage").click(function() { $(this).animate({opacity: "0.0"}).animate({width: 0}).hide(0); }) ``` Animate your opacity to 0 to fade it from view, then animate your width to 0 to regain the space, then hide it to remove it from visibility altogether. Note that if you want to redisplay, you'll have to restore the previous values. <http://jsfiddle.net/Palpatim/YuFqh/2/>
What about using the fade animation's completion callback? ``` $(".profileImage").click(function() { $(this).animate({opacity:0},400, function(){ // sliding code goes here $(this).animate({width:0},300).hide(); }); }); ```
67,764,659
I'm working with VS Code for Python programming. How can I change the color of error output text in the terminal? Can you help me?
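One option worth noting: VS Code's integrated terminal renders standard ANSI escape codes, so the Python program itself can color its error output, independent of any editor setting. A minimal sketch (the escape sequences are standard ANSI, not VS Code specific):

```python
import sys

RED = "\033[31m"    # ANSI "set foreground to red"
RESET = "\033[0m"   # ANSI "reset attributes"

def print_error(message, stream=sys.stderr):
    # Wrap the message in red escape codes; most terminals honor these.
    stream.write(RED + message + RESET + "\n")

print_error("something went wrong")
```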
2021/05/30
[ "https://Stackoverflow.com/questions/67764659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11847527/" ]
This may be useful for someone. To get the first and last day of a Persian month: ``` function GetStartEndMonth(currentDate) { const splitDate = splitGeorgianDateToPersianDateArray(currentDate); const year = splitDate[0]; const month = splitDate[1]; const lastDayOfPersianMonth = GetLastDayOfPersianMonth(month, year); // Does not work for some months: moment(currentPersianDate).clone().startOf('month').format('YYYY/MM/DD'); // Does not work at all for Persian dates: moment(currentPersianDate).clone().endOf('month').format('YYYY/MM/DD'); const startPersianMonth = year + "/" + month + "/" + 1; const endPersianMonth = year + "/" + month + "/" + lastDayOfPersianMonth; let startGeorgianDate = ConvertPersianDateStringToGeorgianDate(startPersianMonth); let endGeorgianDate = ConvertPersianDateStringToGeorgianDate(endPersianMonth); endGeorgianDate.setHours(23, 59, 59, 59); const newMonthArray = [startGeorgianDate, endGeorgianDate]; return newMonthArray; } function GetLastDayOfPersianMonth(month, year) { // leap year calculation const leapMatch = [1, 5, 9, 13, 17, 22, 26, 30]; const number = year % 33; const isLeap = leapMatch.includes(number); if (month <= 6) { return 31; } if (month > 6 && month < 12) { return 30; } if (month == 12 && !isLeap) { return 29; } if (month == 12 && isLeap) { return 30; } } function splitGeorgianDateToPersianDateArray(georgianDate) { const persianStringDate = moment(georgianDate).locale('fa').format('YYYY/M/D'); const splitDate = persianStringDate.split("/"); const year = splitDate[0]; const month = splitDate[1]; const day = splitDate[2]; return [year, month, day]; } ```
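The month-length rule in the JavaScript above is self-contained and can be sketched in a few lines of Python. The 33-year leap cycle with remainders 1, 5, 9, 13, 17, 22, 26, 30 is taken directly from the answer's `leapMatch` array:

```python
LEAP_REMAINDERS = {1, 5, 9, 13, 17, 22, 26, 30}

def is_persian_leap(year):
    # Leap years in the 33-year cycle used by the answer above.
    return year % 33 in LEAP_REMAINDERS

def last_day_of_persian_month(month, year):
    # Months 1-6 have 31 days, months 7-11 have 30, and Esfand (12) has 29 or 30.
    if month <= 6:
        return 31
    if month <= 11:
        return 30
    return 30 if is_persian_leap(year) else 29
```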
I think you need to set ``` dayGridMonthPersian: { duration: { week: 4 } } ```
53,489,173
I understand there are many answers already on SO dealing with splitting URLs in Python. BUT, I want to split a URL and then use it in a function. I'm making an HTTP request in Python: ``` r = requests.get('http://www.datasciencetoolkit.org/twofishes?query=New%York') r.json() ``` Which provides the following: ``` {'interpretations': [{'what': '', 'where': 'new york', 'feature': {'cc': 'US', 'geometry': {'center': {'lat': 40.742185, 'lng': -73.992602}, ...... # (lots more, but not needed here) ``` I want to be able to call any city/location, and I want to separate out the `lat` and `lng`. For example, I want to call a function where I can input any city, and it responds with the latitude and longitude. Kind of like [this question](https://stackoverflow.com/questions/51489815/scraping-data-using-rvest-and-a-specific-error) (which uses R). This is my attempt: ``` import requests def lat_long(city): geocode_result = requests.get('http://www.datasciencetoolkit.org/twofishes?query= "city"') ``` How do I parse it so I can just call the function with a city?
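The query string needs URL encoding, and the standard library handles that, so the URL can be assembled with `urllib.parse.urlencode` instead of string pasting. A sketch of just the URL construction, using the endpoint from the question (note `urlencode` encodes a space as `+`, which is equivalent to `%20` in a query string):

```python
from urllib.parse import urlencode

BASE = "http://www.datasciencetoolkit.org/twofishes"

def build_url(city):
    # urlencode percent-escapes spaces and other special characters for us.
    return BASE + "?" + urlencode({"query": city})
```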
2018/11/26
[ "https://Stackoverflow.com/questions/53489173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9439560/" ]
Taking your example, I'd suggest using a regex and string interpolation. This answer assumes the API returns data the same way every time. ``` import re, requests def lat_long(city: str) -> tuple: # replace spaces with percent-escapes city = re.sub(r'\s', '%20', city) res = requests.get(f'http://www.datasciencetoolkit.org/twofishes?query={city}') data = res.json() geo = data['interpretations'][0]['feature']['geometry']['center'] return (geo['lat'], geo['lng']) ```
You can loop over the 'interpretations' list looking for the city name, and return the coordinates when you find the correct city. Note that the response has to be decoded with `.json()` first, and (as the sample output shows) the API lowercases the matched name: ``` def lat_long(city): geocode_result = requests.get('http://www.datasciencetoolkit.org/twofishes', params={'query': city}).json() for interpretation in geocode_result["interpretations"]: if interpretation["where"] == city.lower(): return interpretation["feature"]["geometry"]["center"] ```
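Whichever way the request is made, pulling the coordinates out of the decoded JSON is plain dictionary navigation. A sketch using the response shape shown in the question:

```python
def extract_center(data):
    # Walk interpretations -> feature -> geometry -> center,
    # matching the response structure quoted in the question.
    center = data["interpretations"][0]["feature"]["geometry"]["center"]
    return center["lat"], center["lng"]
```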