Dataset schema (recovered from the source table):

| column | dtype | min | max |
| --- | --- | --- | --- |
| qid | int64 | 46k | 74.7M |
| question | string (lengths) | 54 | 37.8k |
| date | string (lengths) | 10 | 10 |
| metadata | list (lengths) | 3 | 3 |
| response_j | string (lengths) | 17 | 26k |
| response_k | string (lengths) | 26 | 26k |
50,388,396
I try to compile this code but I get this error : ``` NameError: name 'dtype' is not defined ``` Here is the python code : ``` # -*- coding: utf-8 -*- from __future__ import division import pandas as pd import numpy as np import re import missingno as msno from functools import partial import seaborn as sns sns.set(color_codes=True) if dtype(data.OPP_CREATION_DATE)=="datetime64[ns]": print("OPP_CREATION_DATE is of datetime type") else: print("warning: the type of OPP_CREATION_DATE is not datetime, please fix this") ``` Any ideas to help me resolve this problem? Thank you
2018/05/17
[ "https://Stackoverflow.com/questions/50388396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9360453/" ]
As written by Amr Keleg, > > If `data` is a pandas dataframe then you can check the type of a > column as follows: > `df['colname'].dtype` or `df.colname.dtype` > > > In that case you need e.g. ``` df['colname'].dtype == np.dtype('datetime64') ``` or ``` df.colname.dtype == np.dtype('datetime64') ```
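A minimal runnable sketch applying that fix to the original check (the one-row frame below is a hypothetical stand-in for the asker's `data`; note the comparison uses the exact `datetime64[ns]` unit):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the asker's `data`
data = pd.DataFrame({'OPP_CREATION_DATE': pd.to_datetime(['2018-05-17'])})

if data['OPP_CREATION_DATE'].dtype == np.dtype('datetime64[ns]'):
    print("OPP_CREATION_DATE is of datetime type")
else:
    print("warning: the type of OPP_CREATION_DATE is not datetime, please fix this")
```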
I have just realized that I could have used: ``` from pandas.api.types import is_string_dtype, is_numeric_dtype ```
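A hedged usage sketch of those helpers (the frame and column names below are made up; `is_datetime64_any_dtype` from the same module is also handy for the original check):

```python
import pandas as pd
from pandas.api.types import (is_datetime64_any_dtype, is_numeric_dtype,
                              is_string_dtype)

df = pd.DataFrame({'created': pd.to_datetime(['2018-05-17']), 'n': [1]})
print(is_datetime64_any_dtype(df['created']))  # True
print(is_numeric_dtype(df['n']))               # True
print(is_string_dtype(df['created']))          # False
```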
52,607,623
I have 3 variables in python (age, gender, race) and I want to create a unique categorical binary code out of them. Firstly, the age is an integer and I want to threshold it for each decade 10-20, 20-30, 30-40 etc., gender 2 values and the race contains 4 values. How can I return a complete categorical code out of the three initial variables?
2018/10/02
[ "https://Stackoverflow.com/questions/52607623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1194864/" ]
Here is a method returning a 7-bit code with the first 4 bits for the age bracket, the next 2 for race, and 1 for gender. 4 bits for age imposes the constraint that there can be a total of 16 age brackets only, which is reasonable as it covers the age range 0-159. The 4 bit age code is simply the 4 bit representation of the integer `age//10`, which effectively discretizes the age value into ranges: 0-9, 10-19, ..., 150-159. The codes for race and gender are simply hard coded using the `race_dict` and `gender_dict`. ``` def get_code(age, race, gender): #returns fixed size 7 bit code race_dict = {'African':'00','Hispanic':'01','European':'10','Cantonese':'11'} gender_dict = {'Male':'0','Female':'1'} age_code = '{0:b}'.format(age//10).zfill(4) race_code = race_dict[race] gender_code = gender_dict[gender] return age_code + race_code + gender_code ``` > > Input: age:25, race: 'Hispanic', gender: 'Female' > > > 7-bit code: 0010011 > > > If you would like this code to be an integer value between 0-127 for numerical purposes, you can use `int(code_str, 2)` to achieve that. **EDIT:** to get a numpy array from the code string, use `np_code_arr = np.fromstring(' '.join(list(code_str)), dtype = int, sep = ' ')`
You can have an `n+1+4` dimensional vector encoding. Given the binary code you require, this would be one way of doing it. Your first `n` entries would encode the decade: `1` if it belongs to that decade, `0` otherwise. The next, `(n+1)th`, entry could be `1` if male and `0` if female. Similarly for race: `1` if it belongs to that category, `0` otherwise. Let's say you have decades up to 100. For a 98 year old, male, white, you could do something like `[0 0 0 0 0 0 0 0 1 1 0 1 0 0 0]`, assuming decades run from `10` to `100`. ``` import numpy as np def encodeAge(i, n): ageCode=np.zeros(n) ageCode[i]=1 return ageCode n=10 # number of decades dict_race={'w':[1,0,0,0],'b':[0,1,0,0],'a':[0,0,1,0],'l':[0,0,0,1]} # white, black, asian, latino dict_age={i:encodeAge(i, n) for i in range(n)} dict_gender={'m':[1],'f':[0]} def encodeAll(age, gender, race): # encode age code=[] code=np.concatenate([code, dict_age[age//10]]) # encode gender code=np.concatenate([code, dict_gender[gender]]) # encode race code=np.concatenate([code, dict_race[race]]) return code ``` e.g. `encodeAll(12,'m','w')` would return `array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0.])` This is a slightly longer encoding than the other encodings suggested.
52,607,623
I have 3 variables in python (age, gender, race) and I want to create a unique categorical binary code out of them. Firstly, the age is an integer and I want to threshold it for each decade 10-20, 20-30, 30-40 etc., gender 2 values and the race contains 4 values. How can I return a complete categorical code out of the three initial variables?
2018/10/02
[ "https://Stackoverflow.com/questions/52607623", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1194864/" ]
Here is a method returning a 7-bit code with the first 4 bits for the age bracket, the next 2 for race, and 1 for gender. 4 bits for age imposes the constraint that there can be a total of 16 age brackets only, which is reasonable as it covers the age range 0-159. The 4 bit age code is simply the 4 bit representation of the integer `age//10`, which effectively discretizes the age value into ranges: 0-9, 10-19, ..., 150-159. The codes for race and gender are simply hard coded using the `race_dict` and `gender_dict`. ``` def get_code(age, race, gender): #returns fixed size 7 bit code race_dict = {'African':'00','Hispanic':'01','European':'10','Cantonese':'11'} gender_dict = {'Male':'0','Female':'1'} age_code = '{0:b}'.format(age//10).zfill(4) race_code = race_dict[race] gender_code = gender_dict[gender] return age_code + race_code + gender_code ``` > > Input: age:25, race: 'Hispanic', gender: 'Female' > > > 7-bit code: 0010011 > > > If you would like this code to be an integer value between 0-127 for numerical purposes, you can use `int(code_str, 2)` to achieve that. **EDIT:** to get a numpy array from the code string, use `np_code_arr = np.fromstring(' '.join(list(code_str)), dtype = int, sep = ' ')`
My answer: with age **a**, gender **g** and race **r**, ``` code = np.array([int(i) for i in "{0:04b}{1:01b}{2:02b}".format(a//10,g,r)]) ``` for age=58, gender=1 and race=3, the output will be: ``` array([0, 1, 0, 1, 1, 1, 1]) ```
30,296,531
So here is my first test for S3 buckets using boto: ``` import boto user_name, access_key, secret_key = "testing-user", "xxxxxxxxxxxxx", "xxxxxxxx/xxxxxxxxxxxx/xxxxxxxxxx(xxxxx)" conn = boto.connect_s3(access_key, secret_key) buckets = conn.get_all_buckets() ``` I get the following error: ``` Traceback (most recent call last): File "test-s3.py", line 9, in <module> buckets = conn.get_all_buckets() File "xxxxxx/lib/python2.7/site-packages/boto/s3/connection.py", line 440, in get_all_buckets response.status, response.reason, body) boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden <?xml version="1.0" encoding="UTF-8"?> <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>AKIAJMHSZXU6MORWA5GA</AWSAccessKeyId><StringToSign>GET Mon, 18 May 2015 06:21:58 GMT /</StringToSign><SignatureProvided>c/+YJAZVInsfmd5giMQmrh81DPA=</SignatureProvided><StringToSignBytes>47 45 54 0a 0a 0a 4d 6f 6e 2c 20 31 38 20 4d 61 79 20 32 30 31 35 20 30 36 3a 32 31 3a 35 38 20 47 4d 54 0a 2f</StringToSignBytes><RequestId>5733F9C8926497E6</RequestId><HostId>FXPejeYuvZ+oV2DJLh7HBpryOh4Ve3Mmj8g8bKA2f/4dTWDHJiG8Bpir8EykLYYW1OJMhZorbIQ=</HostId></Error> ``` How am I supposed to fix this? Boto version is 2.38.0
2015/05/18
[ "https://Stackoverflow.com/questions/30296531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1221660/" ]
Had the same issue. In my case, my generated security key had a special character '+' in between. So I deleted my key and regenerated a new key and it worked with the new key with no '+'. [Source](https://stackoverflow.com/a/12262106)
Today, I saw an error response with `SignatureDoesNotMatch` while playing around with an S3 API locally, and replacing **localhost** with **127.0.0.1** fixed the problem in my case.
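For reference, a hedged sketch of the same bucket listing with boto3, which signs requests with Signature Version 4 by default and generally copes with secret keys containing special characters (the credentials below are placeholders):

```python
import boto3

# Placeholder credentials; boto3 uses SigV4 signing by default.
s3 = boto3.client(
    's3',
    aws_access_key_id='AKIA...',
    aws_secret_access_key='xxxx+yyyy/zzzz',
    region_name='us-east-1',
)
buckets = s3.list_buckets()['Buckets']
print([b['Name'] for b in buckets])
```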
43,837,305
I have a GitHub repository containing an AWS Lambda function. I am currently using Travis CI to build, test and then deploy this function to Lambda if all the tests succeed using ``` deploy: provider: lambda (other settings here) ``` My function has the following dependencies specified in its `requirements.txt` ``` Algorithmia numpy networkx opencv-python ``` I have set the build script for Travis CI to build in the working directory using the below command so as to have the dependencies get properly copied over to my AWS Lambda function. `pip install --target=$TRAVIS_BUILD_DIR -r requirements.txt` The problem is that while the build in Travis CI succeeds and everything is deployed to the Lambda function successfully, testing my Lambda function results in the following error: ``` Unable to import module 'mymodule': Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy. If you're working with a numpy git repo, try `git clean -xdf` (removes all files not under version control). Otherwise reinstall numpy. ``` My best guess as to why this is happening is that numpy is being built in the Ubuntu distribution of linux that Travis CI uses but the Amazon Linux that it is running on when executing as a Lambda function isn't able to run it properly. There are numerous forum posts and blog posts such as [this](https://markn.ca/2015/10/python-extension-modules-in-aws-lambda/) one detailing that python modules that need to build C/C++ extensions must be built on an EC2 instance. My question is: This is a real hassle to have to add another complication to the CD pipeline and have to mess around with EC2 instances. Has Amazon come up with some better way to do this (because there really should be a better way to do this) or is there some way to have everything compiled properly in Travis CI or another CI solution? Also, I suppose it's possible that I've mis-identified the problem and that there is some other reason why importing numpy is failing. If anyone has suggestions on how to resolve this that would be great! --- **EDIT:** As suggested by @jordanm it looks like it may be possible to load a docker container with the amazonlinux image when running TravisCI and then perform my build and test inside that container. Unfortunately, while that certainly is easier than using EC2 - I don't think I can use the normal lambda deploy tools in TravisCI - I'll have to write my own deploy script using the aws cli which is a bit of a pain. Any other ideas - or ways to make this smoother? Ideally I would be able to specify what docker image my builds run on in TravisCI as their default build environment is already using docker...but they don't seem to support that functionality yet: <https://github.com/travis-ci/travis-ci/issues/7726>
2017/05/07
[ "https://Stackoverflow.com/questions/43837305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3474089/" ]
After quite a bit of tinkering I think I've found something that works. I thought I'd post it here in case others have the same problem. I decided to use [Wercker](http://www.wercker.com/) as they have quite a generous free tier and allow you to customize the docker image for your builds. Turns out there is a docker image that has been created to replicate the exact environments that Lambda functions are executed on! See: <https://github.com/lambci/docker-lambda> When running your builds in this docker container, extensions will be built properly so they can execute successfully on Lambda. In case anyone does want to use Wercker here's the `wercker.yml` I used and it may be helpful as a template: ``` box: lambci/lambda:build-python3.6 build: steps: - script: name: Install Dependencies code: | pip install --target=$WERCKER_SOURCE_DIR -r requirements.txt pip install pytest - script: name: Test code code: pytest - script: name: Cleaning up code: find $WERCKER_SOURCE_DIR \( -name \*.pyc -o -name \*.pyo -o -name __pycache__ \) -prune -exec rm -rf {} + - script: name: Create ZIP code: | cd $WERCKER_SOURCE_DIR zip -r $WERCKER_ROOT/lambda_deploy.zip . -x *.git* deploy: box: golang:latest steps: - arjen/lambda: access_key: $AWS_ACCESS_KEY secret_key: $AWS_SECRET_KEY function_name: yourFunction region: us-west-1 filepath: $WERCKER_ROOT/lambda_deploy.zip ```
Although I appreciate you may not want to add further complications to your project, you could potentially use a Python-focused Lambda management tool for setting up your builds and deployments, say something like [Gordon](https://github.com/jorgebastida/gordon). You could also just use this tool to do your deployment from inside the Amazon Linux Docker container running within Travis. If you wish to change CI providers, [CodeShip](https://codeship.com) allows you to build with any Docker container of your choice, and then [deploy to Lambda](https://blog.codeship.com/integrating-aws-lambda-with-codeship/). [Wercker](https://wercker.com) also runs full Docker-based builds and has many user-submitted deploy "steps", some of which [support deployment to Lambda](https://app.wercker.com/explore/steps/search/lambda).
53,268,375
I have a use case which often requires copying a blob (file) from one Azure region to another. The file size spans from 25 to 45GB. Needless to say, this sometimes goes very slowly, with inconsistent performance. This might take up to two hours, sometimes more. Distance plays a role, but it differs. Even within the same region, copying is slower than I would expect. I've been trying: 1. [The Python SDK, and its copy blob method from the blob service.](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice?view=azure-python#copy-blob) 2. [The rest API copy blob](https://learn.microsoft.com/en-us/rest/api/storageservices/copy-blob) 3. az copy from the CLI. Although I didn't really expect different results, since all of them use the same backend methods. Is there any approach I am missing? Is there any way to speed up this process, or any kind of blob sharing integrated in Azure? VHD/disk sharing could also do.
2018/11/12
[ "https://Stackoverflow.com/questions/53268375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/794967/" ]
Data model is wrong. Should be something like this: ``` SQL> create table customer 2 (customer_id number primary key, 3 first_name varchar2(20), 4 last_name varchar2(20), 5 phone varchar2(20)); Table created. SQL> create table items 2 (item_id number primary key, 3 item_name varchar2(20)); Table created. SQL> create table orders 2 (order_id number primary key, 3 customer_id number constraint fk_ord_cust references customer (customer_id) 4 ); Table created. SQL> create table order_details 2 (order_det_id number primary key, 3 order_id number constraint fk_orddet_ord references orders (order_id), 4 item_id number constraint fk_orddet_itm references items (item_id), 5 amount number 6 ); Table created. ``` Some quick & dirty sample data: ``` SQL> insert all 2 into customer values (100, 'Little', 'Foot', '00385xxxyyy') 3 into items values (1, 'Apple') 4 into items values (2, 'Milk') 5 into orders values (55, 100) 6 into order_details values (1000, 55, 1, 5) -- I'm ordering 5 apples 7 into order_details values (1001, 55, 2, 2) -- and 2 milks 8 select * from dual; 6 rows created. SQL> select c.first_name, sum(d.amount) count_of_items 2 from customer c join orders o on o.customer_id = c.customer_id 3 join order_details d on d.order_id = o.order_id 4 group by c.first_name; FIRST_NAME COUNT_OF_ITEMS -------------------- -------------- Little 7 SQL> ``` Or, list of items: ``` SQL> select c.first_name, i.item_name, d.amount 2 from customer c join orders o on o.customer_id = c.customer_id 3 join order_details d on d.order_id = o.order_id 4 join items i on i.item_id = d.item_id; FIRST_NAME ITEM_NAME AMOUNT -------------------- -------------------- ---------- Little Apple 5 Little Milk 2 SQL> ```
There is no relation between the two tables which you wish to combine data from. Kindly create a foreign key relation between the two tables, which would give you a common value based on which you could extract data. For example, the column `customer_id` from the customer table could be the foreign key in the orders table, which would specify the order placed by each customer. The following query should return the expected result: ``` SELECT customer.first_name, customer.last_name, orders.item_name, orders.quantity FROM customer, orders WHERE customer.customer_id = orders.customer_id ORDER BY customer.first_name; ``` The query you specified does not return any result because no order id matches any customer id in the two tables; the two columns depict two different values. Hope it helps. Cheers!
53,268,375
I have a use case which often requires copying a blob (file) from one Azure region to another. The file size spans from 25 to 45GB. Needless to say, this sometimes goes very slowly, with inconsistent performance. This might take up to two hours, sometimes more. Distance plays a role, but it differs. Even within the same region, copying is slower than I would expect. I've been trying: 1. [The Python SDK, and its copy blob method from the blob service.](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice?view=azure-python#copy-blob) 2. [The rest API copy blob](https://learn.microsoft.com/en-us/rest/api/storageservices/copy-blob) 3. az copy from the CLI. Although I didn't really expect different results, since all of them use the same backend methods. Is there any approach I am missing? Is there any way to speed up this process, or any kind of blob sharing integrated in Azure? VHD/disk sharing could also do.
2018/11/12
[ "https://Stackoverflow.com/questions/53268375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/794967/" ]
Data model is wrong. Should be something like this: ``` SQL> create table customer 2 (customer_id number primary key, 3 first_name varchar2(20), 4 last_name varchar2(20), 5 phone varchar2(20)); Table created. SQL> create table items 2 (item_id number primary key, 3 item_name varchar2(20)); Table created. SQL> create table orders 2 (order_id number primary key, 3 customer_id number constraint fk_ord_cust references customer (customer_id) 4 ); Table created. SQL> create table order_details 2 (order_det_id number primary key, 3 order_id number constraint fk_orddet_ord references orders (order_id), 4 item_id number constraint fk_orddet_itm references items (item_id), 5 amount number 6 ); Table created. ``` Some quick & dirty sample data: ``` SQL> insert all 2 into customer values (100, 'Little', 'Foot', '00385xxxyyy') 3 into items values (1, 'Apple') 4 into items values (2, 'Milk') 5 into orders values (55, 100) 6 into order_details values (1000, 55, 1, 5) -- I'm ordering 5 apples 7 into order_details values (1001, 55, 2, 2) -- and 2 milks 8 select * from dual; 6 rows created. SQL> select c.first_name, sum(d.amount) count_of_items 2 from customer c join orders o on o.customer_id = c.customer_id 3 join order_details d on d.order_id = o.order_id 4 group by c.first_name; FIRST_NAME COUNT_OF_ITEMS -------------------- -------------- Little 7 SQL> ``` Or, list of items: ``` SQL> select c.first_name, i.item_name, d.amount 2 from customer c join orders o on o.customer_id = c.customer_id 3 join order_details d on d.order_id = o.order_id 4 join items i on i.item_id = d.item_id; FIRST_NAME ITEM_NAME AMOUNT -------------------- -------------------- ---------- Little Apple 5 Little Milk 2 SQL> ```
In your last query you have shown that your tables are linked ( `customer.customer_id = orders.order_id` ), but in the tables you have created, there is no link between them. I think this should work: Step 1: Create a Customer table as follows: ``` Create table customer (customer_id int primary key, first_name varchar(25), last_name varchar(25), phone int); ``` Step 2: Create an Items table as follows: ``` Create table Items (item_id int primary key, item_name varchar(25)); ``` Step 3: Create a link table that relates the two tables above as follows: ``` Create table Orders (Customer_Id int, Item_ID int, Quantity int); ``` Step 4: Use this query to pull out the information you want: ``` select c.first_name, i.item_name, o.Quantity from customer c inner join orders o on c.customer_id = o.customer_id inner join items i on i.item_id = o.Item_ID; ``` Please try it and let me know if there is any problem.
53,268,375
I have a use case which often requires copying a blob (file) from one Azure region to another. The file size spans from 25 to 45GB. Needless to say, this sometimes goes very slowly, with inconsistent performance. This might take up to two hours, sometimes more. Distance plays a role, but it differs. Even within the same region, copying is slower than I would expect. I've been trying: 1. [The Python SDK, and its copy blob method from the blob service.](https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.baseblobservice.baseblobservice?view=azure-python#copy-blob) 2. [The rest API copy blob](https://learn.microsoft.com/en-us/rest/api/storageservices/copy-blob) 3. az copy from the CLI. Although I didn't really expect different results, since all of them use the same backend methods. Is there any approach I am missing? Is there any way to speed up this process, or any kind of blob sharing integrated in Azure? VHD/disk sharing could also do.
2018/11/12
[ "https://Stackoverflow.com/questions/53268375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/794967/" ]
There is no relation between the two tables which you wish to combine data from. Kindly create a foreign key relation between the two tables, which would give you a common value based on which you could extract data. For example, the column `customer_id` from the customer table could be the foreign key in the orders table, which would specify the order placed by each customer. The following query should return the expected result: ``` SELECT customer.first_name, customer.last_name, orders.item_name, orders.quantity FROM customer, orders WHERE customer.customer_id = orders.customer_id ORDER BY customer.first_name; ``` The query you specified does not return any result because no order id matches any customer id in the two tables; the two columns depict two different values. Hope it helps. Cheers!
In your last query you have shown that your tables are linked ( `customer.customer_id = orders.order_id` ), but in the tables you have created, there is no link between them. I think this should work: Step 1: Create a Customer table as follows: ``` Create table customer (customer_id int primary key, first_name varchar(25), last_name varchar(25), phone int); ``` Step 2: Create an Items table as follows: ``` Create table Items (item_id int primary key, item_name varchar(25)); ``` Step 3: Create a link table that relates the two tables above as follows: ``` Create table Orders (Customer_Id int, Item_ID int, Quantity int); ``` Step 4: Use this query to pull out the information you want: ``` select c.first_name, i.item_name, o.Quantity from customer c inner join orders o on c.customer_id = o.customer_id inner join items i on i.item_id = o.Item_ID; ``` Please try it and let me know if there is any problem.
55,574,215
I'm logging some Unicode characters to a file using "logging" in Python 3. The code works in the terminal, but fails with a UnicodeEncodeError in PyCharm. I load my logging configuration using `logging.config.fileConfig`. In the configuration, I specify a file handler with `encoding = utf-8`. Logging to console works fine. I'm using PyCharm 2019.1.1 (Community Edition). I don't think I've changed any relevant setting, but when I ran the same code in the PyCharm on another computer, the error was **not** reproduced. Therefore, I suspect the problem is related to a PyCharm setting. Here is a minimal example: ```py import logging from logging.config import fileConfig # ok print('1. café') # ok logging.error('2. café') # UnicodeEncodeError fileConfig('logcfg.ini') logging.error('3. café') ``` The content of logcfg.ini (in the same directory) is the following: ``` [loggers] keys = root [handlers] keys = file_handler [formatters] keys = formatter [logger_root] level = INFO handlers = file_handler [handler_file_handler] class = logging.handlers.RotatingFileHandler formatter = formatter args = ('/tmp/test.log',) encoding = utf-8 [formatter_formatter] format = %(levelname)s: %(message)s ``` I expect to see the first two logging messages in the console, and the third one in the logging file. The first two logging statements worked fine, but the third one failed. Here is the complete console output in PyCharm: ``` 1. café ERROR:root:2. café --- Logging error --- Traceback (most recent call last): File "/anaconda3/lib/python3.6/logging/__init__.py", line 996, in emit stream.write(msg) UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 13: ordinal not in range(128) Call stack: File "/Users/klkh/test.py", line 12, in <module> logging.error('3. café') Message: '3. café' Arguments: () ```
2019/04/08
[ "https://Stackoverflow.com/questions/55574215", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1654411/" ]
Different operating systems need different solutions. On Windows: 1. Download the library file, <http://www.rarlab.com/rar/UnRARDLL.exe>, and install it. 2. You'd better choose the default path, C:\Program Files (x86)\UnrarDLL\ 3. Most important is to add the environment variable: for the variable name enter `UNRAR_LIB_PATH` (pay attention, it must be exactly this!). Then, if your system is 64-bit, enter C:\Program Files (x86)\UnrarDLL\x64\UnRAR64.dll as the value; if your system is 32-bit, enter C:\Program Files (x86)\UnrarDLL\UnRAR.dll. 4. After saving the environment variable, restart your PyCharm. On Linux you need to build the .so file, which is a little more difficult. 1. Likewise, download the library source, <http://www.rarlab.com/rar/unrarsrc-5.4.5.tar.gz>; you can choose the latest version. 2. After downloading, extract the archive to get the unrar directory; `cd unrar`, then `make lib`, then `make install-lib`; this produces the file `libunrar.so` (in /usr/lib). 3. Last, you also need to set the environment variable: `vim /etc/profile` to open `profile`, add `export UNRAR_LIB_PATH=/usr/lib/libunrar.so` at the end of the file, save it, then run `source /etc/profile` to apply the change. 4. Rerun the .py file. The source website: <https://blog.csdn.net/ysy950803/article/details/52939708>
Additionally, after you do the things mentioned by Tom.chen.kang and balandongiv, if you're using a 32-bit DLL with 64-bit Python, or vice versa, then you'll probably get an error like this when you try to import unrar: > > OSError: [WinError 193] %1 is not a valid Win32 application > > > In that case do this: For 32-bit Python & a 32-bit DLL, change your environment variable ***UNRAR_LIB_PATH*** to: ``` C:\Program Files (x86)\UnrarDLL\UnRAR.dll ``` For 64-bit Python & a 64-bit DLL, change your environment variable ***UNRAR_LIB_PATH*** to: ``` C:\Program Files (x86)\UnrarDLL\x64\UnRAR64.dll ``` Restart your PyCharm or other development environment.
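One hedged note on the `logcfg.ini` in the question above: `logging.config.fileConfig` only forwards the `args` tuple to the handler class, so a standalone `encoding` key is silently ignored and the log file is opened with the locale's default encoding. A minimal sketch of the same handler with the encoding actually applied (equivalent to putting it inside `args`):

```python
import logging
import logging.handlers

# Equivalent to args = ('/tmp/test.log', 'a', 0, 0, 'utf-8')
# in the [handler_file_handler] section of logcfg.ini.
handler = logging.handlers.RotatingFileHandler(
    '/tmp/test.log', mode='a', maxBytes=0, backupCount=0, encoding='utf-8')
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)
logging.error('3. café')  # written as UTF-8 regardless of the console locale
```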
55,574,215
I'm logging some Unicode characters to a file using "logging" in Python 3. The code works in the terminal, but fails with a UnicodeEncodeError in PyCharm. I load my logging configuration using `logging.config.fileConfig`. In the configuration, I specify a file handler with `encoding = utf-8`. Logging to console works fine. I'm using PyCharm 2019.1.1 (Community Edition). I don't think I've changed any relevant setting, but when I ran the same code in the PyCharm on another computer, the error was **not** reproduced. Therefore, I suspect the problem is related to a PyCharm setting. Here is a minimal example: ```py import logging from logging.config import fileConfig # ok print('1. café') # ok logging.error('2. café') # UnicodeEncodeError fileConfig('logcfg.ini') logging.error('3. café') ``` The content of logcfg.ini (in the same directory) is the following: ``` [loggers] keys = root [handlers] keys = file_handler [formatters] keys = formatter [logger_root] level = INFO handlers = file_handler [handler_file_handler] class = logging.handlers.RotatingFileHandler formatter = formatter args = ('/tmp/test.log',) encoding = utf-8 [formatter_formatter] format = %(levelname)s: %(message)s ``` I expect to see the first two logging messages in the console, and the third one in the logging file. The first two logging statements worked fine, but the third one failed. Here is the complete console output in PyCharm: ``` 1. café ERROR:root:2. café --- Logging error --- Traceback (most recent call last): File "/anaconda3/lib/python3.6/logging/__init__.py", line 996, in emit stream.write(msg) UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 13: ordinal not in range(128) Call stack: File "/Users/klkh/test.py", line 12, in <module> logging.error('3. café') Message: '3. café' Arguments: () ```
2019/04/08
[ "https://Stackoverflow.com/questions/55574215", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1654411/" ]
In addition to @tom's answer, for a `Windows 10` environment the following steps should help: 1. Download the libfile via the [link](http://www.rarlab.com/rar/UnRARDLL.exe) and install it. 2. To replicate the following steps easily, choose the default path, C:\Program Files (x86)\UnrarDLL\ 3. Go to the Environment Variables window ([link](https://www.computerhope.com/issues/ch000549.htm)) and select Advanced. 4. Click Environment Setting. 5. Under the User variables, select New. 6. In the New User Variable, enter **UNRAR_LIB_PATH** as the Variable name. 7. For the Variable Value, select Browse file. Depending on your system: for 64-bit enter C:\Program Files (x86)\UnrarDLL\x64\UnRAR64.dll; if your system is 32-bit enter C:\Program Files (x86)\UnrarDLL\UnRAR.dll. 8. Save the environment path and rerun your PyCharm. The graphical illustration is below: [![enter image description here](https://i.stack.imgur.com/QR0U7.png)](https://i.stack.imgur.com/QR0U7.png)
Additionally, after you do the things mentioned by Tom.chen.kang and balandongiv, if you're using a 32-bit DLL with 64-bit Python, or vice versa, then you'll probably get an error like this when you try to import unrar: > > OSError: [WinError 193] %1 is not a valid Win32 application > > > In that case do this: For 32-bit Python & a 32-bit DLL, change your environment variable ***UNRAR_LIB_PATH*** to: ``` C:\Program Files (x86)\UnrarDLL\UnRAR.dll ``` For 64-bit Python & a 64-bit DLL, change your environment variable ***UNRAR_LIB_PATH*** to: ``` C:\Program Files (x86)\UnrarDLL\x64\UnRAR64.dll ``` Restart your PyCharm or other development environment.
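A small hedged sketch for sanity-checking the bitness pairing described above (the variable name comes from the answers; the value is whatever you set):

```python
import os
import platform

lib = os.environ.get('UNRAR_LIB_PATH', '')
print('Python build:', platform.architecture()[0])  # e.g. '64bit'
print('UNRAR_LIB_PATH:', lib, '| exists:', os.path.exists(lib))
```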
58,799,259
I am using Windows 10, PostgreSQL 12, Python 3.7.5 . I create username `odoo`, password `odoo`, create database `mydb`. Source code is <https://github.com/odoo/odoo/tree/aa0554d224337e1d966479a351a3ed059d297765> I run command ``` python odoo-bin -r odoo -w odoo --addons-path=addons --db-filter=mydb$ ``` Error ``` E:\source_code\github.com\xxxxxxxx\odoo>python odoo-bin -r odoo -w odoo --addons-path=addons --db-filter=mydb$ 2019-11-11 09:20:31,240 7372 INFO ? odoo: Odoo version 13.0 2019-11-11 09:20:31,240 7372 INFO ? odoo: addons paths: ['E:\\source_code\\github.com\\xxxxxxxx\\odoo\\odoo\\addons', 'c:\\users\\xxxxxxxx\\appdata\\local\\openerp s.a\\odoo\\addons\\13.0', 'e:\\source_code\\github.com\\xxxxxxxx\\odoo\\addons', 'e:\\source_code\\github.com\\xxxxxxxx\\odoo\\odoo\\addons'] 2019-11-11 09:20:31,241 7372 INFO ? odoo: database: odoo@default:default 2019-11-11 09:20:31,397 7372 INFO ? odoo.addons.base.models.ir_actions_report: You need Wkhtmltopdf to print a pdf version of the reports. 2019-11-11 09:20:31,517 7372 INFO ? odoo.service.server: HTTP service (werkzeug) running on D1CMPS_VyDN.mptelecom.com:8069 2019-11-11 09:20:45,250 7372 INFO ? odoo.http: HTTP Configuring static files 2019-11-11 09:20:45,296 7372 INFO ? odoo.sql_db: Connection to the database failed 2019-11-11 09:20:45,299 7372 INFO ? werkzeug: 192.168.50.215 - - [11/Nov/2019 09:20:45] "GET / HTTP/1.1" 500 - 0 0.000 0.039 2019-11-11 09:20:45,333 7372 ERROR ? werkzeug: Error on request: Traceback (most recent call last): File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 270, in run_wsgi execute(self.server.app) File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 258, in execute application_iter = app(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\server.py", line 414, in app return self.app(e, s) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 142, in application return application_unproxied(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 117, in application_unproxied result = odoo.http.root(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1281, in __call__ return self.dispatch(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1251, in __call__ return self.app(environ, start_wrapped) File "C:\Program Files\Python37\lib\site-packages\werkzeug\wsgi.py", line 766, in __call__ return self.app(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1415, in dispatch self.setup_db(httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1338, in setup_db httprequest.session.db = db_monodb(httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1499, in db_monodb dbs = db_list(True, httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1466, in db_list dbs = odoo.service.db.list_dbs(force) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\db.py", line 378, in list_dbs with closing(db.cursor()) as cr: File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 649, in cursor return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 186, in __init__ self._cnx = pool.borrow(dsn) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 532, in _locked return 
fun(self, *args, **kwargs) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 600, in borrow **connection_info) File "C:\Program Files\Python37\lib\site-packages\psycopg2\__init__.py", line 126, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: role "odoo" is not permitted to log in - - - 2019-11-11 09:20:45,442 7372 INFO ? odoo.sql_db: Connection to the database failed 2019-11-11 09:20:45,443 7372 INFO ? werkzeug: 192.168.50.215 - - [11/Nov/2019 09:20:45] "GET /favicon.ico HTTP/1.1" 500 - 0 0.000 0.057 2019-11-11 09:20:45,449 7372 ERROR ? werkzeug: Error on request: Traceback (most recent call last): File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 270, in run_wsgi execute(self.server.app) File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 258, in execute application_iter = app(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\server.py", line 414, in app return self.app(e, s) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 142, in application return application_unproxied(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 117, in application_unproxied result = odoo.http.root(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1281, in __call__ return self.dispatch(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1251, in __call__ return self.app(environ, start_wrapped) File "C:\Program Files\Python37\lib\site-packages\werkzeug\wsgi.py", line 766, in __call__ return self.app(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1415, in dispatch self.setup_db(httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1338, in setup_db httprequest.session.db = db_monodb(httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1499, in db_monodb dbs = db_list(True, httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1466, in db_list dbs = odoo.service.db.list_dbs(force) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\db.py", line 378, in list_dbs with closing(db.cursor()) as cr: File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 649, in cursor return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 186, in __init__ self._cnx = pool.borrow(dsn) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 532, in _locked return fun(self, *args, **kwargs) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 600, in borrow **connection_info) File "C:\Program Files\Python37\lib\site-packages\psycopg2\__init__.py", line 126, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: role "odoo" is not permitted to log in - - - ``` How to start Odoo 13 success from source code?
2019/11/11
[ "https://Stackoverflow.com/questions/58799259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3728901/" ]
I think you need to configure the DB to trust your IP address: make the following changes in `pg_hba.conf`: ``` # IPv4 local connections: host all all 127.0.0.1/32 trust host all all MY_IP/24 trust ``` See also [this](https://www.odoo.com/documentation/13.0/setup/install.html#id3)
Odoo 13 has a default user named odoo; with PostgreSQL it uses the most recently created db by default. You can pass a database configuration in your Odoo 13 config file, /debian/odoo.conf: ``` [options] ; This is the password that allows database operations: ; admin_passwd = admin db_host = False db_port = False db_user = odoo db_password = False ;addons_path = /usr/lib/python3/dist-packages/odoo/addons ``` You can set the db host, port, user and password there and use it.
58,799,259
I am using Windows 10, PostgreSQL 12, Python 3.7.5 . I create username `odoo`, password `odoo`, create database `mydb`. Source code is <https://github.com/odoo/odoo/tree/aa0554d224337e1d966479a351a3ed059d297765> I run command ``` python odoo-bin -r odoo -w odoo --addons-path=addons --db-filter=mydb$ ``` Error ``` E:\source_code\github.com\xxxxxxxx\odoo>python odoo-bin -r odoo -w odoo --addons-path=addons --db-filter=mydb$ 2019-11-11 09:20:31,240 7372 INFO ? odoo: Odoo version 13.0 2019-11-11 09:20:31,240 7372 INFO ? odoo: addons paths: ['E:\\source_code\\github.com\\xxxxxxxx\\odoo\\odoo\\addons', 'c:\\users\\xxxxxxxx\\appdata\\local\\openerp s.a\\odoo\\addons\\13.0', 'e:\\source_code\\github.com\\xxxxxxxx\\odoo\\addons', 'e:\\source_code\\github.com\\xxxxxxxx\\odoo\\odoo\\addons'] 2019-11-11 09:20:31,241 7372 INFO ? odoo: database: odoo@default:default 2019-11-11 09:20:31,397 7372 INFO ? odoo.addons.base.models.ir_actions_report: You need Wkhtmltopdf to print a pdf version of the reports. 2019-11-11 09:20:31,517 7372 INFO ? odoo.service.server: HTTP service (werkzeug) running on D1CMPS_VyDN.mptelecom.com:8069 2019-11-11 09:20:45,250 7372 INFO ? odoo.http: HTTP Configuring static files 2019-11-11 09:20:45,296 7372 INFO ? odoo.sql_db: Connection to the database failed 2019-11-11 09:20:45,299 7372 INFO ? werkzeug: 192.168.50.215 - - [11/Nov/2019 09:20:45] "GET / HTTP/1.1" 500 - 0 0.000 0.039 2019-11-11 09:20:45,333 7372 ERROR ? werkzeug: Error on request: Traceback (most recent call last): File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 270, in run_wsgi execute(self.server.app) File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 258, in execute application_iter = app(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\server.py", line 414, in app return self.app(e, s) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 142, in application return application_unproxied(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 117, in application_unproxied result = odoo.http.root(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1281, in __call__ return self.dispatch(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1251, in __call__ return self.app(environ, start_wrapped) File "C:\Program Files\Python37\lib\site-packages\werkzeug\wsgi.py", line 766, in __call__ return self.app(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1415, in dispatch self.setup_db(httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1338, in setup_db httprequest.session.db = db_monodb(httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1499, in db_monodb dbs = db_list(True, httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1466, in db_list dbs = odoo.service.db.list_dbs(force) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\db.py", line 378, in list_dbs with closing(db.cursor()) as cr: File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 649, in cursor return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 186, in __init__ self._cnx = pool.borrow(dsn) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 532, in _locked return 
fun(self, *args, **kwargs) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 600, in borrow **connection_info) File "C:\Program Files\Python37\lib\site-packages\psycopg2\__init__.py", line 126, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: role "odoo" is not permitted to log in - - - 2019-11-11 09:20:45,442 7372 INFO ? odoo.sql_db: Connection to the database failed 2019-11-11 09:20:45,443 7372 INFO ? werkzeug: 192.168.50.215 - - [11/Nov/2019 09:20:45] "GET /favicon.ico HTTP/1.1" 500 - 0 0.000 0.057 2019-11-11 09:20:45,449 7372 ERROR ? werkzeug: Error on request: Traceback (most recent call last): File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 270, in run_wsgi execute(self.server.app) File "C:\Program Files\Python37\lib\site-packages\werkzeug\serving.py", line 258, in execute application_iter = app(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\server.py", line 414, in app return self.app(e, s) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 142, in application return application_unproxied(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\wsgi_server.py", line 117, in application_unproxied result = odoo.http.root(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1281, in __call__ return self.dispatch(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1251, in __call__ return self.app(environ, start_wrapped) File "C:\Program Files\Python37\lib\site-packages\werkzeug\wsgi.py", line 766, in __call__ return self.app(environ, start_response) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1415, in dispatch self.setup_db(httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1338, in setup_db httprequest.session.db = db_monodb(httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1499, in db_monodb dbs = db_list(True, httprequest) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\http.py", line 1466, in db_list dbs = odoo.service.db.list_dbs(force) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\service\db.py", line 378, in list_dbs with closing(db.cursor()) as cr: File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 649, in cursor return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 186, in __init__ self._cnx = pool.borrow(dsn) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 532, in _locked return fun(self, *args, **kwargs) File "E:\source_code\github.com\xxxxxxxx\odoo\odoo\sql_db.py", line 600, in borrow **connection_info) File "C:\Program Files\Python37\lib\site-packages\psycopg2\__init__.py", line 126, in connect conn = _connect(dsn, connection_factory=connection_factory, **kwasync) psycopg2.OperationalError: FATAL: role "odoo" is not permitted to log in - - - ``` How to start Odoo 13 success from source code?
2019/11/11
[ "https://Stackoverflow.com/questions/58799259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3728901/" ]
To [create a PostgreSQL user](https://www.odoo.com/documentation/13.0/setup/install.html#postgresql), follow these steps: 1. Add PostgreSQL’s `bin` directory (by default: `C:\Program Files\PostgreSQL\<version>\bin`) to your `PATH`. 2. Create a postgres user with a password using the pg admin gui: * Open **pgAdminIII**. * Double-click the server to create a connection. * Select Edit ‣ New Object ‣ New Login Role. * Enter the username in the **Role Name** field (e.g. `odoo`). * Open the **Definition** tab and enter the password (e.g. `odoo`), then click **OK**.
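If the role already exists, the `FATAL: role "odoo" is not permitted to log in` error in the traceback suggests it merely lacks the LOGIN attribute. A hedged sketch granting it with psycopg2, assuming superuser credentials (adjust to your setup):

```python
import psycopg2

# Placeholder superuser credentials; adjust dbname/user/password/host as needed.
conn = psycopg2.connect(dbname='postgres', user='postgres',
                        password='postgres', host='localhost')
conn.autocommit = True
with conn.cursor() as cur:
    # Allow the role to log in (and create databases, which Odoo needs)
    cur.execute('ALTER ROLE odoo WITH LOGIN CREATEDB')
conn.close()
```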
Odoo 13 has a default user named odoo; with PostgreSQL it uses the most recently created db by default. You can pass a database configuration in your Odoo 13 config file, /debian/odoo.conf: ``` [options] ; This is the password that allows database operations: ; admin_passwd = admin db_host = False db_port = False db_user = odoo db_password = False ;addons_path = /usr/lib/python3/dist-packages/odoo/addons ``` You can set the db host, port, user and password there and use it.
54,525,141
I have a python environment (it could be conda, virtualenv, venv or global python) - I have a python script - hello.py - that I want to execute within that environment. If I get the path to the python binary within the environment, for example, in windows with a conda environment called myenv, `/path/to/myenv/Scripts/python.exe`, and if I execute the script using that python, as shown below, am I guaranteed that the script is executed in that environment, independent of the type of virtual environment? If not, what can I do to ensure such a guarantee? `/path/to/myenv/Scripts/python.exe path/to/hello.py`
2019/02/04
[ "https://Stackoverflow.com/questions/54525141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/456735/" ]
Yes, you're right! Furthermore you can evaluate the used executable by using the following snippet: ``` import sys print(sys.executable) ``` Then you will see the absolute path, e.g. `/opt/miniconda/envs/epm/bin/python`. If you're using a Unix system, you can run: ``` $ echo "import sys; print(sys.version); print(sys.executable)" | /opt/miniconda/envs/epm/bin/python 2.7.15 |Anaconda, Inc.| (default, Dec 14 2018, 13:10:39) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] /opt/miniconda/envs/epm/bin/python ```
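A related hedged check is whether the interpreter is inside a virtual environment at all; on Python 3 the prefixes differ inside a venv, while older virtualenv sets `sys.real_prefix` instead:

```python
import sys

in_venv = (getattr(sys, 'real_prefix', None) is not None
           or sys.prefix != getattr(sys, 'base_prefix', sys.prefix))
print(sys.executable, '| in virtualenv:', in_venv)
```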
I suspect not. There are a few environment variables (e.g. `PATH`) which are changed when you activate a virtualenv. You can open up `myenv/bin/activate` in a text editor to see what it does. Is there a particular reason you want to call the executable directly, rather than use the environment as designed? (e.g. `. ./myenv/bin/activate; python hello.py`)
36,911,060
I have a JSON file containing various objects each containing elements. With my python script, I only keep the objects I want, and then put the elements I want in a list. But the element has a prefix, which I'd like to suppress from the list. The post-script JSON looks like this: ``` { "ip_prefix": "184.72.128.0/17", "region": "us-east-1", "service": "EC2" } ``` The "IP/mask" is what I'd like to keep. The list looks like this: '"ip_prefix": **"23.20.0.0/14"**,' So what can I do to only keep **"23.20.0.0/14"** in the list? Here is the code: ``` json_data = open(jsonsourcefile) data = json.load(json_data) print (destfile) d=[] for objects in (data['prefixes']): if servicerequired in json.dumps(objects): #print(json.dumps(objects, sort_keys=True, indent=4)) with open(destfile, 'a') as file: file.write(json.dumps(objects, sort_keys=True, indent=4 )) with open(destfile, 'r') as reads: liste = list() for strip in reads: if "ip_prefix" in strip: strip = strip.strip() liste.append(strip) print(liste) ``` Thanks, dersoi
2016/04/28
[ "https://Stackoverflow.com/questions/36911060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5532788/" ]
You can also add this variable by using a preprocess hook. The following code will add the `is_front` variable so it can be used in the `html.html.twig` template: ``` // Adds the is_front variable to html.html.twig template. function mytheme_preprocess_html(&$variables) { $variables['is_front'] = \Drupal::service('path.matcher')->isFrontPage(); } ``` Then inside the twig template you could check the variable like so: ``` {% if is_front %} THIS IS THE FRONTPAGE {% endif %} ```
If you want to show a node within the front page and it should look just like the actual node page, you can create a new display for the node, like "On Frontpage". For that display you create a new node template (be careful to use the right naming convention for the twig file, otherwise it won't work). Then you tell the front page to display nodes using the "On Frontpage" display, which will use a different template (including your desired div). Twig templates naming convention: <https://www.drupal.org/node/2354645> So the steps: 1. create a new display mode for your nodes 2. create a new template for that particular display mode 3. tell the frontpage to display nodes using that display
31,573,399
I have a largish pandas dataframe (1.5gig .csv on disk). I can load it into memory and query it. I want to create a new column that is combined value of two other columns, and I tried this: ``` def combined(row): row['combined'] = row['col1'].join(str(row['col2'])) return row df = df.apply(combined, axis=1) ``` This results in my python process being killed, presumably because of memory issues. A more iterative solution to the problem seems to be: ``` df['combined'] = '' col_pos = list(df.columns).index('combined') crs_pos = list(df.columns).index('col1') sub_pos = list(df.columns).index('col2') for row_pos in range(0, len(df) - 1): df.iloc[row_pos, col_pos] = df.iloc[row_pos, sub_pos].join(str(df.iloc[row_pos, crs_pos])) ``` This of course seems very unpandas. And is very slow. Ideally I would like something like `apply_chunk()` which is the same as apply but only works on a piece of the dataframe. I thought `dask` might be an option for this, but `dask` dataframes seemed to have other issues when I used them. This has to be a common problem though, is there a design pattern I should be using for adding columns to large pandas dataframes?
2015/07/22
[ "https://Stackoverflow.com/questions/31573399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3137396/" ]
I would try using a list comprehension + [`itertools`](https://docs.python.org/2/library/itertools.html): ``` df = pd.DataFrame({ 'a': ['ab'] * 200, 'b': ['ffff'] * 200 }) import itertools [a.join(b) for (a, b) in itertools.izip(df.a, df.b)] ``` It might be "unpandas", but pandas doesn't seem to have a `.str` method that helps you here, and it isn't "unpythonic". To create another column, just use: ``` df['c'] = [a.join(b) for (a, b) in itertools.izip(df.a, df.b)] ``` Incidentally, you can also get your chunking using: ``` [a.join(b) for (a, b) in itertools.izip(df.a[10: 20], df.b[10: 20])] ``` if you'd like to play with parallelization. I would first try the above version, as list comprehensions and itertools are often surprisingly fast, and parallelization would carry an overhead that would need to be outweighed.
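One hedged portability note: `itertools.izip` exists only on Python 2; on Python 3 the builtin `zip` is already lazy, so the same idea becomes:

```python
import pandas as pd

df = pd.DataFrame({'a': ['ab'] * 200, 'b': ['ffff'] * 200})
df['c'] = [a.join(b) for a, b in zip(df.a, df.b)]  # e.g. 'fabfabfabf'
```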
One nice way to create a new column in [`pandas`](http://pandas.pydata.org) or [`dask.dataframe`](http://dask.pydata.org/en/latest/dataframe.html) is with the [`.assign`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html) method. ``` In [1]: import pandas as pd In [2]: df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': ['a', 'b', 'a', 'b']}) In [3]: df Out[3]: x y 0 1 a 1 2 b 2 3 a 3 4 b In [4]: df.assign(z=df.x * df.y) Out[4]: x y z 0 1 a a 1 2 b bb 2 3 a aaa 3 4 b bbbb ``` However, if your operation is highly custom (as it appears to be) and if Python iterators are fast enough (as they seem to be) then you might just want to stick with that. Anytime you find yourself using `apply` or `iloc` in a loop it's likely that Pandas is operating much slower than is optimal.
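For completeness, a hedged sketch of the same `.assign` pattern on a `dask.dataframe`, should the frame genuinely not fit in memory (the partition count here is arbitrary):

```python
import pandas as pd
import dask.dataframe as dd

df = pd.DataFrame({'x': [1, 2, 3, 4], 'y': ['a', 'b', 'a', 'b']})
ddf = dd.from_pandas(df, npartitions=2)        # each partition is a small pandas frame
ddf = ddf.assign(z=ddf.x.astype(str) + ddf.y)  # evaluated lazily, per partition
print(ddf.compute())
```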
48,125,575
I am trying to read the following code for back propagation in python ``` probs = exp_scores /np.sum(exp_scores, axis=1, keepdims=True) #Backpropagation delta3 = probs delta3[range(num_examples), y] -= 1 dW2 = (a1.T).dot(delta3) .... ``` but I cannot understand the following line of the code: ``` delta3[range(num_examples), y] -= 1 ``` Could you please tell me what this does? Thank you very much for your help!
2018/01/06
[ "https://Stackoverflow.com/questions/48125575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7962244/" ]
There are two things here. First, it uses numpy fancy indexing to select only part of `delta3`. Second, it subtracts 1 from every selected element of the matrix. More precisely, `delta3[range(num_examples), y]` selects, for each row index `i` from 0 to `num_examples - 1`, the single entry in column `y[i]` of that row.
If you're interested in *why* it's computed this way, it's the backpropagation through the cross-entropy loss: * `probs` is the vector of class probabilities (computed in a forward pass via softmax). * `delta3` is the error signal from the loss function. * `y` holds the ground-truth classes for the mini-batch. Everything else is just math, which is well explained in [this post](https://deepnotes.io/softmax-crossentropy), and they end up with the same numpy expression.
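A tiny runnable demonstration of that indexing, with made-up numbers:

```python
import numpy as np

probs = np.array([[0.2, 0.5, 0.3],
                  [0.1, 0.1, 0.8]])
y = np.array([1, 2])             # true class of each example
delta3 = probs.copy()
delta3[range(len(y)), y] -= 1    # row i, column y[i]
print(delta3)
# [[ 0.2 -0.5  0.3]
#  [ 0.1  0.1 -0.2]]
```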
49,958,177
I am a beginner to python, working on python 3.6.5. I was trying to create a chatbot but I don't understand how to use a comma to separate the two strings (red and Red), because the shell says that it is invalid syntax (the comma is highlighted but nothing else). What have I done wrong?: ``` colour=input("What is your favourite colour? ") if colour=="red", "Red": print("Red is my favourite colour as well") ``` Note: I know this question is very similar to others on the forum, but considering I am only a beginner (I literally started learning python on Friday), the answers to the other questions were a bit confusing because they had different code, so I asked this question using what I am learning.
2018/04/21
[ "https://Stackoverflow.com/questions/49958177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9679321/" ]
Use `in` ``` colour= input("What is your favourite colour? ") if colour in ("red", "Red"): print("Red is my favourite colour as well") ```
You could use `if colour in ['red', 'Red', 'RED', 'ReD']` as mentioned earlier, or you could just sanitize the input: ``` colour = input("What is your favourite colour? ") if colour.lower() == "red": print("Red is my favourite colour as well") ```
20,322,969
I am not sure if there is a solution for this on stack overflow, so apologies if this is a duplicate. There are a number of ways of converting the string: ``` s = '[1, 2, 3]' ``` to a list: ``` t = [1, 2, 3] ``` but I am looking for the most straightforward pythonic way of doing this. Also, performance matters.
2013/12/02
[ "https://Stackoverflow.com/questions/20322969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1778980/" ]
One should use [ast.literal\_eval](http://docs.python.org/2/library/ast.html#ast.literal_eval): ``` >>> import ast >>> ast.literal_eval('[1,2,3]') [1, 2, 3] ```
Why not use the json library? ``` import json # convert str to list t = json.loads(s) # back to string s2 = json.dumps(t) ```
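A hedged caveat when choosing between the two: `json.loads` is stricter than `ast.literal_eval`, so input that is Python-literal-shaped but not valid JSON will fail:

```python
import ast
import json

print(json.loads('[1, 2, 3]'))         # [1, 2, 3]
print(ast.literal_eval("[1, 2, 3,]"))  # [1, 2, 3] -- trailing comma is fine
print(ast.literal_eval("['a', 'b']"))  # ['a', 'b'] -- single quotes are fine
# json.loads("['a', 'b']")             # would raise json.JSONDecodeError
```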
49,949,398
I am facing an issue while importing java code which uses some external jar, say the selenium_standalone_server jar. I tried with normal java code with no jars used; in that case I am able to import and run the code, but when the java code uses some jars and I then try to import that class into jython, it gives an error. Here is the sample code I used. I created a jar of the code below, "jython_test.jar": ``` package Jython_workspace; import org.openqa.selenium.WebDriver; import org.openqa.selenium.firefox.FirefoxDriver; public class selenium_try { public void launch_browser() { WebDriver driver = new FirefoxDriver(); System.out.println("Hello Google..."); driver.get("http://google.com"); } } ``` This code uses selenium_server-standalone-3.11.0.jar. Importing the java jar in jython: ``` import sys sys.path.append("jython_test.jar") from jython_test import selenium_try as sel beach = sel.launch_browser() ``` The error encountered: ``` Traceback (most recent call last): File "D:\PD\sublime_code\Jython_workspace\try_selenium_python.py", line 5, in <module> from jython_test import selenium_try as sel java.lang.NoClassDefFoundError: org/openqa/selenium/WebDriver at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) at org.python.core.Py.loadAndInitClass(Py.java:991) at org.python.core.Py.findClassInternal(Py.java:926) at org.python.core.Py.findClassEx(Py.java:977) at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:133) at org.python.core.packagecache.PackageManager.findClass(PackageManager.java:33) at org.python.core.packagecache.SysPackageManager.findClass(SysPackageManager.java:122) at org.python.core.PyJavaPackage.__findattr_ex__(PyJavaPackage.java:134) at org.python.core.PyObject.__findattr__(PyObject.java:946) at org.python.core.imp.importFromAs(imp.java:1160) at org.python.core.imp.importFrom(imp.java:1132) at org.python.pycode._pyx0.f$0(D:\PD\sublime_code\Jython_workspace\try_selenium_python.py:7) at org.python.pycode._pyx0.call_function(D:\PD\sublime_code\Jython_workspace\try_selenium_python.py) at org.python.core.PyTableCode.call(PyTableCode.java:167) at org.python.core.PyCode.call(PyCode.java:18) at org.python.core.Py.runCode(Py.java:1386) at org.python.util.PythonInterpreter.execfile(PythonInterpreter.java:296) at org.python.util.jython.run(jython.java:362) at org.python.util.jython.main(jython.java:142) Caused by: java.lang.ClassNotFoundException: org.openqa.selenium.WebDriver at org.python.core.SyspathJavaLoader.findClass(SyspathJavaLoader.java:131) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) ... 20 more java.lang.NoClassDefFoundError: java.lang.NoClassDefFoundError: org/openqa/selenium/WebDriver ``` How can this issue be solved when the java code uses 3rd-party jars that we then want to import in jython?
2018/04/20
[ "https://Stackoverflow.com/questions/49949398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3814582/" ]
**1)** We group by Name (assuming `rollapply` should be done separately for each `Name`) and then use `width = list(-seq(4))` with `rollapply` which uses offsets -1, -2, -3, -4 for each application of `mean`. (Offset 0 would be the current point but we want the 4 prior here.) Not clear what you are referring to regarding start time so that part has been left out. Also I have assumed that the data is sorted (which is the case in the question). You might also want to convert the dates to `"Date"` class but that isn't needed to answer the question if the rows are already sorted. ``` library(zoo) roll <- function(x) rollapply(x, list(-seq(4)), mean, fill = NA) transform(DF, Average = ave(Values, Name, FUN = roll)) ``` **2)** or if you like dplyr then using `roll` from above: ``` library(dplyr) library(zoo) DF %>% group_by(Name) %>% mutate(Average = roll(Values)) %>% ungroup() ```
An option is to use `zoo::rollapply` along with `dplyr::lag` as: ``` library(dplyr) library(lubridate) library(zoo) df %>% mutate(DATE = mdy(DATE)) %>% #Convert to Date arrange(Name, DATE) %>% #Order on Name and DATE mutate(Avg = rollapply(Values, 4, mean, fill= NA, align = "right")) %>% mutate(Average = lag(Avg)) %>% # This shows mean for previous 4 rows select(-Avg) # Name DATE Values Average # 1 TestA 2017-03-03 50 NA # 2 TestA 2017-03-04 75 NA # 3 TestA 2017-03-05 25 NA # 4 TestA 2017-03-06 100 NA # 5 TestA 2017-03-07 100 62.50 # 6 TestA 2017-03-08 50 75.00 # 7 TestA 2017-03-09 80 68.75 # 8 TestA 2017-03-10 90 82.50 # 9 TestA 2017-03-11 25 80.00 # 10 TestA 2017-03-12 0 61.25 # 11 TestA 2017-03-13 0 48.75 # 12 TestA 2017-03-14 0 28.75 # 13 TestA 2017-03-15 0 6.25 # 14 TestA 2017-03-16 50 0.00 ``` **Data:** ``` df <- read.table(text = "Name DATE Values TestA '3/3/2017' 50 TestA '3/4/2017' 75 TestA '3/5/2017' 25 TestA '3/6/2017' 100 TestA '3/7/2017' 100 TestA '3/8/2017' 50 TestA '3/9/2017' 80 TestA '3/10/2017' 90 TestA '3/11/2017' 25 TestA '3/12/2017' 0 TestA '3/13/2017' 0 TestA '3/14/2017' 0 TestA '3/15/2017' 0 TestA '3/16/2017' 50", header = TRUE, stringsAsFactors = FALSE) ```
53,846,322
I am exporting the LOG\_INTERVAL value as 5. How can I pass this env value to `time.sleep` in Python? ``` import os import time print("Goodbye, World!") time.sleep(os.environ.get('LOG_INTERVAL')) ``` ``` error:- Goodbye, World! Traceback (most recent call last): File "test.py", line 4, in time.sleep(os.environ.get('LOG_INTERVAL')) TypeError: a float is required ```
2018/12/19
[ "https://Stackoverflow.com/questions/53846322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10809642/" ]
The value you get from the environment is a string. You have to convert it to a number in order for it to be an acceptable value for `time.sleep()`: ``` time.sleep(float(os.environ.get('LOG_INTERVAL'))) ```
I think `LOG_INTERVAL` will be returned as a string. Check its type with `type(os.environ.get('LOG_INTERVAL'))`. If it is an int or a string containing nothing but numbers or full stops, `time.sleep(float(os.environ.get('LOG_INTERVAL')))` should convert it to a float and do the trick.
53,846,322
I am exporting the LOG\_INTERVAL value as 5. How can I pass this env value to `time.sleep` in Python? ``` import os import time print("Goodbye, World!") time.sleep(os.environ.get('LOG_INTERVAL')) ``` ``` error:- Goodbye, World! Traceback (most recent call last): File "test.py", line 4, in time.sleep(os.environ.get('LOG_INTERVAL')) TypeError: a float is required ```
2018/12/19
[ "https://Stackoverflow.com/questions/53846322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10809642/" ]
``` time.sleep(float(os.environ.get('LOG_INTERVAL', 0))) ``` I added the default `0` to @tripleee's reply, so if the variable is not defined, your code doesn't crash.
I think `LOG_INTERVAL` will be returned as a string. Check its type with `type(os.environ.get('LOG_INTERVAL'))`. If it is an int or a string containing nothing but numbers or full stops, `time.sleep(float(os.environ.get('LOG_INTERVAL')))` should convert it to a float and do the trick.
53,846,322
I am exporting the LOG\_INTERVAL value as 5. How can I pass this env value to `time.sleep` in Python? ``` import os import time print("Goodbye, World!") time.sleep(os.environ.get('LOG_INTERVAL')) ``` ``` error:- Goodbye, World! Traceback (most recent call last): File "test.py", line 4, in time.sleep(os.environ.get('LOG_INTERVAL')) TypeError: a float is required ```
2018/12/19
[ "https://Stackoverflow.com/questions/53846322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10809642/" ]
The value you get from the environment is a string. You have to convert it to a number in order for it to be an acceptable value for `time.sleep()`: ``` time.sleep(float(os.environ.get('LOG_INTERVAL'))) ```
``` time.sleep(float(os.environ.get('LOG_INTERVAL', 0))) ``` I added the default `0` to @tripleee's reply, so if the variable is not defined, your code doesn't crash.
5,559,810
**Question** It seems that PyWin32 is comfortable with giving null-terminated unicode strings as return values. I would like to deal with these strings the 'right' way. Let's say I'm getting a string like: `u'C:\\Users\\Guest\\MyFile.asy\x00\x00sy'`. This appears to be a C-style null-terminated string hanging out in a Python unicode object. I want to trim this bad boy down to a regular ol' string of characters that I could, for example, display in a window title bar. Is trimming the string off at the first null byte the right way to deal with it? I didn't expect to get a return value like this, so I wonder if I'm missing something important about how Python, Win32, and unicode play together... or if this is just a PyWin32 bug. **Background** I'm using the Win32 file chooser function [`GetOpenFileNameW`](http://docs.activestate.com/activepython/2.7/pywin32/win32gui__GetSaveFileNameW_meth.html) from the PyWin32 package. According to the documentation, this function returns a tuple containing the full filename path as a Python unicode object. When I open the dialog with an existing path and filename set, I get a strange return value. For example I had the default set to: `C:\\Users\\Guest\\MyFileIsReallyReallyReallyAwesome.asy` In the dialog I changed the name to `MyFile.asy` and clicked save. The full path part of the return value was: u'C:\Users\Guest\MyFile.asy\x00wesome.asy'` I expected it to be: `u'C:\\Users\\Guest\\MyFile.asy'` The function is returning a recycled buffer without trimming off the terminating bytes. Needless to say, the rest of my code wasn't set up for handling a C-style null-terminated string. **Demo Code** The following code demonstrates null-terminated string in return value from GetSaveFileNameW. Directions: In the dialog change the filename to 'MyFile.asy' then click Save. Observe what is printed to the console. The output I get is `u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'`. ``` import win32gui, win32con if __name__ == "__main__": initial_dir = 'C:\\Users\\Guest' initial_file = 'MyFileIsReallyReallyReallyAwesome.asy' filter_string = 'All Files\0*.*\0' (filename, customfilter, flags) = \ win32gui.GetSaveFileNameW(InitialDir=initial_dir, Flags=win32con.OFN_EXPLORER, File=initial_file, DefExt='txt', Title="Save As", Filter=filter_string, FilterIndex=0) print repr(filename) ``` Note: If you don't shorten the filename enough (for example, if you try MyFileIsReally.asy) the string will be complete without a null byte. **Environment** Windows 7 Professional 64-bit (no service pack), Python 2.7.1, PyWin32 Build 216 **UPDATE: PyWin32 Tracker Artifact** Based on the comments and answers I have received so far, this is likely a pywin32 bug so I filed a [tracker artifact](https://sourceforge.net/tracker/?func=detail&aid=3277647&group_id=78018&atid=551954). **UPDATE 2: Fixed!** Mark Hammond reported in the tracker artifact that this is indeed a bug. A fix was checked in to rev f3fdaae5e93d, so hopefully that will make the next release. I think Aleksi Torhamo's answer below is the best solution for versions of PyWin32 before the fix.
2011/04/05
[ "https://Stackoverflow.com/questions/5559810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/182642/" ]
I'd say it's a bug. The right way to deal with it would probably be fixing pywin32, but in case you aren't feeling adventurous enough, just trim it. You can get everything before the first `'\x00'` with `filename.split('\x00', 1)[0]`.
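A tiny demo of that trim on the kind of value quoted in the question:

```
raw = u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'  # shape of the buggy return value
clean = raw.split(u'\x00', 1)[0]
print repr(clean)  # -> u'C:\\Users\\Guest\\MyFile.asy'
```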
This doesn't happen on the version of PyWin32/Windows/Python I tested; I don't get any nulls in the returned string even if it's very short. You might investigate if a newer version of one of the above fixes the bug.
5,559,810
**Question** It seems that PyWin32 is comfortable with giving null-terminated unicode strings as return values. I would like to deal with these strings the 'right' way. Let's say I'm getting a string like: `u'C:\\Users\\Guest\\MyFile.asy\x00\x00sy'`. This appears to be a C-style null-terminated string hanging out in a Python unicode object. I want to trim this bad boy down to a regular ol' string of characters that I could, for example, display in a window title bar. Is trimming the string off at the first null byte the right way to deal with it? I didn't expect to get a return value like this, so I wonder if I'm missing something important about how Python, Win32, and unicode play together... or if this is just a PyWin32 bug. **Background** I'm using the Win32 file chooser function [`GetOpenFileNameW`](http://docs.activestate.com/activepython/2.7/pywin32/win32gui__GetSaveFileNameW_meth.html) from the PyWin32 package. According to the documentation, this function returns a tuple containing the full filename path as a Python unicode object. When I open the dialog with an existing path and filename set, I get a strange return value. For example I had the default set to: `C:\\Users\\Guest\\MyFileIsReallyReallyReallyAwesome.asy` In the dialog I changed the name to `MyFile.asy` and clicked save. The full path part of the return value was: u'C:\Users\Guest\MyFile.asy\x00wesome.asy'` I expected it to be: `u'C:\\Users\\Guest\\MyFile.asy'` The function is returning a recycled buffer without trimming off the terminating bytes. Needless to say, the rest of my code wasn't set up for handling a C-style null-terminated string. **Demo Code** The following code demonstrates null-terminated string in return value from GetSaveFileNameW. Directions: In the dialog change the filename to 'MyFile.asy' then click Save. Observe what is printed to the console. The output I get is `u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'`. ``` import win32gui, win32con if __name__ == "__main__": initial_dir = 'C:\\Users\\Guest' initial_file = 'MyFileIsReallyReallyReallyAwesome.asy' filter_string = 'All Files\0*.*\0' (filename, customfilter, flags) = \ win32gui.GetSaveFileNameW(InitialDir=initial_dir, Flags=win32con.OFN_EXPLORER, File=initial_file, DefExt='txt', Title="Save As", Filter=filter_string, FilterIndex=0) print repr(filename) ``` Note: If you don't shorten the filename enough (for example, if you try MyFileIsReally.asy) the string will be complete without a null byte. **Environment** Windows 7 Professional 64-bit (no service pack), Python 2.7.1, PyWin32 Build 216 **UPDATE: PyWin32 Tracker Artifact** Based on the comments and answers I have received so far, this is likely a pywin32 bug so I filed a [tracker artifact](https://sourceforge.net/tracker/?func=detail&aid=3277647&group_id=78018&atid=551954). **UPDATE 2: Fixed!** Mark Hammond reported in the tracker artifact that this is indeed a bug. A fix was checked in to rev f3fdaae5e93d, so hopefully that will make the next release. I think Aleksi Torhamo's answer below is the best solution for versions of PyWin32 before the fix.
2011/04/05
[ "https://Stackoverflow.com/questions/5559810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/182642/" ]
I'd say it's a bug. The right way to deal with it would probably be fixing pywin32, but in case you aren't feeling adventurous enough, just trim it. You can get everything before the first `'\x00'` with `filename.split('\x00', 1)[0]`.
ISTR that I had this issue some years ago, then I discovered that such Win32 filename-dialog-related functions return a sequence of `'filename1\0filename2\0...filenameN\0\0'`, while including possible garbage characters depending on the buffer that Windows allocated. Now, you might prefer a list instead of the raw return value, but that would be a RFE, not a bug. PS When I had this issue, I quite understood why one would expect `GetOpenFileName` to possibly return a list of filenames, while I couldn't imagine why `GetSaveFileName` would. Perhaps this is considered as API uniformity. Who am I to know, anyway?
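If the buffer really does use that layout, a small sketch of unpacking it (the sample value here is hypothetical, for a multi-select open dialog):

```
raw = u'C:\\somedir\x00file1.txt\x00file2.txt\x00\x00leftover'  # hypothetical buffer contents
useful = raw.split(u'\x00\x00', 1)[0]  # drop everything after the double-null terminator
parts = useful.split(u'\x00')
print parts  # -> [u'C:\\somedir', u'file1.txt', u'file2.txt']: directory first, then filenames
```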
5,559,810
**Question** It seems that PyWin32 is comfortable with giving null-terminated unicode strings as return values. I would like to deal with these strings the 'right' way. Let's say I'm getting a string like: `u'C:\\Users\\Guest\\MyFile.asy\x00\x00sy'`. This appears to be a C-style null-terminated string hanging out in a Python unicode object. I want to trim this bad boy down to a regular ol' string of characters that I could, for example, display in a window title bar. Is trimming the string off at the first null byte the right way to deal with it? I didn't expect to get a return value like this, so I wonder if I'm missing something important about how Python, Win32, and unicode play together... or if this is just a PyWin32 bug. **Background** I'm using the Win32 file chooser function [`GetOpenFileNameW`](http://docs.activestate.com/activepython/2.7/pywin32/win32gui__GetSaveFileNameW_meth.html) from the PyWin32 package. According to the documentation, this function returns a tuple containing the full filename path as a Python unicode object. When I open the dialog with an existing path and filename set, I get a strange return value. For example I had the default set to: `C:\\Users\\Guest\\MyFileIsReallyReallyReallyAwesome.asy` In the dialog I changed the name to `MyFile.asy` and clicked save. The full path part of the return value was: u'C:\Users\Guest\MyFile.asy\x00wesome.asy'` I expected it to be: `u'C:\\Users\\Guest\\MyFile.asy'` The function is returning a recycled buffer without trimming off the terminating bytes. Needless to say, the rest of my code wasn't set up for handling a C-style null-terminated string. **Demo Code** The following code demonstrates null-terminated string in return value from GetSaveFileNameW. Directions: In the dialog change the filename to 'MyFile.asy' then click Save. Observe what is printed to the console. The output I get is `u'C:\\Users\\Guest\\MyFile.asy\x00wesome.asy'`. ``` import win32gui, win32con if __name__ == "__main__": initial_dir = 'C:\\Users\\Guest' initial_file = 'MyFileIsReallyReallyReallyAwesome.asy' filter_string = 'All Files\0*.*\0' (filename, customfilter, flags) = \ win32gui.GetSaveFileNameW(InitialDir=initial_dir, Flags=win32con.OFN_EXPLORER, File=initial_file, DefExt='txt', Title="Save As", Filter=filter_string, FilterIndex=0) print repr(filename) ``` Note: If you don't shorten the filename enough (for example, if you try MyFileIsReally.asy) the string will be complete without a null byte. **Environment** Windows 7 Professional 64-bit (no service pack), Python 2.7.1, PyWin32 Build 216 **UPDATE: PyWin32 Tracker Artifact** Based on the comments and answers I have received so far, this is likely a pywin32 bug so I filed a [tracker artifact](https://sourceforge.net/tracker/?func=detail&aid=3277647&group_id=78018&atid=551954). **UPDATE 2: Fixed!** Mark Hammond reported in the tracker artifact that this is indeed a bug. A fix was checked in to rev f3fdaae5e93d, so hopefully that will make the next release. I think Aleksi Torhamo's answer below is the best solution for versions of PyWin32 before the fix.
2011/04/05
[ "https://Stackoverflow.com/questions/5559810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/182642/" ]
This doesn't happen on the version of PyWin32/Windows/Python I tested; I don't get any nulls in the returned string even if it's very short. You might investigate if a newer version of one of the above fixes the bug.
ISTR that I had this issue some years ago, then I discovered that such Win32 filename-dialog-related functions return a sequence of `'filename1\0filename2\0...filenameN\0\0'`, while including possible garbage characters depending on the buffer that Windows allocated. Now, you might prefer a list instead of the raw return value, but that would be a RFE, not a bug. PS When I had this issue, I quite understood why one would expect `GetOpenFileName` to possibly return a list of filenames, while I couldn't imagine why `GetSaveFileName` would. Perhaps this is considered as API uniformity. Who am I to know, anyway?
49,737,459
Forgive me the possibly trivial question, but: *How do I run the script published by pybuilder?* --- I'm trying to follow the official [Pybuilder Tutorial](http://pybuilder.github.io/documentation/tutorial.html#.WsqupUuYNhE). I've walked through the steps and successfully generated a project that * runs unit tests * computes coverage * generates `setup.py` * generates a `.tar.gz` that can be used by `pip install`. That's all very nice, but I still don't see what the actual runnable artifact is? All that is contained in the `target` directory seems to look more or less exactly the same as the content in `src` directory + additional reports and installable archives. The tutorial itself at the end of the "Adding a runnable Script"-section concludes that "the script was picked up". Ok, it was picked up, now how do I run it? At no point does the tutorial demonstrate that we can actually print the string "Hello, World!" on the screen, despite the fact that the whole toy-project is about doing exactly that. --- **MCVE** Below is a Bash Script that generates the following directory tree with Python source files and build script: ``` projectRoot ├── build.py └── src └── main ├── python │   └── pkgRoot │   ├── __init__.py │   ├── pkgA │   │   ├── __init__.py │   │   └── modA.py │   └── pkgB │   ├── __init__.py │   └── modB.py └── scripts └── entryPointScript.py 7 directories, 7 files ================================================================================ projectRoot/build.py -------------------------------------------------------------------------------- from pybuilder.core import use_plugin use_plugin("python.core") use_plugin("python.distutils") default_task = "publish" ================================================================================ projectRoot/src/main/scripts/entryPointScript.py -------------------------------------------------------------------------------- #!/usr/bin/env python from pkgRoot.pkgB.modB import b if __name__ == "__main__": print(f"Hello, world! 42 * 42 - 42 = {b(42)}") ================================================================================ projectRoot/src/main/python/pkgRoot/pkgA/modA.py -------------------------------------------------------------------------------- def a(n): """Computes square of a number.""" return n * n ================================================================================ projectRoot/src/main/python/pkgRoot/pkgB/modB.py -------------------------------------------------------------------------------- from pkgRoot.pkgA.modA import a def b(n): """Evaluates a boring quadratic polynomial.""" return a(n) - n ``` --- **The full script that generates example project (Disclaimer: provided as-is, modifies files and directories, execute at your own risk):** ``` #!/bin/bash # Creates a very simple hello-world like project # that can be built with PyBuilder, and describes # the result. # Uses BASH heredocs and `cut -d'|' -f2-` to strip # margin from indented code.
# strict mode set -eu # set up directory tree for packages and scripts ROOTPKG_PATH="projectRoot/src/main/python/pkgRoot" SCRIPTS_PATH="projectRoot/src/main/scripts" mkdir -p "$ROOTPKG_PATH/pkgA" mkdir -p "$ROOTPKG_PATH/pkgB" mkdir -p "$SCRIPTS_PATH" # Touch bunch of `__init__.py` files touch "$ROOTPKG_PATH/__init__.py" touch "$ROOTPKG_PATH/pkgA/__init__.py" touch "$ROOTPKG_PATH/pkgB/__init__.py" # Create module `modA` in package `pkgA` cut -d'|' -f2- <<__HEREDOC > "$ROOTPKG_PATH/pkgA/modA.py" |def a(n): | """Computes square of a number.""" | return n * n | __HEREDOC # Create module `modB` in package `pkgB` cut -d'|' -f2- <<__HEREDOC > "$ROOTPKG_PATH/pkgB/modB.py" |from pkgRoot.pkgA.modA import a | |def b(n): | """Evaluates a boring quadratic polynomial.""" | return a(n) - n | __HEREDOC # Create a hello-world script in `scripts`: cut -d'|' -f2- <<__HEREDOC > "$SCRIPTS_PATH/entryPointScript.py" |#!/usr/bin/env python | |from pkgRoot.pkgB.modB import b | |if __name__ == "__main__": | print(f"Hello, world! 42 * 42 - 42 = {b(42)}") | __HEREDOC # Create a simple `build.py` build script for PyBuilder cut -d'|' -f2- <<__HEREDOC > "projectRoot/build.py" |from pybuilder.core import use_plugin | |use_plugin("python.core") |use_plugin("python.distutils") | |default_task = "publish" | __HEREDOC ################################################# # Directory tree construction finished, only # # debug output below this box. # ################################################# # show the layout of the generater result tree "projectRoot" # walk through each python file, show path and content find "projectRoot" -name "*.py" -print0 | \ while IFS= read -r -d $'\0' pathToFile do if [ -s "$pathToFile" ] then printf "=%.0s" {1..80} # thick horizontal line echo "" echo "$pathToFile" printf -- "-%.0s" {1..80} echo "" cat "$pathToFile" fi done ``` The simplest way that I've found to run the freshly built project was as follows (used from the directory that contains `projectRoot`): ``` virtualenv env source env/bin/activate cd projectRoot pyb cd target/dist/projectRoot-1.0.dev0/dist/ pip install projectRoot-1.0.dev0.tar.gz entryPointScript.py ``` This indeed successfully runs the script with all its dependencies on user-defined packages, and prints: ``` Hello, world! 42 * 42 - 42 = 1722 ``` but the whole procedure seems quite complicated. For comparison, in an analogous situation in SBT, I would just issue the single ``` run ``` command from the SBT-shell -- that's why the seven-step recipe above seems a bit suspicios to me. Is there something like a `pyb run` or `pyb exec` plugin that does the same, but does not require from me that I set up all these environments and install anything? What I'm looking for is the analogon of `sbt run` in SBT or `mvn exec:java` in Maven that will build everything, set up all the classpaths, and then run the class with the `main` method, without leaving any traces outside the project directory. Since there is essentially no difference between the source code and the output of the target, I'm probably missing some obvious way how to run the script. If the `PyBuilder` itself is not needed at all for that, it's fine too: all I want is to somehow get `Hello, world! 42 * 42 - 42 = 1722`-string printed in the terminal.
2018/04/09
[ "https://Stackoverflow.com/questions/49737459", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2707792/" ]
Apparently the following workflow: * pyb publish * pip install .tar.gz * runMyScript.py * uninstall is exactly what is proposed by the creator of PyBuilder [in this talk](http://www.youtube.com/watch?v=iQU18hAjux4&t=14m42s). **Note that the linked video is from 2014. If someone can propose a more streamlined recently provided solution, I'll of course accept that.**
Create a task in build.py (here `test_pack`/`test_app` stand in for your own package and entry module): ``` from sys import path from pybuilder.core import task @task def run(project): path.append("src/main/python") from test_pack import test_app test_app.main() ``` Then try: `pyb run`
16,650,680
The following was ported from the pseudo-code from the Wikipedia article on [Newton's method](http://en.wikipedia.org/wiki/Newton%27s_method): ``` #! /usr/bin/env python3 # https://en.wikipedia.org/wiki/Newton's_method import sys x0 = 1 f = lambda x: x ** 2 - 2 fprime = lambda x: 2 * x tolerance = 1e-10 epsilon = sys.float_info.epsilon maxIterations = 20 for i in range(maxIterations): denominator = fprime(x0) if abs(denominator) < epsilon: print('WARNING: Denominator is too small') break newtonX = x0 - f(x0) / denominator if abs(newtonX - x0) < tolerance: print('The root is', newtonX) break x0 = newtonX else: print('WARNING: Not able to find solution within the desired tolerance of', tolerance) print('The last computed approximate root was', newtonX) ``` **Question** Is there an automated way to calculate some form of `fprime` given some form of `f` in Python 3.x?
2013/05/20
[ "https://Stackoverflow.com/questions/16650680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/216356/" ]
A common way of approximating the derivative of `f` at `x` is using a finite difference: ``` f'(x) = (f(x+h) - f(x))/h Forward difference f'(x) = (f(x+h) - f(x-h))/2h Symmetric ``` The best choice of `h` depends on `x` and `f`: mathematically the difference approaches the derivative as h tends to 0, but the method suffers from loss of accuracy due to catastrophic cancellation if `h` is too small. Also x+h should be distinct from x. Something like `h = x*1e-15` might be appropriate for your application. See also [implementing the derivative in C/C++](https://stackoverflow.com/questions/1559695/implementing-the-derivative-in-c-c). You can avoid approximating f' by using the [secant method](http://en.wikipedia.org/wiki/Secant_method). It doesn't converge as fast as Newton's, but it's computationally cheaper and you avoid the problem of having to calculate the derivative.
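A minimal sketch of both differences in Python; the step sizes are illustrative choices, not tuned values:

```
def fprime_forward(f, x, h=1e-8):
    # forward difference: (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def fprime_central(f, x, h=1e-6):
    # symmetric (central) difference: (f(x+h) - f(x-h)) / 2h
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2 - 2
print(fprime_central(f, 1.0))  # ~2.0, matching the analytic derivative 2*x
```

And a sketch of the secant method, which sidesteps `fprime` entirely by reusing the two most recent iterates:

```
def secant(f, x0, x1, tolerance=1e-10, maxIterations=20):
    for i in range(maxIterations):
        denominator = f(x1) - f(x0)
        if denominator == 0:
            break
        x0, x1 = x1, x1 - f(x1) * (x1 - x0) / denominator
        if abs(x1 - x0) < tolerance:
            break
    return x1

print(secant(lambda x: x ** 2 - 2, 1.0, 2.0))  # ~1.4142135623730951
```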
You can approximate `fprime` any number of ways. One of the simplest would be something like: ``` fprime = lambda x, dx=0.1: (f(x+dx) - f(x-dx))/(2*dx) ``` the idea here is to sample `f` around the point `x`. The sampling region (determined by `dx`) should be small enough that the variation in `f` over that region is approximately linear. The algorithm that I've used is known as the midpoint (central difference) method. You could get more accuracy by using higher-order polynomial fits for most functions, but that would be more expensive to calculate. Of course, you'll always be more accurate and efficient if you know the analytical derivative.
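A quick sketch of dropping that approximation into the question's Newton loop (with a finer `dx` than the illustrative `0.1`):

```
f = lambda x: x ** 2 - 2
fprime = lambda x, dx=1e-6: (f(x + dx) - f(x - dx)) / (2 * dx)

x0 = 1.0
for i in range(20):
    x0 = x0 - f(x0) / fprime(x0)
print(x0)  # ~1.4142135623730951
```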
16,650,680
The following was ported from the pseudo-code from the Wikipedia article on [Newton's method](http://en.wikipedia.org/wiki/Newton%27s_method): ``` #! /usr/bin/env python3 # https://en.wikipedia.org/wiki/Newton's_method import sys x0 = 1 f = lambda x: x ** 2 - 2 fprime = lambda x: 2 * x tolerance = 1e-10 epsilon = sys.float_info.epsilon maxIterations = 20 for i in range(maxIterations): denominator = fprime(x0) if abs(denominator) < epsilon: print('WARNING: Denominator is too small') break newtonX = x0 - f(x0) / denominator if abs(newtonX - x0) < tolerance: print('The root is', newtonX) break x0 = newtonX else: print('WARNING: Not able to find solution within the desired tolerance of', tolerance) print('The last computed approximate root was', newtonX) ``` **Question** Is there an automated way to calculate some form of `fprime` given some form of `f` in Python 3.x?
2013/05/20
[ "https://Stackoverflow.com/questions/16650680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/216356/" ]
A common way of approximating the derivative of `f` at `x` is using a finite difference: ``` f'(x) = (f(x+h) - f(x))/h Forward difference f'(x) = (f(x+h) - f(x-h))/2h Symmetric ``` The best choice of `h` depends on `x` and `f`: mathematically the difference approaches the derivative as h tends to 0, but the method suffers from loss of accuracy due to catastrophic cancellation if `h` is too small. Also x+h should be distinct from x. Something like `h = x*1e-15` might be appropriate for your application. See also [implementing the derivative in C/C++](https://stackoverflow.com/questions/1559695/implementing-the-derivative-in-c-c). You can avoid approximating f' by using the [secant method](http://en.wikipedia.org/wiki/Secant_method). It doesn't converge as fast as Newton's, but it's computationally cheaper and you avoid the problem of having to calculate the derivative.
**Answer** Define the functions `formula` and `derivative` as the following directly after your `import`. ``` def formula(*array): calculate = lambda x: sum(c * x ** p for p, c in enumerate(array)) calculate.coefficients = array return calculate def derivative(function): return (p * c for p, c in enumerate(function.coefficients[1:], 1)) ``` Redefine `f` using `formula` by plugging in the function's coefficients in order of increasing power. ``` f = formula(-2, 0, 1) ``` Redefine `fprime` so that it is automatically created using functions `derivative` and `formula`. ``` fprime = formula(*derivative(f)) ``` That should solve your requirement to automatically calculate `fprime` from `f` in Python 3.x. **Summary** This is the final solution that produces the original answer while automatically calculating `fprime`. ``` #! /usr/bin/env python3 # https://en.wikipedia.org/wiki/Newton's_method import sys def formula(*array): calculate = lambda x: sum(c * x ** p for p, c in enumerate(array)) calculate.coefficients = array return calculate def derivative(function): return (p * c for p, c in enumerate(function.coefficients[1:], 1)) x0 = 1 f = formula(-2, 0, 1) fprime = formula(*derivative(f)) tolerance = 1e-10 epsilon = sys.float_info.epsilon maxIterations = 20 for i in range(maxIterations): denominator = fprime(x0) if abs(denominator) < epsilon: print('WARNING: Denominator is too small') break newtonX = x0 - f(x0) / denominator if abs(newtonX - x0) < tolerance: print('The root is', newtonX) break x0 = newtonX else: print('WARNING: Not able to find solution within the desired tolerance of', tolerance) print('The last computed approximate root was', newtonX) ```
21,397,757
Personally I think it's better to distribute .py files as these will then be compiled by the end-user's own python, which may be more patched. What are the pros and cons of distributing .pyc files versus .py files for a commercial, closed-source python module? In other words, are there any compelling reasons to distribute .pyc files? Edit: In particular, if the .py/.pyc is accompanied by a DLL/SO module which is compiled against a certain version of Python.
2014/01/28
[ "https://Stackoverflow.com/questions/21397757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/906984/" ]
Close the unused file descriptors and it will work fine. In the innermost child: ``` close(f1[1]); ``` In the parent process: ``` close(f1[0]); ``` There is also a syntax error on the line where `write` is called; change it to ``` if (write(f1[1], M1, sizeof(M1)) < 0) ```
change your `if` statement to ``` if (write(f1[1], M1, sizeof(M1)) < 0) ``` instead of ``` if(write(f1[1], M1, sizeof(M1) < 0)) ```
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
A couple thoughts. First, you might use the [`glob`](http://docs.python.org/library/glob.html) module to get smaller groups of files. Second, sorting by line count is going to be very time consuming, as you have to open every file and count lines. If you can partition by byte count, you can avoid opening the files by using the [`stat`](http://docs.python.org/library/stat.html) module. If it's crucial that the split happens at 20 lines, you can at least cut out large swaths of files by figuring out a minimum number of characters that a 20 line file of your type will have, and not opening any file smaller than that.
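A rough sketch of that pre-filter, assuming every line in these files is at least two characters; adjust `MIN_BYTES` to whatever floor actually holds for your data:

```
import glob
import os

MIN_BYTES = 21 * 2  # assumed floor: more than 20 lines of >= 2 bytes each

for name in glob.iglob('/somedir/labels/*'):
    if os.stat(name).st_size < MIN_BYTES:
        continue  # under the stated assumption, too small to hold 21 lines
    # only the survivors get opened for an exact line count
```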
``` import os,shutil os.chdir("/mydir/") numlines=20 destination = os.path.join("/destination","dir1") for file in os.listdir("."): if os.path.isfile(file): flag=0 for n,line in enumerate(open(file)): if n >= numlines: flag=1 break if flag: try: shutil.move(file,destination) except Exception,e: print e else: print "%s moved to %s" %(file,destination) ``` (`enumerate` is 0-based, so reaching index `numlines` means the file has more than `numlines` lines.)
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
you might try using [`glob.iglob`](http://docs.python.org/library/glob.html) that returns an iterator: ``` topdir = os.path.join('/somedir', 'labels', '*') for filename in glob.iglob(topdir): if filelen(filename) > 15: #do stuff ``` Also, please don't use `dir` for a variable name: you're shadowing the built-in. Another major improvement that you can introduce is to your `filelen` function. If you replace it with the following, you'll save a lot of time. Trust me, [what you have now is the slowest alternative](https://stackoverflow.com/questions/845058/how-to-get-line-count-cheaply-in-python): ``` def many_line(fname, many=15): for i, line in enumerate(open(fname)): if i > many: return True return False ```
``` import os,shutil os.chdir("/mydir/") numlines=20 destination = os.path.join("/destination","dir1") for file in os.listdir("."): if os.path.isfile(file): flag=0 for n,line in enumerate(open(file)): if n >= numlines: flag=1 break if flag: try: shutil.move(file,destination) except Exception,e: print e else: print "%s moved to %s" %(file,destination) ``` (`enumerate` is 0-based, so reaching index `numlines` means the file has more than `numlines` lines.)
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
you might try using [`glob.iglob`](http://docs.python.org/library/glob.html) that returns an iterator: ``` topdir = os.path.join('/somedir', 'labels', '*') for filename in glob.iglob(topdir): if filelen(filename) > 15: #do stuff ``` Also, please don't use `dir` for a variable name: you're shadowing the built-in. Another major improvement that you can introduce is to your `filelen` function. If you replace it with the following, you'll save a lot of time. Trust me, [what you have now is the slowest alternative](https://stackoverflow.com/questions/845058/how-to-get-line-count-cheaply-in-python): ``` def many_line(fname, many=15): for i, line in enumerate(open(fname)): if i > many: return True return False ```
A couple thoughts. First, you might use the [`glob`](http://docs.python.org/library/glob.html) module to get smaller groups of files. Second, sorting by line count is going to be very time consuming, as you have to open every file and count lines. If you can partition by byte count, you can avoid opening the files by using the [`stat`](http://docs.python.org/library/stat.html) module. If it's crucial that the split happens at 20 lines, you can at least cut out large swaths of files by figuring out a minimum number of characters that a 20 line file of your type will have, and not opening any file smaller than that.
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
A couple thoughts. First, you might use the [`glob`](http://docs.python.org/library/glob.html) module to get smaller groups of files. Second, sorting by line count is going to be very time consuming, as you have to open every file and count lines. If you can partition by byte count, you can avoid opening the files by using the [`stat`](http://docs.python.org/library/stat.html) module. If it's crucial that the split happens at 20 lines, you can at least cut out large swaths of files by figuring out a minimum number of characters that a 20 line file of your type will have, and not opening any file smaller than that.
How about using a shell script? You could pick one file at a time: ``` for f in *; do if [ "$(wc -l < "$f")" -gt 20 ]; then mv "$f" newfolder/ fi done ``` Please correct me if I am wrong in any way.
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
A couple thoughts. First, you might use the [`glob`](http://docs.python.org/library/glob.html) module to get smaller groups of files. Second, sorting by line count is going to be very time consuming, as you have to open every file and count lines. If you can partition by byte count, you can avoid opening the files by using the [`stat`](http://docs.python.org/library/stat.html) module. If it's crucial that the split happens at 20 lines, you can at least cut out large swaths of files by figuring out a minimum number of characters that a 20 line file of your type will have, and not opening any file smaller than that.
The currently accepted answer just plain doesn't work. This function: ``` def many_line(fname, many=15): for i, line in enumerate(line): if i > many: return True return False ``` has two problems: Firstly, the `fname` arg is not used and the file is not opened. Secondly, the call to `enumerate(line)` will fail because `line` is not defined. Changing `enumerate(line)` to `enumerate(open(fname))` will fix it.
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
A couple thoughts. First, you might use the [`glob`](http://docs.python.org/library/glob.html) module to get smaller groups of files. Second, sorting by line count is going to be very time consuming, as you have to open every file and count lines. If you can partition by byte count, you can avoid opening the files by using the [`stat`](http://docs.python.org/library/stat.html) module. If it's crucial that the split happens at 20 lines, you can at least cut out large swaths of files by figuring out a minimum number of characters that a 20 line file of your type will have, and not opening any file smaller than that.
You can use `os.scandir`, which is a generator and therefore does not read all file names at once (built into Python since 3.5; otherwise just `pip install scandir`). Example: ``` import os for entry in os.scandir(path): do_something_with_file(os.path.join(path, entry.name)) ``` scandir documentation: <https://pypi.org/project/scandir/>
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
you might try using [`glob.iglob`](http://docs.python.org/library/glob.html) that returns an iterator: ``` topdir = os.path.join('/somedir', 'labels', '*') for filename in glob.iglob(topdir): if filelen(filename) > 15: #do stuff ``` Also, please don't use `dir` for a variable name: you're shadowing the built-in. Another major improvement that you can introduce is to your `filelen` function. If you replace it with the following, you'll save a lot of time. Trust me, [what you have now is the slowest alternative](https://stackoverflow.com/questions/845058/how-to-get-line-count-cheaply-in-python): ``` def many_line(fname, many=15): for i, line in enumerate(open(fname)): if i > many: return True return False ```
How about using a shell script? You could pick one file at a time: ``` for f in *; do if [ "$(wc -l < "$f")" -gt 20 ]; then mv "$f" newfolder/ fi done ``` Please correct me if I am wrong in any way.
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
you might try using [`glob.iglob`](http://docs.python.org/library/glob.html) that returns an iterator: ``` topdir = os.path.join('/somedir', 'labels', '*') for filename in glob.iglob(topdir): if filelen(filename) > 15: #do stuff ``` Also, please don't use `dir` for a variable name: you're shadowing the built-in. Another major improvement that you can introduce is to your `filelen` function. If you replace it with the following, you'll save a lot of time. Trust me, [what you have now is the slowest alternative](https://stackoverflow.com/questions/845058/how-to-get-line-count-cheaply-in-python): ``` def many_line(fname, many=15): for i, line in enumerate(open(fname)): if i > many: return True return False ```
The currently accepted answer just plain doesn't work. This function: ``` def many_line(fname, many=15): for i, line in enumerate(line): if i > many: return True return False ``` has two problems: Firstly, the `fname` arg is not used and the file is not opened. Secondly, the call to `enumerate(line)` will fail because `line` is not defined. Changing `enumerate(line)` to `enumerate(open(fname))` will fix it.
2,177,250
I have a folder with 100k text files. I want to put files with over 20 lines in another folder. How do I do this in python? I used os.listdir, but of course, there isn't enough memory for even loading the filenames into memory. Is there a way to get maybe 100 filenames at a time? Here's my code: ``` import os import shutil dir = '/somedir/' def file_len(fname): f = open(fname,'r') for i, l in enumerate(f): pass f.close() return i + 1 filenames = os.listdir(dir+'labels/') i = 0 for filename in filenames: flen = file_len(dir+'labels/'+filename) print flen if flen > 15: i = i+1 shutil.copyfile(dir+'originals/'+filename[:-5], dir+'filteredOrigs/'+filename[:-5]) print i ``` And Output: ``` Traceback (most recent call last): File "filterimage.py", line 13, in <module> filenames = os.listdir(dir+'labels/') OSError: [Errno 12] Cannot allocate memory: '/somedir/' ``` Here's the modified script: ``` import os import shutil import glob topdir = '/somedir' def filelen(fname, many): f = open(fname,'r') for i, l in enumerate(f): if i > many: f.close() return True f.close() return False path = os.path.join(topdir, 'labels', '*') i=0 for filename in glob.iglob(path): print filename if filelen(filename,5): i += 1 print i ``` it works on a folder with fewer files, but with the larger folder, all it prints is "0"... Works on linux server, prints 0 on mac... oh well...
2010/02/01
[ "https://Stackoverflow.com/questions/2177250", "https://Stackoverflow.com", "https://Stackoverflow.com/users/183487/" ]
you might try using [`glob.iglob`](http://docs.python.org/library/glob.html) that returns an iterator: ``` topdir = os.path.join('/somedir', 'labels', '*') for filename in glob.iglob(topdir): if filelen(filename) > 15: #do stuff ``` Also, please don't use `dir` for a variable name: you're shadowing the built-in. Another major improvement that you can introduce is to your `filelen` function. If you replace it with the following, you'll save a lot of time. Trust me, [what you have now is the slowest alternative](https://stackoverflow.com/questions/845058/how-to-get-line-count-cheaply-in-python): ``` def many_line(fname, many=15): for i, line in enumerate(open(fname)): if i > many: return True return False ```
You can use `os.scandir`, which is a generator and therefore does not read all file names at once (built into Python since 3.5; otherwise just `pip install scandir`). Example: ``` import os for entry in os.scandir(path): do_something_with_file(os.path.join(path, entry.name)) ``` scandir documentation: <https://pypi.org/project/scandir/>
50,916,340
I'm looking for some general advice on how to either re-write application code to be non-naive, or whether to abandon neo4j for another data storage model. This is not *only* "subjective", as it relates significantly to specific, correct usage of the neo4j driver in Python and why it performs the way it does with my code. Background: ----------- My team and I have been using neo4j to store graph-friendly data that is initially stored in Python objects. Originally, we were advised by a local/in-house expert to use neo4j, as it seemed to fit our data storage and manipulation/querying requirements. The data are always specific instances of a set of carefully-constructed ontologies. For example (pseudo-data): ``` Superclass1 -contains-> SubclassA Superclass1 -implements->SubclassB Superclass1 -isAssociatedWith-> Superclass2 SubclassB -hasColor-> Color1 Color1 -hasLabel-> string::"Red" ``` ...and so on, to create some rather involved and verbose hierarchies. For prototyping, we were storing these data as sequences of grammatical triples (subject->verb/predicate->object) using RDFLib, and using RDFLib's graph-generator to construct a graph. Now, since this information is just a complicated hierarchy, we just store it in some custom Python objects. We also do this in order to provide an easy API to other devs that need to interface with our core service. We hand them a Python library that is our Object model, and let them populate it with data, or, we populate it and hand it to them for easy reading, and they do what they want with it. To store these objects permanently, and to hopefully accelerate the writing and reading (querying/filtering) of these data, we've built custom object-mapping code that utilizes the official neo4j python driver to write and read these Python objects, recursively, to/from a neo4j database. The Problem: ------------ For large and complicated data sets (e.g. 15k+ nodes and 15k+ relations), the object relational mapping (ORM) portion of our code is too slow, and scales poorly. But neither I, nor my colleague are experts in databases or neo4j. I think we're being naive about how to accomplish this ORM. We began to wonder if it even made sense to use neo4j, when more traditional ORMs (e.g. SQL Alchemy) might just be a better choice. For example, the ORM commit algorithm we have now is a recursive function that commits an object like this (pseudo code): ``` def commit(object): for childstr in object: # For each child object child = getattr(object, childstr) # Get the actual object if attribute is <our object base type): # Open transaction, make nodes and relationship with session.begin_transaction() as tx: <construct Cypher query with: MERGE object (make object node) MERGE child (make its child node) MERGE object-[]->child (create relation) > tx.run(<All 3 merges>) commit(child) # Recursively write the child and its children to neo4j ``` Is it naive to do it like this? Would an OGM library like [Py2neo's OGM](http://py2neo.org/v3/ogm.html) be better, despite ours being customized? I've seen [this](https://stackoverflow.com/questions/8356626/orm-with-graph-databases-like-neo4j-in-python?rq=1) and similar questions that recommend this or that OGM method, but in [this](https://neo4j.com/blog/cypher-write-fast-furious/) article, it says not to use OGMs at all. Must we really just implement every method and benchmark for performance?
It seems like there must be some best practices (other than using the [batch IMPORT](https://neo4j.com/blog/bulk-data-import-neo4j-3-0/), which doesn't fit our use cases). We've read through articles like those linked and seen the various tips on writing better queries, but it seems better to step back and examine the case more generally before attempting to optimize code line by line, although it's clear that we can improve the ORM algorithm to some degree. Does it make sense to write and read large, deep hierarchical objects to/from neo4j using a recursive strategy like this? Is there something in Cypher or the neo4j drivers that we're missing? Or is it better to use something like Py2neo's OGM? Is it best to just abandon neo4j altogether? The benefits of neo4j and Cypher are difficult to ignore, and our data *does* seem to fit well in a graph. Thanks.
2018/06/18
[ "https://Stackoverflow.com/questions/50916340", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1507854/" ]
You could use a capturing group or, to keep `DataHelper.ExecuteProc` out of the matching result, put it in a lookbehind: ``` (?<=DataHelper\.ExecuteProc\(")[^\\"]*(?:\\.[^\\"]*)* ``` See live [demo here](https://regex101.com/r/i3AgFx/1) Breakdown: * `(?<=` Start of positive lookbehind + `DataHelper\.ExecuteProc\("` Match it without consuming it * `)` End of lookbehind * `[^\\"]*(?:\\.[^\\"]*)*` Match a string enclosed in double quotation marks
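For illustration, here is a hedged sketch of the same lookbehind in Python's `re` module (the input line is made up to mimic the C# call in the question):

```
import re

code = 'DataHelper.ExecuteProc("usp_GetOrders", args);'  # hypothetical input

# the fixed-width lookbehind keeps DataHelper.ExecuteProc(" out of the match
pattern = r'(?<=DataHelper\.ExecuteProc\(")[^\\"]*(?:\\.[^\\"]*)*'

print(re.findall(pattern, code))  # ['usp_GetOrders']
```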
You can do it like this: ``` var pattern = @"\bDataHelper\..+?\(""(?<procedure>[^""]*?)"""; var result = Regex.Matches(input, pattern).Cast<Match>().Select(x => x.Groups["procedure"].Value).ToList(); ``` Note the verbatim string (`@"..."`) so the regex escapes survive, and `Regex.Matches` rather than `Regex.Match`, since only a `MatchCollection` can be enumerated with `Cast<Match>()`.
41,053,784
I am new to Python and trying to implement a graph data structure. I have written this code, but I am not getting the desired result. Code: ``` class NODE: def __init__(self): self.distance=0 self.colournode="White" adjlist={} def addno(A,B): global adjlist adjlist[A]=B S=NODE() R=NODE() V=NODE() W=NODE() T=NODE() X=NODE() U=NODE() Y=NODE() addno(S,R) for keys in adjlist: print keys ``` I want the code to print {'S':R} on the final line but it is printing this: ``` <__main__.NODE instance at 0x00000000029E6888> ``` Can anybody tell me what I am doing wrong? Also, what should I do if I want to add another function call like addno(S,E), so that it prints {S:[R,E]}?
2016/12/09
[ "https://Stackoverflow.com/questions/41053784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4825150/" ]
Your node needs to have a label to print. You can't just use the variable name; the node has no way of knowing the name of your variable. ``` class NODE: def __init__(self, name): self.name=name def __repr__(self): return self.name adjlist={} def addno(A,B): global adjlist adjlist[A]=B S=NODE('S') R=NODE('R') addno(S,R) print adjlist >>> {S: R} ``` However, a Python dict may have only one value for each key, so you won't be able to save `{S: R}` and `{S: V}` at the same time. Instead you will need to save a list of nodes: ``` class NODE: def __init__(self, name): self.name=name def __repr__(self): return self.name adjlist={} def addno(A,B): global adjlist if A not in adjlist: adjlist[A] = [] adjlist[A].append(B) S=NODE('S') R=NODE('R') V=NODE('V') W=NODE('W') addno(S,R) addno(S,V) addno(R,W) print adjlist {S: [R, V], R: [W]} ``` As a side note, using unnecessary global variables is a bad habit. Instead, make a class for the graph: ``` class Node: def __init__(self, name): self.name=name def __repr__(self): return self.name class Graph: def __init__(self): self.adjlist={} def addno(self, a, b): if a not in self.adjlist: self.adjlist[a] = [] self.adjlist[a].append(b) def __repr__(self): return str(self.adjlist) G=Graph() S=Node('S') R=Node('R') V=Node('V') W=Node('W') G.addno(S,R) G.addno(S,V) G.addno(R,W) print G >>> {R: [W], S: [R, V]} ```
You get that output because each key is an instance of the `NODE` class (the output of your program itself hints at this: `<__main__.NODE instance at 0x00000000029E6888>`). I think you are trying to implement an adjacency list for some graph algorithm. In those cases you will mostly need the `colournode` and `distance` attributes of the nodes, which you can get by doing: ``` for keys in adjlist: print keys.colournode, keys.distance ```
64,024,941
I am doing object detection using the TensorFlow Object Detection API in Google Colab. This is my directory structure. ``` object_detection/ training/ exported_model/ pipeline.config model_main_tf2.py exporter_main_v2.py ``` I run the command below for training. ``` !python model_main_tf2.py --model_dir=training --pipeline_config_path=pipeline.config ``` I run the command below for exporting the model. > > !python exporter\_main\_v2.py > > --input\_type image\_tensor > > --pipeline\_config\_path pipeline.config > > --trained\_checkpoint\_dir training/ > > --output\_directory exported\_model > > > Neither of the above produces any error, but after running both I am not able to see the exported model in the desired directory (exported\_model in my case). I don't understand what is wrong.
2020/09/23
[ "https://Stackoverflow.com/questions/64024941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7907965/" ]
I found that although training didn't produce any error, it also wasn't successful: it didn't generate the files that a successful training run should produce, such as checkpoints. The `training/` directory was blank. [This](https://github.com/tensorflow/models/blob/master/research/object_detection/exporter_main_v2.py) file's documentation helped me figure out my problem. You can check it for further guidance.
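As a quick hedged sanity check before exporting (not part of the original answer; `training` mirrors the `--model_dir` used above), you can verify that training actually wrote checkpoints:

```
import os
import tensorflow as tf

model_dir = 'training'
# exporter_main_v2.py needs a checkpoint in --trained_checkpoint_dir;
# if this prints None, training never saved one and the export has nothing to do
print(tf.train.latest_checkpoint(model_dir))
print(os.listdir(model_dir))
```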
I followed the [instructions](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md) and used export\_tflite\_graph\_tf2.py instead.
45,934,942
I have just started using Tkinter and am trying to create a simple pop-up box in Python. I copy-pasted some simple code from a website: ``` from Tkinter import * master = Tk() Label(master, text="First Name").grid(row=0) Label(master, text="Last Name").grid(row=1) e1 = Entry(master) e2 = Entry(master) e1.grid(row=0, column=1) e2.grid(row=1, column=1) mainloop() ``` This code is taking a really long time to run; it has been almost 5 minutes! Is it not possible to just run this snippet? Can anybody tell me how to use Tkinter? I am using Jupyter notebook and Python version 2.7. I would request a solution for this version only.
2017/08/29
[ "https://Stackoverflow.com/questions/45934942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368577/" ]
``` from Tkinter import * def printData(firstName, lastName): print(firstName) print(lastName) root.destroy() def get_input(): firstName = entry1.get() lastName = entry2.get() printData(firstName, lastName) root = Tk() #Label 1 label1 = Label(root,text = 'First Name') label1.pack() label1.config(justify = CENTER) entry1 = Entry(root, width = 30) entry1.pack() label3 = Label(root, text="Last Name") label3.pack() label3.config(justify = CENTER) entry2 = Entry(root, width = 30) entry2.pack() button1 = Button(root, text = 'submit') button1.pack() button1.config(command = get_input) root.mainloop() ``` Copy-paste the above code into an editor, save it, and run it using the command ``` python sample.py ``` *Note: the above code is deliberately verbose; I have written it that way to make it easy to understand.*
You can create a popup information window as follows: `showinfo("Window", "Hello World!")` (in Python 2.7 this comes from `tkMessageBox`: `from tkMessageBox import showinfo`). If you want to create a real popup window with an input mask, you will need to create a `Toplevel` widget, which opens a second window. ``` import Tkinter as tk import ttk win = tk.Toplevel() win.wm_title("Window") label = tk.Label(win, text="User input") label.grid(row=0, column=0) button = ttk.Button(win, text="Done", command=win.destroy) button.grid(row=1, column=0) ```
45,934,942
I have just started using Tkinter and am trying to create a simple pop-up box in Python. I copy-pasted some simple code from a website: ``` from Tkinter import * master = Tk() Label(master, text="First Name").grid(row=0) Label(master, text="Last Name").grid(row=1) e1 = Entry(master) e2 = Entry(master) e1.grid(row=0, column=1) e2.grid(row=1, column=1) mainloop() ``` This code is taking a really long time to run; it has been almost 5 minutes! Is it not possible to just run this snippet? Can anybody tell me how to use Tkinter? I am using Jupyter notebook and Python version 2.7. I would request a solution for this version only.
2017/08/29
[ "https://Stackoverflow.com/questions/45934942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368577/" ]
Your code is working just fine. Nevertheless, for those using Python 3, the module name has changed from `Tkinter` to `tkinter`, all in lowercase. Edit the name and you're good to go! In a nutshell. python2: ``` from Tkinter import * ``` python3: ``` from tkinter import * ``` Look at the screenshot below [![Screenshot](https://i.stack.imgur.com/7RBBA.png)](https://i.stack.imgur.com/7RBBA.png)
You can create a popup information window as follows: `showinfo("Window", "Hello World!")` (in Python 2.7 this comes from `tkMessageBox`: `from tkMessageBox import showinfo`). If you want to create a real popup window with an input mask, you will need to create a `Toplevel` widget, which opens a second window. ``` import Tkinter as tk import ttk win = tk.Toplevel() win.wm_title("Window") label = tk.Label(win, text="User input") label.grid(row=0, column=0) button = ttk.Button(win, text="Done", command=win.destroy) button.grid(row=1, column=0) ```
45,934,942
I have just started using Tkinter and am trying to create a simple pop-up box in Python. I copy-pasted some simple code from a website: ``` from Tkinter import * master = Tk() Label(master, text="First Name").grid(row=0) Label(master, text="Last Name").grid(row=1) e1 = Entry(master) e2 = Entry(master) e1.grid(row=0, column=1) e2.grid(row=1, column=1) mainloop() ``` This code is taking a really long time to run; it has been almost 5 minutes! Is it not possible to just run this snippet? Can anybody tell me how to use Tkinter? I am using Jupyter notebook and Python version 2.7. I would request a solution for this version only.
2017/08/29
[ "https://Stackoverflow.com/questions/45934942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368577/" ]
Your code is working just fine. Nevertheless, for those using Python 3, the module name has changed from `Tkinter` to `tkinter`, all in lowercase. Edit the name and you're good to go! In a nutshell. python2: ``` from Tkinter import * ``` python3: ``` from tkinter import * ``` Look at the screenshot below [![Screenshot](https://i.stack.imgur.com/7RBBA.png)](https://i.stack.imgur.com/7RBBA.png)
``` from Tkinter import * def printData(firstName, lastName): print(firstName) print(lastName) root.destroy() def get_input(): firstName = entry1.get() lastName = entry2.get() printData(firstName, lastName) root = Tk() #Label 1 label1 = Label(root,text = 'First Name') label1.pack() label1.config(justify = CENTER) entry1 = Entry(root, width = 30) entry1.pack() label3 = Label(root, text="Last Name") label3.pack() label3.config(justify = CENTER) entry2 = Entry(root, width = 30) entry2.pack() button1 = Button(root, text = 'submit') button1.pack() button1.config(command = get_input) root.mainloop() ``` Copy-paste the above code into an editor, save it, and run it using the command ``` python sample.py ``` *Note: the above code is deliberately verbose; I have written it that way to make it easy to understand.*
45,934,942
I have just started using Tkinter and am trying to create a simple pop-up box in Python. I copy-pasted some simple code from a website: ``` from Tkinter import * master = Tk() Label(master, text="First Name").grid(row=0) Label(master, text="Last Name").grid(row=1) e1 = Entry(master) e2 = Entry(master) e1.grid(row=0, column=1) e2.grid(row=1, column=1) mainloop() ``` This code is taking a really long time to run; it has been almost 5 minutes! Is it not possible to just run this snippet? Can anybody tell me how to use Tkinter? I am using Jupyter notebook and Python version 2.7. I would request a solution for this version only.
2017/08/29
[ "https://Stackoverflow.com/questions/45934942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368577/" ]
``` from Tkinter import * def printData(firstName, lastName): print(firstName) print(lastName) root.destroy() def get_input(): firstName = entry1.get() lastName = entry2.get() printData(firstName, lastName) root = Tk() #Label 1 label1 = Label(root,text = 'First Name') label1.pack() label1.config(justify = CENTER) entry1 = Entry(root, width = 30) entry1.pack() label3 = Label(root, text="Last Name") label3.pack() label3.config(justify = CENTER) entry2 = Entry(root, width = 30) entry2.pack() button1 = Button(root, text = 'submit') button1.pack() button1.config(command = get_input) root.mainloop() ``` Copy-paste the above code into an editor, save it, and run it using the command ``` python sample.py ``` *Note: the above code is deliberately verbose; I have written it that way to make it easy to understand.*
Check it again: the code is executing properly, but you can't see the output in the Jupyter notebook itself. The Tk window opens as a separate application window, so look for it in your taskbar, next to the browser icons. I was also confused initially; check there once.
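If the new window keeps opening behind other applications, a small hedged workaround (standard Tkinter window-manager calls, using the `master = Tk()` root from the question) is to force it to the front:

```
master.lift()                                   # raise the window above the others
master.attributes('-topmost', True)             # pin it on top momentarily
master.after(100, lambda: master.attributes('-topmost', False))  # then release the pin
mainloop()
```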
45,934,942
I have just started using Tkinter and am trying to create a simple pop-up box in Python. I copy-pasted some simple code from a website: ``` from Tkinter import * master = Tk() Label(master, text="First Name").grid(row=0) Label(master, text="Last Name").grid(row=1) e1 = Entry(master) e2 = Entry(master) e1.grid(row=0, column=1) e2.grid(row=1, column=1) mainloop() ``` This code is taking a really long time to run; it has been almost 5 minutes! Is it not possible to just run this snippet? Can anybody tell me how to use Tkinter? I am using Jupyter notebook and Python version 2.7. I would request a solution for this version only.
2017/08/29
[ "https://Stackoverflow.com/questions/45934942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8368577/" ]
Your code is working just fine. Nevertheless, for those using Python 3, the module name has changed from `Tkinter` to `tkinter`, all in lowercase. Edit the name and you're good to go! In a nutshell. python2: ``` from Tkinter import * ``` python3: ``` from tkinter import * ``` Look at the screenshot below [![Screenshot](https://i.stack.imgur.com/7RBBA.png)](https://i.stack.imgur.com/7RBBA.png)
Check it again: the code is executing properly, but you can't see the output in the Jupyter notebook itself. The Tk window opens as a separate application window, so look for it in your taskbar, next to the browser icons. I was also confused initially; check there once.
60,985,999
This code works correctly in Python 2.X. I am trying to use similar code in Python version 3. The problem is that I do not want to use the requests module; I need to make it work using "urllib3". ``` import requests import urllib event = {'url':'http://google.com', 'email':'abc@gmail.com', 'title':'test'} url = event['url'] if event['email']: email=event['email'] if event['title']: title=event['title'] url1 = urllib.parse.unquote(url) myfile=urllib.request.urlopen(url1) requests.post("https://api.mailgun.net/v3/xxx.mailgun.org/messages", auth=("api", "key-xxx"), files=[("attachment", myfile) ], data={"from": "Excited User <excited-user@example.com>", "to": email, "subject": title, "text": "Testing some awesomness with attachments!", "html": myfile}) ``` I am getting a TypeError while trying to run this code: ``` import urllib3 event = {'url':'http://oksoft.blogspot.com', 'email':'abc@gmail.com', 'title':'test'} url = event['url'] if event['email']: email=event['email'] if event['title']: title=event['title'] url1 = urllib.parse.unquote(url) myfile=urllib.request.urlopen(url1) http = urllib3.PoolManager() url = "https://api.mailgun.net/v3/xxx.mailgun.org/messages" params={"from": "Excited User <excited-user@example.com>", "to": email, "subject": title, "text": "Testing some awesomness with attachments!", "html": myfile} http.request( "POST", url, headers={"Content-Type": "application/json", "api":"key-xxx"}, body= params ) ```
2020/04/02
[ "https://Stackoverflow.com/questions/60985999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/139150/" ]
You can do something like this: ``` Where x.RoleId == 2 && (loc == null || s.LocationId == loc) ```
Simply extract your managers and filter them if needed. That way you can also easily apply more filters without hurting code readability. ``` var managers = CSDB.Managers.AsQueryable(); if(loc > 0) managers = managers.Where(man => man.LocationId == loc); var myResult = from allocation in CSDB.Allocations join manager in managers on allocation.ManagerId equals manager.Id where allocation.RoleId == 2 select new { allocation.name, allocation.Date }; ```
60,985,999
This code works correctly in Python 2.X. I am trying to use similar code in Python version 3. The problem is that I do not want to use the requests module; I need to make it work using "urllib3". ``` import requests import urllib event = {'url':'http://google.com', 'email':'abc@gmail.com', 'title':'test'} url = event['url'] if event['email']: email=event['email'] if event['title']: title=event['title'] url1 = urllib.parse.unquote(url) myfile=urllib.request.urlopen(url1) requests.post("https://api.mailgun.net/v3/xxx.mailgun.org/messages", auth=("api", "key-xxx"), files=[("attachment", myfile) ], data={"from": "Excited User <excited-user@example.com>", "to": email, "subject": title, "text": "Testing some awesomness with attachments!", "html": myfile}) ``` I am getting a TypeError while trying to run this code: ``` import urllib3 event = {'url':'http://oksoft.blogspot.com', 'email':'abc@gmail.com', 'title':'test'} url = event['url'] if event['email']: email=event['email'] if event['title']: title=event['title'] url1 = urllib.parse.unquote(url) myfile=urllib.request.urlopen(url1) http = urllib3.PoolManager() url = "https://api.mailgun.net/v3/xxx.mailgun.org/messages" params={"from": "Excited User <excited-user@example.com>", "to": email, "subject": title, "text": "Testing some awesomness with attachments!", "html": myfile} http.request( "POST", url, headers={"Content-Type": "application/json", "api":"key-xxx"}, body= params ) ```
2020/04/02
[ "https://Stackoverflow.com/questions/60985999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/139150/" ]
You can do something like this: ``` Where x.RoleId == 2 && (loc == null || s.LocationId == loc) ```
Also, you can do something like this. ``` Where x.RoleId == 2 && (loc?.Equals(s.LocationId) ?? true) ``` If `loc` is just an `int`, I would prefer to use a slightly changed version of [@Salah Akbari's answer](https://stackoverflow.com/a/60986050/2946329): ``` Where x.RoleId == 2 && (loc == 0 || s.LocationId == loc) ```
60,985,999
This code works correctly in Python 2.X. I am trying to use similar code in Python version 3. The problem is that I do not want to use the requests module; I need to make it work using "urllib3". ``` import requests import urllib event = {'url':'http://google.com', 'email':'abc@gmail.com', 'title':'test'} url = event['url'] if event['email']: email=event['email'] if event['title']: title=event['title'] url1 = urllib.parse.unquote(url) myfile=urllib.request.urlopen(url1) requests.post("https://api.mailgun.net/v3/xxx.mailgun.org/messages", auth=("api", "key-xxx"), files=[("attachment", myfile) ], data={"from": "Excited User <excited-user@example.com>", "to": email, "subject": title, "text": "Testing some awesomness with attachments!", "html": myfile}) ``` I am getting a TypeError while trying to run this code: ``` import urllib3 event = {'url':'http://oksoft.blogspot.com', 'email':'abc@gmail.com', 'title':'test'} url = event['url'] if event['email']: email=event['email'] if event['title']: title=event['title'] url1 = urllib.parse.unquote(url) myfile=urllib.request.urlopen(url1) http = urllib3.PoolManager() url = "https://api.mailgun.net/v3/xxx.mailgun.org/messages" params={"from": "Excited User <excited-user@example.com>", "to": email, "subject": title, "text": "Testing some awesomness with attachments!", "html": myfile} http.request( "POST", url, headers={"Content-Type": "application/json", "api":"key-xxx"}, body= params ) ```
2020/04/02
[ "https://Stackoverflow.com/questions/60985999", "https://Stackoverflow.com", "https://Stackoverflow.com/users/139150/" ]
Also, you can do something like this. ``` Where x.RoleId == 2 && (loc?.Equals(s.LocationId) ?? true) ``` If `loc` is just an `int`, I would prefer to use a slightly changed version of [@Salah Akbari's answer](https://stackoverflow.com/a/60986050/2946329): ``` Where x.RoleId == 2 && (loc == 0 || s.LocationId == loc) ```
Simply extract your managers and filter them if needed. That way you can also easily apply more filters without hurting code readability. ``` var managers = CSDB.Managers.AsQueryable(); if(loc > 0) managers = managers.Where(man => man.LocationId == loc); var myResult = from allocation in CSDB.Allocations join manager in managers on allocation.ManagerId equals manager.Id where allocation.RoleId == 2 select new { allocation.name, allocation.Date }; ```
32,829,504
In Python, is a mathematical operator classed as an integer? For example, why isn't this code working? ``` import random score = 0 randomnumberforq = (random.randint(1,10)) randomoperator = (random.randint(0,2)) operator = ['*','+','-'] answer = (randomnumberforq ,operator[randomoperator], randomnumberforq) useranswer = input(int(randomnumberforq)+int(operator[randomoperator])+ int(randomnumberforq)) if answer == useranswer: print('correct') else: print('wrong') ```
2015/09/28
[ "https://Stackoverflow.com/questions/32829504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5080233/" ]
You can't just concatenate an operator to a couple of numbers and expect it to be evaluated. You could use `eval` to evaluate the final string. ``` answer = eval(str(randomnumberforq) + operator[randomoperator] + str(randomnumberforq)) ``` A better way to accomplish what you're attempting is to use the functions found in the `operator` module. By assigning the functions to a list, you can choose which one to call randomly: ``` import random from operator import mul, add, sub if __name__ == '__main__': score = 0 randomnumberforq = random.randint(1,10) randomoperator = random.randint(0,2) operator = [[mul, ' * '], [add, ' + '], [sub, ' - ']] answer = operator[randomoperator][0](randomnumberforq, randomnumberforq) useranswer = input(str(randomnumberforq) + operator[randomoperator][1] + str(randomnumberforq) + ' = ') if answer == useranswer: print('correct') else: print('wrong') ```
You try to convert a string to an integer, but it isn't a number: ``` int(operator[randomoperator]) ``` The operators in your "operator" list are strings which don't represent numbers, so they can't be converted to integer values. On the other hand, the input() function expects a string as its parameter value. So write: ``` ... = input(str(numberValue) + operatorString + str(numberValue)) ``` The + operator can be used to concatenate strings, but Python requires that the operands on both sides are strings. That's why I added the str() calls to cast the number values to strings.
32,829,504
In Python, is a mathematical operator classed as an integer? For example, why isn't this code working? ``` import random score = 0 randomnumberforq = (random.randint(1,10)) randomoperator = (random.randint(0,2)) operator = ['*','+','-'] answer = (randomnumberforq ,operator[randomoperator], randomnumberforq) useranswer = input(int(randomnumberforq)+int(operator[randomoperator])+ int(randomnumberforq)) if answer == useranswer: print('correct') else: print('wrong') ```
2015/09/28
[ "https://Stackoverflow.com/questions/32829504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5080233/" ]
That depends on what you're trying to do. You've given us no sample input or output, no comments, and no error message. It looks like you're trying to write a simple practice engine for arithmetic. If so, then your basic problem is that you don't understand the operations allowed in programming. You can't just throw symbols in a row and expect the computer to figure out how it's supposed to combine them. Your assignment statements for answer and useranswer are structurally flawed. The first gives you a tuple (a number, a string, and a number); the second dies because you tried to convert a symbol (such as \*) to an integer. For a more advanced user, I would recommend the `eval` operation. For you, however ... When you pick the random operator, you'll need to check to see which one you got. Write a 3-branched "if" to handle the three possibilities. Here's what the head of the first might look like: ``` if randomoperator == 0: operator = '*' answer = randomnumberforq * randomnumberforq elif randomoperator == 1: ... ``` Note that the two numbers in the operation are the same. If you want different numbers, you have to call randint twice. Does this get you moving ... at a coding level with which you're comfortable?
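For completeness, here is a hedged sketch of what the full 3-branch version might look like (variable names borrowed from the question; `int(input(...))` is used so the comparison is numeric):

```
import random

number1 = random.randint(1, 10)
number2 = random.randint(1, 10)   # second call so the two operands can differ
randomoperator = random.randint(0, 2)

if randomoperator == 0:
    operator = '*'
    answer = number1 * number2
elif randomoperator == 1:
    operator = '+'
    answer = number1 + number2
else:
    operator = '-'
    answer = number1 - number2

useranswer = int(input('%d %s %d = ' % (number1, operator, number2)))
print('correct' if useranswer == answer else 'wrong')
```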
You try to convert a string to an integer, but it isn't a number: ``` int(operator[randomoperator]) ``` The operators in your "operator" list are strings which don't represent numbers, so they can't be converted to integer values. On the other hand, the input() function expects a string as its parameter value. So write: ``` ... = input(str(numberValue) + operatorString + str(numberValue)) ``` The + operator can be used to concatenate strings, but Python requires that the operands on both sides are strings. That's why I added the str() calls to cast the number values to strings.
32,829,504
In Python, is a mathematical operator classed as an integer? For example, why isn't this code working? ``` import random score = 0 randomnumberforq = (random.randint(1,10)) randomoperator = (random.randint(0,2)) operator = ['*','+','-'] answer = (randomnumberforq ,operator[randomoperator], randomnumberforq) useranswer = input(int(randomnumberforq)+int(operator[randomoperator])+ int(randomnumberforq)) if answer == useranswer: print('correct') else: print('wrong') ```
2015/09/28
[ "https://Stackoverflow.com/questions/32829504", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5080233/" ]
You can't just concatenate an operator to a couple of numbers and expect it to be evaluated. You could use `eval` to evaluate the final string. ``` answer = eval(str(randomnumberforq) + operator[randomoperator] + str(randomnumberforq)) ``` A better way to accomplish what you're attempting is to use the functions found in the `operator` module. By assigning the functions to a list, you can choose which one to call randomly: ``` import random from operator import mul, add, sub if __name__ == '__main__': score = 0 randomnumberforq = random.randint(1,10) randomoperator = random.randint(0,2) operator = [[mul, ' * '], [add, ' + '], [sub, ' - ']] answer = operator[randomoperator][0](randomnumberforq, randomnumberforq) useranswer = input(str(randomnumberforq) + operator[randomoperator][1] + str(randomnumberforq) + ' = ') if answer == useranswer: print('correct') else: print('wrong') ```
That depends on what you're trying to do. You've given us no sample input or output, no comments, and no error message. It looks like you're trying to write a simple practice engine for arithmetic. If so, then your basic problem is that you don't understand the operations allowed in programming. You can't just throw symbols in a row and expect the computer to figure out how it's supposed to combine them. Your assignment statements for answer and useranswer are structurally flawed. The first gives you a tuple (a number, a string, and a number); the second dies because you tried to convert a symbol (such as \*) to an integer. For a more advanced user, I would recommend the `eval` operation. For you, however ... When you pick the random operator, you'll need to check to see which one you got. Write a 3-branched "if" to handle the three possibilities. Here's what the head of the first might look like: ``` if randomoperator == 0: operator = '*' answer = randomnumberforq * randomnumberforq elif randomoperator == 1: ... ``` Note that the two numbers in the operation are the same. If you want different numbers, you have to call randint twice. Does this get you moving ... at a coding level with which you're comfortable?
26,453,920
My problem is that I'm trying to pass a `list` as a variable to a function, and I'd like to multi-thread the function processing. I can't seem to use `pool.map` because it only accepts iterables. I can't seem to use `pool.apply` because it seems to block the pool while it works, so I don't really understand how it allows multi-threading at all (admittedly, I don't seem to understand anything about multi-threading). I tried `pool.apply_async`, but the program finishes in seconds, and only appears to process about 20000 total computations. Here's some pseudo-code for it. ``` import MySQLdb from multiprocessing import Pool def some_math(x, y): f(x[1], x[2], y[1], y[2]) return f def distance(x): x_distances = [] for y in all_y: distance = some_math(x, y) if distance > 1000000: continue else: x_distances.append(x[0], y[0],distance) mysql.executemany(sql_update, x_distances) mydb.commit() all_x = [] all_y = [] sql_x = 'SELECT id, lat, lng FROM table' sql_y = 'SELECT id, lat, lng FROM table' sql_update = 'INSERT INTO distances (id_x, id_y, distance) VALUES (%s, %s, %S)' cursor.execute(sql_x) all_x = cursor.fetchall() cursor.execute(sql_y) all_y = cursor.fetchall() p = Pool(4) for x in all_x: p.apply_async(distance, x) ``` OR, if using map: ``` p = Pool(4) for x in all_x: p.map(distance, x) ``` The error returned is: Processing A for distances... ``` Traceback (most recent call last): File "./distance-house.py", line 94, in <module> p.map(range, row) File "/usr/lib/python2.7/multiprocessing/pool.py", line 251, in map return self.map_async(func, iterable, chunksize).get() File "/usr/lib/python2.7/multiprocessing/pool.py", line 558, in get raise self._value TypeError: 'float' object has no attribute '__getitem__' ``` I am trying to multi-thread a long computation: calculating the distance between something like 10,000 points on a many-to-many basis. Currently, the process is taking several days, and I figure that multiprocessing the results could really improve the efficiency. I'm all ears for suggestions.
2014/10/19
[ "https://Stackoverflow.com/questions/26453920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3972123/" ]
You can use `pool.map`: ``` p = Pool(4) p.map(distance, all_x) ``` as per the first example in the [doc](https://docs.python.org/2/library/multiprocessing.html). It will do the iteration for you!
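If `distance` should not rely on the module-level global `all_y`, one hedged alternative (a sketch reusing `some_math`, `all_x`, and `all_y` from the question) is to bind the shared data with `functools.partial`:

```
from functools import partial
from multiprocessing import Pool

def distance(all_y, x):
    # the fixed argument comes first so partial() can bind it;
    # pool.map then supplies each x from all_x
    return [some_math(x, y) for y in all_y]

if __name__ == '__main__':
    p = Pool(4)
    results = p.map(partial(distance, all_y), all_x)
```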
Another way to approach it is to pack your variables inside a tuple and unpack them inside the function. Example: ``` def Add(z): x,y = z return x + y a = [ 0 , 1, 2, 3] b = [ 5, 6, 7, 8] ab = (a,b) Add(ab) ```
27,830,428
I have been trying to compact my code for a primality test in python so that it makes use of list comprehensions, but for some reason it doesn't return the correct results: ``` def isPrime(n): if n > 1: for i in range(2, int(n ** 0.5) + 1): if n % i == 0: return False return True ``` That's the code for my current primality test, but I want to condense it: ``` def isPrime(n): if n > 1: return [False for i in range(2, int(n ** 0.5) + 1) if n % i == 0] return True ``` I tried the above, but it outputs all non-prime integers up to n. What am I doing wrong?
2015/01/07
[ "https://Stackoverflow.com/questions/27830428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4430875/" ]
As you want `False` if **any** lesser number is a divisor, code it directly that way: ``` def isPrime(n): return n > 1 and not any(i for i in range(2, int(n ** 0.5) + 1) if n % i == 0) ``` (The `n > 1` guard keeps 0 and 1 non-prime, matching the intent of the original `if n > 1` check.) Note that this uses a **genexp**, not a **listcomp**, because that allows `any` to terminate the whole operation as soon as it finds any suitable `i` divisor and thus knows `n` cannot be prime. List comprehensions generate an in-memory list of all their items, while generator expressions yield items one at a time, and only as long as they're being asked for "the next one" (by a `for` loop, an accumulator such as `any` or `all`, or directly by the `next` built-in).
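To see that short-circuiting in action, here is a small hedged demonstration (the `noisy_divisor` helper is made up purely to show how far the scan gets):

```
def noisy_divisor(n, i):
    print('testing', i)          # announce each candidate divisor
    return n % i == 0

n = 1000000
print(not any(noisy_divisor(n, i) for i in range(2, int(n ** 0.5) + 1)))
# prints 'testing 2' and then False: any() stopped at the very first divisor
```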
you can use `all`: ``` >>> def prime_check(n): ... if n > 1: ... return all(False for i in range(2, int(n ** 0.5) + 1) if n % i == 0) ... >>> prime_check(6) False >>> prime_check(23) True >>> prime_check(108) False >>> prime_check(111) False >>> prime_check(101) True ```
27,830,428
I have been trying to compact my code for a primality test in python so that it makes use of list comprehensions, but for some reason it doesn't return the correct results: ``` def isPrime(n): if n > 1: for i in range(2, int(n ** 0.5) + 1): if n % i == 0: return False return True ``` That's the code for my current primality test, but I want to condense it: ``` def isPrime(n): if n > 1: return [False for i in range(2, int(n ** 0.5) + 1) if n % i == 0] return True ``` I tried the above, but it outputs all non-prime integers up to n. What am I doing wrong?
2015/01/07
[ "https://Stackoverflow.com/questions/27830428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4430875/" ]
As you want `False` if **any** lesser number is a divisor, code it directly that way: ``` def isPrime(n): return n > 1 and not any(i for i in range(2, int(n ** 0.5) + 1) if n % i == 0) ``` (The `n > 1` guard keeps 0 and 1 non-prime, matching the intent of the original `if n > 1` check.) Note that this uses a **genexp**, not a **listcomp**, because that allows `any` to terminate the whole operation as soon as it finds any suitable `i` divisor and thus knows `n` cannot be prime. List comprehensions generate an in-memory list of all their items, while generator expressions yield items one at a time, and only as long as they're being asked for "the next one" (by a `for` loop, an accumulator such as `any` or `all`, or directly by the `next` built-in).
The problem is that a non-empty list, even one containing only `False`, evaluates to boolean `True`: ``` >>> isPrime(4) [False] >>> bool([False]) True ```
32,622,825
I need to get some numbers from this website <http://www.preciodolar.com/> But the data I need takes a little time to load, and the page shows a 'wait' message until it loads completely. I used findall and some regular expressions to get the data I need, but when I execute, Python gives me the 'wait' message that appears before the data loads. Is there a way to make Python 'wait' until all the data is loaded? My code looks like this: ``` import urllib.request from re import findall def divisas(): pag = urllib.request.urlopen('http://www.preciodolar.com/') html = str(pag.read()) brasil = findall('<td class="usdbrl_buy">(.*?)</td>',html) return brasil ```
2015/09/17
[ "https://Stackoverflow.com/questions/32622825", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3435341/" ]
As you are using ASP.NET, you can use the following two options in Page Load. Option 1: `Request.ServerVariables["HTTP_REFERER"]` Note, though, that browsers can block or omit this value (leaving it empty). Option 2: You can check the `Request.UrlReferrer` of the current `HttpRequest`: it will usually contain the page the user is coming from (this also depends on the browser). Reference: [how do I determine where the user came from in asp.net?](https://stackoverflow.com/questions/3166113/how-do-i-determine-where-the-user-came-from-in-asp-net) <https://msdn.microsoft.com/en-us/library/system.web.httprequest.urlreferrer%28v=vs.110%29.aspx>
The Session\_Start event is not suitable for this kind of thing. Session\_Start runs when a user first enters your application; think of it like the first page load. You can use a query string parameter to determine where the user was redirected from. For example, if the user was redirected from sso.aspx to default.aspx, use a URL like this: default.aspx?previouspage=sso Then check the previouspage query string parameter in default.aspx to decide whether to show or hide your panel.
48,996,494
I have two network interfaces (wifi and ethernet), both with internet access. Let's say my interfaces are `eth` (ethernet) and `wlp2` (wifi). I need specific requests to go through the `eth` interface and others through `wlp2`. Something like: ``` # Through "eth" request.post(url="http://myapi.com/store_ip", iface="eth") # Through "wlp2" request.post(url="http://myapi.com/log", iface="wlp2") ``` I'm using `requests`, but I can use `pycurl` or `urllib` if there isn't any way to do it with `requests`. [How to specify source interface in python requests module?](https://stackoverflow.com/questions/26335544/how-to-specify-source-interface-in-python-requests-module) refers to [Requests, bind to an ip](https://stackoverflow.com/questions/12585317/requests-bind-to-an-ip) and it doesn't work.
2018/02/26
[ "https://Stackoverflow.com/questions/48996494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4585081/" ]
I found a way using `pycurl`. This works like a charm. ``` import pycurl from io import BytesIO import json def curl_post(url, data, iface=None): c = pycurl.Curl() buffer = BytesIO() c.setopt(pycurl.URL, url) c.setopt(pycurl.POST, True) c.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json']) c.setopt(pycurl.TIMEOUT, 10) c.setopt(pycurl.WRITEFUNCTION, buffer.write) c.setopt(pycurl.POSTFIELDS, data) if iface: c.setopt(pycurl.INTERFACE, iface) c.perform() # Json response resp = buffer.getvalue().decode('UTF-8') # Check the response is JSON; if not, there was an error try: resp = json.loads(resp) except json.decoder.JSONDecodeError: pass buffer.close() c.close() return resp if __name__ == '__main__': dat = {"id": 52, "configuration": [{"eno1": {"address": "192.168.1.1"}}]} res = curl_post("http://127.0.0.1:5000/network_configuration/", json.dumps(dat), "wlp2") print(res) ``` I'm leaving the question open, hoping that someone can give an answer using `requests`.
Try changing the internal IP (192.168.0.200) in the code below to the IP address assigned to the interface you want to use. ``` import requests from requests_toolbelt.adapters import source def check_ip(inet_addr): s = requests.Session() iface = source.SourceAddressAdapter(inet_addr) s.mount('http://', iface) s.mount('https://', iface) url = 'https://emapp.cc/get_my_ip' resp = s.get(url) print(resp.text) if __name__ == '__main__': check_ip('192.168.0.200') ```
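To map an interface name such as `wlp2` to the IP address this adapter needs, one hedged option is the third-party `netifaces` package (`pip install netifaces`; the interface name is an assumption, adjust it to your system):

```
import netifaces

def iface_ip(iface):
    # first IPv4 address currently bound to the interface
    return netifaces.ifaddresses(iface)[netifaces.AF_INET][0]['addr']

check_ip(iface_ip('wlp2'))  # reuses check_ip() from the snippet above
```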
48,996,494
I have two network interfaces (wifi and ethernet), both with internet access. Let's say my interfaces are `eth` (ethernet) and `wlp2` (wifi). I need specific requests to go through the `eth` interface and others through `wlp2`. Something like: ``` # Through "eth" request.post(url="http://myapi.com/store_ip", iface="eth") # Through "wlp2" request.post(url="http://myapi.com/log", iface="wlp2") ``` I'm using `requests`, but I can use `pycurl` or `urllib` if there isn't any way to do it with `requests`. [How to specify source interface in python requests module?](https://stackoverflow.com/questions/26335544/how-to-specify-source-interface-in-python-requests-module) refers to [Requests, bind to an ip](https://stackoverflow.com/questions/12585317/requests-bind-to-an-ip) and it doesn't work.
2018/02/26
[ "https://Stackoverflow.com/questions/48996494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4585081/" ]
I found a way using `pycurl`. This works like a charm. ``` import pycurl from io import BytesIO import json def curl_post(url, data, iface=None): c = pycurl.Curl() buffer = BytesIO() c.setopt(pycurl.URL, url) c.setopt(pycurl.POST, True) c.setopt(pycurl.HTTPHEADER, ['Content-Type: application/json']) c.setopt(pycurl.TIMEOUT, 10) c.setopt(pycurl.WRITEFUNCTION, buffer.write) c.setopt(pycurl.POSTFIELDS, data) if iface: c.setopt(pycurl.INTERFACE, iface) c.perform() # Json response resp = buffer.getvalue().decode('UTF-8') # Check the response is JSON; if not, there was an error try: resp = json.loads(resp) except json.decoder.JSONDecodeError: pass buffer.close() c.close() return resp if __name__ == '__main__': dat = {"id": 52, "configuration": [{"eno1": {"address": "192.168.1.1"}}]} res = curl_post("http://127.0.0.1:5000/network_configuration/", json.dumps(dat), "wlp2") print(res) ``` I'm leaving the question open, hoping that someone can give an answer using `requests`.
If you want to do this on Linux you could use the `SO_BINDTODEVICE` flag for `setsockopt` (check [man 7 socket](https://man7.org/linux/man-pages/man7/socket.7.html#:%7E:text=since%20Linux%204.6.-,SO_BINDTODEVICE,-Bind%20this%20socket) for more details). In fact, it's what's used [by curl](https://github.com/curl/curl/blob/3df8d08d00a82f0bfc076ca1000abb209e0b9770/lib/connect.c#L305) if you use the `--interface` [option](https://curl.se/docs/manpage.html#--interface) on Linux. But keep in mind that `SO_BINDTODEVICE` requires root permissions (`CAP_NET_RAW`, although there were [some attempts](https://patchwork.ozlabs.org/project/netdev/patch/20200331132009.1306283-1-vincent@bernat.ch/) to change this) and `curl` falls back to a regular `bind` trick if `SO_BINDTODEVICE` fails. Here's a sample `curl` strace when it fails: ``` strace -f -e setsockopt,bind curl --interface eth2 https://ifconfig.me/ strace: Process 18208 attached [pid 18208] +++ exited with 0 +++ setsockopt(3, SOL_TCP, TCP_NODELAY, [1], 4) = 0 setsockopt(3, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 setsockopt(3, SOL_TCP, TCP_KEEPIDLE, [60], 4) = 0 setsockopt(3, SOL_TCP, TCP_KEEPINTVL, [60], 4) = 0 setsockopt(3, SOL_SOCKET, SO_BINDTODEVICE, "eth2\0", 5) = -1 EPERM (Operation not permitted) bind(3, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("192.168.8.1")}, 16) = 0 # curl falls back to regular bind 127.0.0.1+++ exited with 0 +++ ``` Also, I want to point out that using a regular `bind` does not always guarantee that traffic will go through the specified interface ([@MarSoft's answer](https://stackoverflow.com/a/61581069/4042398) uses a plain `bind`). On Linux, only `SO_BINDTODEVICE` guarantees that traffic will go through the specified device. Here's an example of how to use `SO_BINDTODEVICE` with `requests` and [requests-toolbelt](https://pypi.org/project/requests-toolbelt/) (as I said, it requires `CAP_NET_RAW` [permissions](https://man7.org/linux/man-pages/man7/capabilities.7.html#:%7E:text=listen%20to%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20multicasts.-,CAP_NET_RAW,-*%20Use%20RAW%20and)). ```py import socket import requests from requests_toolbelt.adapters.socket_options import SocketOptionsAdapter session = requests.Session() # set interface here options = [(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b"eth0")] for prefix in ('http://', 'https://'): session.mount(prefix, SocketOptionsAdapter(socket_options=options)) print(session.get("https://ifconfig.me/").text) ``` Alternatively, if you don't want to use `requests-toolbelt` you can implement the adapter class yourself: ```py import socket import requests from requests import adapters from urllib3.poolmanager import PoolManager class InterfaceAdapter(adapters.HTTPAdapter): def __init__(self, **kwargs): self.iface = kwargs.pop('iface', None) super(InterfaceAdapter, self).__init__(**kwargs) def _socket_options(self): if self.iface is None: return [] else: return [(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, self.iface)] def init_poolmanager(self, connections, maxsize, block=False): self.poolmanager = PoolManager( num_pools=connections, maxsize=maxsize, block=block, socket_options=self._socket_options() ) session = requests.Session() for prefix in ('http://', 'https://'): session.mount(prefix, InterfaceAdapter(iface=b'eth0')) print(session.get("https://ifconfig.me/").text) ```
48,996,494
I have two network interfaces (wifi and ethernet), both with internet access. Let's say my interfaces are `eth` (ethernet) and `wlp2` (wifi). I need specific requests to go through the `eth` interface and others through `wlp2`. Something like: ``` # Through "eth" request.post(url="http://myapi.com/store_ip", iface="eth") # Through "wlp2" request.post(url="http://myapi.com/log", iface="wlp2") ``` I'm using `requests`, but I can use `pycurl` or `urllib` if there isn't any way to do it with `requests`. [How to specify source interface in python requests module?](https://stackoverflow.com/questions/26335544/how-to-specify-source-interface-in-python-requests-module) refers to [Requests, bind to an ip](https://stackoverflow.com/questions/12585317/requests-bind-to-an-ip) and it doesn't work.
2018/02/26
[ "https://Stackoverflow.com/questions/48996494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4585081/" ]
Here is the solution for the Requests library without monkey-patching anything. This function will create a Session bound to the given IP address. It is up to you to determine the IP address of the desired network interface. Tested to work with `requests==2.23.0`. ``` import requests def session_for_src_addr(addr: str) -> requests.Session: """ Create `Session` which will bind to the specified local address rather than auto-selecting it. """ session = requests.Session() for prefix in ('http://', 'https://'): session.get_adapter(prefix).init_poolmanager( # those are default values from HTTPAdapter's constructor connections=requests.adapters.DEFAULT_POOLSIZE, maxsize=requests.adapters.DEFAULT_POOLSIZE, # This should be a tuple of (address, port). Port 0 means auto-selection. source_address=(addr, 0), ) return session # usage example: s = session_for_src_addr('192.168.1.12') s.get('https://httpbin.org/ip') ``` Be warned though that this approach is identical to `curl`'s `--interface` option, and won't help in some cases. Depending on your routing configuration, it might happen that even though you bind to a specific IP address, the request will go through some other interface. So if this answer does not work for you, first check whether `curl http://httpbin.org/ip --interface myinterface` works as expected.
Try changing the internal IP (192.168.0.200) in the code below to the IP address assigned to the interface you want to use. ``` import requests from requests_toolbelt.adapters import source def check_ip(inet_addr): s = requests.Session() iface = source.SourceAddressAdapter(inet_addr) s.mount('http://', iface) s.mount('https://', iface) url = 'https://emapp.cc/get_my_ip' resp = s.get(url) print(resp.text) if __name__ == '__main__': check_ip('192.168.0.200') ```
48,996,494
I have two network interfaces (wifi and ethernet), both with internet access. Let's say my interfaces are `eth` (ethernet) and `wlp2` (wifi). I need specific requests to go through the `eth` interface and others through `wlp2`. Something like: ``` # Through "eth" request.post(url="http://myapi.com/store_ip", iface="eth") # Through "wlp2" request.post(url="http://myapi.com/log", iface="wlp2") ``` I'm using `requests`, but I can use `pycurl` or `urllib` if there isn't any way to do it with `requests`. [How to specify source interface in python requests module?](https://stackoverflow.com/questions/26335544/how-to-specify-source-interface-in-python-requests-module) refers to [Requests, bind to an ip](https://stackoverflow.com/questions/12585317/requests-bind-to-an-ip) and it doesn't work.
2018/02/26
[ "https://Stackoverflow.com/questions/48996494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4585081/" ]
If you want to do this on Linux you could use the `SO_BINDTODEVICE` flag for `setsockopt` (check [man 7 socket](https://man7.org/linux/man-pages/man7/socket.7.html#:%7E:text=since%20Linux%204.6.-,SO_BINDTODEVICE,-Bind%20this%20socket) for more details). In fact, it's what's used [by curl](https://github.com/curl/curl/blob/3df8d08d00a82f0bfc076ca1000abb209e0b9770/lib/connect.c#L305) if you use the `--interface` [option](https://curl.se/docs/manpage.html#--interface) on Linux. But keep in mind that `SO_BINDTODEVICE` requires root permissions (`CAP_NET_RAW`, although there were [some attempts](https://patchwork.ozlabs.org/project/netdev/patch/20200331132009.1306283-1-vincent@bernat.ch/) to change this) and `curl` falls back to a regular `bind` trick if `SO_BINDTODEVICE` fails. Here's a sample `curl` strace when it fails: ``` strace -f -e setsockopt,bind curl --interface eth2 https://ifconfig.me/ strace: Process 18208 attached [pid 18208] +++ exited with 0 +++ setsockopt(3, SOL_TCP, TCP_NODELAY, [1], 4) = 0 setsockopt(3, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 setsockopt(3, SOL_TCP, TCP_KEEPIDLE, [60], 4) = 0 setsockopt(3, SOL_TCP, TCP_KEEPINTVL, [60], 4) = 0 setsockopt(3, SOL_SOCKET, SO_BINDTODEVICE, "eth2\0", 5) = -1 EPERM (Operation not permitted) bind(3, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("192.168.8.1")}, 16) = 0 # curl falls back to regular bind 127.0.0.1+++ exited with 0 +++ ``` Also, I want to point out that using a regular `bind` does not always guarantee that traffic will go through the specified interface ([@MarSoft's answer](https://stackoverflow.com/a/61581069/4042398) uses a plain `bind`). On Linux, only `SO_BINDTODEVICE` guarantees that traffic will go through the specified device. Here's an example of how to use `SO_BINDTODEVICE` with `requests` and [requests-toolbelt](https://pypi.org/project/requests-toolbelt/) (as I said, it requires `CAP_NET_RAW` [permissions](https://man7.org/linux/man-pages/man7/capabilities.7.html#:%7E:text=listen%20to%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20multicasts.-,CAP_NET_RAW,-*%20Use%20RAW%20and)). ```py import socket import requests from requests_toolbelt.adapters.socket_options import SocketOptionsAdapter session = requests.Session() # set interface here options = [(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b"eth0")] for prefix in ('http://', 'https://'): session.mount(prefix, SocketOptionsAdapter(socket_options=options)) print(session.get("https://ifconfig.me/").text) ``` Alternatively, if you don't want to use `requests-toolbelt` you can implement the adapter class yourself: ```py import socket import requests from requests import adapters from urllib3.poolmanager import PoolManager class InterfaceAdapter(adapters.HTTPAdapter): def __init__(self, **kwargs): self.iface = kwargs.pop('iface', None) super(InterfaceAdapter, self).__init__(**kwargs) def _socket_options(self): if self.iface is None: return [] else: return [(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, self.iface)] def init_poolmanager(self, connections, maxsize, block=False): self.poolmanager = PoolManager( num_pools=connections, maxsize=maxsize, block=block, socket_options=self._socket_options() ) session = requests.Session() for prefix in ('http://', 'https://'): session.mount(prefix, InterfaceAdapter(iface=b'eth0')) print(session.get("https://ifconfig.me/").text) ```
Try changing the internal IP (192.168.0.200) to the corresponding iface in the code below. ``` import requests from requests_toolbelt.adapters import source def check_ip(inet_addr): s = requests.Session() iface = source.SourceAddressAdapter(inet_addr) s.mount('http://', iface) s.mount('https://', iface) url = 'https://emapp.cc/get_my_ip' resp = s.get(url) print(resp.text) if __name__ == '__main__': check_ip('192.168.0.200') ```
48,996,494
I have two network interfaces (wifi and ethernet), both with internet access. Let's say my interfaces are `eth` (ethernet) and `wlp2` (wifi). I need specific requests to go through the `eth` interface and others through `wlp2`. Something like: ``` # Through "eth" request.post(url="http://myapi.com/store_ip", iface="eth") # Through "wlp2" request.post(url="http://myapi.com/log", iface="wlp2") ``` I'm using `requests`, but I can use `pycurl` or `urllib` if there isn't any way to do it with `requests`. [How to specify source interface in python requests module?](https://stackoverflow.com/questions/26335544/how-to-specify-source-interface-in-python-requests-module) refers to [Requests, bind to an ip](https://stackoverflow.com/questions/12585317/requests-bind-to-an-ip) and it doesn't work.
2018/02/26
[ "https://Stackoverflow.com/questions/48996494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4585081/" ]
Here is the solution for the Requests library without monkey-patching anything. This function will create a Session bound to the given IP address. It is up to you to determine the IP address of the desired network interface. Tested to work with `requests==2.23.0`. ``` import requests def session_for_src_addr(addr: str) -> requests.Session: """ Create `Session` which will bind to the specified local address rather than auto-selecting it. """ session = requests.Session() for prefix in ('http://', 'https://'): session.get_adapter(prefix).init_poolmanager( # those are default values from HTTPAdapter's constructor connections=requests.adapters.DEFAULT_POOLSIZE, maxsize=requests.adapters.DEFAULT_POOLSIZE, # This should be a tuple of (address, port). Port 0 means auto-selection. source_address=(addr, 0), ) return session # usage example: s = session_for_src_addr('192.168.1.12') s.get('https://httpbin.org/ip') ``` Be warned though that this approach is identical to `curl`'s `--interface` option, and won't help in some cases. Depending on your routing configuration, it might happen that even though you bind to a specific IP address, the request will go through some other interface. So if this answer does not work for you, first check whether `curl http://httpbin.org/ip --interface myinterface` works as expected.
If you want to do this on Linux you could use the `SO_BINDTODEVICE` flag for `setsockopt` (check [man 7 socket](https://man7.org/linux/man-pages/man7/socket.7.html#:%7E:text=since%20Linux%204.6.-,SO_BINDTODEVICE,-Bind%20this%20socket) for more details). In fact, it's what's used [by curl](https://github.com/curl/curl/blob/3df8d08d00a82f0bfc076ca1000abb209e0b9770/lib/connect.c#L305) if you use the `--interface` [option](https://curl.se/docs/manpage.html#--interface) on Linux. But keep in mind that `SO_BINDTODEVICE` requires root permissions (`CAP_NET_RAW`, although there were [some attempts](https://patchwork.ozlabs.org/project/netdev/patch/20200331132009.1306283-1-vincent@bernat.ch/) to change this) and `curl` falls back to a regular `bind` trick if `SO_BINDTODEVICE` fails. Here's a sample `curl` strace when it fails: ``` strace -f -e setsockopt,bind curl --interface eth2 https://ifconfig.me/ strace: Process 18208 attached [pid 18208] +++ exited with 0 +++ setsockopt(3, SOL_TCP, TCP_NODELAY, [1], 4) = 0 setsockopt(3, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0 setsockopt(3, SOL_TCP, TCP_KEEPIDLE, [60], 4) = 0 setsockopt(3, SOL_TCP, TCP_KEEPINTVL, [60], 4) = 0 setsockopt(3, SOL_SOCKET, SO_BINDTODEVICE, "eth2\0", 5) = -1 EPERM (Operation not permitted) bind(3, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr("192.168.8.1")}, 16) = 0 # curl falls back to regular bind 127.0.0.1+++ exited with 0 +++ ``` Also, I want to point out that using a regular `bind` does not always guarantee that traffic will go through the specified interface ([@MarSoft's answer](https://stackoverflow.com/a/61581069/4042398) uses a plain `bind`). On Linux, only `SO_BINDTODEVICE` guarantees that traffic will go through the specified device. Here's an example of how to use `SO_BINDTODEVICE` with `requests` and [requests-toolbelt](https://pypi.org/project/requests-toolbelt/) (as I said, it requires `CAP_NET_RAW` [permissions](https://man7.org/linux/man-pages/man7/capabilities.7.html#:%7E:text=listen%20to%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20multicasts.-,CAP_NET_RAW,-*%20Use%20RAW%20and)). ```py import socket import requests from requests_toolbelt.adapters.socket_options import SocketOptionsAdapter session = requests.Session() # set interface here options = [(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, b"eth0")] for prefix in ('http://', 'https://'): session.mount(prefix, SocketOptionsAdapter(socket_options=options)) print(session.get("https://ifconfig.me/").text) ``` Alternatively, if you don't want to use `requests-toolbelt` you can implement the adapter class yourself: ```py import socket import requests from requests import adapters from urllib3.poolmanager import PoolManager class InterfaceAdapter(adapters.HTTPAdapter): def __init__(self, **kwargs): self.iface = kwargs.pop('iface', None) super(InterfaceAdapter, self).__init__(**kwargs) def _socket_options(self): if self.iface is None: return [] else: return [(socket.SOL_SOCKET, socket.SO_BINDTODEVICE, self.iface)] def init_poolmanager(self, connections, maxsize, block=False): self.poolmanager = PoolManager( num_pools=connections, maxsize=maxsize, block=block, socket_options=self._socket_options() ) session = requests.Session() for prefix in ('http://', 'https://'): session.mount(prefix, InterfaceAdapter(iface=b'eth0')) print(session.get("https://ifconfig.me/").text) ```
17,477,394
I'm trying to install M2Crypto for Python 2.6 on Windows, but I am getting the error below. > > **error**: command 'swig.exe' failed: No such file or directory > > > This error occurs using either the "easy_install" or "pip install" command. Here is the log: > > running build > > > running build\_py > > > running build\_ext > > > building 'M2Crypto.\_\_m2crypto' extension > > > swigging SWIG/\_m2crypto.i to SWIG/\_m2crypto\_wrap.c > > > swig.exe -python -IC:\Python26\include -IC:\Python26\PC > -Ic:\pkg\include -includeall -o SWIG/\_m2crypto\_wrap.c SWIG/\_m2crypto.i > > > error: command 'swig.exe' failed: No such file or directory > > > Any help?
2013/07/04
[ "https://Stackoverflow.com/questions/17477394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1347355/" ]
This worked for me (using WinPython 2.7):

```
pip install M2CryptoWin32
```

Reference: <https://github.com/dsoprea/M2CryptoWin32>
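As a quick sanity check after installing, a minimal sketch (it only verifies that the package imports, nothing more):

```py
# If this runs without ImportError, the M2Crypto install is usable
import M2Crypto
print("M2Crypto imported successfully")
```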
Putting this in answer format: you could try to install a binary build from <http://chandlerproject.org/Projects/MeTooCrypto>, per mata's comment, which resolved the OP's issue.
48,888,239
Here is my image: ![enter image description here](https://i.stack.imgur.com/ffDLD.jpg) I want to find the center of mass in this image. I can find the approximate location of the center of mass by drawing two perpendicular lines, as shown in this image: ![enter image description here](https://i.stack.imgur.com/d3yBp.jpg) I want to find it using an image processing tool in Python. I have a little experience with the image processing libraries of Python (scikit-image), but I am not sure if this library could help with finding the center of mass in my image. I was wondering if anybody could help me do it. I will be happy if it is possible to find the center of mass in my image using any other library in Python. Thanks in advance for your help!
2018/02/20
[ "https://Stackoverflow.com/questions/48888239", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8653226/" ]
[`skimage.measure.regionprops`](http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops) will do what you want. Here's an example: ``` import imageio as iio from skimage import filters from skimage.color import rgb2gray # only needed for incorrectly saved images from skimage.measure import regionprops image = rgb2gray(iio.imread('eyeball.png')) threshold_value = filters.threshold_otsu(image) labeled_foreground = (image > threshold_value).astype(int) properties = regionprops(labeled_foreground, image) center_of_mass = properties[0].centroid weighted_center_of_mass = properties[0].weighted_centroid print(center_of_mass) ``` On my machine and with your example image, I get `(228.48663375508113, 200.85290046969845)`. We can make a pretty picture: ``` import matplotlib.pyplot as plt from skimage.color import label2rgb colorized = label2rgb(labeled_foreground, image, colors=['black', 'red'], alpha=0.1) fig, ax = plt.subplots() ax.imshow(colorized) # Note the inverted coordinates because plt uses (x, y) while NumPy uses (row, column) ax.scatter(center_of_mass[1], center_of_mass[0], s=160, c='C0', marker='+') plt.show() ``` That gives me this output: [![eye center of mass](https://i.stack.imgur.com/0cCSn.png)](https://i.stack.imgur.com/0cCSn.png) You'll note that there's some bits of foreground that you probably don't want in there, like at the bottom right of the picture. That's a whole nother answer, but you can look at `scipy.ndimage.label`, `skimage.morphology.remove_small_objects`, and more generally at `skimage.segmentation`.
You need to know about **[Image Moments](https://en.wikipedia.org/wiki/Image_moment)**. [Here](https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html) is a tutorial on how to use them with OpenCV and Python.
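For reference, a minimal sketch of the image-moments approach with OpenCV (the filename is a placeholder, and Otsu thresholding is one possible choice for separating the foreground):

```py
import cv2

# hypothetical filename for the question's image
img = cv2.imread('eyeball.jpg', cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
M = cv2.moments(thresh)
cx = M['m10'] / M['m00']  # centroid x
cy = M['m01'] / M['m00']  # centroid y
print(cx, cy)
```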
48,888,239
Here is my image: ![enter image description here](https://i.stack.imgur.com/ffDLD.jpg) I want to find the center of mass in this image. I can find the approximate location of the center of mass by drawing two perpendicular lines, as shown in this image: ![enter image description here](https://i.stack.imgur.com/d3yBp.jpg) I want to find it using an image processing tool in Python. I have a little experience with the image processing libraries of Python (scikit-image), but I am not sure if this library could help with finding the center of mass in my image. I was wondering if anybody could help me do it. I will be happy if it is possible to find the center of mass in my image using any other library in Python. Thanks in advance for your help!
2018/02/20
[ "https://Stackoverflow.com/questions/48888239", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8653226/" ]
You can use the [scipy.ndimage.center\_of\_mass](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.center_of_mass.html#scipy.ndimage.center_of_mass) function to find the center of mass of an object. For example, using this question's image: ```sh wget https://i.stack.imgur.com/ffDLD.jpg ``` ```py import matplotlib.image as mpimg import scipy.ndimage as ndi img = mpimg.imread('ffDLD.jpg') img = img.mean(axis=-1).astype('int') # in grayscale cy, cx = ndi.center_of_mass(img) print(cy, cx) ``` ```sh 228.75223713169711 197.40991592129836 ```
You need to know about **[Image Moments](https://en.wikipedia.org/wiki/Image_moment)**. [Here](https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html) is a tutorial on how to use them with OpenCV and Python.
48,888,239
Here is my image: ![enter image description here](https://i.stack.imgur.com/ffDLD.jpg) I want to find the center of mass in this image. I can find the approximate location of the center of mass by drawing two perpendicular lines, as shown in this image: ![enter image description here](https://i.stack.imgur.com/d3yBp.jpg) I want to find it using an image processing tool in Python. I have a little experience with the image processing libraries of Python (scikit-image), but I am not sure if this library could help with finding the center of mass in my image. I was wondering if anybody could help me do it. I will be happy if it is possible to find the center of mass in my image using any other library in Python. Thanks in advance for your help!
2018/02/20
[ "https://Stackoverflow.com/questions/48888239", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8653226/" ]
[`skimage.measure.regionprops`](http://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops) will do what you want. Here's an example: ``` import imageio as iio from skimage import filters from skimage.color import rgb2gray # only needed for incorrectly saved images from skimage.measure import regionprops image = rgb2gray(iio.imread('eyeball.png')) threshold_value = filters.threshold_otsu(image) labeled_foreground = (image > threshold_value).astype(int) properties = regionprops(labeled_foreground, image) center_of_mass = properties[0].centroid weighted_center_of_mass = properties[0].weighted_centroid print(center_of_mass) ``` On my machine and with your example image, I get `(228.48663375508113, 200.85290046969845)`. We can make a pretty picture: ``` import matplotlib.pyplot as plt from skimage.color import label2rgb colorized = label2rgb(labeled_foreground, image, colors=['black', 'red'], alpha=0.1) fig, ax = plt.subplots() ax.imshow(colorized) # Note the inverted coordinates because plt uses (x, y) while NumPy uses (row, column) ax.scatter(center_of_mass[1], center_of_mass[0], s=160, c='C0', marker='+') plt.show() ``` That gives me this output: [![eye center of mass](https://i.stack.imgur.com/0cCSn.png)](https://i.stack.imgur.com/0cCSn.png) You'll note that there's some bits of foreground that you probably don't want in there, like at the bottom right of the picture. That's a whole nother answer, but you can look at `scipy.ndimage.label`, `skimage.morphology.remove_small_objects`, and more generally at `skimage.segmentation`.
You can use the [scipy.ndimage.center\_of\_mass](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.center_of_mass.html#scipy.ndimage.center_of_mass) function to find the center of mass of an object. For example, using this question's image: ```sh wget https://i.stack.imgur.com/ffDLD.jpg ``` ```py import matplotlib.image as mpimg import scipy.ndimage as ndi img = mpimg.imread('ffDLD.jpg') img = img.mean(axis=-1).astype('int') # in grayscale cy, cx = ndi.center_of_mass(img) print(cy, cx) ``` ```sh 228.75223713169711 197.40991592129836 ```
70,152,772
I'm using an AWS Lambda function (in Python) to connect to an Oracle database (RDS) using the cx_Oracle library. But it is giving me the below error -

"DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory".

Steps I've followed -

1. Created a Python virtual environment and downloaded the cx_Oracle library on an EC2 instance.
2. Uploaded the downloaded library to an S3 bucket and created a Lambda Layer with it
3. Used this layer in the Lambda function to connect to the Oracle RDS
4. Used the below command to connect to the DB -

conn = cx_Oracle.connect(user="user-name", password="password", dsn="DB-Endpoint:1521/database-name", encoding="UTF-8")

Please help me in resolving this issue.
2021/11/29
[ "https://Stackoverflow.com/questions/70152772", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17394264/" ]
Set the environment variable `DPI_DEBUG_LEVEL` to the value `64` and then rerun your code. The debugging output should help you figure out what is being searched. Note that you need to have the 64-bit instant client installed as well!
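A minimal sketch of what that looks like in practice (the connection details are placeholders; the variable must be set before the Oracle client library is loaded):

```py
import os
os.environ["DPI_DEBUG_LEVEL"] = "64"  # set before cx_Oracle loads libclntsh

import cx_Oracle

# hypothetical credentials and DSN
conn = cx_Oracle.connect(user="user", password="password",
                         dsn="db-endpoint:1521/database-name")
```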
The reason I faced this issue was that I had only downloaded the cx_Oracle library. In order to connect to the Oracle database from the Lambda function, we need to download the Oracle client and libaio libraries as well, and bundle them with cx_Oracle to create a Lambda Layer. Once I followed these steps, I was able to connect to the Oracle database and query its tables.

I've created a video of this process, so that others need not go through the issues I faced. Hope it will be helpful to all: <https://youtu.be/BYiueNog-TI>
33,337,302
This is a follow-up from <https://stackoverflow.com/questions/33336963/use-a-python-dictionary-to-insert-into-mysql/33337128#33337128>.

```
import pymysql
conn = pymysql.connect(server, user , password, "db")
cur = conn.cursor()

ORFs={'E7': '562', 'E6': '83', 'E1': '865', 'E2': '2756 '}

table="genome"
cols = ORFs.keys()
vals = ORFs.values()
sql = "INSERT INTO %s (%s) VALUES(%s)" % (
    table, ",".join(cols), ",".join(vals))

print sql
print ORFs.values()

cur.execute(sql)

cur.close()
conn.close()
```

Thanks to Xiaohen, my program works (i.e. it does not throw any errors), but when I go and check the mysql database, the data is not inserted. I noticed that the auto-increment ID column does increase with every failed attempt. So this suggests that I am at least making contact with the database?

As always, any help is much appreciated.

EDIT: I included the output from `mysql> show create table genome;`

```
| genome | CREATE TABLE `genome` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `state` char(255) DEFAULT NULL,
  `CG` text,
  `E1` char(25) DEFAULT NULL,
  `E2` char(25) DEFAULT NULL,
  `E6` char(25) DEFAULT NULL,
  `E7` char(25) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=latin1 |
+--------+----------------------------------------------------------+
1 row in set (0.00 sec)
```
2015/10/26
[ "https://Stackoverflow.com/questions/33337302", "https://Stackoverflow.com", "https://Stackoverflow.com/users/751241/" ]
Think I figured it out. I will add the info here in case someone else comes across this question: I need to add `conn.commit()` to the script
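For anyone landing here, a minimal sketch of the fixed script (the connection details are placeholders; note the parameterized query and the `conn.commit()` call):

```py
import pymysql

# hypothetical connection details
conn = pymysql.connect(host="server", user="user", password="password", database="db")
cur = conn.cursor()

ORFs = {'E7': '562', 'E6': '83', 'E1': '865', 'E2': '2756'}
cols = list(ORFs.keys())
vals = [ORFs[c] for c in cols]

# let the driver quote the values instead of interpolating them
sql = "INSERT INTO genome (%s) VALUES (%s)" % (
    ",".join(cols), ",".join(["%s"] * len(vals)))
cur.execute(sql, vals)

conn.commit()  # without this, InnoDB rolls the INSERT back on close
cur.close()
conn.close()
```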
You can use

```
try:
    cur.execute(sql)
except Exception, e:
    print e
```

If your code is wrong, the exception will tell you what happened.

There is another problem: the cols and vals do not match. The values should be

```
vals = [ORFs[col] for col in cols]
```
5,948,110
I have been using Python for a while now and I'm happy using it in most forms, but I am wondering which form is more pythonic. Is it right to emulate objects and types, or is it better to subclass or inherit from these types? I can see advantages for both, and also the disadvantages. What's the correct method to be doing this?

Subclassing method

```
class UniqueDict(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self:
            dict.__setitem__(self, key, value)
        else:
            raise KeyError("Key already exists")
```

Emulating method

```
class UniqueDict(object):
    def __init__(self, *args, **kwargs):
        self.di = dict(*args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self.di:
            self.di[key] = value
        else:
            raise KeyError("Key already exists")
```
2011/05/10
[ "https://Stackoverflow.com/questions/5948110", "https://Stackoverflow.com", "https://Stackoverflow.com/users/462604/" ]
Key question you have to ask yourself here is: > > "How should my class change if the 'parent' class changes?" > > > Imagine new methods are added to `dict` which you don't override in your `UniqueDict`. If you want to express that **`UniqueDict` is simply a small derivation** in behaviour from `dict`'s behaviour, then you'd **go with inheritance** since you will get changes to the base class automatically. If you want to express that **`UniqueDict` kinda looks like a `dict` but actually isn't**, you should go with the **'emulation' mode**.
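A small illustration of that point (this relies on a CPython implementation detail: built-in `dict` methods such as `update` do not route through an overridden `__setitem__`):

```py
d = UniqueDict()
d['a'] = 1
d.update(a=2)  # no KeyError -- dict.update bypasses our __setitem__
print(d)       # {'a': 2}: the uniqueness check was silently skipped
```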
Subclassing is better as you won't have to implement a proxy for every single dict method.
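A sketch of what that proxy burden looks like with the emulating version (the last two lines would raise, as noted in the comments):

```py
ud = UniqueDict(a=1)  # the emulating version
ud['b'] = 2           # fine: __setitem__ is defined
ud['a']               # raises TypeError: __getitem__ was never proxied
len(ud)               # raises TypeError: __len__ was never proxied
```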
5,948,110
I have been using Python for a while now and I'm happy using it in most forms, but I am wondering which form is more pythonic. Is it right to emulate objects and types, or is it better to subclass or inherit from these types? I can see advantages for both, and also the disadvantages. What's the correct method to be doing this?

Subclassing method

```
class UniqueDict(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self:
            dict.__setitem__(self, key, value)
        else:
            raise KeyError("Key already exists")
```

Emulating method

```
class UniqueDict(object):
    def __init__(self, *args, **kwargs):
        self.di = dict(*args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self.di:
            self.di[key] = value
        else:
            raise KeyError("Key already exists")
```
2011/05/10
[ "https://Stackoverflow.com/questions/5948110", "https://Stackoverflow.com", "https://Stackoverflow.com/users/462604/" ]
Subclassing is better as you won't have to implement a proxy for every single dict method.
I would go for subclass, and for the reason I would refer to the motivation of [PEP 3119](http://www.python.org/dev/peps/pep-3119/#rationale):

> 
> For example, if asking 'is this object
> a mutable sequence container?', one
> can look for a base class of 'list',
> or one can look for a method named
> `__getitem__`. But note that although
> these tests may seem obvious, neither
> of them are correct, as one generates
> false negatives, and the other false
> positives.
> 
> 
> The generally agreed-upon remedy is to
> standardize the tests, and group them
> into a formal arrangement. This is
> most easily done by associating with
> each class a set of standard testable
> properties, either via the inheritance
> mechanism or some other means. Each
> test carries with it a set of
> promises: it contains a promise about
> the general behavior of the class, and
> a promise as to what other class
> methods will be available.
> 
> 

In short, it is sometimes desirable to be able to check for mapping properties using `isinstance`.
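A minimal sketch of that `isinstance` check (`collections.abc` on Python 3; the plain `collections` module on the Python 2 of this question's era):

```py
from collections.abc import MutableMapping

print(isinstance(UniqueDict(), MutableMapping))
# True for the subclassing version, since dict is registered as a
# MutableMapping; False for the emulating version unless you register it
```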
5,948,110
I have been using Python for a while now and I'm happy using it in most forms, but I am wondering which form is more pythonic. Is it right to emulate objects and types, or is it better to subclass or inherit from these types? I can see advantages for both, and also the disadvantages. What's the correct method to be doing this?

Subclassing method

```
class UniqueDict(dict):
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self:
            dict.__setitem__(self, key, value)
        else:
            raise KeyError("Key already exists")
```

Emulating method

```
class UniqueDict(object):
    def __init__(self, *args, **kwargs):
        self.di = dict(*args, **kwargs)

    def __setitem__(self, key, value):
        if key not in self.di:
            self.di[key] = value
        else:
            raise KeyError("Key already exists")
```
2011/05/10
[ "https://Stackoverflow.com/questions/5948110", "https://Stackoverflow.com", "https://Stackoverflow.com/users/462604/" ]
Key question you have to ask yourself here is: > > "How should my class change if the 'parent' class changes?" > > > Imagine new methods are added to `dict` which you don't override in your `UniqueDict`. If you want to express that **`UniqueDict` is simply a small derivation** in behaviour from `dict`'s behaviour, then you'd **go with inheritance** since you will get changes to the base class automatically. If you want to express that **`UniqueDict` kinda looks like a `dict` but actually isn't**, you should go with the **'emulation' mode**.
I would go for subclass, and for the reason I would refer to the motivation of [PEP 3119](http://www.python.org/dev/peps/pep-3119/#rationale):

> 
> For example, if asking 'is this object
> a mutable sequence container?', one
> can look for a base class of 'list',
> or one can look for a method named
> `__getitem__`. But note that although
> these tests may seem obvious, neither
> of them are correct, as one generates
> false negatives, and the other false
> positives.
> 
> 
> The generally agreed-upon remedy is to
> standardize the tests, and group them
> into a formal arrangement. This is
> most easily done by associating with
> each class a set of standard testable
> properties, either via the inheritance
> mechanism or some other means. Each
> test carries with it a set of
> promises: it contains a promise about
> the general behavior of the class, and
> a promise as to what other class
> methods will be available.
> 
> 

In short, it is sometimes desirable to be able to check for mapping properties using `isinstance`.
71,297,371
Ok so I am trying to mass format a large text document to convert

```
#{'000','001','002','003','004','005','006','007','008','009'}
```

into

```
#{'000':'001','002':'003','004':'005','006':'007','008':'009'}
```

using Python, and I have my function working. However, it will only work if I run it line by line, and I was wondering how to get it to run for each line of my input so that it will work on a multi-line document.

```
with open("input") as core:
    a = core.read()
    K = '!'
    N = 12
    res = ''
    for idx, ele in enumerate(a):
        if idx % N == 0 and idx != 0:
            res = res + K
        else:
            res = res + ele
    b = (str(res).replace(",",":").replace("!",","))
    l = len(b)
    c = b[:l-1]
    d = c + "}"
    print(d)
```

Here is the current result for a multi-line text file:

```
{'000':'001','002':'003','004':'005','006':'007','008':'009',
{'001':'00,':'003':'00,':'005':'00,':'007':'00,':'009':'00,'}
{'002':',03':'004':',05':'006':',07':'008':',09':'000':',01'}
{'003','004':'005','006':'007','008':'009','000':'001','002'}
```

So far I have tried

```
with open('input', "r") as a:
    for line in a:
        K = '!'
        N = 12
        res = ''
        for idx, ele in enumerate(a):
            if idx % N == 0 and idx != 0:
                res = res + K
            else:
                res = res + ele
        b = (str(res))
        l = len(b)
        c = b[:l-1]
        d = c + "}"
        print(d)
```

but no luck.

FOUND A SOLUTION:

```
import re
with open("input") as core:
    coords = core.read()
    sword = coords.replace("\n",",\n")
    dung = re.sub('(,[^,]*),', r'\1 ', sword).replace(",",":").replace(" ",",").replace(",\n","\n")
    print(dung)
```

I know my solution works, but I can't quite apply this to other situations where I am applying different formats based on the need. It's easy enough to work out how to format a single line of text, as there is so much documentation out there. Does anybody know of any plugins or particular Python features where you can write your format function and then apply it to all lines, like a kind of `applylines()` extension instead of `readlines()`? A sketch of that idea follows below.
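A minimal sketch of the `applylines()` idea asked about above (the helper name is hypothetical):

```py
def apply_lines(path, func):
    """Apply func to every line of the file at path, returning the results."""
    with open(path) as f:
        return [func(line.rstrip('\n')) for line in f]

# e.g. apply_lines('input', my_format_function)
```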
2022/02/28
[ "https://Stackoverflow.com/questions/71297371", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18335232/" ]
Here is a possible solution: ``` result = [(str(dt.timetuple()[:6])[1:-1], s.split('_')[0]) for dt, s in OUTPUT] ```
> > Eventually I hope to pass the new list of tuples to a pandas dataframe. > > > You can use `.read_sql_query()` to pull the information directly into a DataFrame: ```py import pandas as pd import sqlalchemy as sa connection_url = sa.engine.URL.create( "mssql+pyodbc", username="scott", password="tiger^5HHH", host="192.168.0.199", database="test", query={ "driver": "ODBC Driver 18 for SQL Server", "TrustServerCertificate": "Yes", } ) engine = sa.create_engine(connection_url) table_name = "so71297370" # set up example environment with engine.begin() as conn: conn.exec_driver_sql(f"DROP TABLE IF EXISTS {table_name}") conn.exec_driver_sql(f"CREATE TABLE {table_name} (col1 datetime2, col2 nvarchar(50))") conn.exec_driver_sql(f"""\ INSERT INTO {table_name} (col1, col2) VALUES ('2003-03-26 15:12:15', '490002_space'), ('2003-03-27 16:13:14', '490002_space') """) # example df = pd.read_sql_query( # suffix '_space' is 6 characters in length f"SELECT col1, LEFT(col2, LEN(col2) - 6) AS col2 FROM {table_name}", engine, ) print(df) """ col1 col2 0 2003-03-26 15:12:15 490002 1 2003-03-27 16:13:14 490002 """ ```
20,054,030
I have been getting the below error while using pxssh to get into remote servers to run unix commands (like `uptime`):

```
Traceback (most recent call last):
File "./ssh_pxssh.py", line 33, in login_remote(hostname, username, password)
File "./ssh_pxssh.py", line 12, in login_remote if not s.login(hostname, username, password):
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/pexpect/pxssh.py", line 278, in login
    raise ExceptionPxssh('could not synchronize with original prompt')
pexpect.pxssh.ExceptionPxssh: could not synchronize with original prompt
```

Line 33 is where I call this function in main. The function I am using is here:

```
def login_remote(hostname, username, password):
    s = pxssh.pxssh()
    s.force_password = True
    if not s.login(hostname, username, password, auto_prompt_reset=False):
        print("ssh to host: " + hostname + " failed")
        print(str(s))
    else:
        print("SSH to remote host " + hostname + " successful")
        s.sendline('uptime')
        s.prompt()
        print(s.before)
        s.logout()
```

The error does not come each time I run the script. Rather, it is intermittent: it comes 7 out of 10 times I run my script.
2013/11/18
[ "https://Stackoverflow.com/questions/20054030", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3005660/" ]
I have solved it by adding the `sync_multiplier` argument to the login function:

```
s.login(hostname, username, password, sync_multiplier=5, auto_prompt_reset=False)
```

Note that `sync_multiplier` is a communication timeout argument used to perform successful synchronization: the method tries to read the prompt for at least `sync_multiplier` seconds. Worst-case performance for this method is `sync_multiplier` \* 3 seconds.

I personally set sync_multiplier=2, but it depends on the communication speed of the system I work on.
I had the same problem when pxssh tried to log in over a very slow connection. The pexpect lib apparently was fooled by the remote motd: this motd contained `uname -svr` output, which itself contained a `#` character, and pexpect apparently saw it as a prompt. From that point on, the lib was no longer in sync with the ssh session.

The following workaround worked for me: just remove the `#` character inside /var/run/motd.dynamic (Debian), or in /var/run/motd (Ubuntu).

Another solution is to ask sshd not to print the motd while logging in, but this did not work for me: I added `PrintMotd no` to /etc/ssh/sshd_config, with no effect.
28,329,596
Please help me. I have this string (a JSON response):

```
{"jsonrpc":"2.0","result":[{"hostid":"10158"}],"id":1}
```

I try to parse it with this command:

```
reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]'`
```

but I still get `[{u'hostid': u'10158'}]`.

How can I get only `10158` (as in the example)? Thanks.

P.S. This doesn't work either:

```
reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]["hostid"]'`
```
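For what it's worth, a sketch of the indexing the P.S. attempt is missing: `"result"` is a list, so take its first element before looking up the key (Python 2 `print`, matching the question's command):

```py
import json, sys

# prints 10158
print json.load(sys.stdin)["result"][0]["hostid"]
```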
2015/02/04
[ "https://Stackoverflow.com/questions/28329596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3731374/" ]
You keep two complete copies of the file in memory at the same time, `@lines` and `$lines`. You might consider instead: ``` open (my $FH, "<", $file) or die "Can't open $file for read: $!"; $FH->input_record_separator(undef); # slurp entire file my $lines = <$FH>; close $FH or die "Cannot close $file: $!"; ``` On sufficiently obsolete versions of Perl you may need to explicitly `use IO::Handle`. Note also: I've switched to lexical file handles from the bare word versions. I presume you aren't striving for compatibility with Perl v4. Of course if cutting your memory requirements by half isn't enough, you could always iterate through the file...
Working with XML using regexes is error-prone and inefficient, as code that slurps the whole file into a string shows. To deal with XML you should be using an XML parser. In particular, you want a SAX parser, which will work on the XML a piece at a time, as opposed to a DOM parser, which must read the whole file. I'm going to answer your question as is because there's some value in knowing how to work line by line. If you can avoid it, don't read a whole file into memory. Work line by line. Your task seems to be to remove a handful of lines from an XML file for reasons. Everything between `<DOCUMENT>\n<TYPE>EX-` and `<\/DOCUMENT>`. We can do that line-by-line by keeping a bit of state. ``` use autodie; open (my $infh, "<", $file); open (my $outfh, ">", "$file.tmp"); my $in_document = 0; my $in_type_ex = 0; while( my $line = <$infh> ) { if( $line =~ m{<DOCUMENT>\n}i ) { $in_document = 1; next; } elsif( $line =~ m{</DOCUMENT>}i ) { $in_document = 0; next; } elsif( $line =~ m{<TYPE>EX-}i ) { $in_type_ex = 1; next; } elsif( $in_document and $in_type_ex ) { next; } else { print $outfh $line; } } rename "$file.tmp", $file; ``` Using a temp file allows you to read the file while you construct its replacement. Of course this will fail if the XML document isn't formatted just so (I helpfully added the `/i` flag to the regexes to allow lower case tags); you should really use a SAX XML parser.
28,329,596
please help me. I have the string (json request) : ``` {"jsonrpc":"2.0","result":[{"hostid":"10158"}],"id":1} ``` i try to parsing it with command : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]'` ``` and still have `[{u'hostid': u'10158'}]` How i can get only 10158 (like in example) THanks. P.S. this is doesn't work too : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]["hostid"]'` ```
2015/02/04
[ "https://Stackoverflow.com/questions/28329596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3731374/" ]
Handle the file line by line: ``` while ( my $file = $doc_it->() ) { # go through all documents found print "Stripping $file\n"; open (my $infh, "<", $file) or die "Can't open $file for read: $!"; open (my $outfh, ">", $file . ".tmp") or die "Can't open $file.tmp for write: $!"; while (<$infh>) { if ( /<DOCUMENT>/ ) { # append the next line to test for TYPE $_ .= <$infh>; if (/<TYPE>EX-/) { # document type is excluded, now loop through # $infh until the closing tag is found. while (<$infh>) { last if m|</DOCUMENT>|; } # jump back to the <$infh> loop to resume # processing on the next line after </DOCUMENT> next; } # if we've made it this far, the document was not excluded # fall through to print both lines } print $outfh $_; } close $outfh or die "Cannot close $file: $!"; close $infh or die "Cannot close $file: $!"; unlink $file; rename $file.'.tmp', $file; } ```
You keep two complete copies of the file in memory at the same time, `@lines` and `$lines`. You might consider instead: ``` open (my $FH, "<", $file) or die "Can't open $file for read: $!"; $FH->input_record_separator(undef); # slurp entire file my $lines = <$FH>; close $FH or die "Cannot close $file: $!"; ``` On sufficiently obsolete versions of Perl you may need to explicitly `use IO::Handle`. Note also: I've switched to lexical file handles from the bare word versions. I presume you aren't striving for compatibility with Perl v4. Of course if cutting your memory requirements by half isn't enough, you could always iterate through the file...
28,329,596
please help me. I have the string (json request) : ``` {"jsonrpc":"2.0","result":[{"hostid":"10158"}],"id":1} ``` i try to parsing it with command : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]'` ``` and still have `[{u'hostid': u'10158'}]` How i can get only 10158 (like in example) THanks. P.S. this is doesn't work too : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]["hostid"]'` ```
2015/02/04
[ "https://Stackoverflow.com/questions/28329596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3731374/" ]
You keep two complete copies of the file in memory at the same time, `@lines` and `$lines`. You might consider instead: ``` open (my $FH, "<", $file) or die "Can't open $file for read: $!"; $FH->input_record_separator(undef); # slurp entire file my $lines = <$FH>; close $FH or die "Cannot close $file: $!"; ``` On sufficiently obsolete versions of Perl you may need to explicitly `use IO::Handle`. Note also: I've switched to lexical file handles from the bare word versions. I presume you aren't striving for compatibility with Perl v4. Of course if cutting your memory requirements by half isn't enough, you could always iterate through the file...
While working on a somewhat large (1.2G) file with Perl 5.10.1 on Windows Server 2013, I have noticed that ``` foreach my $line (<LOG>) {} ``` fails with an out-of-memory error, while ``` while (my $line = <LOG>) {} ``` works in a simple script that just runs some regexes and prints the lines I'm interested in.
28,329,596
please help me. I have the string (json request) : ``` {"jsonrpc":"2.0","result":[{"hostid":"10158"}],"id":1} ``` i try to parsing it with command : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]'` ``` and still have `[{u'hostid': u'10158'}]` How i can get only 10158 (like in example) THanks. P.S. this is doesn't work too : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]["hostid"]'` ```
2015/02/04
[ "https://Stackoverflow.com/questions/28329596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3731374/" ]
Handle the file line by line: ``` while ( my $file = $doc_it->() ) { # go through all documents found print "Stripping $file\n"; open (my $infh, "<", $file) or die "Can't open $file for read: $!"; open (my $outfh, ">", $file . ".tmp") or die "Can't open $file.tmp for write: $!"; while (<$infh>) { if ( /<DOCUMENT>/ ) { # append the next line to test for TYPE $_ .= <$infh>; if (/<TYPE>EX-/) { # document type is excluded, now loop through # $infh until the closing tag is found. while (<$infh>) { last if m|</DOCUMENT>|; } # jump back to the <$infh> loop to resume # processing on the next line after </DOCUMENT> next; } # if we've made it this far, the document was not excluded # fall through to print both lines } print $outfh $_; } close $outfh or die "Cannot close $file: $!"; close $infh or die "Cannot close $file: $!"; unlink $file; rename $file.'.tmp', $file; } ```
Working with XML using regexes is error-prone and inefficient, as code that slurps the whole file into a string shows. To deal with XML you should be using an XML parser. In particular, you want a SAX parser, which will work on the XML a piece at a time, as opposed to a DOM parser, which must read the whole file. I'm going to answer your question as is because there's some value in knowing how to work line by line. If you can avoid it, don't read a whole file into memory. Work line by line. Your task seems to be to remove a handful of lines from an XML file for reasons. Everything between `<DOCUMENT>\n<TYPE>EX-` and `<\/DOCUMENT>`. We can do that line-by-line by keeping a bit of state. ``` use autodie; open (my $infh, "<", $file); open (my $outfh, ">", "$file.tmp"); my $in_document = 0; my $in_type_ex = 0; while( my $line = <$infh> ) { if( $line =~ m{<DOCUMENT>\n}i ) { $in_document = 1; next; } elsif( $line =~ m{</DOCUMENT>}i ) { $in_document = 0; next; } elsif( $line =~ m{<TYPE>EX-}i ) { $in_type_ex = 1; next; } elsif( $in_document and $in_type_ex ) { next; } else { print $outfh $line; } } rename "$file.tmp", $file; ``` Using a temp file allows you to read the file while you construct its replacement. Of course this will fail if the XML document isn't formatted just so (I helpfully added the `/i` flag to the regexes to allow lower case tags); you should really use a SAX XML parser.
28,329,596
please help me. I have the string (json request) : ``` {"jsonrpc":"2.0","result":[{"hostid":"10158"}],"id":1} ``` i try to parsing it with command : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]'` ``` and still have `[{u'hostid': u'10158'}]` How i can get only 10158 (like in example) THanks. P.S. this is doesn't work too : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]["hostid"]'` ```
2015/02/04
[ "https://Stackoverflow.com/questions/28329596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3731374/" ]
Working with XML using regexes is error-prone and inefficient, as code that slurps the whole file into a string shows. To deal with XML you should be using an XML parser. In particular, you want a SAX parser, which will work on the XML a piece at a time, as opposed to a DOM parser, which must read the whole file. I'm going to answer your question as is because there's some value in knowing how to work line by line. If you can avoid it, don't read a whole file into memory. Work line by line. Your task seems to be to remove a handful of lines from an XML file for reasons. Everything between `<DOCUMENT>\n<TYPE>EX-` and `<\/DOCUMENT>`. We can do that line-by-line by keeping a bit of state. ``` use autodie; open (my $infh, "<", $file); open (my $outfh, ">", "$file.tmp"); my $in_document = 0; my $in_type_ex = 0; while( my $line = <$infh> ) { if( $line =~ m{<DOCUMENT>\n}i ) { $in_document = 1; next; } elsif( $line =~ m{</DOCUMENT>}i ) { $in_document = 0; next; } elsif( $line =~ m{<TYPE>EX-}i ) { $in_type_ex = 1; next; } elsif( $in_document and $in_type_ex ) { next; } else { print $outfh $line; } } rename "$file.tmp", $file; ``` Using a temp file allows you to read the file while you construct its replacement. Of course this will fail if the XML document isn't formatted just so (I helpfully added the `/i` flag to the regexes to allow lower case tags); you should really use a SAX XML parser.
While working on a somewhat large (1.2G) file with Perl 5.10.1 on Windows Server 2013, I have noticed that ``` foreach my $line (<LOG>) {} ``` fails with an out-of-memory error, while ``` while (my $line = <LOG>) {} ``` works in a simple script that just runs some regexes and prints the lines I'm interested in.
28,329,596
please help me. I have the string (json request) : ``` {"jsonrpc":"2.0","result":[{"hostid":"10158"}],"id":1} ``` i try to parsing it with command : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]'` ``` and still have `[{u'hostid': u'10158'}]` How i can get only 10158 (like in example) THanks. P.S. this is doesn't work too : ``` reference_id2=`echo "$reference_id" | python -c 'import json, sys; print json.load(sys.stdin)["result"]["hostid"]'` ```
2015/02/04
[ "https://Stackoverflow.com/questions/28329596", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3731374/" ]
Handle the file line by line: ``` while ( my $file = $doc_it->() ) { # go through all documents found print "Stripping $file\n"; open (my $infh, "<", $file) or die "Can't open $file for read: $!"; open (my $outfh, ">", $file . ".tmp") or die "Can't open $file.tmp for write: $!"; while (<$infh>) { if ( /<DOCUMENT>/ ) { # append the next line to test for TYPE $_ .= <$infh>; if (/<TYPE>EX-/) { # document type is excluded, now loop through # $infh until the closing tag is found. while (<$infh>) { last if m|</DOCUMENT>|; } # jump back to the <$infh> loop to resume # processing on the next line after </DOCUMENT> next; } # if we've made it this far, the document was not excluded # fall through to print both lines } print $outfh $_; } close $outfh or die "Cannot close $file: $!"; close $infh or die "Cannot close $file: $!"; unlink $file; rename $file.'.tmp', $file; } ```
While working on a somewhat large (1.2G) file with Perl 5.10.1 on Windows Server 2013, I have noticed that ``` foreach my $line (<LOG>) {} ``` fails with an out-of-memory error, while ``` while (my $line = <LOG>) {} ``` works in a simple script that just runs some regexes and prints the lines I'm interested in.
62,585,876
Our Python Dataflow pipeline works locally but not when deployed using the Dataflow managed service on Google Cloud Platform. It doesn't show signs that it is connected to the PubSub subscription. We have tried subscribing to both the subscription and the topic; neither of them worked. The messages accumulate in the PubSub subscription and the Dataflow pipeline doesn't show signs of being called or anything. We have double-checked that the project is the same. Any direction on this would be very much appreciated.

Here is the code to connect to a pull subscription:

```py
    with beam.Pipeline(options=options) as p:
        something = p | "ReadPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/PROJECT_ID/subscriptions/cloudflow"
        )
```

Here are the options used:

```py
    options = PipelineOptions()
    file_processing_options = PipelineOptions().view_as(FileProcessingOptions)
    if options.view_as(GoogleCloudOptions).project is None:
        print(sys.argv[0] + ": error: argument --project is required")
        sys.exit(1)
    options.view_as(SetupOptions).save_main_session = True
    options.view_as(StandardOptions).streaming = True
```

The PubSub subscription has this configuration:

```
Delivery type: Pull
Subscription expiration: Subscription expires in 31 days if there is no activity.
Acknowledgement deadline: 57 Seconds
Subscription filter: —
Message retention duration: 7 Days
Retained acknowledged messages: No
Dead lettering: Disabled
Retry policy: Retry immediately
```
2020/06/25
[ "https://Stackoverflow.com/questions/62585876", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6019494/" ]
Very late answer, but it may still help someone else. I had the same problem and solved it like this:

1. Thanks to user Paramnesia1, who wrote [this](https://www.reddit.com/r/googlecloud/comments/srh28m/dataflow_pipeline_not_consuming_messages_from/) answer, I figured out that I was not observing all the logs in Logs Explorer. Some default job\_name query filters were preventing me from seeing them. I am quoting & clarifying the steps to follow to be able to see all logs:

> 
> Open the Logs tab in the Dataflow Job UI, section Job Logs
> 
> 
> Click the "View in Logs Explorer" button
> 
> 
> In the new Logs Explorer screen, in your Query window, remove all the existing "logName" filters, keep only resource.type and resource.labels.job\_id
> 
> 

2. Now you will be able to see all the logs and investigate your error further. In my case, I was getting some 'Syncing Pod' errors, which were due to importing the wrong data file in my setup.py.
I think for pulling from a subscription we need to pass the `with_attributes` parameter as True.

with\_attributes: if True, output elements will be PubsubMessage objects; if False, output elements will be of type bytes (message data only).

Found a similar one here: [When using Beam IO ReadFromPubSub module, can you pull messages with attributes in Python? It's unclear if its supported](https://stackoverflow.com/questions/55320682/when-using-beam-io-readfrompubsub-module-can-you-pull-messages-with-attributes)
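A minimal sketch of that (only `with_attributes` is added to the question's pipeline; everything else is assumed unchanged):

```py
import apache_beam as beam

with beam.Pipeline(options=options) as p:
    messages = p | "ReadPubSub" >> beam.io.ReadFromPubSub(
        subscription="projects/PROJECT_ID/subscriptions/cloudflow",
        with_attributes=True,  # elements become PubsubMessage objects
    )
```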
22,488,763
I have been trying to insert data into the database using the following code in Python:

```
import sqlite3 as db

conn = db.connect('insertlinks.db')
cursor = conn.cursor()
db.autocommit(True)
a="asd"
b="adasd"

cursor.execute("Insert into links (link,id) values (?,?)",(a,b))

conn.close()
```

The code runs without any errors, but no update to the database takes place. I tried adding `conn.commit()` but it gives an error saying module not found. Please help?
2014/03/18
[ "https://Stackoverflow.com/questions/22488763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2923505/" ]
You do have to commit after inserting: ``` cursor.execute("Insert into links (link,id) values (?,?)",(a,b)) conn.commit() ``` or use the [connection as a context manager](http://docs.python.org/2/library/sqlite3.html#using-the-connection-as-a-context-manager): ``` with conn: cursor.execute("Insert into links (link,id) values (?,?)", (a, b)) ``` or set autocommit correctly by setting the `isolation_level` keyword parameter to the `connect()` method to `None`: ``` conn = db.connect('insertlinks.db', isolation_level=None) ``` See [Controlling Transactions](http://docs.python.org/2/library/sqlite3.html#controlling-transactions).
It may be a bit late, but setting `autocommit = true` saved my time! Especially if you have a script running bulk actions such as `update/insert/delete`...

**Reference:** <https://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.isolation_level>

This is the way I usually do it in my scripts:

```
import sqlite3

def get_connection():
    conn = sqlite3.connect('../db.sqlite3', isolation_level=None)
    cursor = conn.cursor()
    return conn, cursor

def get_jobs():
    conn, cursor = get_connection()
    if conn is None:
        raise sqlite3.DatabaseError("Could not get connection")
```

I hope it helps you!
51,696,395
I'm trying to install google-assistant-sdk on Windows 10, and I'm getting a weird error which I can't understand. After installing Python for all users and setting ENV variables, when I run this command -

```
py -m pip install google-assistant-sdk[samples]
```

I got the following error -

```
Command ""C:\Program Files (x86)\Python37-32\python.exe" -u -c "import setuptools, tokenize;__file__='C:\\Users\\ramji\\AppData\\Local\\Temp\\pip-install-p29c8ggl\\grpcio\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\ramji\AppData\Local\Temp\pip-record-7znlf3rz\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\
```

Detailed output -

```
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\src\python\grpcio\commands.py", line 292, in build_extensions build_ext.build_ext.build_extensions(self)
File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 448, in build_extensions self._build_extensions_serial()
File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\command\build_ext.py", line 199, in build_extension _build_ext.build_extension(self, ext)
File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 533, in build_extension depends=ext.depends)
File "C:\Program Files (x86)\Python37-32\lib\distutils\_msvccompiler.py", line 345, in compile self.initialize()
File "C:\Program Files (x86)\Python37-32\lib\distutils\_msvccompiler.py", line 238, in initialize vc_env = _get_vc_env(plat_spec)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 185, in msvc14_get_vc_env return EnvironmentInfo(plat_spec, vc_min_ver=14.0).return_env()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 844, in __init__ self.si = SystemInfo(self.ri, vc_ver)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 486, in __init__ self.vc_ver = vc_ver or self._find_latest_available_vc_ver()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 493, in _find_latest_available_vc_ver raise distutils.errors.DistutilsPlatformError(err)
distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 is required.
Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\setup.py", line 348, in <module> cmdclass=COMMAND_CLASS, File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\__init__.py", line 129, in setup return distutils.core.setup(**attrs) File "C:\Program Files (x86)\Python37-32\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\command\install.py", line 61, in run return orig.install.run(self) File "C:\Program Files (x86)\Python37-32\lib\distutils\command\install.py", line 545, in run self.run_command('build') File "C:\Program Files (x86)\Python37-32\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build.py", line 135, in run self.run_command(cmd_name) File "C:\Program Files (x86)\Python37-32\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Program Files (x86)\Python37-32\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\command\build_ext.py", line 78, in run _build_ext.run(self) File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 339, in run self.build_extensions() File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\src\python\grpcio\commands.py", line 295, in build_extensions support.diagnose_build_ext_error(self, error, formatted_exception) File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\src\python\grpcio\support.py", line 109, in diagnose_build_ext_error "backtrace).\n\n{}".format(formatted)) commands.CommandError: We could not diagnose your build failure. If you are unable to proceed, please file an issue at http://www.github.com/grpc/grpc with `[Python install]` in the title; please attach the whole log (including everything that may have appeared above the Python backtrace). 
Traceback (most recent call last):
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 490, in _find_latest_available_vc_ver return self.find_available_vc_vers()[-1]
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\src\python\grpcio\commands.py", line 292, in build_extensions build_ext.build_ext.build_extensions(self)
File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 448, in build_extensions self._build_extensions_serial()
File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 473, in _build_extensions_serial self.build_extension(ext)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\command\build_ext.py", line 199, in build_extension _build_ext.build_extension(self, ext)
File "C:\Program Files (x86)\Python37-32\lib\distutils\command\build_ext.py", line 533, in build_extension depends=ext.depends)
File "C:\Program Files (x86)\Python37-32\lib\distutils\_msvccompiler.py", line 345, in compile self.initialize()
File "C:\Program Files (x86)\Python37-32\lib\distutils\_msvccompiler.py", line 238, in initialize vc_env = _get_vc_env(plat_spec)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 185, in msvc14_get_vc_env return EnvironmentInfo(plat_spec, vc_min_ver=14.0).return_env()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 844, in __init__ self.si = SystemInfo(self.ri, vc_ver)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 486, in __init__ self.vc_ver = vc_ver or self._find_latest_available_vc_ver()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\setuptools\msvc.py", line 493, in _find_latest_available_vc_ver raise distutils.errors.DistutilsPlatformError(err)
distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools

----------------------------------------
Command ""C:\Program Files (x86)\Python37-32\python.exe" -u -c "import setuptools, tokenize;__file__='C:\\Users\\ramji\\AppData\\Local\\Temp\\pip-install-p29c8ggl\\grpcio\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\ramji\AppData\Local\Temp\pip-record-7znlf3rz\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\ramji\AppData\Local\Temp\pip-install-p29c8ggl\grpcio\
```

I even tried executing this command on another Windows 10 machine and I got the same result. Does anyone know what the problem is and how to solve it? Thanks and Regards.
2018/08/05
[ "https://Stackoverflow.com/questions/51696395", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6007248/" ]
Try this one. In `platforms/android/cordova-safe/starter-conceal.gradle`, change

```
compile('com.facebook.conceal:conceal:1.0.0@aar')
```

to

```
compile('com.facebook.conceal:conceal:2.0.1@aar')
```

This has worked for me.
Open `platforms/android/cordova-safe/starter-conceal.gradle`, then update the version of **com.facebook.conceal:conceal** from **1.0.0** to **1.1.3**, so the code should now be ``` dependencies { compile('com.facebook.conceal:conceal:1.1.3@aar') { transitive = true } } ```
48,644,767
I'm looking at [\_math.c](https://github.com/python/cpython/blob/master/Modules/_math.c) in git (line 25):

```
#if !defined(HAVE_ACOSH) || !defined(HAVE_ASINH)
static const double ln2 = 6.93147180559945286227E-01;
static const double two_pow_p28 = 268435456.0; /* 2**28 */
```

and I noticed that the ln2 value is different from the [wolframalpha](https://www.wolframalpha.com/input/?i=ln(2)) value for ln2 (the bold part is the difference):

ln2 = 0.693147180559945**286227** (cpython)

ln2 = 0.693147180559945**3094172321214581** (wolframalpha)

ln2 = 0.693147180559945**309417232121458** (wikipedia)

So my question is: why is there a difference? What am I missing?
2018/02/06
[ "https://Stackoverflow.com/questions/48644767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4828285/" ]
As user2357112 noted, this code came from FDLIBM. That was carefully written for IEEE-754 machines, where C doubles have 53 bits of precision. It doesn't really care what the actual log of 2 is, but cares a whole lot about the best 53-bit approximation to `log(2)`. To reproduce the intended 53-bit-precise value, [17 decimal digits would have sufficed](http://www.exploringbinary.com/number-of-digits-required-for-round-trip-conversions/). So why did they use 21 decimal digits instead? My guess: 21 decimal digits is the minimum needed to guarantee that the converted result will be correct to 64 bits of precision. Which *may* have been an issue at the time, if a compiler somehow decided to convert the literal to a Pentium's 80-bit float format (which has 64 bits of precision). So they displayed the 53-bit result with enough decimal digits so that *if* it were converted to a binary float format with 64 bits of precision, the trailing 11 bits (=64-53) would all be zeroes, thus ensuring they'd be working with the 53-bit value they intended from the start. ``` >>> import mpmath >>> x = mpmath.log(2) >>> x mpf('0.69314718055994529') >>> mpmath.mp.prec = 64 >>> y = mpmath.mpf("0.693147180559945286227") >>> x == y True >>> y mpf('0.693147180559945286227') ``` In English, `x` is the 53-bit precise value of `log(2)`, and `y` is the result of converting the decimal string in the code to a binary float format with 64 bits of precision. They're identical. In current reality, I expect all compilers now convert the literal to the native IEEE-754 double format, with 53 bits of precision. Either way, the code ensures the best 53-bit approximation to `log(2)` will be used.
Python seems wrong, although I'm not sure whether it is an oversight or has a deeper meaning. The explanation of BlackJack seems reasonable, but I don't understand why they would give additional digits that are wrong.

You can check this yourself by using the formula under [More efficient series](https://en.wikipedia.org/wiki/Logarithm#Power_series). In Mathematica, you can calculate it up to 70 digits (35 summands) with

```
log2 = 2*Sum[1/i*(1/3)^i, {i, 1, 70, 2}]

(* 79535292197135923776615186805136682215642574454974413288086/
 114745171628462663795273979107442710223059517312975273318225 *)
```

With `N[log2,30]` you get the correct digits

```
0.693147180559945309417232121458
```

which supports the correctness of Wikipedia and W|A.

If you like, you can do the same calculation for machine-precision numbers. In Mathematica, this usually means `double`.

```
logC = Compile[{{z, _Real, 0}},
  2.0*Sum[1/i*((z - 1)/(z + 1))^i, {i, 1, 100, 2}]
  ]
```

Note that this code gets completely compiled to a normal iteration and does not use some error-reducing summation scheme, so there is no magical compiled `Sum` function. On my machine this gives:

```
logC[2]//FullForm

(* 0.6931471805599451` *)
```

and is correct up to the digits you pointed out. This has the precision that was suggested by BlackJack

```
$MachinePrecision

(* 15.9546 *)
```

Edit
----

As pointed out in comments and answers, the value you see in `_math.c` might be the 53-bit representation

```
digits = RealDigits[log2, 2, 53];
N[FromDigits[digits, 2], 21]

(* 0.693147180559945286227 *)
```
48,644,767
I'm looking at [\_math.c](https://github.com/python/cpython/blob/master/Modules/_math.c) in git (line 25):

```
#if !defined(HAVE_ACOSH) || !defined(HAVE_ASINH)
static const double ln2 = 6.93147180559945286227E-01;
static const double two_pow_p28 = 268435456.0; /* 2**28 */
```

and I noticed that the ln2 value is different from the [wolframalpha](https://www.wolframalpha.com/input/?i=ln(2)) value for ln2 (the bold part is the difference):

ln2 = 0.693147180559945**286227** (cpython)

ln2 = 0.693147180559945**3094172321214581** (wolframalpha)

ln2 = 0.693147180559945**309417232121458** (wikipedia)

So my question is: why is there a difference? What am I missing?
2018/02/06
[ "https://Stackoverflow.com/questions/48644767", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4828285/" ]
As user2357112 noted, this code came from FDLIBM. That was carefully written for IEEE-754 machines, where C doubles have 53 bits of precision. It doesn't really care what the actual log of 2 is, but cares a whole lot about the best 53-bit approximation to `log(2)`. To reproduce the intended 53-bit-precise value, [17 decimal digits would have sufficed](http://www.exploringbinary.com/number-of-digits-required-for-round-trip-conversions/). So why did they use 21 decimal digits instead? My guess: 21 decimal digits is the minimum needed to guarantee that the converted result will be correct to 64 bits of precision. Which *may* have been an issue at the time, if a compiler somehow decided to convert the literal to a Pentium's 80-bit float format (which has 64 bits of precision). So they displayed the 53-bit result with enough decimal digits so that *if* it were converted to a binary float format with 64 bits of precision, the trailing 11 bits (=64-53) would all be zeroes, thus ensuring they'd be working with the 53-bit value they intended from the start. ``` >>> import mpmath >>> x = mpmath.log(2) >>> x mpf('0.69314718055994529') >>> mpmath.mp.prec = 64 >>> y = mpmath.mpf("0.693147180559945286227") >>> x == y True >>> y mpf('0.693147180559945286227') ``` In English, `x` is the 53-bit precise value of `log(2)`, and `y` is the result of converting the decimal string in the code to a binary float format with 64 bits of precision. They're identical. In current reality, I expect all compilers now convert the literal to the native IEEE-754 double format, with 53 bits of precision. Either way, the code ensures the best 53-bit approximation to `log(2)` will be used.
Up to the precision of binary64 floating-point representation, these values are equal: ``` In [21]: 0.6931471805599453094172321214581 == 0.693147180559945286227 Out[21]: True ``` `0.693147180559945286227` is what you get if you store the most accurate representable approximation of ln(2) into a 64-bit float and then print it to that many digits. Trying to stuff more digits in a float just gets the result rounded to the same value: ``` In [23]: '%.21f' % 0.6931471805599453094172321214581 Out[23]: '0.693147180559945286227' ``` As for why they wrote `0.693147180559945286227` in the code, you'd have to ask the guys who wrote [FDLIBM](http://www.netlib.org/fdlibm/e_acosh.c) at Sun back in 1993. This code came from FDLIBM.
73,348,659
I've recently had to implement a simple bruteforce software in python, and I was getting terrible execution times (even for a O(n^2) time complexity), topping the 10 minutes of runtime for a total of 3700 \* 11125 \* 2 = 82325000 access operations on numpy arrays (intel i5 4300U). I'm talking about access operations because I initialized all the arrays beforehand (out of the bruteforce loops) and then I just recycle them by just overwriting without reallocating anything. I get that bruteforce algorithms are supposed to be very slow, but 82 million acesses on contiguous-memory float arrays should not take 10 minutes even on the oldest of cpus... After some research I found this post: [Why is numpy.array so slow?](https://stackoverflow.com/questions/6559463/why-is-numpy-array-so-slow), which lead me to think that maybe numpy arrays with length of 3700 are too small to overcome some sort of overhead (which I do not know since, as I said, I recycle and not reallocate my arrays), therefore I tried substituting arrays with lists and... voilà, run times were down to the minute (55 seconds) for a total of 10x decrease! But now I'm kinda baffled on why on earth a list can be faster than an array, but more importantly, where is the threshold? The cited post said that numpy arrays are very efficient on large quantities of data, but where does "a lot of data" becomes "enough data" to actually get advantages? Is there a safe way to determine whether to use arrays or lists? Should I get worried about writing my functions two times, one for each case? For anyone wondering, the original script was something like (list version, mockup data): ``` X = [0] * 3700 #list of 3700 elements y = [0] * 3700 combinations = [(0, 0)] * 11125 grg_results = [[0, 0, 0]] * len(combinations) rgr_results = [[0, 0, 0]] * len(combinations) grg_temp = [100] * (3700 + 1) rgr_temp = [100] * (3700 + 1) for comb in range(len(combinations)): pivot_a = combinations[comb][0] pivot_b = combinations[comb][1] for i in range(len(X)): _x = X[i][0] _y = y[i][0] if _x < pivot_a: grg_temp[i + 1] = _y * grg_temp[i] rgr_temp[i + 1] = (2 - _y) * rgr_temp[i] elif _x >= pivot_a and _x <= pivot_b: grg_temp[i + 1] = (2 - _y) * grg_temp[i] rgr_temp[i + 1] = _y * rgr_temp[i] else: grg_temp[i + 1] = _y * grg_temp[i] rgr_temp[i + 1] = (2 - _y) * rgr_temp[i] grg_results[comb][0] = pivot_a grg_results[comb][1] = pivot_b rgr_results[comb][0] = pivot_a rgr_results[comb][1] = pivot_b grg_results[comb][2] = metrics[0](grg_temp) rgr_results[comb][2] = metrics[0](rgr_temp) ```
2022/08/14
[ "https://Stackoverflow.com/questions/73348659", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17474667/" ]
Let's do some simple list and array comparisons. Make a list of 0s (as you do): ``` In [108]: timeit [0]*1000 2.83 µs ± 0.399 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) ``` Make an array from that list - a lot more time: ``` In [109]: timeit np.array([0]*1000) 84.9 µs ± 103 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) ``` Make the array of 0s in the correct numpy way: ``` In [110]: timeit np.zeros(1000) 735 ns ± 0.727 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each) ``` Now sum all the elements of the list: ``` In [111]: %%timeit alist = [0]*1000 ...: sum(alist) 5.74 µs ± 215 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) ``` Sum of the array: ``` In [112]: %%timeit arr = np.zeros(1000) ...: arr.sum() 6.41 µs ± 26.5 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each) ``` In this case the list is a bit faster; but make a much larger list/array: ``` In [113]: %%timeit alist = [0]*100000 ...: sum(alist) 545 µs ± 17.9 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) In [114]: %%timeit arr = np.zeros(100000) ...: arr.sum() 56.7 µs ± 37 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) ``` The list sum scales O(n); the array scales much better (it has a higher "setup", but a lower O(n) dependency). The iteration is done in c code. Add 1 to all elements of the list: ``` In [115]: %%timeit alist = [0]*100000 ...: [i+1 for i in alist] 4.64 ms ± 168 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) ``` A similar list comprehension on the array is slower: ``` In [116]: %%timeit arr = np.zeros(100000) ...: np.array([i+1 for i in arr]) 24.2 ms ± 37.6 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) ``` But doing a whole-array operation (where the iteration and addition are done in `c` code): ``` In [117]: %%timeit arr = np.zeros(100000) ...: arr+1 64.1 µs ± 85.3 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) ``` For smaller sizes the [115] comprehension will be faster; there's no clear threshold.
These are the 4 main advantages of an ndarray, as far as I know: 1. It uses less storage because it stores raw values directly (e.g. 1 byte per int8 element instead of an 8-byte pointer to a full python object). It also only allows homogeneous numeric data types, which also leads to an increase in performance. 2. Slicing doesn't copy the array (which is a slow operation); it just returns a view on the same array 3. The same goes for splitting arrays (no copy), and several other array operations avoid copies as well (joining, by contrast, has to copy) 4. Some operations like adding the values of two arrays together are written in C and of course faster as a result Conclusion : Manually looping over an ndarray is not fast, as there is no optimization applied there. A numpy array offers memory and execution efficiency, and not to forget that the array serves as an interchange format for many existing libraries. Sources: <https://pythoninformer.com/python-libraries/numpy/advantages/>, <https://numpy.org/devdocs/user/absolute_beginners.html>
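To make point 2 concrete, here is a minimal sketch (plain NumPy, nothing assumed beyond a standard install) contrasting an ndarray slice, which is a view into the same buffer, with a list slice, which copies:

```py
import numpy as np

arr = np.arange(10)
view = arr[2:5]      # slicing an ndarray returns a view, not a copy
view[0] = 99         # writing through the view...
print(arr[2])        # ...changes the original array: prints 99

lst = list(range(10))
sub = lst[2:5]       # slicing a list copies the elements
sub[0] = 99
print(lst[2])        # the original list is untouched: prints 2
```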
22,425,567
I'm using [Loggly](https://www.loggly.com/) in order to have a centralized logs aggregator for my app running on AWS (Elastic beanstalk). However I'm not able to save my application logs using the Python logging library and the django logging configuration. In my Loggly control panel I can see a lot of logs coming from the underlying OS and software stack of my EC2 instance, but specific logs from my app are not displayed and I don't understand why. I set up Loggly by configuring RSYSLOG on my EC2 instance (using the automated python script provided by Loggly itself), then I defined the following in my django settings: ``` LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'formatters': { 'standard': { 'format': '[%(asctime)s] [%(levelname)s] [%(name)s:%(lineno)s] %(message)s', 'datefmt': '%d/%b/%Y %H:%M:%S' }, 'loggly': { 'format': 'loggly: %(message)s', }, }, 'filters': { 'require_debug_false': { '()': 'django.utils.log.RequireDebugFalse' } }, 'handlers': { 'mail': { 'level': 'ERROR', 'filters': ['require_debug_false'], 'class': 'django.utils.log.AdminEmailHandler', 'formatter': 'standard', }, 'syslog': { 'level': 'INFO', 'class': 'logging.handlers.SysLogHandler', 'facility': 'local5', 'formatter': 'loggly', }, 'cygora': { 'level': 'DEBUG', 'class': 'logging.handlers.RotatingFileHandler', 'filename': '/tmp/cygora.log', 'maxBytes': 1024 * 1024 * 5, # 5 mb 'backupCount': 10, 'formatter': 'standard', }, 'django': { 'level': 'INFO', 'class': 'logging.handlers.RotatingFileHandler', 'filename': '/tmp/django.log', 'maxBytes': 1024 * 1024 * 5, # 5 mb 'backupCount': 10, 'formatter': 'standard', }, 'celery': { 'level': 'DEBUG', 'class': 'logging.handlers.RotatingFileHandler', 'filename': '/tmp/celery.log', 'maxBytes': 1024 * 1024 * 5, # 5 mb 'backupCount': 10, 'formatter': 'standard', }, }, 'loggers': { 'loggly': { 'handlers': ['syslog'], 'propagate': True, 'format': 'loggly: %(message)s', 'level': 'DEBUG', }, 'django': { 'handlers': ['syslog', 'django'], 'level': 'WARNING', 'propagate': True, }, 'django.db.backends': { 'handlers': ['syslog', 'django'], 'level': 'INFO', 'propagate': True, }, 'django.request': { 'handlers': ['syslog', 'mail', 'django'], 'level': 'INFO', 'propagate': True, }, 'celery': { 'handlers': ['syslog', 'mail', 'celery'], 'level': 'DEBUG', 'propagate': True, }, 'com.cygora': { 'handlers': ['syslog', 'cygora'], 'level': 'INFO', 'propagate': True, }, } } ``` In my classes I use the "standard" approach of having a module-level logger: ``` import logging logger = logging.getLogger(__name__) logger.info('This message is not displayed on Loggly!! :(') ``` but it doesn't work, and neither does using: ``` import logging logger = logging.getLogger('loggly') logger.info('This message is not displayed on Loggly!! :(') ``` Any idea? (Is anyone using Django + Loggly with RSYSLOG?)
2014/03/15
[ "https://Stackoverflow.com/questions/22425567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/267719/" ]
The problem is either in the local rsyslog service *receiving* the logs or in *sending* them. Your `LOGGING` setting is solid, but since you are taking control of everything (like the Django loggers) you should set `'disable_existing_loggers': True`. (Minor point: you can drop 'format' from the `loggly` loggers; the syslog handler will do that with its 'formatter'.) Following the Loggly Django [example](http://community.loggly.com/customer/portal/articles/1256304-logging-from-python#setup-django), there are two steps left. 1. Setup Syslog with these [instructions](http://community.loggly.com/customer/portal/articles/1271671-rsyslog-basic-configuration). You've done this and it sounds like it's working. 2. Make sure the service is listening by uncommenting these lines in `rsyslog.conf`: ``` # provides UDP syslog reception $ModLoad imudp $UDPServerRun 514 ``` Verify rsyslog's config with `rsyslogd -N1` and restart the service. Rsyslog [troubleshooting](http://www.rsyslog.com/doc/v7-stable/troubleshooting/troubleshoot.html) mentions looking at the rsyslog log and running the service interactively; hopefully you don't have to go to those depths. Look at Loggly's [gotchas](http://community.loggly.com/customer/portal/articles/1271671-rsyslog-basic-configuration#gotchas) section first. Start a Django shell and test it (which you're already doing—nice work!). ``` > manage.py shell import logging logger = logging.getLogger('loggly') logger.info('This message is not displayed on Loggly!! :(') ``` Also, you do not have a root logger. So it's quite likely that `logging.getLogger(__name__)` won't be caught and handled. That's a general note; your efforts are thorough and not limited by this.
Googled around and saw your post on loggly's support. Did you see their reply and did it help you? <http://community.loggly.com/customer/portal/questions/5898190-django-loggly-app-logs-not-saved>
24,148,039
I'm trying to use in python a shared\_ptr of a fundamental type (for instance int or double), but I don't know how to export it to python: I have the following class: ``` class Holder { public: Holder(int v) : value(new int(v)) {}; boost::shared_ptr<int> value; }; ``` The class is being exported in this way: ``` class_<Holder>("Holder", init<int>()) .def_readwrite("value", &Holder::value); ``` In the python code, I'm trying to set the "holder instance .value" using an instance that already exists. ``` h1 = mk.Holder(10) h2 = mk.Holder(20) h1.value = h2.value ``` The following has occurred: ``` TypeError: No to_python (by-value) converter found for C++ type: class boost::shared_ptr<int> ``` My question is: how can I export `boost::shared_ptr<int>` to python?
2014/06/10
[ "https://Stackoverflow.com/questions/24148039", "https://Stackoverflow.com", "https://Stackoverflow.com/users/697884/" ]
One can use [`boost::python::class_`](http://www.boost.org/doc/libs/release/libs/python/doc/v2/class.html#class_-spec) to export `boost::shared_ptr<int>` to Python in the same manner as other types: ```cpp boost::python::class_<boost::shared_ptr<int> >(...); ``` However, be careful about the semantics introduced when exposing `shared_ptr`. For example, consider the case where assigning one `Holder.value` to another `Holder.value` simply invokes `boost::shared_ptr<int>`'s assignment operator, and `Holder.increment()` manipulates the `int` pointed to by `value` rather than having `value` point to a new `int`: ``` h1 = Holder(10) h2 = Holder(20) h1.value = h2.value # h1.value and h2.value point to the same int. h1.increment() # Change observed in h2.value. The semantics may be unexpected # by Python developers. ``` --- Here is a complete example exposing `boost::shared_ptr<int>` based on the original code: ```cpp #include <sstream> #include <string> #include <boost/python.hpp> #include <boost/shared_ptr.hpp> class Holder { public: Holder(int v) : value(new int(v)) {}; boost::shared_ptr<int> value; }; std::string holder_value_str(const boost::shared_ptr<int>& value) { std::stringstream stream; stream << value.get() << " contains " << *value; return stream.str(); } BOOST_PYTHON_MODULE(example) { namespace python = boost::python; { python::scope holder = python::class_<Holder>( "Holder", python::init<int>()) .def_readwrite("value", &Holder::value) ; // Holder.Value python::class_<boost::shared_ptr<int> >("Value", python::no_init) .def("__str__", &holder_value_str) ; } } ``` Interactive usage: ```python >>> import example >>> h1 = example.Holder(10) >>> h2 = example.Holder(20) >>> print h1.value 0x25f4bd0 contains 10 >>> print h2.value 0x253f220 contains 20 >>> h1.value = h2.value >>> print h1.value 0x253f220 contains 20 ``` --- Alternatively, if one considers `shared_ptr` as nothing more than a C++ memory management proxy, then it may be reasonable to expose `Holder.value` as an `int` in Python, even though `Holder::value` is a `boost::shared_ptr<int>`. This would provide Python developers with the expected semantics and permit statements such as `h1.value = h2.value + 5`. Here is an example that uses auxiliary functions to transform `Holder::value` to and from an `int` instead of exposing the `boost::shared_ptr<int>` type: ```cpp #include <boost/python.hpp> #include <boost/shared_ptr.hpp> class Holder { public: Holder(int v) : value(new int(v)) {}; boost::shared_ptr<int> value; }; /// @brief Auxiliary function used to get Holder.value. int get_holder_value(const Holder& self) { return *(self.value); } /// @brief Auxiliary function used to set Holder.value. void set_holder_value(Holder& self, int value) { *(self.value) = value; } BOOST_PYTHON_MODULE(example) { namespace python = boost::python; python::scope holder = python::class_<Holder>( "Holder", python::init<int>()) .add_property("value", python::make_function(&get_holder_value), python::make_function(&set_holder_value)) ; } ``` Interactive usage: ```python >>> import example >>> h1 = example.Holder(10) >>> h2 = example.Holder(20) >>> assert(h1.value == 10) >>> assert(h2.value == 20) >>> h1.value = h2.value >>> assert(h1.value == 20) >>> h1.value += 22 >>> assert(h1.value == 42) >>> assert(h2.value == 20) ```
Do you need to? Python has its own reference counting mechanism, and it might be simpler just to use that. (But a lot depends on what is going on on the C++ side.) Otherwise: you probably need to define a Python object to contain the shared pointer. This is relatively straightforward: just define something like: ``` struct PythonWrapper { PyObject_HEAD boost::shared_ptr<int> value; // Constructors and destructors can go here, to manage // value. }; ``` And declare and manage it like you would any other object; just make sure you do a `new` whenever objects of the type are created (and they must be created in functions you provide to Python, if nothing else in the function in the `tp_new` field), and a `delete` in the function you put in the `tp_dealloc` field of the type object you register.
51,201,658
I am trying to learn to code using python on my own but I ran into a problem. I am using python's subprocess module to execute a .bat file, but the process seems to get stuck at the bat file. The python code currently looks like this: ``` import getpass username = getpass.getuser() from subprocess import Popen p = Popen("hidefolder.bat", cwd=r"C:\Users\%s\Desktop" % username) stdout, stderr = p.communicate() import sys sys.exit() ``` And the .bat file looks like this: ``` if exist "C:\Users\%username%\Desktop\HiddenFolder\" ( attrib -s -h "HiddenFolder" rename "HiddenFolder" "Projects" exit ) if exist "C:\Users\%username%\Desktop\Projects\" ( rename "Projects" "HiddenFolder" attrib +s +h "HiddenFolder" exit ) if not exist "C:\Users\%username%\Desktop\HiddenFolder\" ( mkdir "C:\Users\%username%\Desktop\HiddenFolder\" ) exit ``` Is there a way to kill the child process even if the python script is waiting for the child process to be terminated before continuing? Or is the problem in the child process to start with? Thank you in advance.
2018/07/06
[ "https://Stackoverflow.com/questions/51201658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10039856/" ]
You need to use `subprocess.PIPE` for `stdout` and `stderr`, or else they can't be fetched through `Popen.communicate`, which is the reason why your process is stuck. ``` from subprocess import Popen, PIPE import getpass username = getpass.getuser() p = Popen("hidefolder.bat", cwd=r"C:\Users\%s\Desktop" % username, stdout=PIPE, stderr=PIPE) stdout, stderr = p.communicate() import sys sys.exit() ```
I am a new programmer but I could solve my problem by writing the code below. ``` import subprocess subprocess.call([r'ProcurementSoftwareRun.bat']) print ('Software run successful') ``` My bat file was like: ``` @ECHO OFF cmd /c start "" "C:\Program Files (x86)\UserName\ERPModule\PROCUREMENT.exe" exit ```
45,010,682
I wanted to convert an object of type bytes to binary representation in python 3.x. For example, I want to convert the bytes object `b'\x11'` to the binary representation `00010001` in binary (or 17 in decimal). I tried this: ``` print(struct.unpack("h","\x11")) ``` But I'm getting: ``` error struct.error: unpack requires a bytes object of length 2 ```
2017/07/10
[ "https://Stackoverflow.com/questions/45010682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1987575/" ]
Starting from Python 3.2, you can use [`int.from_bytes`](https://docs.python.org/3/library/stdtypes.html#int.from_bytes). Second argument, `byteorder`, specifies [endianness](https://en.wikipedia.org/wiki/Endianness) of your bytestring. It can be either `'big'` or `'little'`. You can also use `sys.byteorder` to get your host machine's native byteorder. ```py import sys int.from_bytes(b'\x11', byteorder=sys.byteorder) # => 17 bin(int.from_bytes(b'\x11', byteorder=sys.byteorder)) # => '0b10001' ```
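For completeness, the `struct`-based attempt from the question can also be made to work; a small sketch (standard library only): the error came from handing a 1-byte value to the 2-byte `"h"` (short) format, while the 1-byte unsigned-char format `"B"` applied to a bytes object does the job:

```py
import struct

value, = struct.unpack("B", b'\x11')  # "B" = unsigned char, exactly 1 byte
print(value)                          # 17
print(format(value, '08b'))           # '00010001'
```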
Iterating over a bytes object gives you 8 bit ints which you can easily format to output in binary representation: ```py import numpy as np >>> my_bytes = np.random.bytes(10) >>> my_bytes b'_\xd9\xe97\xed\x06\xa82\xe7\xbf' >>> type(my_bytes) bytes >>> my_bytes[0] 95 >>> type(my_bytes[0]) int >>> for my_byte in my_bytes: >>> print(f'{my_byte:0>8b}', end=' ') 01011111 11011001 11101001 00110111 11101101 00000110 10101000 00110010 11100111 10111111 ``` A function for a hex string representation is builtin: ``` >>> my_bytes.hex(sep=' ') '5f d9 e9 37 ed 06 a8 32 e7 bf' ```
72,703,006
I am trying to have this repo on docker: <https://github.com/facebookresearch/detectron2/tree/main/docker> but when I want to docker compose it, I receive this error: ``` ERROR: Package 'detectron2' requires a different Python: 3.6.9 not in '>=3.7' ``` The default version of the python I am using is 3.10 but I don't know why through docker it's trying to run it on python 3.6.9. Is there a way for me to change it to a higher version of python while running the following dockerfile? ``` FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04 # use an older system (18.04) to avoid opencv incompatibility (issue#3524) ENV DEBIAN_FRONTEND noninteractive RUN apt-get update && apt-get install -y \ python3-opencv ca-certificates python3-dev git wget sudo ninja-build RUN ln -sv /usr/bin/python3 /usr/bin/python # create a non-root user ARG USER_ID=1000 RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers USER appuser WORKDIR /home/appuser ENV PATH="/home/appuser/.local/bin:${PATH}" RUN wget https://bootstrap.pypa.io/pip/3.6/get-pip.py && \ python3 get-pip.py --user && \ rm get-pip.py # install dependencies # See https://pytorch.org/ for other options if you use a different version of CUDA RUN pip install --user tensorboard cmake # cmake from apt-get is too old RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html RUN pip install --user 'git+https://github.com/facebookresearch/fvcore' # install detectron2 RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo # set FORCE_CUDA because during `docker build` cuda is not accessible ENV FORCE_CUDA="1" # This will by default build detectron2 for all common cuda architectures and take a lot more time, # because inside `docker build`, there is no way to tell which architecture will be used. ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing" ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}" RUN pip install --user -e detectron2_repo # Set a fixed model cache directory. ENV FVCORE_CACHE="/tmp" WORKDIR /home/appuser/detectron2_repo # run detectron2 under user "appuser": # wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg # python3 demo/demo.py \ #--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ #--input input.jpg --output outputs/ \ #--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ```
2022/06/21
[ "https://Stackoverflow.com/questions/72703006", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13334873/" ]
This is an [open issue with facebookresearch/detectron2](https://github.com/facebookresearch/detectron2/issues/4335). The developers updated the base Python requirement from 3.6+ to 3.7+ with [commit 5934a14](https://github.com/facebookresearch/detectron2/commit/5934a1452801e669bbf9479ae222ce1a8a51f52e) last week but didn't modify the `Dockerfile`. I've created a `Dockerfile` based on Nvidia CUDA's CentOS8 image (rather than Ubuntu) that should work. ``` FROM nvidia/cuda:11.1.1-cudnn8-devel-centos8 RUN cd /etc/yum.repos.d/ && \ sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-* && \ sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-* && \ dnf check-update; dnf install -y ca-certificates python38 python38-devel git sudo which gcc-c++ mesa-libGL && \ dnf clean all RUN alternatives --set python /usr/bin/python3 && alternatives --install /usr/bin/pip pip /usr/bin/pip3 1 # create a non-root user ARG USER_ID=1000 RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g wheel RUN echo '%wheel ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers USER appuser WORKDIR /home/appuser ENV PATH="/home/appuser/.local/bin:${PATH}" # install dependencies # See https://pytorch.org/ for other options if you use a different version of CUDA ARG CXX="g++" RUN pip install --user tensorboard ninja cmake opencv-python opencv-contrib-python # cmake from apt-get is too old RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html RUN pip install --user 'git+https://github.com/facebookresearch/fvcore' # install detectron2 RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo # set FORCE_CUDA because during `docker build` cuda is not accessible ENV FORCE_CUDA="1" # This will by default build detectron2 for all common cuda architectures and take a lot more time, # because inside `docker build`, there is no way to tell which architecture will be used. ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing" ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}" RUN pip install --user -e detectron2_repo # Set a fixed model cache directory. ENV FVCORE_CACHE="/tmp" WORKDIR /home/appuser/detectron2_repo # run detectron2 under user "appuser": # curl -o input.jpg http://images.cocodataset.org/val2017/000000439715.jpg # python3 demo/demo.py \ #--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ #--input input.jpg --output outputs/ \ #--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ``` --- Alternatively, this is untested as the following images don't work on my machine (because I run `arm64`) so I can't debug... In the original `Dockerfile`, changing your `FROM` line to this *might* resolve it, but I haven't verified this (and the image mentioned in the issue (`pytorch/pytorch:1.10.0-cuda11.3-cudnn8-devel`) might work as well. ``` FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu20.04 ```
You can use pyenv: <https://github.com/pyenv/pyenv> Just google `docker pyenv container`; it will give you some entries like: <https://gist.github.com/jprjr/7667947> If you follow the gist you can see how it has been updated; it is very easy to update to the latest python that pyenv supports, anything from 2.2 to 3.11. The only drawback is that the container becomes quite large because it holds all the glibc development tools and libraries needed to compile cpython, but that often helps in case you need modules without wheels and need to compile, because everything is already there. Below is a minimal pyenv Dockerfile. Just change the PYTHONVER or set a --build-arg to any python version pyenv supports (`pyenv install -l`): ``` FROM ubuntu:22.04 ARG MYHOME=/root ENV MYHOME ${MYHOME} ARG PYTHONVER=3.10.5 ENV PYTHONVER ${PYTHONVER} ARG PYTHONNAME=base ENV PYTHONNAME ${PYTHONNAME} ARG DEBIAN_FRONTEND=noninteractive RUN apt-get update && apt-get upgrade -y && \ apt-get install -y locales wget git curl zip vim apt-transport-https tzdata language-pack-nb language-pack-nb-base manpages \ build-essential libjpeg-dev libssl-dev xvfb zlib1g-dev libbz2-dev libreadline-dev libreadline6-dev libsqlite3-dev tk-dev libffi-dev libpng-dev libfreetype6-dev \ libx11-dev libxtst-dev libfontconfig1 lzma lzma-dev RUN git clone https://github.com/pyenv/pyenv.git ${MYHOME}/.pyenv && \ git clone https://github.com/yyuu/pyenv-virtualenv.git ${MYHOME}/.pyenv/plugins/pyenv-virtualenv && \ git clone https://github.com/pyenv/pyenv-update.git ${MYHOME}/.pyenv/plugins/pyenv-update SHELL ["/bin/bash", "-c", "-l"] COPY ./.bash_profile /tmp/ RUN cat /tmp/.bash_profile >> ${MYHOME}/.bashrc && \ cat /tmp/.bash_profile >> ${MYHOME}/.bash_profile && \ rm -f /tmp/.bash_profile && \ source ${MYHOME}/.bash_profile && \ pyenv install ${PYTHONVER} && \ pyenv virtualenv ${PYTHONVER} ${PYTHONNAME} && \ pyenv global ${PYTHONNAME} ``` and the pyenv config to be saved as .bash\_profile in the Dockerfile directory: ``` # profile for pyenv export PYENV_ROOT="$HOME/.pyenv" export PATH="$PYENV_ROOT/bin:$PATH" eval "$(pyenv init -)" eval "$(pyenv init --path)" eval "$(pyenv virtualenv-init -)" ``` Build with: `docker build -t pyenv:3.10.5 .` This will build the image, but as said it is quite big: ``` docker images REPOSITORY TAG IMAGE ID CREATED SIZE pyenv 3.10.5 64a4b91364d4 2 minutes ago 1.04GB ``` It is very easy to test any python version by only changing PYTHONVER: ``` docker run -ti pyenv:3.10.5 /bin/bash (base) root@968fd2178c8a:/# python --version Python 3.10.5 (base) root@968fd2178c8a:/# which python /root/.pyenv/shims/python ``` If I build with `docker build -t pyenv:3.12-dev --build-arg PYTHONVER=3.12.dev .` or change the PYTHONVER in the Dockerfile: ``` docker run -ti pyenv:3.12-dev /bin/bash (base) root@c7245ea9f52e:/# python --version Python 3.12.0a0 ```
46,830,144
There seem to be two kinds of generator-based coroutine: 1. From [a reply](https://stackoverflow.com/a/46203922/156458) by Jim Fasarakis Hilliard: > > **Generator-based coroutine**: A generator (`def` + `yield`) that is wrapped by [`types.coroutine`](https://docs.python.org/3/library/types.html#types.coroutine) . You need to wrap it in > `types.coroutine` if you need it to be considered a coroutine object. > > > 2. From Python in a Nutshell, which doesn't explicitly call it "generator-based coroutine": > > When you write Python code based on `asyncio` (ideally also using > add-on modules from asyncio.org), you’ll usually be writing > coroutine functions. Up to Python 3.4 included, such functions > are generators using the `yield from` statement covered in “yield > from (v3-only)” on page 95, decorated with `@asyncio.coroutine` , > covered in “asyncio coroutines” on page 518; > > > From <https://www.python.org/dev/peps/pep-0492/#differences-from-generators> > > generator-based coroutines (for asyncio code must be decorated with @asyncio.coroutine) > > > <http://masnun.com/2015/11/13/python-generators-coroutines-native-coroutines-and-async-await.html> also calls it "generator-based coroutine". Are the two kinds of generator-based coroutines the same concept? If not, what are their differences in purposes and usages? Thanks.
2017/10/19
[ "https://Stackoverflow.com/questions/46830144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/156458/" ]
They're the same kind of coroutine. `types.coroutine` and `asyncio.coroutine` are just two separate ways to create them. `asyncio.coroutine` is older, predating the introduction of `async` coroutines, and its functionality has shifted somewhat from its original behavior now that `async` coroutines exist. `asyncio.coroutine` and `types.coroutine` have subtly different behavior, especially if applied to anything other than a generator function, or if asyncio is in [debug mode](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONASYNCIODEBUG).
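A minimal sketch of the above, driving the coroutine by hand so no event loop is needed (the `@asyncio.coroutine` variant would produce the same kind of object, but note it is deprecated since Python 3.8 and removed in 3.11, so only `types.coroutine` is shown):

```py
import types

@types.coroutine
def gen_coro():          # a generator-based coroutine
    yield                # suspension point, as in old-style asyncio code
    return 42

async def native():      # a native coroutine can await it directly
    return await gen_coro()

c = native()
c.send(None)             # runs up to the bare yield
try:
    c.send(None)         # resumes; both coroutines finish
except StopIteration as e:
    print(e.value)       # 42
```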
As far as I’m concerned, `async def` is the **proper** way to define a coroutine. `yield` and `yield from` have their purpose in generators, and they are also used to implement “futures”, which are the low-level mechanism that handles switching between different coroutine contexts. I did [this diagram](https://default-cube.deviantart.com/art/Van-Rossum-s-Triangle-679791228) a few months ago to summarize the relationships between them. But frankly, you can safely ignore the whole business. Event loops have the job of handling all the low-level details of managing the execution of coroutines, so use one of those, like [asyncio](https://docs.python.org/3/library/asyncio.html). There are also `asyncio`-compatible wrappers for other event loops, like my own [`glibcoro`](https://github.com/ldo/glibcoro) for GLib/GTK. In other words, stick to the `asyncio` API, and you can write “event-loop-agnostic” coroutines!
49,922,073
I just installed termcolor for python 2.7 on Windows 8.1. When I try to print colored text, I get strange output. ``` from termcolor import colored print colored('Hello world','red') ``` Here is the result: ``` [31mHello world[0m ``` Help me get out of this problem. Thanks in advance!
2018/04/19
[ "https://Stackoverflow.com/questions/49922073", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9467325/" ]
See this [stackOverflow](https://stackoverflow.com/questions/287871/how-to-print-colored-text-in-terminal-in-python) post. It basically says that in order to get the escape sequences working in Windows, you need to run os.system('color') first. For example: ``` import termcolor import os os.system('color') print(termcolor.colored("Stack Overflow", "green")) ```
`termcolor` or `colored` works perfectly fine under python 2.7 and I can't replicate your error on my Mac/Linux. If you look into the source code of `colored`, it basically prints the string in the format ``` '\033[%dm%s\033[0m' % (COLORS[color], text) ``` Somehow your terminal environment does not recognise the non-printing escape sequences that are used on unix/linux systems for setting the foreground color of xterm.
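Another common workaround on Windows, sketched here under the assumption that the third-party `colorama` package is installed, is to let it translate the ANSI escape sequences into Win32 console calls so `termcolor` output renders correctly:

```py
import colorama
from termcolor import colored

colorama.init()  # wraps stdout so ANSI codes are translated on Windows
print(colored('Hello world', 'red'))
```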
3,079,684
As you know, Windows has a "Add/Remove Programs" system in the Control Panel. Let's say I am preparing an installer and I want to register my program to list of installed programs and want it to be uninstallable from "Add/Remove Programs"? Which protocols should I use. Any tutorials or docs about registering programs to that list? I am coding with python and I can use WMI (Windows Management Instrument) or Win32 API. IMHO, it is done with Registry keys but I am not sure with it. I also want to execute an uninstaller upon the Uninstallation to remove installed files. Any related docs or tutorials are highly appreciated. Thanks.
2010/06/20
[ "https://Stackoverflow.com/questions/3079684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/54929/" ]
As stated on IRC: "Windows keeps its uninstall information in the registry". It's in HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall\ keys. You need a few things from the Win32 API, but I believe there's a fair amount of Python support for the win32 API. Basically, you create a key in ...\Uninstall\ with a unique name (like "MyApp") with a few special values stashed in there. Add/Remove Programs looks through there. It's pretty self-explanatory.
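A minimal sketch of that registration in Python (standard-library `winreg`, named `_winreg` on Python 2; the key name, display strings and uninstaller path are made-up placeholders, not values required by Windows):

```py
import winreg

key_path = r"Software\Microsoft\Windows\CurrentVersion\Uninstall\MyApp"

# HKEY_CURRENT_USER avoids needing admin rights; per-machine installers
# write the same values under HKEY_LOCAL_MACHINE instead.
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
    winreg.SetValueEx(key, "DisplayName", 0, winreg.REG_SZ, "My App")
    winreg.SetValueEx(key, "DisplayVersion", 0, winreg.REG_SZ, "1.0.0")
    winreg.SetValueEx(key, "Publisher", 0, winreg.REG_SZ, "Example Corp")
    winreg.SetValueEx(key, "UninstallString", 0, winreg.REG_SZ,
                      r"C:\Program Files\MyApp\uninstall.exe")
```

Add/Remove Programs then executes whatever command is stored in `UninstallString`, so pointing it at your own uninstaller lets you remove installed files on uninstallation.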
Inno Setup is open source so perhaps you can get some ideas from that.
3,079,684
As you know, Windows has a "Add/Remove Programs" system in the Control Panel. Let's say I am preparing an installer and I want to register my program to list of installed programs and want it to be uninstallable from "Add/Remove Programs"? Which protocols should I use. Any tutorials or docs about registering programs to that list? I am coding with python and I can use WMI (Windows Management Instrument) or Win32 API. IMHO, it is done with Registry keys but I am not sure with it. I also want to execute an uninstaller upon the Uninstallation to remove installed files. Any related docs or tutorials are highly appreciated. Thanks.
2010/06/20
[ "https://Stackoverflow.com/questions/3079684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/54929/" ]
As stated on IRC: "Windows keeps its uninstall information in the registry". It's in HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall\ keys. You need a few things from the Win32 API, but I believe there's a fair amount of Python support for the win32 API. Basically, you create a key in ...\Uninstall\ with a unique name (like "MyApp") with a few special values stashed in there. Add/Remove Programs looks through there. It's pretty self-explanatory.
If you are developing for the Windows platform, I think using Windows Installer from Microsoft won't be a problem. You can check the documentation of Windows Installer on the [Microsoft.com Windows Installer Page](http://msdn.microsoft.com/en-us/library/cc185688%28v=VS.85%29.aspx)
3,079,684
As you know, Windows has a "Add/Remove Programs" system in the Control Panel. Let's say I am preparing an installer and I want to register my program to list of installed programs and want it to be uninstallable from "Add/Remove Programs"? Which protocols should I use. Any tutorials or docs about registering programs to that list? I am coding with python and I can use WMI (Windows Management Instrument) or Win32 API. IMHO, it is done with Registry keys but I am not sure with it. I also want to execute an uninstaller upon the Uninstallation to remove installed files. Any related docs or tutorials are highly appreciated. Thanks.
2010/06/20
[ "https://Stackoverflow.com/questions/3079684", "https://Stackoverflow.com", "https://Stackoverflow.com/users/54929/" ]
If you are developing for the Windows platform, I think using Windows Installer from Microsoft won't be a problem. You can check the documentation of Windows Installer on the [Microsoft.com Windows Installer Page](http://msdn.microsoft.com/en-us/library/cc185688%28v=VS.85%29.aspx)
Inno Setup is open source so perhaps you can get some ideas from that.
53,119,083
In the [`xonsh`](https://github.com/xonsh/xonsh/) shell how can I receive from a pipe to a python expression? Example with a `find` command as pipe provider: ``` find $WORKON_HOME -name pyvenv.cfg -print | for p in <stdin>: $(ls -dl @(p)) ``` The `for p in <stdin>:` is obviously pseudo code. What do I have to replace it with? Note: In bash I would use a construct like this: ``` ... | while read p; do ... done ```
2018/11/02
[ "https://Stackoverflow.com/questions/53119083", "https://Stackoverflow.com", "https://Stackoverflow.com/users/65889/" ]
The easiest way to pipe input into a Python expression is to use a function that is a [callable alias](https://xon.sh/tutorial.html#callable-aliases), which happens to accept a stdin file-like object. For example, ``` def func(args, stdin=None): for line in stdin: ls -dl @(line.strip()) find $WORKON_HOME -name pyvenv.cfg -print | @(func) ``` Of course you could skip the `@(func)` by putting func in `aliases`, ``` aliases['myls'] = func find $WORKON_HOME -name pyvenv.cfg -print | myls ``` Or if all you wanted to do was iterate over the output of `find`, you don't even need to pipe. ``` for line in !(find $WORKON_HOME -name pyvenv.cfg -print): ls -dl @(line.strip()) ```
Drawing on the answer from [Anthony Scopatz](https://stackoverflow.com/users/2312428/anthony-scopatz) you can do this on one line with a [callable alias](https://xon.sh/tutorial.html#callable-aliases) as a lambda. The function takes the third form, `def mycmd2(args, stdin=None)`. I discarded `args` with `_` because I don't need it and shortened `stdin` to `s` for convenience. Here is a command to hash a file: ``` type image.qcow2 | @(lambda _,s: hashlib.sha256(s.read()).hexdigest()) ```
62,032,878
I am new to the ebpf & xdp topic and want to learn it. My question is: how do I use an ebpf filter to filter packets by matching on a specific payload? For example, if the data (payload) of the packet is 1234 it passes to the network stack, otherwise the packet is blocked. I got as far as the payload length: if I want to match the message payload length it works fine, but when I start matching the payload characters I get an error. Here is my code: ```c int ret_val; unsigned long payload_offset; unsigned long payload_size; const char *payload = "test"; struct ethhdr *eth = data; if ((void*)eth + sizeof(*eth) <= data_end) { struct iphdr *ip = data + sizeof(*eth); if ((void*)ip + sizeof(*ip) <= data_end) { if (ip->protocol == IPPROTO_UDP ) { struct udphdr *udp = (void*)ip + sizeof(*ip); if ((void*)udp + sizeof(*udp) <= data_end) { if (udp->dest == ntohs(5005)) { payload_offset = sizeof(struct udphdr); payload_size = ntohs(udp->len) - sizeof(struct udphdr); unsigned char *s = (unsigned char *)&payload_size; if (ret_val == __builtin_memcmp(s,payload,4) == 0) { return XDP_DROP; } } } } } } ``` The error has been removed, but I am unable to compare the payload... I am sending the UDP message from python socket code. If I compare the payload length it works fine.
2020/05/26
[ "https://Stackoverflow.com/questions/62032878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13623550/" ]
What did you try? You should probably read a bit more about eBPF to try to understand how to process packets; the basic example you give does not sound too complicated. Basically you would have to parse the headers to see where your payload begins. [Simple BPF parsing examples](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/samples/bpf/parse_simple.c/?h=v5.6) might help you understand the principles: 1. Start from beginning of header (e.g. Ethernet at first) 2. Check packet is long enough to hold the header (or you would risk an out-of-bound access when trying to access the upper layers otherwise) 3. Add header length to get the offset of your next header (e.g. IPv4, then e.g. TCP...) 4. Rinse and repeat. In your case you would **process all headers until you get the offset of the data payload**. Note that this is trivial if the traffic you try to match always has the same headers (e.g. always IPv4 and UDP), but you get more cases to sort out if there is a mix (IPv4 + IPv6, encapsulation, IPv4 options...). Once you have the offset for your data, **just compare data at this offset** to your pattern (that you may hardcode in the BPF program or get from a BPF map, depending on your use case). Note that you [do not have access to `strcmp()`](https://stackoverflow.com/a/60384186/3716552), but `__builtin_memcmp()` is available if you need to compare more than 64 bits. (All the above applying of course to a C program that you would compile into an object file containing eBPF instructions with the LLVM back-end.) If you were to search for a string at an arbitrary offset in the payload, know that eBPF now supports (bounded) loops since kernel 5.3 (if I remember correctly).
Your edit is pretty much a new question, so here is an updated answer. Please consider opening a new question instead in the future. There are a number of things that are wrong in your program. In particular: ```c 1| payload_offset = sizeof(struct udphdr); 2| payload_size = ntohs(udp->len) - sizeof(struct udphdr); 3| unsigned char *s = (unsigned char *)&payload_size; 4| 5| if (ret_val == __builtin_memcmp(s, payload, 4) == 0) { 6| return XDP_DROP; 7| } ``` * On line 1, your `payload_offset` variable is not an offset, it just contains the length of the UDP header. You would need to add that to the start of the UDP header to get the actual payload offset. * Line 2 is fine. * Line 3 does not make any sense! You make `s` (that you later compare to your pattern) point towards *the size of the payload*? (a.k.a “I told you so in the comments! :)”). Instead, it should point to... the beginning of the payload, maybe? So, basically, `data + payload_offset` (once offset is fixed). * Between lines 3 and 5, the check on payload length is missing. When you try to access your payload in `s` (`__builtin_memcmp(s, payload, 4)`), you try to compare four bytes of packet data; you *must* ensure that the packet is long enough to read those four bytes (just as you checked the length each time before you read from an Ethernet, IP or UDP header field). * While at it, we can also check that the length of the payload is equal to the length of the pattern to match, and exit if they differ without having to compare the bytes. * Line 5 has a `==` instead of `=`, as discussed in the comments. Easy to fix. However, I had no luck with `__builtin_memcmp()` for your program; it seems LLVM does not want to inline it and turns it into a failing function call. Never mind, we can work without it. For your example, you can cast to `int` and compare the four-byte long values directly. For longer patterns, and for recent kernels (or by unrolling if pattern size is fixed), we can use bounded loops. Here is an amended version of your program, that works on my setup. ```c #include <arpa/inet.h> #include <linux/bpf.h> #include <linux/if_ether.h> #include <linux/ip.h> #include <linux/udp.h> int xdp_func(struct xdp_md *ctx) { void *data_end = (void *)(long)ctx->data_end; void *data = (void *)(long)ctx->data; char match_pattern[] = "test"; unsigned int payload_size, i; struct ethhdr *eth = data; unsigned char *payload; struct udphdr *udp; struct iphdr *ip; if ((void *)eth + sizeof(*eth) > data_end) return XDP_PASS; ip = data + sizeof(*eth); if ((void *)ip + sizeof(*ip) > data_end) return XDP_PASS; if (ip->protocol != IPPROTO_UDP) return XDP_PASS; udp = (void *)ip + sizeof(*ip); if ((void *)udp + sizeof(*udp) > data_end) return XDP_PASS; if (udp->dest != ntohs(5005)) return XDP_PASS; payload_size = ntohs(udp->len) - sizeof(*udp); // Here we use "size - 1" to account for the final '\0' in "test". // This '\0' may or may not be in your payload, adjust if necessary. if (payload_size != sizeof(match_pattern) - 1) return XDP_PASS; // Point to start of payload. payload = (unsigned char *)udp + sizeof(*udp); if ((void *)payload + payload_size > data_end) return XDP_PASS; // Compare each byte, exit if a difference is found. for (i = 0; i < payload_size; i++) if (payload[i] != match_pattern[i]) return XDP_PASS; // Same payload, drop. return XDP_DROP; } ```
54,468,348
From the cmd window I have to do this every time I run a script: ``` C:\>cd C:\Users\my name\AppData\Local\Programs\Python\Python37 C:\Users\my name\AppData\Local\Programs\Python\Python37>python "C:\\Users\\my name\\AppData\\Local\\Programs\\Python\\Python37\\scripts\\helloWorld.py" hello world ``` How can I get away from having to paste in all of the paths? I tried this and a few other things: <https://www.youtube.com/watch?v=Y2q_b4ugPWk> thanks!
2019/01/31
[ "https://Stackoverflow.com/questions/54468348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3524158/" ]
You need to pay attention to the current working directory of your python interpreter. It is basically the directory you are in when you execute the python interpreter, and the interpreter relies on that path to look for the script you pass in. If you're inside the script already, you can easily check it with the `os.getcwd()` method. In your case, you could have easily done this instead: ``` C:\Users\my name\AppData\Local\Programs\Python\Python37>python "scripts\helloWorld.py" hello world ``` Since your current working directory is `C:\Users\my name\AppData\Local\Programs\Python\Python37`, you just need to give it the relative path `scripts\helloWorld.py`. The current working directory can be easily visualized like this: ``` # cwd.py import os print("Current Working Directory is " + os.getcwd()) ``` And then when you run the scripts: ``` C:\Users\MyUserName\Documents>python cwd.py Current Working Directory is C:\Users\MyUserName\Documents C:\Users\MyUserName\Documents\Some\Other\Path>python cwd.py Current Working Directory is C:\Users\MyUserName\Documents\Some\Other\Path ``` Note in any case, if `cwd.py` were not in the current working directory or in your PATH environment variable, the python interpreter would complain that it couldn't find the script (because why should it know where your script is stored?) If you insist on adding the environment variable though, you will need to add the directory to your `PATH` or `PYTHONPATH`... though I have a feeling `\Python37` is already under there.
There is a designated directory where you can put your .py scripts if you want to invoke them without specifying the full path. Setting this up correctly will allow you to run the script simply by invoking the script name (if the .py extension is registered to the interpreter and not an editor). Windows ======= If you have a per-user python installation - which is the installer default - the directory is: ``` %LOCALAPPDATA%/python/python39/Scripts ``` Adjust version number as needed. If you have a system-wide all-user installation, the directory is: ``` %APPDATA%/python/python39/Scripts ``` ### Configuring PATH automatically The Windows python installer includes an [option to automatically add](https://docs.python.org/3/using/windows.html#finding-the-python-executable) this directory (plus the python interpreter path) to your `PATH` environment variable during installation. Select the checkbox at the bottom or use the `PrependPath=1` CLI option. [![Python installer dialog showing add-to-path checkbox](https://i.stack.imgur.com/3MBsR.png)](https://i.stack.imgur.com/3MBsR.png) If python is already installed, you can still use the installer to do this. In Control Panel, `Programs and Features`, select the python entry and choose "Uninstall/Change". Then choose "Modify" and select the "Add Python to PATH" checkbox. Alternatively, if you want to add it manually - search for the `Edit environment variables for your account` in Windows 10. Edit the `PATH` variable in that dialog to add the directory. Linux ===== ``` ~/.local/bin ```
63,841,244
I have been trying to scrape data from [this site](http://www.indianbluebook.com/). I need to fill **Get the precise price of your car** form ie. the year, make, model etc.. I have written the following code till now: ``` import requests import time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait, Select from selenium.common.exceptions import NoSuchElementException from selenium.common.exceptions import StaleElementReferenceException from selenium.webdriver.support import expected_conditions from bs4 import BeautifulSoup import re chrome_options = webdriver.ChromeOptions() driver = webdriver.Chrome('chromedriver_win32/chromedriver.exe', options=chrome_options) url = "http://www.indianbluebook.com/" driver.get(url) save_city = driver.find_element_by_xpath('//*[@id="cityPopup"]/div[2]/div/div[2]/form/div[2]/div/a[1]').click() #Bangalore #fill year year_dropdown = Select(driver.find_element_by_xpath('//*[@id="car_value"]/div[2]/div[1]/div[1]/div/select')) driver.implicitly_wait(50) year_dropdown.select_by_value('2020') time.sleep(5) ``` But, its giving this error: ``` ElementNotInteractableException Traceback (most recent call last) <ipython-input-25-a4eb8001e649> in <module> 8 year_dropdown = Select(driver.find_element_by_xpath('//*[@id="car_value"]/div[2]/div[1]/div[1]/div/select')) 9 driver.implicitly_wait(50) ---> 10 year_dropdown.select_by_value('2020') 11 12 time.sleep(5) ~\anaconda3\lib\site-packages\selenium\webdriver\support\select.py in select_by_value(self, value) 80 matched = False 81 for opt in opts: ---> 82 self._setSelected(opt) 83 if not self.is_multiple: 84 return ~\anaconda3\lib\site-packages\selenium\webdriver\support\select.py in _setSelected(self, option) 210 def _setSelected(self, option): 211 if not option.is_selected(): --> 212 option.click() 213 214 def _unsetSelected(self, option): ~\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py in click(self) 78 def click(self): 79 """Clicks the element.""" ---> 80 self._execute(Command.CLICK_ELEMENT) 81 82 def submit(self): ~\anaconda3\lib\site-packages\selenium\webdriver\remote\webelement.py in _execute(self, command, params) 631 params = {} 632 params['id'] = self._id --> 633 return self._parent.execute(command, params) 634 635 def find_element(self, by=By.ID, value=None): ~\anaconda3\lib\site-packages\selenium\webdriver\remote\webdriver.py in execute(self, driver_command, params) 319 response = self.command_executor.execute(driver_command, params) 320 if response: --> 321 self.error_handler.check_response(response) 322 response['value'] = self._unwrap_value( 323 response.get('value', None)) ~\anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py in check_response(self, response) 240 alert_text = value['alert'].get('text') 241 raise exception_class(message, screen, stacktrace, alert_text) --> 242 raise exception_class(message, screen, stacktrace) 243 244 def _value_or_default(self, obj, key, default): ElementNotInteractableException: Message: element not interactable: Element is not currently visible and may not be manipulated (Session info: chrome=85.0.4183.102) ``` Note: I have tried many available solutions on internet like using Expected conditions with WebDriverWait. Sometimes I get the error, `StaleElementException`. I don't know what to do now. Please help. I'm new to this.
2020/09/11
[ "https://Stackoverflow.com/questions/63841244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9952858/" ]
You can use the below approach to achieve this. ``` #Set link according to the data needed driver.get('http://www.indianbluebook.com/') #Wait for the webpage to fully load the necessary tables def ajaxwait(): for i in range(1, 30): x = driver.execute_script("return (window.jQuery != null) && jQuery.active") time.sleep(1) if x == 0: print("All Ajax element loaded successfully") break ajaxwait() wait = WebDriverWait(driver, 20) wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='cityPopup']/div[2]/div/div[2]/form/div[2]/div/a[1]"))) save_city = driver.find_element_by_xpath('//*[@id="cityPopup"]/div[2]/div/div[2]/form/div[2]/div/a[1]').click() #Bangalore ajaxwait() #fill year wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year']/following-sibling::div/a"))) #//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year'] is the only unique element; with reference to it we can find the other elements. #Click on the select-year field; a dropdown opens, we enter the year in the input box, then select the item from the ul list. driver.find_element_by_xpath("//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year']/following-sibling::div/a").click() driver.find_element_by_xpath("//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year']/following-sibling::div//input").send_keys("2017") driver.find_element_by_xpath("//div[@class='form-group']//select[@class='form-control' and @name='manufacture_year']/following-sibling::div//em").click() ``` Similarly you can select the other dropdowns by changing the `@name='manufacture_year'` attribute value. Note: Updated the code with the Ajax wait.
To click on **BANGALORE** and then select **2020** from the dropdown, you need to induce [WebDriverWait](https://stackoverflow.com/questions/49775502/webdriverwait-not-working-as-expected/49775808#49775808) for the `element_to_be_clickable()` and you can use the following [Locator Strategies](https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890): * Using `XPATH`: ``` driver.get('http://www.indianbluebook.com/') WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.LINK_TEXT, "BANGALORE"))).click() WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//span[text()='Select Year']"))).click() WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//ul[@class='chosen-results']//li[@class='active-result' and text()='2020']"))).click() ``` * **Note**: You have to add the following imports : ``` from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC ``` * Browser Snapshot: [![indianbluebook](https://i.stack.imgur.com/l65n2.png)](https://i.stack.imgur.com/l65n2.png)
59,410,323
so I have a csv file which is of the form - ``` No. Name Money 1 Tom Cat 100 2 Dan Man 200 3 Marie Claw300 4 Catherine K. 400 ``` I need to detect if some part of my second-column data is in my third column. Is there a way in python to do this efficiently? Also, this is a made-up example; the whole dataset contains many cases like this and it is not a one-off incident. Edit - Expected output ``` No. Name Money 1 Tom Cat 100 2 Dan Man 200 3 Marie Claw 300 4 Catherine K. 400 ```
2019/12/19
[ "https://Stackoverflow.com/questions/59410323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12021224/" ]
Unfortunately you cannot use blade syntax within Vue unless you are writing the Vue code directly in the blade template, which would not be best practice. One thing I have found helpful is to write out all my Laravel API routes in a Google Doc so they are easier to refer to when referencing them in Vue. I hope that helps!
You can only use blade syntax if you're in a `.blade` file. You have to statically set this route (or others) when calling an API. NOT RECOMMENDED: you can also define a js variable in your "master" blade file, which you then use in the `Register.vue` file.
59,410,323
so I have a csv file which is of the form - ``` No. Name Money 1 Tom Cat 100 2 Dan Man 200 3 Marie Claw300 4 Catherine K. 400 ``` I need to detect if some part of my second-column data is in my third column. Is there a way in python to do this efficiently? Also, this is a made-up example; the whole dataset contains many cases like this and it is not a one-off incident. Edit - Expected output ``` No. Name Money 1 Tom Cat 100 2 Dan Man 200 3 Marie Claw 300 4 Catherine K. 400 ```
2019/12/19
[ "https://Stackoverflow.com/questions/59410323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12021224/" ]
There is no way of doing this without a `.blade` file; there is no support for Vue components to use Laravel routes dynamically. But you could use a third-party package to achieve this, something like `Ziggy`: <https://github.com/tightenco/ziggy>
You can only use blade syntax if you're in a `.blade` file. You have to statically set this route (or others) when calling an API. NOT RECOMMENDED: you can also define a js variable in your "master" blade file, which you then use in the `Register.vue` file.