qid: int64 (46k to 74.7M)
question: string (length 54 to 37.8k)
date: string (length 10)
metadata: list (length 3)
response_j: string (length 17 to 26k)
response_k: string (length 26 to 26k)
63,867,203
I wrote some code in python to see how many times one number can be divided by a number, until it gets a value of one. ``` counter_var = 1 quotient = num1/num2 if quotient<1: print('1 time') else: while quotient >= 1: quotient = num1/num2 counter_var = counter_var + 1 print(counter_var) ``` It is not ending the process but neither is it giving any output.
2020/09/13
[ "https://Stackoverflow.com/questions/63867203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14108602/" ]
You are not changing the value of quotient in the while loop; it remains constant. Instead of **quotient = num1/num2** it should be **quotient /= num2**, if I understand your problem correctly.
Let us break down your code with an example: section 1: all good code ======================== ``` counter_var = 1 quotient = num1/num2 if quotient<1: print('1 time') else: ``` section 2: problem code ======================= ``` while quotient >= 1: quotient = num1/num2 counter_var = counter_var + 1 print(counter_var) ``` Let us take an example where num1 = 20 ========= num2 = 4 ======== It goes through section 1. All is good. The value of `quotient` is `5.0`. So it goes into the while statement. The `quotient` will always remain `5.0`. What do you think will be the result of the next calculation? You didn't change the value of `num1` and `num2`, so it will continue to be `5.0`. And so it goes into an endless loop. If you want to change the code, you have to figure out how to change the value of `num1` so it can divide further. Hope this helps you debug your code and get to the right answer. I will leave you to figure out how to fix this. Hint: any number divided by itself will continue to remain at 1. You may want to look at your while loop and make a small change. Should it be `>=`?
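For concreteness, here is a minimal sketch of one way to apply that fix (this is an illustration, not the asker's code; the sample values 20 and 4 are made up, and the loop assumes num2 is greater than 1 so it terminates):

```
# Count how many divisions by num2 it takes for the running value to drop below 1.
# Assumes num2 > 1; otherwise the loop would never end.
num1, num2 = 20, 4

counter_var = 0
quotient = num1
while quotient >= 1:
    quotient = quotient / num2   # key fix: divide the running value, not num1 again
    counter_var = counter_var + 1

print(counter_var)   # 20 -> 5.0 -> 1.25 -> 0.3125, so this prints 3
```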
63,867,203
I wrote some code in python to see how many times one number can be divided by a number, until it gets a value of one. ``` counter_var = 1 quotient = num1/num2 if quotient<1: print('1 time') else: while quotient >= 1: quotient = num1/num2 counter_var = counter_var + 1 print(counter_var) ``` It is not ending the process but neither is it giving any output.
2020/09/13
[ "https://Stackoverflow.com/questions/63867203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14108602/" ]
You are not changing the value of quotient in the while loop; it remains constant. Instead of **quotient = num1/num2** it should be **quotient /= num2**, if I understand your problem correctly.
You have written an infinite loop. For example, if you take 1 as the input for num1 and 1 as the input for num2, then quotient is 1 and stays 1, so you are stuck in an infinite loop. I don't fully understand what the program is supposed to do; you would have to show the original problem first.
63,867,203
I wrote some code in python to see how many times one number can be divided by a number, until it gets a value of one. ``` counter_var = 1 quotient = num1/num2 if quotient<1: print('1 time') else: while quotient >= 1: quotient = num1/num2 counter_var = counter_var + 1 print(counter_var) ``` It is not ending the process but neither is it giving any output.
2020/09/13
[ "https://Stackoverflow.com/questions/63867203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14108602/" ]
You are not changing the value of quotient in the while loop; it remains constant. Instead of **quotient = num1/num2** it should be **quotient /= num2**, if I understand your problem correctly.
The value of quotient is not changing, so the loop runs forever. For example, suppose num1 is 18 and num2 is 6, so quotient is 3. The program goes into the else branch and then on to the while statement. It divides 18 by 6, sets quotient to 3 and increments counter\_var by one. Then it goes back to the top of the loop and does the same thing, since quotient is still >= 1; the value of quotient never changes, which is what keeps the loop running forever. Other notes: in Python the convention (PEP 8) is snake\_case, so counter\_var is fine as a name, but you should use more descriptive names than num1 and num2.
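As a hedged illustration of the fix described above (again not the original poster's code; the function name count_divisions and the sample inputs are invented for the example), the same idea can be packaged as a small function:

```
def count_divisions(num1, num2):
    """Count how many divisions by num2 it takes for num1 to drop below 1.
    Assumes num2 > 1 so the loop terminates."""
    counter_var = 0
    quotient = num1
    while quotient >= 1:
        quotient /= num2          # update the running value on every pass
        counter_var += 1
    return counter_var

print(count_divisions(18, 6))     # 18 -> 3.0 -> 0.5, so this prints 2
```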
3,300,716
I'm attempting to use mysql after only having worked with sqlite in the past. I've installed `XAMPP` on Linux (ubuntu) and have `mysql` up and running fine (seems like that with phpMyadmin at least). However, I'm having trouble getting the MySQLdb (the python lib) working {installed this using apt}. to be exact: ```py >>> import MySQLdb >>> db = MySQLdb.connect(host="localhost",db="opfine") Traceback (most recent call last): File "<input>", line 1, in <module> File "/usr/lib/pymodules/python2.6/MySQLdb/__init__.py", line 81, in Connect return Connection(*args, **kwargs) File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 170, in __init_ ... super(Connection, self).__init__(*args, **kwargs2) ``` > > OperationalError: (2002, "Can't connect to local MySQL server through > socket '/var > /run/mysqld/mysqld.sock' (2)") > > > I'm guessing > > Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock > > > means its expecting some sort of local installation (i.e. not within `XAMPP`), but I can't figure out how to go about modding this to get it to work with the `XAMMP` flavor of `mysql`. Help is much appreciated!
2010/07/21
[ "https://Stackoverflow.com/questions/3300716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264875/" ]
For the record (and thanks to a pointer from Ignacio), I found that the below works (terrible that I didn't think of this before): ``` db=MySQLdb.connect( user="root" ,passwd="" ,db="my_db" ,unix_socket="/opt/lampp/var/mysql/mysql.sock") ```
It means that you didn't start the MySQL server, or it's configured to not use a domain socket.
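If the server turns out to be running but just not listening on that socket path, a small workaround sketch (credentials are placeholders) is to force a TCP connection, since MySQL clients only use the Unix socket when the host is literally `localhost`:

```
import MySQLdb

# Placeholder credentials; 127.0.0.1 forces TCP instead of the
# Unix-socket lookup that host="localhost" triggers.
db = MySQLdb.connect(host="127.0.0.1", port=3306,
                     user="root", passwd="", db="opfine")
```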
3,300,716
I'm attempting to use mysql after only having worked with sqlite in the past. I've installed `XAMPP` on Linux (ubuntu) and have `mysql` up and running fine (seems like that with phpMyadmin at least). However, I'm having trouble getting the MySQLdb (the python lib) working {installed this using apt}. to be exact: ```py >>> import MySQLdb >>> db = MySQLdb.connect(host="localhost",db="opfine") Traceback (most recent call last): File "<input>", line 1, in <module> File "/usr/lib/pymodules/python2.6/MySQLdb/__init__.py", line 81, in Connect return Connection(*args, **kwargs) File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 170, in __init_ ... super(Connection, self).__init__(*args, **kwargs2) ``` > > OperationalError: (2002, "Can't connect to local MySQL server through > socket '/var > /run/mysqld/mysqld.sock' (2)") > > > I'm guessing > > Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock > > > means its expecting some sort of local installation (i.e. not within `XAMPP`), but I can't figure out how to go about modding this to get it to work with the `XAMMP` flavor of `mysql`. Help is much appreciated!
2010/07/21
[ "https://Stackoverflow.com/questions/3300716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264875/" ]
It means that you didn't start the MySQL server, or it's configured to not use a domain socket.
I had the same issue; look for your MySQL configuration file my.cnf: ``` # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /Applications/XAMPP/xamppfiles/var/mysql/mysql.sock ``` and use the socket as a parameter: ``` mysql://read:read@localhost/phonehome?unix_socket=/Applications/XAMPP/xamppfiles/var/mysql/mysql.sock ``` In my case: ``` app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://read:read@localhost/phonehome?unix_socket=/Applications/XAMPP/xamppfiles/var/mysql/mysql.sock' ```
3,300,716
I'm attempting to use mysql after only having worked with sqlite in the past. I've installed `XAMPP` on Linux (ubuntu) and have `mysql` up and running fine (seems like that with phpMyadmin at least). However, I'm having trouble getting the MySQLdb (the python lib) working {installed this using apt}. to be exact: ```py >>> import MySQLdb >>> db = MySQLdb.connect(host="localhost",db="opfine") Traceback (most recent call last): File "<input>", line 1, in <module> File "/usr/lib/pymodules/python2.6/MySQLdb/__init__.py", line 81, in Connect return Connection(*args, **kwargs) File "/usr/lib/pymodules/python2.6/MySQLdb/connections.py", line 170, in __init_ ... super(Connection, self).__init__(*args, **kwargs2) ``` > > OperationalError: (2002, "Can't connect to local MySQL server through > socket '/var > /run/mysqld/mysqld.sock' (2)") > > > I'm guessing > > Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock > > > means its expecting some sort of local installation (i.e. not within `XAMPP`), but I can't figure out how to go about modding this to get it to work with the `XAMMP` flavor of `mysql`. Help is much appreciated!
2010/07/21
[ "https://Stackoverflow.com/questions/3300716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264875/" ]
For the record (and thanks to a pointer from Ignacio), I found that the below works (terrible that I didn't think of this before): ``` db=MySQLdb.connect( user="root" ,passwd="" ,db="my_db" ,unix_socket="/opt/lampp/var/mysql/mysql.sock") ```
I had the same issue; look for your MySQL configuration file my.cnf: ``` # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /Applications/XAMPP/xamppfiles/var/mysql/mysql.sock ``` and use the socket as a parameter: ``` mysql://read:read@localhost/phonehome?unix_socket=/Applications/XAMPP/xamppfiles/var/mysql/mysql.sock ``` In my case: ``` app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://read:read@localhost/phonehome?unix_socket=/Applications/XAMPP/xamppfiles/var/mysql/mysql.sock' ```
53,798,252
I'm fairly new to python and attempting to add lines 1-10 of a csv into a JSON file, however, I only seem to be getting the 10th line of the CSV. I can't seem to figure out what is incorrect about my argument. Any help appcreated! ``` import csv, json, itertools csvFilePath = "example.csv" jsonFilePath = "example.json" # Read the CSV and add data to a dictionary data = {} with open(csvFilePath) as csvFile: csvReader = csv.DictReader(csvFile) for csvRow in itertools.islice(csv.DictReader(csvFile), 0,10): data = csvRow print(data) #Write the data to a JSON file with open(jsonFilePath, "w") as jsonFile: jsonFile.write(json.dumps(data, indent=4)) ```
2018/12/15
[ "https://Stackoverflow.com/questions/53798252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10796111/" ]
At `data = csvRow`, the `data` variable keeps getting overwritten, so at the end only the last line you read will be inside `data`. Try something like this: ``` import csv, json, itertools csvFilePath = "example.csv" jsonFilePath = "example.json" # Read the CSV and add data to a dictionary data = {} with open(csvFilePath) as csvFile: csvReader = csv.DictReader(csvFile) for csvRow in itertools.islice(csvReader, 0, 10): #email = csvRow["email"] data[len(data)] = csvRow print(data) # Write the data to a JSON file with open(jsonFilePath, "w") as jsonFile: jsonFile.write(json.dumps(data, indent=4)) ``` (Didn't test this, but the idea is to add `csvRow` as new elements of the dict `data`)
Assuming that the input CSV is ``` 1,2,3,4,5 a,b,c,d,e ``` We have the following code: ``` import json import csv inpf = open("test.csv", "r") csv_reader = csv.reader(inpf) # here you slice the columns with [2:4] for example lines = [row[2:4] for row in csv_reader] inpf.close() lines_json = json.dumps(lines) outpf = open("out.json", "w") outpf.write(lines_json) outpf.close() ``` which creates ``` [ [ "3", "4" ], [ "c", "d" ] ] ```
53,798,252
I'm fairly new to python and attempting to add lines 1-10 of a csv into a JSON file, however, I only seem to be getting the 10th line of the CSV. I can't seem to figure out what is incorrect about my argument. Any help appcreated! ``` import csv, json, itertools csvFilePath = "example.csv" jsonFilePath = "example.json" # Read the CSV and add data to a dictionary data = {} with open(csvFilePath) as csvFile: csvReader = csv.DictReader(csvFile) for csvRow in itertools.islice(csv.DictReader(csvFile), 0,10): data = csvRow print(data) #Write the data to a JSON file with open(jsonFilePath, "w") as jsonFile: jsonFile.write(json.dumps(data, indent=4)) ```
2018/12/15
[ "https://Stackoverflow.com/questions/53798252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10796111/" ]
At `data = csvRow`, the `data` variable keeps getting overwritten, so at the end only the last line you read will be inside `data`. Try something like this: ``` import csv, json, itertools csvFilePath = "example.csv" jsonFilePath = "example.json" # Read the CSV and add data to a dictionary data = {} with open(csvFilePath) as csvFile: csvReader = csv.DictReader(csvFile) for csvRow in itertools.islice(csvReader, 0, 10): #email = csvRow["email"] data[len(data)] = csvRow print(data) # Write the data to a JSON file with open(jsonFilePath, "w") as jsonFile: jsonFile.write(json.dumps(data, indent=4)) ``` (Didn't test this, but the idea is to add `csvRow` as new elements of the dict `data`)
You are overwriting your `data` dictionary in your loop through the file. ``` CSV_FILE_PATH = "example.csv" with open(CSV_FILE_PATH) as myfile: # You might set a key for each index as you loop: data = {i: next(myfile) for i in range(10)} print(data) ```
43,732,642
I need the status of the task like if it is running or upforretry or failed within the same dag. So i tried to get it using the below code, though i got no output... ``` Auto = PythonOperator( task_id='test_sleep', python_callable=execute_on_emr, op_kwargs={'cmd':'python /home/hadoop/test/testsleep.py'}, dag=dag) logger.info(Auto) ``` The intention is to kill certain running tasks once a particular task on airflow completes. Question is **how do i get the state of a task like is it in the running state or failed or success**
2017/05/02
[ "https://Stackoverflow.com/questions/43732642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6714806/" ]
Okay, I think I know what you're doing and I don't really agree with it, but I'll start with an answer. A straightforward, but hackish, way would be to query the task\_instance table. I'm in postgres, but the structure should be the same. Start by grabbing the task\_ids and state of the task you're interested in with a db call. ``` SELECT task_id, state FROM task_instance WHERE dag_id = '<dag_id_attrib>' AND execution_date = '<execution_date_attrib>' AND task_id = '<task_to_check>' ``` That should give you the state (and name, for reference) of the task you're trying to monitor. State is stored as a simple lowercase string.
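If you would rather issue that query from Python than from a psql prompt, a rough sketch with psycopg2 is below; the connection settings and the dag_id/execution_date values are placeholders for your own Airflow metadata database:

```
import psycopg2

# Placeholder connection details for the Airflow metadata database.
conn = psycopg2.connect(host="localhost", dbname="airflow",
                        user="airflow", password="airflow")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT task_id, state
        FROM task_instance
        WHERE dag_id = %s AND execution_date = %s AND task_id = %s
        """,
        ("my_dag", "2017-05-02T00:00:00", "test_sleep"),
    )
    print(cur.fetchone())  # e.g. ('test_sleep', 'running') if the row exists
```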
You can use the command-line interface for this: ``` airflow task_state [-h] [-sd SUBDIR] dag_id task_id execution_date ``` For more on this you can refer to the official Airflow documentation: <http://airflow.incubator.apache.org/cli.html>
43,732,642
I need the status of the task like if it is running or upforretry or failed within the same dag. So i tried to get it using the below code, though i got no output... ``` Auto = PythonOperator( task_id='test_sleep', python_callable=execute_on_emr, op_kwargs={'cmd':'python /home/hadoop/test/testsleep.py'}, dag=dag) logger.info(Auto) ``` The intention is to kill certain running tasks once a particular task on airflow completes. Question is **how do i get the state of a task like is it in the running state or failed or success**
2017/05/02
[ "https://Stackoverflow.com/questions/43732642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6714806/" ]
Take a look at the code responsible for the command line interface operation suggested by Priyank. <https://github.com/apache/incubator-airflow/blob/2318cea74d4f71fba353eaca9bb3c4fd3cdb06c0/airflow/bin/cli.py#L581> ``` def task_state(args): dag = get_dag(args) task = dag.get_task(task_id=args.task_id) ti = TaskInstance(task, args.execution_date) print(ti.current_state()) ``` Hence, it seems you should easily be able to accomplish this within your DAG codebase using similar code. Alternatively you could execute these CLI operations from within your code using Python's `subprocess` library.
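A rough sketch of that subprocess route (the dag_id, task_id and execution_date below are placeholders) could look like this:

```
import subprocess

# Placeholder identifiers; the printed output includes the task's state,
# e.g. 'success', 'running' or 'failed'.
out = subprocess.check_output(
    ["airflow", "task_state", "my_dag", "test_sleep", "2017-05-02"]
)
print(out.decode().strip())
```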
You can use the command-line interface for this: ``` airflow task_state [-h] [-sd SUBDIR] dag_id task_id execution_date ``` For more on this you can refer to the official Airflow documentation: <http://airflow.incubator.apache.org/cli.html>
43,732,642
I need the status of the task like if it is running or upforretry or failed within the same dag. So i tried to get it using the below code, though i got no output... ``` Auto = PythonOperator( task_id='test_sleep', python_callable=execute_on_emr, op_kwargs={'cmd':'python /home/hadoop/test/testsleep.py'}, dag=dag) logger.info(Auto) ``` The intention is to kill certain running tasks once a particular task on airflow completes. Question is **how do i get the state of a task like is it in the running state or failed or success**
2017/05/02
[ "https://Stackoverflow.com/questions/43732642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6714806/" ]
I am doing something similar. I need to check for one task if the previous 10 runs of another task were successful. taky2 sent me on the right path. It is actually fairly easy: ``` from airflow.models import TaskInstance ti = TaskInstance(*your_task*, execution_date) state = ti.current_state() ``` As I want to check that within the dag, it is not neccessary to specify the dag. I simply created a function to loop through the past n\_days and check the status. ``` def check_status(**kwargs): last_n_days = 10 for n in range(0,last_n_days): date = kwargs['execution_date']- timedelta(n) ti = TaskInstance(*my_task*, date) #my_task is the task you defined within the DAG rather than the task_id (as in the example below: check_success_task rather than 'check_success_days_before') state = ti.current_state() if state != 'success': raise ValueError('Not all previous tasks successfully completed.') ``` When you call the function make sure to set provide\_context. ``` check_success_task = PythonOperator( task_id='check_success_days_before', python_callable= check_status, provide_context=True, dag=dag ) ``` UPDATE: When you want to call a task from another dag, you need to call it like this: ``` from airflow import configuration as conf from airflow.models import DagBag, TaskInstance dag_folder = conf.get('core','DAGS_FOLDER') dagbag = DagBag(dag_folder) check_dag = dagbag.dags[*my_dag_id*] my_task = check_dag.get_task(*my_task_id*) ti = TaskInstance(my_task, date) ``` Apparently there is also an api-call by now doing the same thing: ``` from airflow.api.common.experimental.get_task_instance import get_task_instance ti = get_task_instance(*my_dag_id*, *my_task_id*, date) ```
You can use the command-line interface for this: ``` airflow task_state [-h] [-sd SUBDIR] dag_id task_id execution_date ``` For more on this you can refer to the official Airflow documentation: <http://airflow.incubator.apache.org/cli.html>
43,732,642
I need the status of the task like if it is running or upforretry or failed within the same dag. So i tried to get it using the below code, though i got no output... ``` Auto = PythonOperator( task_id='test_sleep', python_callable=execute_on_emr, op_kwargs={'cmd':'python /home/hadoop/test/testsleep.py'}, dag=dag) logger.info(Auto) ``` The intention is to kill certain running tasks once a particular task on airflow completes. Question is **how do i get the state of a task like is it in the running state or failed or success**
2017/05/02
[ "https://Stackoverflow.com/questions/43732642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6714806/" ]
I am doing something similar. I need to check for one task if the previous 10 runs of another task were successful. taky2 sent me on the right path. It is actually fairly easy: ``` from airflow.models import TaskInstance ti = TaskInstance(*your_task*, execution_date) state = ti.current_state() ``` As I want to check that within the dag, it is not neccessary to specify the dag. I simply created a function to loop through the past n\_days and check the status. ``` def check_status(**kwargs): last_n_days = 10 for n in range(0,last_n_days): date = kwargs['execution_date']- timedelta(n) ti = TaskInstance(*my_task*, date) #my_task is the task you defined within the DAG rather than the task_id (as in the example below: check_success_task rather than 'check_success_days_before') state = ti.current_state() if state != 'success': raise ValueError('Not all previous tasks successfully completed.') ``` When you call the function make sure to set provide\_context. ``` check_success_task = PythonOperator( task_id='check_success_days_before', python_callable= check_status, provide_context=True, dag=dag ) ``` UPDATE: When you want to call a task from another dag, you need to call it like this: ``` from airflow import configuration as conf from airflow.models import DagBag, TaskInstance dag_folder = conf.get('core','DAGS_FOLDER') dagbag = DagBag(dag_folder) check_dag = dagbag.dags[*my_dag_id*] my_task = check_dag.get_task(*my_task_id*) ti = TaskInstance(my_task, date) ``` Apparently there is also an api-call by now doing the same thing: ``` from airflow.api.common.experimental.get_task_instance import get_task_instance ti = get_task_instance(*my_dag_id*, *my_task_id*, date) ```
Okay, I think I know what you're doing and I don't really agree with it, but I'll start with an answer. A straightforward, but hackish, way would be to query the task\_instance table. I'm in postgres, but the structure should be the same. Start by grabbing the task\_ids and state of the task you're interested in with a db call. ``` SELECT task_id, state FROM task_instance WHERE dag_id = '<dag_id_attrib>' AND execution_date = '<execution_date_attrib>' AND task_id = '<task_to_check>' ``` That should give you the state (and name, for reference) of the task you're trying to monitor. State is stored as a simple lowercase string.
43,732,642
I need the status of the task like if it is running or upforretry or failed within the same dag. So i tried to get it using the below code, though i got no output... ``` Auto = PythonOperator( task_id='test_sleep', python_callable=execute_on_emr, op_kwargs={'cmd':'python /home/hadoop/test/testsleep.py'}, dag=dag) logger.info(Auto) ``` The intention is to kill certain running tasks once a particular task on airflow completes. Question is **how do i get the state of a task like is it in the running state or failed or success**
2017/05/02
[ "https://Stackoverflow.com/questions/43732642", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6714806/" ]
I am doing something similar. I need to check for one task if the previous 10 runs of another task were successful. taky2 sent me on the right path. It is actually fairly easy: ``` from airflow.models import TaskInstance ti = TaskInstance(*your_task*, execution_date) state = ti.current_state() ``` As I want to check that within the dag, it is not neccessary to specify the dag. I simply created a function to loop through the past n\_days and check the status. ``` def check_status(**kwargs): last_n_days = 10 for n in range(0,last_n_days): date = kwargs['execution_date']- timedelta(n) ti = TaskInstance(*my_task*, date) #my_task is the task you defined within the DAG rather than the task_id (as in the example below: check_success_task rather than 'check_success_days_before') state = ti.current_state() if state != 'success': raise ValueError('Not all previous tasks successfully completed.') ``` When you call the function make sure to set provide\_context. ``` check_success_task = PythonOperator( task_id='check_success_days_before', python_callable= check_status, provide_context=True, dag=dag ) ``` UPDATE: When you want to call a task from another dag, you need to call it like this: ``` from airflow import configuration as conf from airflow.models import DagBag, TaskInstance dag_folder = conf.get('core','DAGS_FOLDER') dagbag = DagBag(dag_folder) check_dag = dagbag.dags[*my_dag_id*] my_task = check_dag.get_task(*my_task_id*) ti = TaskInstance(my_task, date) ``` Apparently there is also an api-call by now doing the same thing: ``` from airflow.api.common.experimental.get_task_instance import get_task_instance ti = get_task_instance(*my_dag_id*, *my_task_id*, date) ```
Take a look at the code responsible for the command line interface operation suggested by Priyank. <https://github.com/apache/incubator-airflow/blob/2318cea74d4f71fba353eaca9bb3c4fd3cdb06c0/airflow/bin/cli.py#L581> ``` def task_state(args): dag = get_dag(args) task = dag.get_task(task_id=args.task_id) ti = TaskInstance(task, args.execution_date) print(ti.current_state()) ``` Hence, it seems you should easily be able to accomplish this within your DAG codebase using similar code. Alternatively you could execute these CLI operations from within your code using Python's `subprocess` library.
12,125,362
In a [previous question](https://stackoverflow.com/questions/12124275/splitting-a-string-by-capital-letters-python), it was suggested that, in order to divide a string and store it, I should use a list, like so: ``` [a for a in re.split(r'([A-Z][a-z]*)', 'MgSO4') if a] ['Mg', u'S', u'O', u'4'] ``` What I'd like to ask this time around is how would I be able to use that to store the different strings created into variables so I can look them up in the CSV file I have, if it's at all possible. Where it says 'MgSO4' would be coming from a variable called 'formula', which is produced from a raw\_input, like so: ``` formula = raw_input("Enter formula: ") ``` Full program code can be found [here](http://pastebin.com/3G761hb0), and I've included the more relevant part below. Thanks in advance for any help! ``` formula = raw_input("Enter formula: ") [a for a in re.split(r'([A-Z][a-z]*)', 'MgSO4') if a] weight_sum = sum(float(formul_data.get(elem.lower())) for elem in elements) print "Total weight =", weightSum ```
2012/08/25
[ "https://Stackoverflow.com/questions/12125362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1423819/" ]
If your goal is to be able to add up the molecular weights of the atoms comprising a molecule, I suggest doing your regular expressions a bit differently. Instead of having the numbers mixed in with the element symbols in your split list, attach them to the preceding element instead (and attach a 1 if there was no number). Here's how I'd do that: ``` import re # a partial table of atomic weights, replace with something better! weights = { "H" : 1, "Na": 11, "Mg": 12, "C" : 12, "N" : 14, "O" : 16, "F" : 19, "Al": 27, "S" : 32, "Cl": 35, } def molecularWeight(formula): matches = re.findall(r"([A-Z][a-z]?)([0-9]*)", formula) return sum(weights[symbol] * (int(count) if count else 1) for (symbol, count) in matches) ``` To make this fit with the code you've shown, replace `weights[symbol]` with something like `formul_data.get(symbol.lower(), 0)` (or whatever is necessary to get appropriate atomic weights by symbol in your code). This should handle any empiric formula, and many structural ones, as long as there are no parentheses. To solve fully parenthesized formulas you'll need to make a better parser, as simple regular expressions won't work.
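A quick usage sketch for the function above (the numbers come from the deliberately rough weights table, so they are illustrative only):

```
print(molecularWeight("MgSO4"))  # 12 + 32 + 4*16 = 108 with the table above
print(molecularWeight("H2O"))    # 2*1 + 16 = 18
```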
After running ``` >>> import re >>> elements = [a for a in re.split(r'([A-Z][a-z]*)', 'MgSO4') if a] ``` you can access the splitted parts using indices ``` >>> print elements[0] 'Mg' >>> print elements[-1] # print the last element '4' ```
12,125,362
In a [previous question](https://stackoverflow.com/questions/12124275/splitting-a-string-by-capital-letters-python), it was suggested that, in order to divide a string and store it, I should use a list, like so: ``` [a for a in re.split(r'([A-Z][a-z]*)', 'MgSO4') if a] ['Mg', u'S', u'O', u'4'] ``` What I'd like to ask this time around is how would I be able to use that to store the different strings created into variables so I can look them up in the CSV file I have, if it's at all possible. Where it says 'MgSO4' would be coming from a variable called 'formula', which is produced from a raw\_input, like so: ``` formula = raw_input("Enter formula: ") ``` Full program code can be found [here](http://pastebin.com/3G761hb0), and I've included the more relevant part below. Thanks in advance for any help! ``` formula = raw_input("Enter formula: ") [a for a in re.split(r'([A-Z][a-z]*)', 'MgSO4') if a] weight_sum = sum(float(formul_data.get(elem.lower())) for elem in elements) print "Total weight =", weightSum ```
2012/08/25
[ "https://Stackoverflow.com/questions/12125362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1423819/" ]
If your goal is to be able to add up the molecular weights of the atoms comprising a molecule, I suggest doing your regular expressions a bit differently. Instead of having the numbers mixed in with the element symbols in your split list, attach them to the preceding element instead (and attach a 1 if there was no number). Here's how I'd do that: ``` import re # a partial table of atomic weights, replace with something better! weights = { "H" : 1, "Na": 11, "Mg": 12, "C" : 12, "N" : 14, "O" : 16, "F" : 19, "Al": 27, "S" : 32, "Cl": 35, } def molecularWeight(formula): matches = re.findall(r"([A-Z][a-z]?)([0-9]*)", formula) return sum(weights[symbol] * (int(count) if count else 1) for (symbol, count) in matches) ``` To make this fit with the code you've shown, replace `weights[symbol]` with something like `formul_data.get(symbol.lower(), 0)` (or whatever is necessary to get appropriate atomic weights by symbol in your code). This should handle any empiric formula, and many structural ones, as long as there are no parentheses. To solve fully parenthesized formulas you'll need to make a better parser, as simple regular expressions won't work.
This is just a guess, but it may be that you're not realizing that the `re.split` code can be applied to any string, including the string you read from `raw_input`. Is this what you're asking for? ``` formula = raw_input("Enter formula: ") elements = [a for a in re.split(r'([A-Z][a-z]*)', formula) if a] weight_sum = sum(float(formul_data.get(elem.lower(), 0)) for elem in elements) print "Total weight =", weight_sum ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
Actually `sprintf` didn't work for me, so if you don't mind a common dependency: ``` #reproducible example -- this happens with zip codes sometimes X <- data.frame(A = c('10002','8540','BIRD'), stringsAsFactors=FALSE) # X$A <- sprintf('%05s',X$A) didn't work for me # Note in ?sprintf: 0: For numbers, pad to the field width with leading zeros. # For characters, this zero-pads on some platforms and is ignored on others. library('stringr') X$A <- str_pad(X$A, width=5, side='left', pad='0') X # A #1 10002 #2 08540 #3 0BIRD ``` or, if you prefer a base solution, the following is equivalent: ``` X$A <- ifelse(nchar(X$A) < 5, paste(c(rep("0",5-nchar(X$A)), X$A), collapse=""), X$A) ``` (note this works on strings of length 4 or less, not just 4)
Try something like this (assuming data frame name and column name are right): ``` element_of_X$a <- with(element_of_X, ifelse(nchar(a) == 4, paste('0', a, sep = ''), a)) ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
This should do the trick: ``` X$A <- ifelse(nchar(X$A) < 5, paste("0", X$A, sep=""), X$A) ```
Try something like this (assuming data frame name and column name are right): ``` element_of_X$a <- with(element_of_X, ifelse(nchar(a) == 4, paste('0', a, sep = ''), a)) ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
If you use `dplyr` and `stringr` you could do the following ``` library(dplyr) library(stringr) ## Assuming "element_of_X" has element 'A' element_of_X <- element_of_X %>% mutate(A = str_pad(A, 5, side = 'left', pad = '0')) ``` **Edit** Or perhaps more simply, as suggested in the comments: ``` element_of_X$A <- str_pad(element_of_X$A, 5, side = 'left', pad = '0') ```
Try something like this (assuming data frame name and column name are right): ``` element_of_X$a <- with(element_of_X, ifelse(nchar(a) == 4, paste('0', a, sep = ''), a)) ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
Try something like this (assuming data frame name and column name are right): ``` element_of_X$a <- with(element_of_X, ifelse(nchar(a) == 4, paste('0', a, sep = ''), a)) ```
``` library(stringr) x$A=str_pad(x$A, 5, pad = "0") ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
Actually `sprintf` didn't work for me, so if you don't mind a common dependency: ``` #reproducible example -- this happens with zip codes sometimes X <- data.frame(A = c('10002','8540','BIRD'), stringsAsFactors=FALSE) # X$A <- sprintf('%05s',X$A) didn't work for me # Note in ?sprintf: 0: For numbers, pad to the field width with leading zeros. # For characters, this zero-pads on some platforms and is ignored on others. library('stringr') X$A <- str_pad(X$A, width=5, side='left', pad='0') X # A #1 10002 #2 08540 #3 0BIRD ``` or, if you prefer a base solution, the following is equivalent: ``` X$A <- ifelse(nchar(X$A) < 5, paste(c(rep("0",5-nchar(X$A)), X$A), collapse=""), X$A) ``` (note this works on strings of length 4 or less, not just 4)
This should do the trick: ``` X$A <- ifelse(nchar(X$A) < 5, paste("0", X$A, sep=""), X$A) ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
Actually `sprintf` didn't work for me, so if you don't mind a common dependency: ``` #reproducible example -- this happens with zip codes sometimes X <- data.frame(A = c('10002','8540','BIRD'), stringsAsFactors=FALSE) # X$A <- sprintf('%05s',X$A) didn't work for me # Note in ?sprintf: 0: For numbers, pad to the field width with leading zeros. # For characters, this zero-pads on some platforms and is ignored on others. library('stringr') X$A <- str_pad(X$A, width=5, side='left', pad='0') X # A #1 10002 #2 08540 #3 0BIRD ``` or, if you prefer a base solution, the following is equivalent: ``` X$A <- ifelse(nchar(X$A) < 5, paste(c(rep("0",5-nchar(X$A)), X$A), collapse=""), X$A) ``` (note this works on strings of length 4 or less, not just 4)
``` library(stringr) x$A=str_pad(x$A, 5, pad = "0") ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
If you use `dplyr` and `stringr` you could do the following ``` library(dplyr) library(stringr) ## Assuming "element_of_X" has element 'A' element_of_X <- element_of_X %>% mutate(A = str_pad(A, 5, side = 'left', pad = '0')) ``` **Edit** Or perhaps more simply, as suggested in the comments: ``` element_of_X$A <- str_pad(element_of_X$A, 5, side = 'left', pad = '0') ```
This should do the trick: ``` X$A <- ifelse(nchar(X$A) < 5, paste("0", X$A, sep=""), X$A) ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
This should do the trick: ``` X$A <- ifelse(nchar(X$A) < 5, paste("0", X$A, sep=""), X$A) ```
``` library(stringr) x$A=str_pad(x$A, 5, pad = "0") ```
34,794,417
I am trying to make kivy work with SDL2 on centos 7 but when I run my main.py I get the following messages: ``` [INFO ] [Logger ] Record log in /home/etienne/.kivy/logs/kivy_16-01-14_51.txt [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.5 (default, Nov 20 2015, 02:00:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_gif, img_pil (img_pygame, img_ffpyplayer ignored) [CRITICAL] [Window ] Unable to find any valuable Window provider at all! egl_rpi - ImportError: cannot import name bcm File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_egl_rpi.py", line 12, in <module> from kivy.lib.vidcore_lite import bcm, egl pygame - ImportError: No module named pygame File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/window/window_pygame.py", line 8, in <module> import pygame x11 - ImportError: No module named window_x11 File "/home/etienne/Demo/Test/test_virtualenv/lib/python2.7/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) [INFO ] [Text ] Provider: pil(['text_pygame'] ignored) [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. Exception SystemExit: 1 in 'kivy.properties.dpi2px' ignored [CRITICAL] [App ] Unable to get a Window, abort. ``` I have installed the following libraries: ``` SDL.x86_64 1.2.15-14.el7 @base SDL-devel.x86_64 1.2.15-14.el7 @base SDL2.x86_64 2.0.3-9.el7 @epel SDL2-devel.x86_64 2.0.3-9.el7 @epel SDL_image.x86_64 1.2.12-11.el7 @epel SDL_mixer.x86_64 1.2.12-4.el7 @epel SDL_mixer-devel.x86_64 1.2.12-4.el7 @epel SDL_ttf.x86_64 2.0.11-6.el7 @epel SDL_ttf-devel.x86_64 2.0.11-6.el7 @epel ``` I make it work with the same main.py on Fedora 20 and it also work if I install pygame but he is too heavy so I would like to use SDL2. If you have any idea on how to make it work ;)
2016/01/14
[ "https://Stackoverflow.com/questions/34794417", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5269531/" ]
If you use `dplyr` and `stringr` you could do the following ``` library(dplyr) library(stringr) ## Assuming "element_of_X" has element 'A' element_of_X <- element_of_X %>% mutate(A = str_pad(A, 5, side = 'left', pad = '0')) ``` **Edit** Or perhaps more simply, as suggested in the comments: ``` element_of_X$A <- str_pad(element_of_X$A, 5, side = 'left', pad = '0') ```
``` library(stringr) x$A=str_pad(x$A, 5, pad = "0") ```
46,050,045
I would like to run a bigquery query from python only if it is below a certain cost estimation. Is there a way to programmatically check the estimated cost of a query before executing it, just like the Web UI (see attached image)? [![enter image description here](https://i.stack.imgur.com/UWdbL.png)](https://i.stack.imgur.com/UWdbL.png)
2017/09/05
[ "https://Stackoverflow.com/questions/46050045", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1134753/" ]
Yes, you can use the `dryRun` flag. This will return `totalBytesProcessed` i.e. the amount of data that will be processed if the query is executed. <https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.dryRun> [![enter image description here](https://i.stack.imgur.com/Rd2lO.png)](https://i.stack.imgur.com/Rd2lO.png)
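For illustration, a minimal sketch of the same dry run from Python, assuming the `google-cloud-bigquery` client library (the query string and the 1 GB budget below are placeholders, not values from the question):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Dry run: the query is validated and estimated, but never executed or billed.
dry_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
dry_job = client.query("SELECT * FROM `my_dataset.my_table`", job_config=dry_config)

estimate = dry_job.total_bytes_processed
print("Estimated bytes processed: %d" % estimate)

# Only run the real query when the estimate stays under the budget.
if estimate < 10**9:
    rows = client.query("SELECT * FROM `my_dataset.my_table`").result()
```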
> I would like to run a bigquery query from python only if it is below a certain cost estimation

First, please note that the BigQuery UI in fact uses a dry run, which only estimates `Total Bytes Processed` and leaves another important factor, the `Billing Tier`, unknown. A dry run is of course useful and can help in certain scenarios!

Meanwhile, I can propose using the two attributes below: [`configuration.query.maximumBillingTier`](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.maximumBillingTier) and [`configuration.query.maximumBytesBilled`](https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.maximumBytesBilled)

They respectively limit the billing tier and the bytes billed for the job. Queries whose resource usage goes beyond the maximum tier, or whose bytes billed go beyond the maximum bytes, will fail (`without incurring a charge`).
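A minimal Python sketch of the byte cap, again assuming the `google-cloud-bigquery` client library (the cap value and query are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Jobs estimated above this cap fail up front, without incurring a charge.
capped_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**9)  # ~1 GB cap

query_job = client.query("SELECT * FROM `my_dataset.my_table`", job_config=capped_config)
rows = query_job.result()  # raises an error if the byte limit would be exceeded
```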
59,077,162
I am using Python 3.8 and Pip 3.8 I cannot seem to install certain modules using pip. For example, when attempting to install the keras module: ``` (venv) C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer>pip install keras Collecting keras Using cached https://files.pythonhosted.org/packages/ad/fd/6bfe87920d7f4fd475acd28500a42482b6b84479832bdc0fe9e589a60ceb/Keras-2.3.1-py2.py3-none-any.whl Collecting h5py (from keras) Using cached https://files.pythonhosted.org/packages/5f/97/a58afbcf40e8abecededd9512978b4e4915374e5b80049af082f49cebe9a/h5py-2.10.0.tar.gz Collecting keras-preprocessing>=1.0.5 (from keras) Using cached https://files.pythonhosted.org/packages/28/6a/8c1f62c37212d9fc441a7e26736df51ce6f0e38455816445471f10da4f0a/Keras_Preprocessing-1.1.0-py2.py3-none-any.whl Collecting numpy>=1.9.1 (from keras) Using cached https://files.pythonhosted.org/packages/ff/59/d3f6d46aa1fd220d020bdd61e76ca51f6548c6ad6d24ddb614f4037cf49d/numpy-1.17.4.zip Collecting six>=1.9.0 (from keras) Using cached https://files.pythonhosted.org/packages/65/26/32b8464df2a97e6dd1b656ed26b2c194606c16fe163c695a992b36c11cdf/six-1.13.0-py2.py3-none-any.whl Collecting pyyaml (from keras) Using cached https://files.pythonhosted.org/packages/29/16/e4d675da1275a3aabd5e2a35e868273ba3f4859993acb55e77792f806315/PyYAML-5.1.2-cp38-cp38m-win32.whl Collecting scipy>=0.14 (from keras) Using cached https://files.pythonhosted.org/packages/a7/5c/495190b8c7cc71977c3d3fafe788d99d43eeb4740ac56856095df6a23fbd/scipy-1.3.3.tar.gz Installing build dependencies ... error Complete output from command "C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer\venv\Scripts\python.exe" "C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer\venv\lib\site-packages\pip-19.0.3-py3.8.egg\pip" install --ignore-installed --no-user --prefix "C:\U sers\Spencer Pruitt\AppData\Local\Temp\pip-build-env-qcebzlj8\overlay" --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- wheel setuptools Cython>=0.29.13 "numpy==1.13.3; python_version=='3.5' and platform_system!='AIX'" "n umpy==1.13.3; python_version=='3.6' and platform_system!='AIX'" "numpy==1.14.5; python_version=='3.7' and platform_system!='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system!='AIX'" "numpy==1.16.0; python_version=='3.5' and platform_system=='AIX'" "numpy ==1.16.0; python_version=='3.6' and platform_system=='AIX'" "numpy==1.16.0; python_version=='3.7' and platform_system=='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system=='AIX'": Ignoring numpy: markers 'python_version == "3.5" and platform_system != "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.6" and platform_system != "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.7" and platform_system != "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.5" and platform_system == "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.6" and platform_system == "AIX"' don't match your environment Ignoring numpy: markers 'python_version == "3.7" and platform_system == "AIX"' don't match your environment Ignoring numpy: markers 'python_version >= "3.8" and platform_system == "AIX"' don't match your environment Collecting wheel Using cached https://files.pythonhosted.org/packages/00/83/b4a77d044e78ad1a45610eb88f745be2fd2c6d658f9798a15e384b7d57c9/wheel-0.33.6-py2.py3-none-any.whl Collecting setuptools Using cached 
https://files.pythonhosted.org/packages/9e/d5/444a443d890f09fc1ca1a2c3c9fc7e84cb148177b05ac94fe5084e3d9abb/setuptools-42.0.1-py2.py3-none-any.whl Collecting Cython>=0.29.13 Using cached https://files.pythonhosted.org/packages/9c/9b/706dac7338c2860cd063a28cdbf5e9670995eaea408abbf2e88ba070d90d/Cython-0.29.14.tar.gz Collecting numpy==1.17.3 Using cached https://files.pythonhosted.org/packages/b6/d6/be8f975f5322336f62371c9abeb936d592c98c047ad63035f1b38ae08efe/numpy-1.17.3.zip Installing collected packages: wheel, setuptools, Cython, numpy Running setup.py install for Cython: started Running setup.py install for Cython: finished with status 'done' Running setup.py install for numpy: started Running setup.py install for numpy: still running... Running setup.py install for numpy: finished with status 'done' Could not install packages due to an EnvironmentError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '"C:' ---------------------------------------- Command ""C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer\venv\Scripts\python.exe" "C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer\venv\lib\site-packages\pip-19.0.3-py3.8.egg\pip" install --ignore-installed --no-user --prefix "C:\Users\Spencer Pruitt\Ap pData\Local\Temp\pip-build-env-qcebzlj8\overlay" --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- wheel setuptools Cython>=0.29.13 "numpy==1.13.3; python_version=='3.5' and platform_system!='AIX'" "numpy==1.13.3; python_v ersion=='3.6' and platform_system!='AIX'" "numpy==1.14.5; python_version=='3.7' and platform_system!='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system!='AIX'" "numpy==1.16.0; python_version=='3.5' and platform_system=='AIX'" "numpy==1.16.0; python_versi on=='3.6' and platform_system=='AIX'" "numpy==1.16.0; python_version=='3.7' and platform_system=='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system=='AIX'"" failed with error code 1 in None ``` A similar problem occurs when I attempted to install numpy. ``` (venv) C:\Users\Spencer Pruitt\PycharmProjects\MNIST Analyzer>pip install numpy Collecting numpy Using cached https://files.pythonhosted.org/packages/ff/59/d3f6d46aa1fd220d020bdd61e76ca51f6548c6ad6d24ddb614f4037cf49d/numpy-1.17.4.zip Installing collected packages: numpy Running setup.py install for numpy ... done Could not install packages due to an EnvironmentError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '"C:' ``` **Expected behavior** The installation process would complete satisfactorily and the command prompt would read "[module name] is ready to use!" or something to that effect. **Misc.** I am new to programming and am not familiar with the terminology being thrown around here. I am entirely self taught and this is my first time using Python. I would sincerely appreciate someone giving me a run-down of how to install these modules properly or at least get pip to work. I've been looking at other issues, but the answers do not make sense to me or the problems do not appear to be the same as mine. Thank you!
2019/11/27
[ "https://Stackoverflow.com/questions/59077162", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12447974/" ]
Please use a Python version below 3.7 for the numpy installation, or check which Python version the module/package requires. In short, upgrade or downgrade your Python version to match what the packages support.
See [this thread](https://github.com/numpy/numpy/issues/11451) re: spaces in path causing issues with installing numpy. Any possibility of moving your virtual environment/project to something like "C:\Temp\MNIST\_Analyzer"? [This thread](https://stackoverflow.com/questions/15472430/using-virtualenv-with-spaces-in-a-path) for getting around spaces in the path is for Mac, but may be relevant.
28,664,632
This is my project set up: ``` my_project ./my_project ./__init__.py ./foo ./__init__.py ./bar.py ./tests ./__init__.py ./test_bar.py ``` Inside `test_bar.py` I have the following import statement: `from foo import bar` However when I run `python /my_project/tests/test_bar.py` I get this error: `ImportError: No module named foo`. Any ideas on how to fix this?
2015/02/22
[ "https://Stackoverflow.com/questions/28664632", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680879/" ]
``` import sys sys.path.append('/path/to/my_project/') ``` Now you can import ``` from foo import bar ```
You can use relative imports:

```
from ..foo import bar
```

<https://docs.python.org/2/whatsnew/2.5.html#pep-328-absolute-and-relative-imports>

but I also think that [installing](https://docs.python.org/2/distutils/setupscript.html) your project in a venv and using absolute imports is the better way.
28,664,632
This is my project set up: ``` my_project ./my_project ./__init__.py ./foo ./__init__.py ./bar.py ./tests ./__init__.py ./test_bar.py ``` Inside `test_bar.py` I have the following import statement: `from foo import bar` However when I run `python /my_project/tests/test_bar.py` I get this error: `ImportError: No module named foo`. Any ideas on how to fix this?
2015/02/22
[ "https://Stackoverflow.com/questions/28664632", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680879/" ]
Think about what is on your `PYTHONPATH`. The toplevel package for your project is `my_project`, so that must be the start of any import for something in your project. ``` from my_project.foo import bar ``` You could also use a relative import, although this isn't as clear, and would break if you ever changed the relative location of the module you were performing this import from. ``` from ..foo import bar ``` --- Ideally, the `test` folder is not a package at all, and is not part of your application package. See pytests's page on [good practices](http://pytest.org/latest/goodpractises.html#choosing-a-test-layout-import-rules). This requires that you add a `setup.py` to your package and install it to your virtualenv in develop mode. ``` pip install -e . ``` --- Don't run the tests by pointing directly at a file within your application. After structuring/installing your project correctly, use the discovery mechanism for whatever framework you're using to run the tests for you. For example, with pytest, just point at the test folder: ``` pytest tests ``` Or for the built-in unittest module: ``` python -m unittest discover -s tests ```
You can use relative imports:

```
from ..foo import bar
```

<https://docs.python.org/2/whatsnew/2.5.html#pep-328-absolute-and-relative-imports>

but I also think that [installing](https://docs.python.org/2/distutils/setupscript.html) your project in a venv and using absolute imports is the better way.
28,664,632
This is my project set up: ``` my_project ./my_project ./__init__.py ./foo ./__init__.py ./bar.py ./tests ./__init__.py ./test_bar.py ``` Inside `test_bar.py` I have the following import statement: `from foo import bar` However when I run `python /my_project/tests/test_bar.py` I get this error: `ImportError: No module named foo`. Any ideas on how to fix this?
2015/02/22
[ "https://Stackoverflow.com/questions/28664632", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2680879/" ]
Think about what is on your `PYTHONPATH`. The toplevel package for your project is `my_project`, so that must be the start of any import for something in your project. ``` from my_project.foo import bar ``` You could also use a relative import, although this isn't as clear, and would break if you ever changed the relative location of the module you were performing this import from. ``` from ..foo import bar ``` --- Ideally, the `test` folder is not a package at all, and is not part of your application package. See pytests's page on [good practices](http://pytest.org/latest/goodpractises.html#choosing-a-test-layout-import-rules). This requires that you add a `setup.py` to your package and install it to your virtualenv in develop mode. ``` pip install -e . ``` --- Don't run the tests by pointing directly at a file within your application. After structuring/installing your project correctly, use the discovery mechanism for whatever framework you're using to run the tests for you. For example, with pytest, just point at the test folder: ``` pytest tests ``` Or for the built-in unittest module: ``` python -m unittest discover -s tests ```
``` import sys sys.path.append('/path/to/my_project/') ``` Now you can import ``` from foo import bar ```
62,126,379
Very sorry in advance for the long paste. The code is straight from the text. It may be due to class `Scene`, that seems to have the instruction to: subclass it and implement enter(). But I don't know what that means. ```py from sys import exit from random import randint from textwrap import dedent class Scene(object): def enter(self): print("This scene is not yet configured.") print("Subclass it and implement enter()") exit(1) # Skeleton code: a base class for Scene that will have # the common things that all scenes do. class Engine(object): def __init__(self, scene_map): self.scene_map = scene_map def play(self): current_scene = self.scene_map.opening_scene() last_scene = self.scene_map.next_scene('finished') while current_scene != last_scene: next_scene_name = current_scene.enter() current_scene = self.scene_map.next_scene(next_scene_name) # be sure to print out the last scene current_scene.enter() class Death(Scene): quips = [ "You died. You kinda suck at this.", "Your Mom would be proud...if she were smarter.", "Such a luser.", "I have a small puppy that's better at this.", "You're worse than your Dad's jokes." ] def enter(self): print(Death.quips[randint(0, len(self.quips)-1)]) class CentralCorridor(Scene): def enter(self): print(dedent(""" The Gothons of Planet Percal #25 have invaded your ship and destroyed your entire crew. You are the last surviving member member and your last mission is to get the neutron destruct bomb from the Weapons Armory, put it in the bridge, and blow the ship up after getting into an escape pod. You're running down the central corridor to the Weapons Armory when a Gothon jumps out, red scaly skin, dark grimy teeth, and evil clown costume flowing around his hate filled body. He's blocking the door to the Armory and about to pull a weapon to blast you. """)) action = input("> ") if action == "shoot!": print(dedent(""" Quick on the draw you yank out your blaster and fire it at the Gothon. His clown costume is flowing and moving around his body, which throws off your aim. Your laster hits his costume but misses him entirely. This completely ruins his brand new costume his mother bought him, which makes him fly into an insane rage and blast you repeatedly in the face until you are dead. Then he eats you. """)) return 'death' elif action == "dodge!": print(dedent(""" Like a world class boxer you dodge, weave, slip and slide right as the Gothon's blaster cranks a laser past your head. In the middle of your artful dodge your foot slips and you bang your head on the metal wall and pass out. You wake up shortly after only to die as the Gothon stomps on your head and eats you. """)) return 'death' elif action == "tell a joke": print(dedent(""" Lucky for you they made you learn Gothon insults in the academy. You tell the one Gothon joke you know: Lbhe zbgure vf fb sng, jura fur fvgf nebhaq gur ubhfr, fur fvgf nebhaq gur ubhfr. The Gothon stops, tries not to laugh, then busts out laughing and can't move. While he's laughing you run up and shoot him square in the head putting him down, then jump through the Weapon Armory door. """)) return 'laster_weapon_armory' else: print("DOES NOT COMPUTE!") return 'central_corridor' class LaserWeaponArmory(Scene): def enter(self): print(dedent(""" You do a dive roll into the Weapon Armory, crouch and scan the room for more GOthon that might be hiding. It's dead quiet, too quiet. You stand up and run to the far side of the room and find the neutron bomb in its container. 
There's a keypad lock on the box and you need the code to get the bomb out. If you get the code wrong 10 times then the lock closes forever and you can't get the bomb. The code is 3 digits. """)) code = f"{randint(1,9)}{randint(1,9)}{randint(1,9)}" guess = input("[keypad]> ") guesses = 0 while guess != code and guesses <10: print("BZZZZEDDD!") guesses += 1 guess = input("[keypad]> ") if guess == code: print(dedent(""" The container clicks open and the seal breaks, letting gas out. You grab the neutron bomb and run as fast as you can to the bridge where you must place it in the right spot. """)) return 'the_bridge' else: print(dedent(""" The lock buzzes one last time and then you hear a sickening melting sound as the mechanism is fused together. You decide to sit there, and finally the Gothons blow up the ship from their ship and you die. """)) return 'death' class Finished(Scene): def enter(self): print("You won! Good job.") return 'finished' class Map(object): scenes = { 'central_corridor': CentralCorridor(), 'laster_weapon_armory': LaserWeaponArmory(), 'the_bridge': TheBridge(), 'escape_pod': EscapePod(), 'death': Death(), 'finished': Finished() } def __init__(self, start_scene): self.start_scene = start_scene def next_scene(self, scene_name): val = Map.scenes.get(scene_name) return val def opening_scene(self): return self.next_scene(self.start_scene) # Finally the code that runs the game by making a Map, handing that map to # an Engine before calling play to make game work. a_map = Map('central_corridor') a_game = Engine(a_map) a_game.play() ``` I'm trying to understand what's going on here, but I don't see it. I've also deleted two of the larger classes because they don't seem to be at issue here. EDIT: The return I'm putting in is: ``` (base) ➜ Python python ex43.py ``` Then, ``` dodge! ``` This returns: ``` Like a world class boxer you dodge, weave, slip and slide right as the Gothon's blaster cranks a laser past your head. In the middle of your artful dodge your foot slips and you bang your head on the metal wall and pass out. You wake up shortly after only to die as the Gothon stomps on your head and eats you. You're worse than your Dad's jokes. Traceback (most recent call last): File "ex43.py", line 283, in <module> a_game.play() File "ex43.py", line 45, in play next_scene_name = current_scene.enter() AttributeError: 'NoneType' object has no attribute 'enter' ```
2020/06/01
[ "https://Stackoverflow.com/questions/62126379", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9808986/" ]
The `enter` method for the `Death` Scene doesn't return anything. In Python, all functions without an explicit return statement return `None`, which explains the error you're getting. ``` class Death(Scene): quips = [ "You died. You kinda suck at this.", "Your Mom would be proud...if she were smarter.", "Such a luser.", "I have a small puppy that's better at this.", "You're worse than your Dad's jokes." ] def enter(self): print(Death.quips[randint(0, len(self.quips)-1)]) # NEED TO ADD RETURN LINE HERE <--- ```
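As a hedged sketch of what that missing line could be (the choice is an assumption about the intended game flow; the exercise typically just exits on death):

```python
def enter(self):
    print(Death.quips[randint(0, len(self.quips) - 1)])
    exit(1)  # end the program here so Engine.play() never receives None
```

Alternatively, returning a valid scene name such as `'finished'` would also keep the engine's loop happy, at the cost of printing the winning message after you die.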
As the error says,

```
next_scene_name = current_scene.enter()
AttributeError: 'NoneType' object has no attribute 'enter'
```

That means that your `current_scene` variable is equal to `None` when you call `current_scene.enter()` in the `play` method. You need to make sure the variable `current_scene` is properly instantiated so that it's not `None`. One way to debug this is to add a `print(current_scene)` line right after every mention of `current_scene`. This will tell you when it's equal to `None` in your code.
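Purely as a debugging sketch (not a fix), the suggested print could go inside `Engine.play()` like this:

```python
def play(self):
    current_scene = self.scene_map.opening_scene()
    last_scene = self.scene_map.next_scene('finished')

    while current_scene != last_scene:
        print(current_scene)  # shows None whenever the scene map lookup fails
        next_scene_name = current_scene.enter()
        current_scene = self.scene_map.next_scene(next_scene_name)

    current_scene.enter()
```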
16,178,519
I wrote a metaclass that I'm using for logging purposes in my python project. It makes every class automatically log all activity. The only issue is that I don't want to go into every file and have to add in: ``` __metaclass__ = myMeta ``` Is there a way to set the metaclass in the top level folder so that all the files underneath use that metaclass?
2013/04/23
[ "https://Stackoverflow.com/questions/16178519", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1226565/" ]
No, you can only specify the metaclass per class or per module. You cannot set it for the whole package.

In Python 3.1 and onwards, you *can* intercept the `builtins.__build_class__` hook and insert a metaclass programmatically, see [Overriding the default type() metaclass before Python runs](https://stackoverflow.com/questions/15293172/overriding-the-default-type-metaclass-before-python-runs/15392133#15392133).

In Python 2.7, you could replace `__builtin__.object` with a subclass that uses your metaclass. Like the `builtins.__build_class__` hook, this is *advanced hackery* and can break your code just as much as getting your metaclass in everywhere. Do so by replacing the `object` reference on the [`__builtin__` module](https://docs.python.org/2/library/__builtin__.html):

```
import __builtin__

class MetaClass(type):
    def __new__(mcls, name, *args):
        # do something in the metaclass
        return super(MetaClass, mcls).__new__(mcls, name, *args)

orig_object = __builtin__.object  # keep a reference to the original object

class metaobject(orig_object):
    __metaclass__ = MetaClass

def enable():
    # *replace* object with one that uses your metaclass
    __builtin__.object = metaobject

def disable():
    __builtin__.object = orig_object
```

Run `enable()` before importing your package and all new-style classes (those that *can* support a metaclass) will have your metaclass. Note that this behaviour will now propagate to **all** Python code not already loaded, including the standard library, as your package imports code. You probably want to use:

```
enable()
import package
disable()
```

to limit the effects.
Here's a simple technique. Just *subclass* the *class* itself with `__metaclass__` attribute in the subclass. This process can be automated. util.py ``` class A(object): def __init__(self, first, second): self.first = first self.second = second def __str__(self): return '{} {}'.format(self.first, self.second) ``` main.py ``` from datetime import datetime from util import A def meta(*args): cls = type(*args) setattr(cls, 'created', datetime.now().ctime()) return cls try: print(A.created) except AttributeError as e: print(e) class A(A): __metaclass__ = meta print(A.created, str(A('Michael', 'Jackson'))) ``` Test; ``` $ python main.py type object 'A' has no attribute 'created' ('Wed Mar 9 22:58:16 2016', 'Michael Jackson') ```
20,428,784
Is it possible to write every line I receive from this script into a MySQL table? I want to have 2 columns: the IP address I need for the command (ipAdresse) and a part of the output of the command itself (I want to split some content of the output). I do not want to ask for any code; I just want to know whether it's even possible to keep this code as it is and add some stuff to it, or whether I have to rewrite it to get the results I want :) Right now I just write the output of the command into a text file.

```
#!/usr/bin/python

import subprocess
import commands

ipAdresse_4 = 0
datei = open("pointerRecord.txt", "w")

while (ipAdresse_4 < 255):
    ipAdresse_4 = ipAdresse_4 + 1
    ipAdresse = '82.198.205.%d' % (ipAdresse_4,)
    subprocess.Popen("host %s" % ipAdresse, stdout=datei, shell=True)
```
2013/12/06
[ "https://Stackoverflow.com/questions/20428784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2968265/" ]
Instead of using `respond_to?` why don't you do: ``` def date_or_time?(obj) obj.kind_of?(Date) || obj.kind_of?(Time) end [19] pry(main)> a = Date.new => #<Date: -4712-01-01 ((0j,0s,0n),+0s,2299161j)> [20] pry(main)> date_or_time? a => true [21] pry(main)> b = DateTime.new => #<DateTime: -4712-01-01T00:00:00+00:00 ((0j,0s,0n),+0s,2299161j)> [22] pry(main)> date_or_time? b => true [23] pry(main)> c = Time.new => 2013-12-06 10:44:57 -0600 [24] pry(main)> date_or_time? c => true ```
Alternatively you could still use `respond_to?` with `:iso8601`. I believe only 'date-y' types will respond to that (Date, Time, DateTime).
11,174,532
I connect to a mysql database using pymysql and after executing a request I got the following string: `\xd0\xbc\xd0\xb0\xd1\x80\xd0\xba\xd0\xb0`. This should be 5 characters in utf8, but when I do `print s.encode('utf-8')` I get this: `╨╝╨░╤А╨║╨░`. The string looks like byte representation of unicode characters, which python fails to recognize. So what do I do to make python process them properly?
2012/06/24
[ "https://Stackoverflow.com/questions/11174532", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1477552/" ]
You want to `decode` (not `encode`) to get a unicode string from a byte string. ``` >>> s = '\xd0\xbc\xd0\xb0\xd1\x80\xd0\xba\xd0\xb0' >>> us = s.decode('utf-8') >>> print us марка ``` Note that you may not be able to `print` it because it contains characters outside ASCII. But you should be able to see its value in a Unicode-aware debugger. I ran the above in IDLE. **Update** It seems what you actually have is this: ``` >>> s = u'\xd0\xbc\xd0\xb0\xd1\x80\xd0\xba\xd0\xb0' ``` This is trickier because you first have to get those bytes into a bytestring before you call `decode`. I'm not sure what the "best" way to do that is, but this works: ``` >>> us = ''.join(chr(ord(c)) for c in s).decode('utf-8') >>> print us марка ``` Note that you should of course be decoding it *before* you store it in the database as a string.
Mark is right: you need to decode the string. Byte strings become Unicode strings by decoding them, encoding goes the other way. This and many other details are at [Pragmatic Unicode, or, How Do I Stop The Pain?](http://bit.ly/unipain).
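A tiny Python 2 sketch of both directions, using the byte string from the question:

```python
# -*- coding: utf-8 -*-
raw = '\xd0\xbc\xd0\xb0\xd1\x80\xd0\xba\xd0\xb0'  # UTF-8 bytes as stored in MySQL

text = raw.decode('utf-8')    # bytes -> unicode: u'\u043c\u0430\u0440\u043a\u0430' ("марка")
back = text.encode('utf-8')   # unicode -> bytes
assert back == raw
```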
9,425,556
I'm trying to use the app wapiti to run some security tests on a web project running on localhost, but I have some problems with the syntax of Python. I followed the instructions given on the wapiti project site and wrote this:

```
C:\Python27\python C:\Wapiti\wapiti.py http://server.com/base/url/
```

but I get this:

```
SintaxError: Invalid Sintax
```

I had read that the syntax of Python changed in that version... I really need help, please.
2012/02/24
[ "https://Stackoverflow.com/questions/9425556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1229915/" ]
MapKit does not expose a means of performing driving directions, so it's not as simple as asking the map to display a course from location A to location B. You have two options:

1) Integrate with Google's API to get the driving directions, and overlay your own lines onto the MapKit map.

or

2) Simply direct your users out of the app and delegate this functionality to the built-in Maps app.

I have no experience with the former, but the latter is very easy. Simply:

```
CLLocationCoordinate2D location = [[map userLocation] location].coordinate;
double currentLat = location.latitude;
double currentLong = location.longitude;
NSString *googleUrl = [[NSString alloc] initWithFormat:@"http://maps.google.com/maps?saddr=%f,%f&daddr=%f,%f", currentLat, currentLong, item.latitude, item.longitude];
NSLog(@"%@", googleUrl);
[[UIApplication sharedApplication] openURL:[[NSURL alloc] initWithString:googleUrl]];
```
Actually, there is no API in the iPhone SDK for drawing a route on the map. There is a repo on GitHub that uses the Google Maps API to draw a route on the map by using a map overlay. It has some limitations, but you can get help from this repo - <https://github.com/kishikawakatsumi/MapKit-Route-Directions>
32,788,322
I want to add a column in a `DataFrame` with some arbitrary value (that is the same for each row). I get an error when I use `withColumn` as follows: ``` dt.withColumn('new_column', 10).head(5) ``` ```none --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-50-a6d0257ca2be> in <module>() 1 dt = (messages 2 .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt"))) ----> 3 dt.withColumn('new_column', 10).head(5) /Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col) 1166 [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)] 1167 """ -> 1168 return self.select('*', col.alias(colName)) 1169 1170 @ignore_unicode_prefix AttributeError: 'int' object has no attribute 'alias' ``` It seems that I can trick the function into working as I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case): ``` dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5) ``` ```none [Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)] ``` This is supremely hacky, right? I assume there is a more legit way to do this?
2015/09/25
[ "https://Stackoverflow.com/questions/32788322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1245418/" ]
**Spark 2.2+** Spark 2.2 introduces `typedLit` to support `Seq`, `Map`, and `Tuples` ([SPARK-19254](https://issues.apache.org/jira/browse/SPARK-19254)) and following calls should be supported (Scala): ```scala import org.apache.spark.sql.functions.typedLit df.withColumn("some_array", typedLit(Seq(1, 2, 3))) df.withColumn("some_struct", typedLit(("foo", 1, 0.3))) df.withColumn("some_map", typedLit(Map("key1" -> 1, "key2" -> 2))) ``` **Spark 1.3+** (`lit`), **1.4+** (`array`, `struct`), **2.0+** (`map`): The second argument for `DataFrame.withColumn` should be a `Column` so you have to use a literal: ``` from pyspark.sql.functions import lit df.withColumn('new_column', lit(10)) ``` If you need complex columns you can build these using blocks like `array`: ``` from pyspark.sql.functions import array, create_map, struct df.withColumn("some_array", array(lit(1), lit(2), lit(3))) df.withColumn("some_struct", struct(lit("foo"), lit(1), lit(.3))) df.withColumn("some_map", create_map(lit("key1"), lit(1), lit("key2"), lit(2))) ``` Exactly the same methods can be used in Scala. ``` import org.apache.spark.sql.functions.{array, lit, map, struct} df.withColumn("new_column", lit(10)) df.withColumn("map", map(lit("key1"), lit(1), lit("key2"), lit(2))) ``` To provide names for `structs` use either `alias` on each field: ``` df.withColumn( "some_struct", struct(lit("foo").alias("x"), lit(1).alias("y"), lit(0.3).alias("z")) ) ``` or `cast` on the whole object ``` df.withColumn( "some_struct", struct(lit("foo"), lit(1), lit(0.3)).cast("struct<x: string, y: integer, z: double>") ) ``` It is also possible, although slower, to use an UDF. **Note**: The same constructs can be used to pass constant arguments to UDFs or SQL functions.
In spark 2.2 there are two ways to add constant value in a column in DataFrame: 1) Using `lit` 2) Using `typedLit`. The difference between the two is that `typedLit` can also handle parameterized scala types e.g. List, Seq, and Map **Sample DataFrame:** ``` val df = spark.createDataFrame(Seq((0,"a"),(1,"b"),(2,"c"))).toDF("id", "col1") +---+----+ | id|col1| +---+----+ | 0| a| | 1| b| +---+----+ ``` **1) Using `lit`:** Adding constant string value in new column named newcol: ``` import org.apache.spark.sql.functions.lit val newdf = df.withColumn("newcol",lit("myval")) ``` Result: ``` +---+----+------+ | id|col1|newcol| +---+----+------+ | 0| a| myval| | 1| b| myval| +---+----+------+ ``` **2) Using `typedLit`:** ``` import org.apache.spark.sql.functions.typedLit df.withColumn("newcol", typedLit(("sample", 10, .044))) ``` Result: ``` +---+----+-----------------+ | id|col1| newcol| +---+----+-----------------+ | 0| a|[sample,10,0.044]| | 1| b|[sample,10,0.044]| | 2| c|[sample,10,0.044]| +---+----+-----------------+ ```
32,788,322
I want to add a column in a `DataFrame` with some arbitrary value (that is the same for each row). I get an error when I use `withColumn` as follows: ``` dt.withColumn('new_column', 10).head(5) ``` ```none --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-50-a6d0257ca2be> in <module>() 1 dt = (messages 2 .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt"))) ----> 3 dt.withColumn('new_column', 10).head(5) /Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col) 1166 [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)] 1167 """ -> 1168 return self.select('*', col.alias(colName)) 1169 1170 @ignore_unicode_prefix AttributeError: 'int' object has no attribute 'alias' ``` It seems that I can trick the function into working as I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case): ``` dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5) ``` ```none [Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)] ``` This is supremely hacky, right? I assume there is a more legit way to do this?
2015/09/25
[ "https://Stackoverflow.com/questions/32788322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1245418/" ]
**Spark 2.2+** Spark 2.2 introduces `typedLit` to support `Seq`, `Map`, and `Tuples` ([SPARK-19254](https://issues.apache.org/jira/browse/SPARK-19254)) and following calls should be supported (Scala): ```scala import org.apache.spark.sql.functions.typedLit df.withColumn("some_array", typedLit(Seq(1, 2, 3))) df.withColumn("some_struct", typedLit(("foo", 1, 0.3))) df.withColumn("some_map", typedLit(Map("key1" -> 1, "key2" -> 2))) ``` **Spark 1.3+** (`lit`), **1.4+** (`array`, `struct`), **2.0+** (`map`): The second argument for `DataFrame.withColumn` should be a `Column` so you have to use a literal: ``` from pyspark.sql.functions import lit df.withColumn('new_column', lit(10)) ``` If you need complex columns you can build these using blocks like `array`: ``` from pyspark.sql.functions import array, create_map, struct df.withColumn("some_array", array(lit(1), lit(2), lit(3))) df.withColumn("some_struct", struct(lit("foo"), lit(1), lit(.3))) df.withColumn("some_map", create_map(lit("key1"), lit(1), lit("key2"), lit(2))) ``` Exactly the same methods can be used in Scala. ``` import org.apache.spark.sql.functions.{array, lit, map, struct} df.withColumn("new_column", lit(10)) df.withColumn("map", map(lit("key1"), lit(1), lit("key2"), lit(2))) ``` To provide names for `structs` use either `alias` on each field: ``` df.withColumn( "some_struct", struct(lit("foo").alias("x"), lit(1).alias("y"), lit(0.3).alias("z")) ) ``` or `cast` on the whole object ``` df.withColumn( "some_struct", struct(lit("foo"), lit(1), lit(0.3)).cast("struct<x: string, y: integer, z: double>") ) ``` It is also possible, although slower, to use an UDF. **Note**: The same constructs can be used to pass constant arguments to UDFs or SQL functions.
As the other answers have described, `lit` and `typedLit` are how to add constant columns to DataFrames. `lit` is an important Spark function that you will use frequently, but not for adding constant columns to DataFrames. You'll commonly be using `lit` to create `org.apache.spark.sql.Column` objects because that's the column type required by most of the `org.apache.spark.sql.functions`. Suppose you have a DataFrame with a `some_date` DateType column and would like to add a column with the days between December 31, 2020 and `some_date`. Here's your DataFrame: ``` +----------+ | some_date| +----------+ |2020-09-23| |2020-01-05| |2020-04-12| +----------+ ``` Here's how to calculate the days till the year end: ```scala val diff = datediff(lit(Date.valueOf("2020-12-31")), col("some_date")) df .withColumn("days_till_yearend", diff) .show() ``` ``` +----------+-----------------+ | some_date|days_till_yearend| +----------+-----------------+ |2020-09-23| 99| |2020-01-05| 361| |2020-04-12| 263| +----------+-----------------+ ``` You could also use `lit` to create a `year_end` column and compute the `days_till_yearend` like so: ```scala import java.sql.Date df .withColumn("yearend", lit(Date.valueOf("2020-12-31"))) .withColumn("days_till_yearend", datediff(col("yearend"), col("some_date"))) .show() ``` ``` +----------+----------+-----------------+ | some_date| yearend|days_till_yearend| +----------+----------+-----------------+ |2020-09-23|2020-12-31| 99| |2020-01-05|2020-12-31| 361| |2020-04-12|2020-12-31| 263| +----------+----------+-----------------+ ``` Most of the time, you don't need to use `lit` to append a constant column to a DataFrame. You just need to use `lit` to convert a Scala type to a `org.apache.spark.sql.Column` object because that's what's required by the function. See the `datediff` function signature: [![enter image description here](https://i.stack.imgur.com/1RBwk.png)](https://i.stack.imgur.com/1RBwk.png) As you can see, `datediff` requires two Column arguments.
32,788,322
I want to add a column in a `DataFrame` with some arbitrary value (that is the same for each row). I get an error when I use `withColumn` as follows: ``` dt.withColumn('new_column', 10).head(5) ``` ```none --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-50-a6d0257ca2be> in <module>() 1 dt = (messages 2 .select(messages.fromuserid, messages.messagetype, floor(messages.datetime/(1000*60*5)).alias("dt"))) ----> 3 dt.withColumn('new_column', 10).head(5) /Users/evanzamir/spark-1.4.1/python/pyspark/sql/dataframe.pyc in withColumn(self, colName, col) 1166 [Row(age=2, name=u'Alice', age2=4), Row(age=5, name=u'Bob', age2=7)] 1167 """ -> 1168 return self.select('*', col.alias(colName)) 1169 1170 @ignore_unicode_prefix AttributeError: 'int' object has no attribute 'alias' ``` It seems that I can trick the function into working as I want by adding and subtracting one of the other columns (so they add to zero) and then adding the number I want (10 in this case): ``` dt.withColumn('new_column', dt.messagetype - dt.messagetype + 10).head(5) ``` ```none [Row(fromuserid=425, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=47019141, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=49746356, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=93506471, messagetype=1, dt=4809600.0, new_column=10), Row(fromuserid=80488242, messagetype=1, dt=4809600.0, new_column=10)] ``` This is supremely hacky, right? I assume there is a more legit way to do this?
2015/09/25
[ "https://Stackoverflow.com/questions/32788322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1245418/" ]
In spark 2.2 there are two ways to add constant value in a column in DataFrame: 1) Using `lit` 2) Using `typedLit`. The difference between the two is that `typedLit` can also handle parameterized scala types e.g. List, Seq, and Map **Sample DataFrame:** ``` val df = spark.createDataFrame(Seq((0,"a"),(1,"b"),(2,"c"))).toDF("id", "col1") +---+----+ | id|col1| +---+----+ | 0| a| | 1| b| +---+----+ ``` **1) Using `lit`:** Adding constant string value in new column named newcol: ``` import org.apache.spark.sql.functions.lit val newdf = df.withColumn("newcol",lit("myval")) ``` Result: ``` +---+----+------+ | id|col1|newcol| +---+----+------+ | 0| a| myval| | 1| b| myval| +---+----+------+ ``` **2) Using `typedLit`:** ``` import org.apache.spark.sql.functions.typedLit df.withColumn("newcol", typedLit(("sample", 10, .044))) ``` Result: ``` +---+----+-----------------+ | id|col1| newcol| +---+----+-----------------+ | 0| a|[sample,10,0.044]| | 1| b|[sample,10,0.044]| | 2| c|[sample,10,0.044]| +---+----+-----------------+ ```
As the other answers have described, `lit` and `typedLit` are how to add constant columns to DataFrames. `lit` is an important Spark function that you will use frequently, but not for adding constant columns to DataFrames. You'll commonly be using `lit` to create `org.apache.spark.sql.Column` objects because that's the column type required by most of the `org.apache.spark.sql.functions`. Suppose you have a DataFrame with a `some_date` DateType column and would like to add a column with the days between December 31, 2020 and `some_date`. Here's your DataFrame: ``` +----------+ | some_date| +----------+ |2020-09-23| |2020-01-05| |2020-04-12| +----------+ ``` Here's how to calculate the days till the year end: ```scala val diff = datediff(lit(Date.valueOf("2020-12-31")), col("some_date")) df .withColumn("days_till_yearend", diff) .show() ``` ``` +----------+-----------------+ | some_date|days_till_yearend| +----------+-----------------+ |2020-09-23| 99| |2020-01-05| 361| |2020-04-12| 263| +----------+-----------------+ ``` You could also use `lit` to create a `year_end` column and compute the `days_till_yearend` like so: ```scala import java.sql.Date df .withColumn("yearend", lit(Date.valueOf("2020-12-31"))) .withColumn("days_till_yearend", datediff(col("yearend"), col("some_date"))) .show() ``` ``` +----------+----------+-----------------+ | some_date| yearend|days_till_yearend| +----------+----------+-----------------+ |2020-09-23|2020-12-31| 99| |2020-01-05|2020-12-31| 361| |2020-04-12|2020-12-31| 263| +----------+----------+-----------------+ ``` Most of the time, you don't need to use `lit` to append a constant column to a DataFrame. You just need to use `lit` to convert a Scala type to a `org.apache.spark.sql.Column` object because that's what's required by the function. See the `datediff` function signature: [![enter image description here](https://i.stack.imgur.com/1RBwk.png)](https://i.stack.imgur.com/1RBwk.png) As you can see, `datediff` requires two Column arguments.
36,791,792
I am using django-cors-headers to overcome cors issues in python django. But I am getting. > > 'Access-Control-Allow-Origin' header contains multiple values '\*, \*', but only one is allowed. while trying to access using angularjs from <http://localhost:8000> > > > here is my settings for CORS that I am using. ``` INSTALLED_APPS = INSTALLED_APPS + ['corsheaders'] MIDDLEWARE_CLASSES = MIDDLEWARE_CLASSES + ['corsheaders.middleware.CorsMiddleware', 'corsheaders.middleware.CorsPostCsrfMiddleware'] CORS_ORIGIN_ALLOW_ALL = True CORS_REPLACE_HTTPS_REFERER = True CORS_ALLOW_HEADERS = ( 'x-requested-with', 'content-type', 'accept', 'origin', 'authorization', 'x-csrftoken', 'accept-encoding' ) CORS_ALLOW_METHODS = ( 'GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS' ) ``` if anyone has resolved this issue please let me know.
2016/04/22
[ "https://Stackoverflow.com/questions/36791792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1433639/" ]
You need to do

```
MIDDLEWARE_CLASSES = (
    ...
    'corsheaders.middleware.CorsMiddleware',
    'django.middleware.common.CommonMiddleware',
    ...
)

CORS_ORIGIN_ALLOW_ALL = True  # for testing
```

Note that `CorsMiddleware` sits above `CommonMiddleware`. Hope this helps.
```
CORS_ORIGIN_ALLOW_ALL = False
```

Change `CORS_ORIGIN_ALLOW_ALL` to `False`, so the middleware stops adding a wildcard `Access-Control-Allow-Origin` header for every origin.
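As a hedged sketch of what the settings might then look like (the origin below is a placeholder; older django-cors-headers versions expect host:port entries, newer ones expect a full origin with scheme):

```python
# settings.py
CORS_ORIGIN_ALLOW_ALL = False

# Only these origins receive the Access-Control-Allow-Origin header.
CORS_ORIGIN_WHITELIST = (
    'localhost:8000',
)
```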
30,930,052
(I'm using Python 3.4 for this, on Windows) So, I have this code I whipped out to better show my troubles: ``` #!/usr/bin/env python # -*- coding: utf-8 -*- import os os.startfile('C:\\téxt.txt') ``` On IDLE it works as it should (it just opens that file I specified), but on Console (double-click) it keeps saying Windows can't find the file. Of course, if I try to open "text.txt" instead it works perfectly, as long as it exists. It's slowly driving me insane. Someone help me, please.
2015/06/19
[ "https://Stackoverflow.com/questions/30930052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5026708/" ]
Try this: select the table view and go to the Attribute Inspector. Under Separator, just set the table view separator color to Clear Color.
Try this: select the table view and go to the Attribute Inspector. Find the Separator property and change it from Default to None, and also set the color to Clear Color. I hope it will work for you... good luck!! :)
30,930,052
(I'm using Python 3.4 for this, on Windows) So, I have this code I whipped out to better show my troubles: ``` #!/usr/bin/env python # -*- coding: utf-8 -*- import os os.startfile('C:\\téxt.txt') ``` On IDLE it works as it should (it just opens that file I specified), but on Console (double-click) it keeps saying Windows can't find the file. Of course, if I try to open "text.txt" instead it works perfectly, as long as it exists. It's slowly driving me insane. Someone help me, please.
2015/06/19
[ "https://Stackoverflow.com/questions/30930052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5026708/" ]
You just need to set the "Selection" property of the UITableViewCell to "None" and add the following code to your controller. See the screenshot; it's working. ![enter image description here](https://i.stack.imgur.com/TKxH9.png) Code:

```
- (void)tableView:(UITableView *)tableView didDeselectRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = (UITableViewCell *)[tableView cellForRowAtIndexPath:indexPath];
    [self setCellColor:[UIColor colorWithRed:0.239 green:0.239 blue:0.239 alpha:1] ForCell:cell]; // normal color
}
```
Try this: select the table view and go to the Attribute Inspector. Find the Separator property and change it from Default to None, and also set the color to Clear Color. I hope it will work for you... good luck!! :)
11,697,096
I am trying to send a message through GCM (Google Cloud Messaging). I have registered through Google APIs, I can send a regID to my website (which is a Google App Engine Backend) from multiple Android test phones. However, I can't send anything to GCM from Google App Engine. Here is what I am trying to use. ``` regId = "APA91b..." json_data = {"collapse_key" : "Food-Promo", "data" : { "Category" : "FOOD", "Type": "VEG", }, "registration_ids": [regId], } url = 'https://android.googleapis.com/gcm/send' apiKey = "AI..." myKey = "key=" + apiKey headers = {'Content-Type': 'application/json', 'Authorization': myKey} data = urllib.urlencode(json_data) data2 = {"title": title} data3 = urllib.urlencode(data2) req = urllib2.Request(url, data, headers) f = urllib2.urlopen(req) response = f.read() f.close() logging.debug("***!!!!!!!WriteEntry TEST ----- Response: " + response) ``` And here is the error that I am receiving. ``` Traceback (most recent call last): File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__ handler.post(*groups) File "/base/data/home/apps/s~journaltestza/26.360625174851783344/main.py", line 213, in post f = urllib2.urlopen(req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 124, in urlopen return _opener.open(url, data) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 387, in open response = meth(req, response) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 425, in error return self._call_chain(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain result = func(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 400: Bad Request ``` Thanks!
2012/07/28
[ "https://Stackoverflow.com/questions/11697096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1256336/" ]
What are data2 and data3 used for? The data you are posting was not proper JSON, so you need to use json.dumps(data). The code should be like this:

```
import json
import urllib2

json_data = {"collapse_key" : "Food-Promo",
    "data" : {
        "Category" : "FOOD",
        "Type": "VEG",
    },
    "registration_ids": [regId],
}
url = 'https://android.googleapis.com/gcm/send'
apiKey = "AI..."
myKey = "key=" + apiKey
data = json.dumps(json_data)
headers = {'Content-Type': 'application/json', 'Authorization': myKey}
req = urllib2.Request(url, data, headers)
f = urllib2.urlopen(req)
response = json.loads(f.read())
reply = {}
if response['failure'] == 0:
    reply['error'] = '0'
else:
    reply['error'] = '1'
return HttpResponse(json.dumps(reply), mimetype="application/javascript")
```
Try using [python-gcm](https://github.com/geeknam/python-gcm). It can handle errors as well.
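A minimal hedged sketch of what python-gcm usage might look like, reusing the truncated API key, registration id and payload from the question as placeholders:

```python
from gcm import GCM

gcm = GCM("AI...")  # your server API key

data = {"Category": "FOOD", "Type": "VEG"}

# JSON request to one or more registration ids; the library parses GCM's response
# and surfaces per-id errors for you.
response = gcm.json_request(registration_ids=["APA91b..."],
                            data=data,
                            collapse_key="Food-Promo")
```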
11,697,096
I am trying to send a message through GCM (Google Cloud Messaging). I have registered through Google APIs, I can send a regID to my website (which is a Google App Engine Backend) from multiple Android test phones. However, I can't send anything to GCM from Google App Engine. Here is what I am trying to use. ``` regId = "APA91b..." json_data = {"collapse_key" : "Food-Promo", "data" : { "Category" : "FOOD", "Type": "VEG", }, "registration_ids": [regId], } url = 'https://android.googleapis.com/gcm/send' apiKey = "AI..." myKey = "key=" + apiKey headers = {'Content-Type': 'application/json', 'Authorization': myKey} data = urllib.urlencode(json_data) data2 = {"title": title} data3 = urllib.urlencode(data2) req = urllib2.Request(url, data, headers) f = urllib2.urlopen(req) response = f.read() f.close() logging.debug("***!!!!!!!WriteEntry TEST ----- Response: " + response) ``` And here is the error that I am receiving. ``` Traceback (most recent call last): File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__ handler.post(*groups) File "/base/data/home/apps/s~journaltestza/26.360625174851783344/main.py", line 213, in post f = urllib2.urlopen(req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 124, in urlopen return _opener.open(url, data) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 387, in open response = meth(req, response) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 425, in error return self._call_chain(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain result = func(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 400: Bad Request ``` Thanks!
2012/07/28
[ "https://Stackoverflow.com/questions/11697096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1256336/" ]
What are data2 and data3 used for? The data you are posting was not proper JSON, so you need to use json.dumps(data). The code should be like this:

```
import json
import urllib2

json_data = {"collapse_key" : "Food-Promo",
    "data" : {
        "Category" : "FOOD",
        "Type": "VEG",
    },
    "registration_ids": [regId],
}
url = 'https://android.googleapis.com/gcm/send'
apiKey = "AI..."
myKey = "key=" + apiKey
data = json.dumps(json_data)
headers = {'Content-Type': 'application/json', 'Authorization': myKey}
req = urllib2.Request(url, data, headers)
f = urllib2.urlopen(req)
response = json.loads(f.read())
reply = {}
if response['failure'] == 0:
    reply['error'] = '0'
else:
    reply['error'] = '1'
return HttpResponse(json.dumps(reply), mimetype="application/javascript")
```
Here is how I ended up solving it, but the above works as well. ``` def sendGCM(self, regid, email, entry_id, date_modified, kind): url = 'https://android.googleapis.com/gcm/send' apiKey = _MY_API_KEY myKey = "key=" + apiKey json_data = { "registration_id": regid, "data" : { "entry_id" : entry_id, "email": email, "date_modified": date_modified, "kind": kind, "reg_id": regid, }, } ### Get regids registration_data = { "registration_ids": [regid], } headers = {'Content-Type': 'application/json', 'Authorization': myKey} data = urllib.urlencode(json_data) req = urllib2.Request(url, data) req.add_header("Authorization", myKey) f = urllib2.urlopen(req) response = f.read() f.close() ```
11,697,096
I am trying to send a message through GCM (Google Cloud Messaging). I have registered through Google APIs, I can send a regID to my website (which is a Google App Engine Backend) from multiple Android test phones. However, I can't send anything to GCM from Google App Engine. Here is what I am trying to use. ``` regId = "APA91b..." json_data = {"collapse_key" : "Food-Promo", "data" : { "Category" : "FOOD", "Type": "VEG", }, "registration_ids": [regId], } url = 'https://android.googleapis.com/gcm/send' apiKey = "AI..." myKey = "key=" + apiKey headers = {'Content-Type': 'application/json', 'Authorization': myKey} data = urllib.urlencode(json_data) data2 = {"title": title} data3 = urllib.urlencode(data2) req = urllib2.Request(url, data, headers) f = urllib2.urlopen(req) response = f.read() f.close() logging.debug("***!!!!!!!WriteEntry TEST ----- Response: " + response) ``` And here is the error that I am receiving. ``` Traceback (most recent call last): File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 703, in __call__ handler.post(*groups) File "/base/data/home/apps/s~journaltestza/26.360625174851783344/main.py", line 213, in post f = urllib2.urlopen(req) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 124, in urlopen return _opener.open(url, data) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 387, in open response = meth(req, response) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 498, in http_response 'http', request, response, code, msg, hdrs) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 425, in error return self._call_chain(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain result = func(*args) File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 506, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) HTTPError: HTTP Error 400: Bad Request ``` Thanks!
2012/07/28
[ "https://Stackoverflow.com/questions/11697096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1256336/" ]
Try using [python-gcm](https://github.com/geeknam/python-gcm). It can handle errors as well.
Here is how I ended up solving it, but the above works as well. ``` def sendGCM(self, regid, email, entry_id, date_modified, kind): url = 'https://android.googleapis.com/gcm/send' apiKey = _MY_API_KEY myKey = "key=" + apiKey json_data = { "registration_id": regid, "data" : { "entry_id" : entry_id, "email": email, "date_modified": date_modified, "kind": kind, "reg_id": regid, }, } ### Get regids registration_data = { "registration_ids": [regid], } headers = {'Content-Type': 'application/json', 'Authorization': myKey} data = urllib.urlencode(json_data) req = urllib2.Request(url, data) req.add_header("Authorization", myKey) f = urllib2.urlopen(req) response = f.read() f.close() ```
67,828,477
Iterable objects are those that implement the `__iter__` function, which returns an iterator object, i.e. an object providing the functions `__iter__` and `__next__` and behaving correctly. Usually the size of an iterable object is not known beforehand, and an iterable object is not expected to know how long the iteration will last; however, there are some cases in which knowing the length of the iterable is valuable, for example, when creating an array. `list(x for x in range(1000000))`, for example, creates an initial array of small size, copies it after it is full, and repeats this many times, as explained [here](https://stackoverflow.com/a/33045038/6087087). Of course, it is not that important in this example, but it illustrates the point. Is there a protocol in use for those iterable objects that know their length beforehand? That is, is there a protocol extending [Sized and Iterable](https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes) but not [Collection or Reversible](https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes)? It seems like there is no such protocol among the language features; is there such a protocol in well-known third-party libraries? And how does this discussion relate to generators?
2021/06/03
[ "https://Stackoverflow.com/questions/67828477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6087087/" ]
It sounds like you're asking about something like `__length_hint__`. Excerpts from [PEP 424 – A method for exposing a length hint](https://peps.python.org/pep-0424/): > > CPython currently defines a `__length_hint__` method on several types, such as various iterators. This method is then used by various other functions (such as `list`) to presize lists based on the estimate returned by `__length_hint__`. Types which are not sized, and thus should not define `__len__`, can then define `__length_hint__`, to allow estimating or computing a size (such as many iterators). > > > > > Being able to pre-allocate lists based on the expected size, as estimated by `__length_hint__`, can be a significant optimization. CPython has been observed to run some code faster than PyPy, purely because of this optimization being present. > > > For example, `range` iterators support this ([Try it online!](https://tio.run/##K6gsycjPM7YoKPr/P7NEwVYhsyS1SKMoMS89VcPQwMBAU5OroCgzr0Qjs0QvPj4nNS@9JCM@AygQH68BlMtLrQBJ4VX0/z8A "Python 3.8 (pre-release) – Try It Online")): ``` it = iter(range(1000)) print(it.__length_hint__()) # prints 1000 next(it) print(it.__length_hint__()) # prints 999 ``` And `list` iterators even take list length changes into account ([Try it online!](https://tio.run/##K6gsycjPM7YoKPr/P1HBViHaLz8vNVZBS8HQgCuzBCiQWZJapJGoyVVQlJlXopFZohcfn5Oal16SEZ8BFIiP19DU5MpLrQBJ4VeUqFeQX6BBSE1iQUFqXooGyBV4lf7/DwA "Python 3.8 (pre-release) – Try It Online")): ``` a = [None] * 10 it = iter(a) print(it.__length_hint__()) # prints 10 next(it) print(it.__length_hint__()) # prints 9 a.pop() print(it.__length_hint__()) # prints 8 a.append(None) print(it.__length_hint__()) # prints 9 ``` Generator iterators don't support it, but you can support it in other iterators you write. Here's a demo iterator that... * Produces 10,000 elements. * Hints at having 5,000 elements. * After every 1,000 elements it shows the memory size of the list being built. ``` import gc beacon = object() class MyIterator: def __init__(self): self.n = 10_000 def __iter__(self): return self def __length_hint__(self): print('__length_hint__ called') return 5_000 def __next__(self): if self.n == 0: raise StopIteration self.n -= 1 if self.n % 1_000 == 0: for obj in gc.get_objects(): if isinstance(obj, list) and obj and obj[0] is beacon: print(obj.__sizeof__()) return beacon list(MyIterator()) ``` Output ([Try it online!](https://tio.run/##bZHBSsQwEIbveYq5yCagJYsIIuwDePDkUSRk02l3pE5KEsH15WvSLnZtm0vI8OWbyZ/@nE6e7x/7MAz02fuQoHVCHNE6z3AAf/xAl6QSwnU2Rng5PycMNvnwJCCvGhswhpiSMTJi16ipXlY5VkWy10Zrfc1nx5oPmL4Cj9eu2A65TSdzIt5o0YdclrsFBc52HdY7tVQ/LOZg/N6QUvM3@gH0XB9FliLCa/L9lAN5Xr73Lj94w3UD@9J9w9n4UHIG4px91WIyU@pRqv/gRUiROCbLDmUGb6GjmBRYrkfLZX/T7xmE6SPXmjm8zFbGRPpB3@Qs1Cq0SSFEaSPn/8/kMPwC "Python 3.8 (pre-release) – Try It Online")): ``` __length_hint__ called 45088 45088 45088 45088 45088 50776 57168 64360 72456 81560 ``` We see that `list` asks for a length hint and from the start pre-allocates enough memory for 5,000 references of 8 bytes each, plus 12.5% overallocation. After the first 5,000 elements, it doesn't ask for length hints anymore, and keeps increasing its size bit by bit. If my `__length_hint__` instead accurately returns 10,000, then `list` instead pre-allocates `90088` bytes and that remains until the end.
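If you only need to consume the hint rather than implement it, the standard library also exposes it through `operator.length_hint` (added by the same PEP). A small sketch:

```
# Reading a length hint without touching the dunder directly.
# operator.length_hint falls back to __len__, then __length_hint__,
# then the supplied default (0 if omitted).
import operator

it = iter(range(1000))
print(operator.length_hint(it))         # 1000

def gen():
    yield from range(10)

print(operator.length_hint(gen(), -1))  # -1: generators provide no hint
```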
If I now understand your question, you're still trying to combine two concepts that don't combine in quite this way. `generator` is a subclass of `iterator`; it's a process. `len` applies to data objects -- in particular, to the *iterable* object, as opposed to the *iterator* that traverses the object. Therefore, a generator doesn't really have a length of its own. It returns a sequence of values, and that sequence has a length (when the generator finishes). Can you describe the concept you have of "generator with length" -- if it differs from what I just described? If you keep that distinction in mind, then yes, you can implement `__len__` as an extension to your class. You can add anything you like -- say, a `sqrt` function (See Conway's surreal numbers for details).
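To make the last point concrete, here is a small sketch (the class and names are made up for illustration) of an iterable that knows its length up front while still producing values lazily:

```
# A lazily-evaluated iterable that advertises its size.
# The element count is known when the object is built, so __len__
# can report it even though values are only produced on iteration.
class SizedSquares:
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __iter__(self):
        return (i * i for i in range(self.n))

s = SizedSquares(5)
print(len(s))    # 5, without computing any element
print(list(s))   # [0, 1, 4, 9, 16]
```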
4,364,087
Can this be somehow overcome? Can a child process create a subprocess? The problem is, I have a ready application which needs to call a Python script. This script on its own works perfectly, but it needs to call existing shell scripts. Schematically the problem is in the following code: ### parent.py ``` import subprocess subprocess.call(['/usr/sfw/bin/python', '/usr/apps/openet/bmsystest/relAuto/variousSW/child.py','1', '2']) ``` ### child.py ``` import sys import subprocess print sys.argv[0] print sys.argv[1] subprocess.call(['ls -l'], shell=True) exit ``` ### Running child.py ``` python child.py 1 2 all is ok ``` ### Running parent.py ``` python parent.py Traceback (most recent call last): File "/usr/apps/openet/bmsystest/relAuto/variousSW/child.py", line 2, in ? import subprocess ImportError: No module named subprocess ```
2010/12/06
[ "https://Stackoverflow.com/questions/4364087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457921/" ]
> > There should be nothing stopping you from using subprocess in both child.py and parent.py > > > I am able to run it perfectly fine. :) **Issue Debugging**: > > You are using `python` and `/usr/sfw/bin/python`. > > > 1. Is bare python pointing to the same python? 2. Can you check by typing 'which python'? I am sure if you did the following, it will work for you. ``` /usr/sfw/bin/python parent.py ``` Alternatively, Can you change your `parent.py code` to ``` import subprocess subprocess.call(['python', '/usr/apps/openet/bmsystest/relAuto/variousSW/child.py','1', '2']) ```
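A related safeguard, assuming the parent itself already runs under the interpreter that has `subprocess` available, is to reuse that same interpreter for the child instead of hard-coding an interpreter path:

```
# Launch child.py with whatever interpreter is running parent.py,
# so both processes see the same standard library.
import subprocess
import sys

subprocess.call([sys.executable,
                 '/usr/apps/openet/bmsystest/relAuto/variousSW/child.py',
                 '1', '2'])
```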
Using `subprocess.call` is not the proper way to do it. In my view, `subprocess.Popen` would be better.
parent.py:

```
import subprocess

process = subprocess.Popen(['python', './child.py', 'arg1', 'arg2'],\
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE,\
                           stderr=subprocess.PIPE)
process.wait()
print process.stdout.read()
```

child.py

```
import subprocess
import sys

print sys.argv[1:]

process = subprocess.Popen(['ls', '-a'], stdout = subprocess.PIPE)

process.wait()
print process.stdout.read()
```

Output of the program:

```
python parent.py
['arg1', 'arg2']
.
..
chid.py
child.py
.child.py.swp
parent.py
.ropeproject
```
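One caveat with the `wait()`-then-read pattern above: if the child writes more output than the pipe buffer holds, `wait()` can deadlock. A sketch of the safer variant, using `communicate()` to read and wait in one step:

```
# communicate() consumes stdout/stderr while the child runs and then
# waits for it, avoiding the full-pipe deadlock that wait() can hit.
import subprocess

process = subprocess.Popen(['ls', '-a'],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
out, err = process.communicate()
print(out)
print(process.returncode)
```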
4,364,087
Can this be somehow overcome? Can a child process create a subprocess? The problem is, I have a ready application which needs to call a Python script. This script on its own works perfectly, but it needs to call existing shell scripts. Schematically the problem is in the following code: ### parent.py ``` import subprocess subprocess.call(['/usr/sfw/bin/python', '/usr/apps/openet/bmsystest/relAuto/variousSW/child.py','1', '2']) ``` ### child.py ``` import sys import subprocess print sys.argv[0] print sys.argv[1] subprocess.call(['ls -l'], shell=True) exit ``` ### Running child.py ``` python child.py 1 2 all is ok ``` ### Running parent.py ``` python parent.py Traceback (most recent call last): File "/usr/apps/openet/bmsystest/relAuto/variousSW/child.py", line 2, in ? import subprocess ImportError: No module named subprocess ```
2010/12/06
[ "https://Stackoverflow.com/questions/4364087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457921/" ]
> > There should be nothing stopping you from using subprocess in both child.py and parent.py > > > I am able to run it perfectly fine. :) **Issue Debugging**: > > You are using `python` and `/usr/sfw/bin/python`. > > > 1. Is bare python pointing to the same python? 2. Can you check by typing 'which python'? I am sure if you did the following, it will work for you. ``` /usr/sfw/bin/python parent.py ``` Alternatively, Can you change your `parent.py code` to ``` import subprocess subprocess.call(['python', '/usr/apps/openet/bmsystest/relAuto/variousSW/child.py','1', '2']) ```
You can try to add your Python directory to sys.path in child.py

```
import sys
sys.path.append('../')
```

Yes, it's a bad way, but it can help you.
38,909,543
I am trying to convert a string to hex character by character, but I can't figure it out in Python 3. In older Python versions, what I have below works:

```
test = "This is a test"
for c in range(0, len(test) ):
   print( "0x%s"%test[c].encode("hex") )
```

But with Python 3 I am getting the following error: LookupError: 'hex' is not a text encoding; use codecs.encode() to handle arbitrary codecs. Can anyone tell me what the conversion would be in Python 3? Thanks in advance
2016/08/12
[ "https://Stackoverflow.com/questions/38909543", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1902666/" ]
In python 3x Use [`binascii`](https://docs.python.org/3.1/library/binascii.html) instead of hex: ``` >>> import binascii >>> binascii.hexlify(b'< character / string>') ```
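Putting that together for the original character-by-character loop, a Python 3 sketch:

```
# Python 3: encode each character to bytes first, then hex-encode it.
import binascii

test = "This is a test"
for ch in test:
    print("0x" + binascii.hexlify(ch.encode("ascii")).decode("ascii"))

# Or hex-encode the whole string at once:
print(binascii.hexlify(test.encode("ascii")))  # b'5468697320697320612074657374'
```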
How about: ``` >>> test = "This is a test" >>> for c in range(0, len(test) ): ... print( "0x%x"%ord(test[c])) ... 0x54 0x68 0x69 0x73 0x20 0x69 0x73 0x20 0x61 0x20 0x74 0x65 0x73 0x74 ```
38,909,543
I am trying to convert a string to hex character by character, but I can't figure it out in Python 3. In older Python versions, what I have below works:

```
test = "This is a test"
for c in range(0, len(test) ):
   print( "0x%s"%test[c].encode("hex") )
```

But with Python 3 I am getting the following error: LookupError: 'hex' is not a text encoding; use codecs.encode() to handle arbitrary codecs. Can anyone tell me what the conversion would be in Python 3? Thanks in advance
2016/08/12
[ "https://Stackoverflow.com/questions/38909543", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1902666/" ]
In python 3x Use [`binascii`](https://docs.python.org/3.1/library/binascii.html) instead of hex: ``` >>> import binascii >>> binascii.hexlify(b'< character / string>') ```
To print: ``` for c in test: print(hex(ord(c))) ``` To convert: ``` output = ''.join(hex(ord(c)) for c in test) ``` or without the '0x' in output: ``` output = ''.join(hex(ord(c))[2:] for c in test) ```
38,909,543
I am trying to convert a string to hex character by character, but I can't figure it out in Python 3. In older Python versions, what I have below works:

```
test = "This is a test"
for c in range(0, len(test) ):
   print( "0x%s"%test[c].encode("hex") )
```

But with Python 3 I am getting the following error: LookupError: 'hex' is not a text encoding; use codecs.encode() to handle arbitrary codecs. Can anyone tell me what the conversion would be in Python 3? Thanks in advance
2016/08/12
[ "https://Stackoverflow.com/questions/38909543", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1902666/" ]
To print: ``` for c in test: print(hex(ord(c))) ``` To convert: ``` output = ''.join(hex(ord(c)) for c in test) ``` or without the '0x' in output: ``` output = ''.join(hex(ord(c))[2:] for c in test) ```
How about: ``` >>> test = "This is a test" >>> for c in range(0, len(test) ): ... print( "0x%x"%ord(test[c])) ... 0x54 0x68 0x69 0x73 0x20 0x69 0x73 0x20 0x61 0x20 0x74 0x65 0x73 0x74 ```
14,307,518
I am only an hour into learning how [cron](http://en.wikipedia.org/wiki/Cron) jobs work, and this is what I have done so far. I’m using `crontab -e` to add my cron command, which is: `0/1 * * * * /usr/bin/python /home/my_username/hello.py > /home/my_username/log.txt` `crontab -l` confirms that my command is there. Hello.py: ``` #!/usr/bin/python # Hello world python program print "Hello World!" ``` But I don’t see anything in the log file. Can someone please explain what am I doing wrong?
2013/01/13
[ "https://Stackoverflow.com/questions/14307518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1972942/" ]
Experiment shows that the `0/1` seems to be the problem. `0/1` *should* be equivalent to `*`. If you replace `0/1` with `*`, it should work. Here's my experimental crontab: ``` 0/1 * * * * echo 0/1 >> cron0.log * * * * * echo star >> cron1.log ``` This creates `cron1.log` but not `cron0.log`. I'll look into this and try to figure out why `0/1` isn't working, but for now just use `*` and it should work. Update: The `foo/bar` syntax is specific to the Vixie cron implementation, which is used by most Linux systems and by MacOS X but is not universal. The usual way to run a command every minute is to specify just `*` in the first field. To run a command every 5 minutes, *if* your cron supports it, specify `*/5`. Here's what the `crontab(5)` man page says: > > Step values can be used in conjunction with ranges. Following a range > with `/<number>` specifies skips of the number's value through the > range. For example, `0-23/2` can be used in the hours field to specify > command execution every other hour (the alternative in the V7 standard > is `0,2,4,6,8,10,12,14,16,18,20,22`). Steps are also permitted after > an asterisk, so if you want to say "every two hours", just use `*/2`. > > > I'm not even sure what `0/1` means. **UPDATE 2:** Ok, here's what I've found. Given that fields 2 through 5 are all `*`, setting the first field (specifying minutes) to `*` causes the job to run once a minute. `*/2` runs every 2 minutes, and `*/3` runs every 3 minutes. This is all as expected. Setting the first field to any of `0/1`, `0/2`, or `0/3` causes the job to run only at the top of the hour, i.e., it's equivalent to just `0`. This is not what I would have expected from the description in the man page. The [Wikipedia quote](http://en.wikipedia.org/wiki/Cron#cite_ref-8) in [jgritty's answer](https://stackoverflow.com/a/14307763/827263): > > Some versions of cron may not accept a value preceding "/" if it is > not a range, such as "0". An alternative would be replacing the zero > with an asterisk. > > > doesn't seem to be entirely correct, at least for the version of Vixie cron I'm using; the `0/1` is accepted without complaint, but it doesn't mean what I'd expect and it doesn't seem particularly useful.
`0/1` seems to be formatted wrong for your version of cron. I found this on [wikipedia](http://en.wikipedia.org/wiki/Cron#cite_ref-8): > > Some versions of cron may not accept a value preceding "/" if it is not a range, > such as "0". An alternative would be replacing the zero with an asterisk. > > > So Keith Thompson's answer should work, and so should: `*/1 * * * *`
60,963,452
I am loading in a very large image (60,000 x 80,000 pixels) and am exceeding the max pixels I can load: ```none cv2.error: OpenCV(4.2.0) /Users/travis/build/skvark/opencv-python/opencv/modules/imgcodecs/src/loadsave.cpp:75: error: (-215:Assertion failed) pixels <= CV_IO_MAX_IMAGE_PIXELS in function 'validateInputImageSize' ``` From what I have found this is referring to the limitation imposed on [line 65](https://github.com/opencv/opencv/blob/8eba3c1e7e8975ff1d263a41a5753efaa51d54fc/modules/imgcodecs/src/loadsave.cpp#L65) Ideally I'd change that to deal with at least 5 gigapixel images ``` #define CV_IO_MAX_IMAGE_PIXELS (1<<33) ``` I have seen some workarounds for this ([OpenCV image size limit](https://stackoverflow.com/questions/51493373/opencv-image-size-limit)) but those don't seem to address the problem which is an arbitrary definition (I'm working off a high performance server with 700gb ram so compute not an issue). My issue is that **I have no idea where this file is**. The error points me towards this "travis" directory which doesn't exist locally for me and in my local environment the c++ files aren't available. Any idea on where to look to find the c++ library?
2020/04/01
[ "https://Stackoverflow.com/questions/60963452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10537728/" ]
You have to modify the OpenCV source files and then compile it yourself.
EDIT: You can also raise the limit through an environment variable (set it before OpenCV is loaded):

```
export OPENCV_IO_MAX_IMAGE_PIXELS=1099511627776
```
For my problem I should have specified it was a .tif file (NOTE: most large images will be in this file format anyway). In that case a very easy way to load it into a NumPy array (so it can then work with OpenCV) is with the package tifffile.

```
pip install tifffile
```

This will install it in your Python environment.

```
import tifffile as tifi
img = tifi.imread("VeryLargeFile.tif")
```

From here you can use it as you would any NumPy array, and it is fully compatible with OpenCV etc.
60,963,452
I am loading in a very large image (60,000 x 80,000 pixels) and am exceeding the max pixels I can load: ```none cv2.error: OpenCV(4.2.0) /Users/travis/build/skvark/opencv-python/opencv/modules/imgcodecs/src/loadsave.cpp:75: error: (-215:Assertion failed) pixels <= CV_IO_MAX_IMAGE_PIXELS in function 'validateInputImageSize' ``` From what I have found this is referring to the limitation imposed on [line 65](https://github.com/opencv/opencv/blob/8eba3c1e7e8975ff1d263a41a5753efaa51d54fc/modules/imgcodecs/src/loadsave.cpp#L65) Ideally I'd change that to deal with at least 5 gigapixel images ``` #define CV_IO_MAX_IMAGE_PIXELS (1<<33) ``` I have seen some workarounds for this ([OpenCV image size limit](https://stackoverflow.com/questions/51493373/opencv-image-size-limit)) but those don't seem to address the problem which is an arbitrary definition (I'm working off a high performance server with 700gb ram so compute not an issue). My issue is that **I have no idea where this file is**. The error points me towards this "travis" directory which doesn't exist locally for me and in my local environment the c++ files aren't available. Any idea on where to look to find the c++ library?
2020/04/01
[ "https://Stackoverflow.com/questions/60963452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10537728/" ]
You have to modify the OpenCV source files and then compile it yourself.
EDIT: You can also raise the limit through an environment variable (set it before OpenCV is loaded):

```
export OPENCV_IO_MAX_IMAGE_PIXELS=1099511627776
```
Adding the following to your program should fix the issue in python opencv. ``` import os os.environ["OPENCV_IO_MAX_IMAGE_PIXELS"] = str(pow(2,40)) import cv2 ```
40,012,264
I am new to Python. I am trying to print the sum of all duplicate numbers and the product of the non-duplicate numbers from a Python list. For example, list = [2,2,4,4,5,7,8,9,9]. What I want is sum = 2+2+4+4+9+9 and product = 5\*7\*8.
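For the computation described above, a minimal sketch of one way to do it with `collections.Counter`:

```
# Sum the values that occur more than once, multiply the ones that
# occur exactly once.
from collections import Counter

nums = [2, 2, 4, 4, 5, 7, 8, 9, 9]
counts = Counter(nums)

dup_sum = sum(n for n in nums if counts[n] > 1)   # 2+2+4+4+9+9 = 30

product = 1
for n in nums:
    if counts[n] == 1:
        product *= n                              # 5*7*8 = 280

print(dup_sum, product)  # 30 280
```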
2016/10/13
[ "https://Stackoverflow.com/questions/40012264", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4024000/" ]
You should not create a new `User` object when writing the parcel. You are operating on the current object instance. I guess you can perform all the logic for object creation and reading the parcel in the `createFromParcel()` method but I have seen the pattern below more often where you pass the parcel into a constructor for the object and handle it there. Make sure you read and write the fields to the parcel in the same exact order. ``` public class User implements Parcelable { private String userName; private String passWord; private boolean oldUser; public User(Parcel in) { userName = in.readString(); passWord = in.readString(); oldUser = in.readInt() == 1; } @Override public void writeToParcel(Parcel dest, int flags) { dest.writeString(userName); dest.writeString(passWord); dest.writeInt(oldUser ? 1 : 0); } public String getUserName() { return userName; } public String getPassWord() { return passWord; } public boolean getOldUser() { return oldUser; } public void setUserName(String userName) { this.userName = userName; } public void setPassWord(String passWord) { this.passWord = passWord; } public void setOldUser(boolean oldUser) { this.oldUser = oldUser; } @Override public int describeContents() { return 0; } public static final Parcelable.Creator<User> CREATOR = new Parcelable.Creator<User>() { public User createFromParcel(Parcel in) { return new User(in); } public User[] newArray(int size) { return new User[size]; } }; } ```
For really **Boolean** (not **boolean**) I would go with: ``` @Override public void writeToParcel(Parcel out, int flags) { if (open_now == null) { out.writeInt(-1); } else { out.writeInt(open_now ? 1 : 0); } ``` and ``` private MyClass(Parcel in) { switch (in.readInt()) { case 0: open_now = false; break; case 1: open_now = true; break; default: open_now = null; break; } ``` This will help you to keep "null" value correctly.
41,065,879
I am having trouble executing this python command and it keeps flagging this specific line. I've read the other posts about EOL, but I can't seem to find an issue with the types of quotes used. ``` logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files” + id + ".txt" SyntaxError: EOL while scanning string literal ```
2016/12/09
[ "https://Stackoverflow.com/questions/41065879", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7273874/" ]
The quote character after Text\_Files is incorrect. You could try this: ``` logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files" + id + ".txt" ``` However, I would recommend using the string formatting syntax instead: ``` logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files{}.txt".format(id) ``` Also, your variable named id is shadowing the built-in id, so best practice is to use another variable name.
You have used the wrong quote character at the end of `Text\_Files” + id +`

```
logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files” + id + ".txt"
```

Instead, use this (a straight double quote at the end of the string):

```
logfile = "/Volumes/AC_SMN/03_DIGITAL_12/MD5_CHECKSUM_REPORTS/Text_Files" + id + ".txt"
```
2,366,056
I'm learning Python with 'Dive Into Python 3' and it's very hard to remember everything without writing something, but there are no exercises in this book. So I ask here: where can I find exercises to help me remember everything better?
2010/03/02
[ "https://Stackoverflow.com/questions/2366056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/271388/" ]
I used [ProjectEuler.net](http://projecteuler.net/) when learning Python. It also helped sharpen my math skills.
Find a good Code Kata website: Here's a list I compiled. <http://slott-softwarearchitect.blogspot.com/2009/08/code-kata-resources.html> I've also collected lots of exercises: <http://homepage.mac.com/s_lott/books/python.html> This book, however, covers only Python 2.6, so it may be more confusing than helpful.
2,366,056
I'm learning Python with 'Dive Into Python 3' and it's very hard to remember everything without writing something, but there are no exercises in this book. So I ask here: where can I find exercises to help me remember everything better?
2010/03/02
[ "https://Stackoverflow.com/questions/2366056", "https://Stackoverflow.com", "https://Stackoverflow.com/users/271388/" ]
Consider using [*How to Think Like a Computer Scientist*](http://openbookproject.net/thinkcs/python/english2e/index.html) instead of *Dive Into Python* to learn Python. The former has exercises in every chapter, is targeted for a more appropriate version of Python (Python 3 does not have the library support to make it compelling to use yet, and the version of DIP targeting Python 2 is quite old), and it does not have the reputations for errors and ugly code examples DIP has.
Find a good Code Kata website: Here's a list I compiled. <http://slott-softwarearchitect.blogspot.com/2009/08/code-kata-resources.html> I've also collected lots of exercises: <http://homepage.mac.com/s_lott/books/python.html> This book, however, covers only Python 2.6, so it may be more confusing than helpful.
57,901,183
I am using python to parse CSV file but I face an issue how to extract "Davies" element from second row. CSV looks like this ``` "_submissionusersID","_submissionresponseID","username","firstname","lastname","userid","phone","emailaddress","load_date" "b838b35d-ca18-4c7c-874a-828298ae3345","e9cde2ff-33a7-477e-b3b9-12ceb0d214e0","DAVIESJO","John","Davies","16293","","john_davies@test2.com","2019-08-30 15:37:03" "00ec3205-6fcb-4d6d-b806-25579b49911a","e9cde2ff-11a7-477e-b3b9-12ceb0d934e0","MORANJO","John","Moran","16972","+1 (425) 7404555","brian_moran2@test2.com","2019-08-30 15:37:03" "cc44e6bb-af76-4165-8839-433ed8cf6036","e9cde2ff-33a7-477e-b3b9-12ceb0d934e0","TESTNAN","Nancy","Test","75791","+1 (412) 7402344","nancy_test@test2.com","2019-08-30 15:37:03" "a8ecd4db-6c8d-453c-a2a7-032553e2f0e6","e9cde2ff-33a7-477e-b3b9-12ceb0d234e0","SMITHJO","John","Smith","197448","+1 (415) 5940445","john_smith@test2.com","2019-08-30 15:37:03" ``` I'm stuck here: ``` with open('Docs/CSV/submis/submis.csv') as csv_file: csv_reader = csv.DictReader(csv_file) for row in csv_reader: ```
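Continuing from where the snippet above stops, a sketch that pulls the `lastname` field ("Davies") out of its row with `csv.DictReader` (the file path is the one from the question):

```
# DictReader maps each row to the header names, so a value can be
# looked up by column ("lastname") instead of by position.
import csv

with open('Docs/CSV/submis/submis.csv') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    for row in csv_reader:
        if row['lastname'] == 'Davies':
            print(row['lastname'], row['firstname'], row['emailaddress'])
            break
```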
2019/09/12
[ "https://Stackoverflow.com/questions/57901183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4655668/" ]
In the end I sort of solved this by repeatedly subscribing and unsubscribing from the ZMQ socket. ``` # This is run every time the subscriber receive function is called socket.setsockopt(zmq.SUBSCRIBE, '') md = socket.recv_json() msg = socket.recv() socket.setsockopt(zmq.UNSUBSCRIBE, '') ``` Essentially, I made it so that my subscriber socket doesn't care about the other messages that come in other than the one message that it has received until the next time it tries to grab a message. I don't believe that this is the best solution for this problem, as there are costs involved when repeatedly subscribing and unsubscribing. Hoping that there might be a better way to do this but so far I haven't been able to find it.
Are you looking for `zmq.CONFLATE` option ("Last Message Only")? Something like this in subscriber side: ``` context = zmq.Context() socket = context.socket(zmq.SUB) socket.setsockopt(zmq.SUBSCRIBE, '') socket.setsockopt(zmq.CONFLATE, 1) # last msg only. socket.connect("tcp://localhost:%s" % port) # must be placed after above options. ``` --- [Relevant post](https://stackoverflow.com/a/48461030/3702377) [Learn more](http://api.zeromq.org/4-0:zmq-setsockopt)
18,046,817
I have been trying to add sub-directories to an "items" list and have settled on accomplishing this with the below code.

```
root, dirs, files = iter(os.walk(PATH_TO_DIRECTORY)).next()
items = [{
    'label': directory,
    'path': plugin.url_for('test')
} for count, directory in enumerate(dirs)]
```

The above works, but it is surprisingly slow. The os.walk is very quick, but the loop is slow for some reason. I tried to do it all in one go, adding to the "items" list during the os.walk like below

```
for root, dirs, files in os.walk(PATH_TO_DIRECTORY):
```

but couldn't quite get the right syntax to add the directories to the list. Every single example of os.walk I could find online simply did a `print` of dirs or files, which is fine as an example of its use - but not very useful in the real world. I am new to Python and only just started to look at it today. Could someone advise how to get a list like in my first example but without the separate loop? (I realise it's called a "directory" or something in python, not a list. Let's just call it an array and be done with it... :-) Thanks
2013/08/04
[ "https://Stackoverflow.com/questions/18046817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1743833/" ]
I have no idea what plugin.url\_for() does, but you should be able to speed it up a bit by doing it this way:

```
plugin_url_for = plugin.url_for
_, dirs, _ = iter(os.walk(PATH_TO_DIRECTORY)).next()

items = [{
    'label': directory,
    'path': plugin_url_for('test')
} for directory in dirs]
```

I dropped the root and files variables as it seems you are not using them, and also removed enumerate on dirs as you are not making any use of it. However, put it back if you need it for some weird reason. Please test it and let me know if it helped. I cannot test it properly myself for obvious reasons.
```
dirlist = []
for root, dirs, files in os.walk(PATH_TO_DIRECTORY):
    dirlist += dirs
```

Should do the trick!
For your revised question, I think what you really need is probably the output of:

```
Dirdict = {}
for (root, dirs, files) in os.walk(START):
    Dirdict[root] = dirs
```

You might wish or need some encoding of root with plugin\_url(root); this would give you a single dictionary where you could look up plugin\_url(some\_path) and get a list of all the directories in that path. What you are doing is creating a list of dictionaries, all with a single key. I suspect that you might be after the namedtuple available from the collections module in Python 2.6+ and Python 3.
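If only the top-level sub-directories are needed (as in the original snippet), a sketch that avoids walking the whole tree and builds the item list in one pass. The `plugin.url_for` helper from the question is replaced here by a plain path join, since that helper is specific to the asker's environment:

```
# next(os.walk(...)) yields just (root, dirs, files) for the top level,
# so the rest of the tree is never traversed.
import os

PATH_TO_DIRECTORY = '/some/path'   # placeholder

top_dirs = next(os.walk(PATH_TO_DIRECTORY))[1]
items = [{'label': d, 'path': os.path.join(PATH_TO_DIRECTORY, d)}
         for d in top_dirs]
print(items)
```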
28,147,183
I was reading about builder.connect\_signals which maps handlers of glade files with methods in your python file. Apparently works, except for the Main Window, which is not destroying when you close it. If you run it from terminal is still running and have to Ctrl-C to completely close the application. Here is my python code: ``` #!/usr/bin/env python import pygtk import gtk #from gi.repository import Gtk import gtk.glade class Mixer: def __init__(self): self.gladefile = "mixer3.glade" self.wTree = gtk.Builder() self.wTree.add_from_file(self.gladefile) window = self.wTree.get_object("window1") #if (window): # window.connect("destroy", gtk.main_quit) #line_btn = self.wTree.get_object("toggle_linein") #line_btn.connect("on_toggle_linein_activate", btn_linein_activated) self.wTree.connect_signals(self) window.show_all() # must have! def on_toggle_linein_clicked(self, widget): print "Clicked" def Destroy(self, obj): gtk.main_quit() if __name__ == "__main__": m = Mixer() gtk.main() ```
2015/01/26
[ "https://Stackoverflow.com/questions/28147183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/598070/" ]
On closing the window, the window itself is destroyed but the program's main loop does not stop. You must connect the **destroy** event to the method/function that quits the loop started in the last line of your code. Make this change to the lines below:

```
#if (window):
#   window.connect("destroy", gtk.main_quit)
```

change to:

```
if (window):
    window.connect("destroy", self.Destroy)
```
You can use `GtkApplication` and `GtkApplicationWindow` to manage it for you. When Application has no more open windows, it will automatically terminate. ``` #!/usr/bin/env python import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk from gi.repository import Gio class Mixer(Gtk.Application): def __init__(self): super(Mixer, self).__init__(application_id="org.test", flags=Gio.ApplicationFlags.FLAGS_NONE) def do_activate(self): self.gladefile = "mixer3.glade" self.wTree = Gtk.Builder() self.wTree.add_from_file(self.gladefile) # window1 must be an ApplicationWindow in glade file window = self.wTree.get_object("window1") self.add_window(window) # window should be added to application # but only after 'activate' signal window.show_all() if __name__ == "__main__": m = Mixer() m.run() # No gtk.main(), GtkApplication manages it ```
69,398,944
I have this easy code to connect to download some data using `GRPC` ``` creds = grpc.ssl_channel_credentials() channel = grpc.secure_channel(f'{HOST}:{PORT}', credentials=creds) stub = liveops_pb2_grpc.LiveOpsStub(channel=channel) request = project_pb2.ListProjectsRequest(organization=ORGANIZATION) projects = stub.ListProjects(request=request) print(projects) ``` This worked fine on wednesday. It runs in a docker container with `Python 3.8.10` and `protobuf==3.18.0`,`grpcio==1.40.0`, `grpcio-tools==1.40.0`. Today I updated `MAC OS Big Sur to 11.6` and after finishing some extra features on the code I see that it returns: ``` E0930 21:12:04.108551900 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. E0930 21:12:04.194319000 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. E0930 21:12:04.286163700 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. Traceback (most recent call last): File "", line 302, in <module> projects = liveops_stub.ListProjects(request=request) File "/home/airflow/.local/lib/python3.8/site-packages/grpc/_channel.py", line 946, in __call__ return _end_unary_response_blocking(state, call, False, None) File "/home/airflow/.local/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses" debug_error_string = "{"created":"@1633036324.286560700","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3186,"referenced_errors":[{"created":"@1633036324.286548700","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":146,"grpc_status":14}]}" > ``` Seems to be something related to SSL Certificates. If I check `/etc/ssl/certs` folder it is empty, so could be that SSL SO certificate has been erased? How can I fix it?
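One way to work around missing system CA certificates is to hand gRPC an explicit root bundle: `grpc.ssl_channel_credentials` accepts a `root_certificates` argument. A sketch, assuming the `certifi` package is installed to provide a CA bundle (`HOST` and `PORT` are the placeholders from the code above); whether this resolves the handshake failure depends on which certificate the server presents:

```
# Pass an explicit CA bundle to gRPC instead of relying on /etc/ssl/certs.
# certifi ships the Mozilla root certificates as a single PEM file.
import certifi
import grpc

with open(certifi.where(), 'rb') as f:
    roots = f.read()

creds = grpc.ssl_channel_credentials(root_certificates=roots)
channel = grpc.secure_channel(f'{HOST}:{PORT}', credentials=creds)
```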
2021/09/30
[ "https://Stackoverflow.com/questions/69398944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5556466/" ]
You can do that with the method [String#[]](https://ruby-doc.org/core-2.7.0/String.html#method-i-5B-5D) with an argument that is a regular expression. ``` r = /.*?\.(?:rb|com|net|br)(?!\.br)/ 'giovanna.macedo@lojas100.com.br-215000695716b.ct.domain.com.br'[r] #=> "giovanna.macedo@lojas100.com.br" 'alvaro-neves@stockshop.com-215000695716b.ct.domain.com.br'[r] #=> "alvaro-neves@stockshop.com" 'filiallojas123@filiallojas.net-215000695716b.ct.domain.com.br'[r] #=> "filiallojas123@filiallojas.net" ``` The regular expression reads as follows: "Match zero or more characters non-greedily (`?`), follow by a period, followed by `'rb'` or `'com'` or `'net'` or `'br'`, which is not followed by `.br`. `(?!\.br)` is a *negative lookahead*. Alternatively the regular expression can be written in *free-spacing mode* to make it self-documenting: ``` r = / .*? # match zero or more characters non-greedily \. # match '.' (?: # begin a non-capture group rb # match 'rb' | # or com # match 'com' | # or net # match 'net' | # or br # match 'br' ) # end non-capture group (?! # begin a negative lookahead \.br # match '.br' ) # end negative lookahead /x # invoke free-spacing regex definition mode ```
This should work for your scenario: ```rb expr = /^(.+\.(?:br|com|net))-[^']+(')$/ str = "email = 'giovanna.macedo@lojas100.com.br-215000695716b.ct.domain.com.br'" str.gsub(expr, '\1\2') ```
69,398,944
I have this easy code to connect to download some data using `GRPC` ``` creds = grpc.ssl_channel_credentials() channel = grpc.secure_channel(f'{HOST}:{PORT}', credentials=creds) stub = liveops_pb2_grpc.LiveOpsStub(channel=channel) request = project_pb2.ListProjectsRequest(organization=ORGANIZATION) projects = stub.ListProjects(request=request) print(projects) ``` This worked fine on wednesday. It runs in a docker container with `Python 3.8.10` and `protobuf==3.18.0`,`grpcio==1.40.0`, `grpcio-tools==1.40.0`. Today I updated `MAC OS Big Sur to 11.6` and after finishing some extra features on the code I see that it returns: ``` E0930 21:12:04.108551900 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. E0930 21:12:04.194319000 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. E0930 21:12:04.286163700 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. Traceback (most recent call last): File "", line 302, in <module> projects = liveops_stub.ListProjects(request=request) File "/home/airflow/.local/lib/python3.8/site-packages/grpc/_channel.py", line 946, in __call__ return _end_unary_response_blocking(state, call, False, None) File "/home/airflow/.local/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses" debug_error_string = "{"created":"@1633036324.286560700","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3186,"referenced_errors":[{"created":"@1633036324.286548700","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":146,"grpc_status":14}]}" > ``` Seems to be something related to SSL Certificates. If I check `/etc/ssl/certs` folder it is empty, so could be that SSL SO certificate has been erased? How can I fix it?
2021/09/30
[ "https://Stackoverflow.com/questions/69398944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5556466/" ]
You can do that with the method [String#[]](https://ruby-doc.org/core-2.7.0/String.html#method-i-5B-5D) with an argument that is a regular expression. ``` r = /.*?\.(?:rb|com|net|br)(?!\.br)/ 'giovanna.macedo@lojas100.com.br-215000695716b.ct.domain.com.br'[r] #=> "giovanna.macedo@lojas100.com.br" 'alvaro-neves@stockshop.com-215000695716b.ct.domain.com.br'[r] #=> "alvaro-neves@stockshop.com" 'filiallojas123@filiallojas.net-215000695716b.ct.domain.com.br'[r] #=> "filiallojas123@filiallojas.net" ``` The regular expression reads as follows: "Match zero or more characters non-greedily (`?`), follow by a period, followed by `'rb'` or `'com'` or `'net'` or `'br'`, which is not followed by `.br`. `(?!\.br)` is a *negative lookahead*. Alternatively the regular expression can be written in *free-spacing mode* to make it self-documenting: ``` r = / .*? # match zero or more characters non-greedily \. # match '.' (?: # begin a non-capture group rb # match 'rb' | # or com # match 'com' | # or net # match 'net' | # or br # match 'br' ) # end non-capture group (?! # begin a negative lookahead \.br # match '.br' ) # end negative lookahead /x # invoke free-spacing regex definition mode ```
Use the String#delete\_suffix Method
------------------------------------

This was tested with Ruby 3.0.2. Your mileage may vary with other versions that don't support [String#delete\_suffix](https://ruby-doc.org/core-3.0.2/String.html#method-i-delete_suffix) or its [related bang method](https://ruby-doc.org/core-3.0.2/String.html#method-i-delete_suffix-21). Since you're trying to remove the exact same suffix from all your emails, you can simply invoke #delete\_suffix! on each of your strings. For example:

```rb
common_suffix = "-215000695716b.ct.domain.com.br".freeze
emails = [email1, email2, email3]
emails.each { _1.delete_suffix! common_suffix }
```

You can then validate your results with:

```rb
emails
#=> ["giovanna.macedo@lojas100.com.br", "alvaro-neves@stockshop.com", "filiallojas123@filiallojas.net"]

email1 #=> "giovanna.macedo@lojas100.com.br"
email2 #=> "alvaro-neves@stockshop.com"
email3 #=> "filiallojas123@filiallojas.net"
```

You can see that the array has replaced each value, or you can call each of the array's variables individually if you want to check that the strings have actually been modified in place.

### String Methods are Usually Faster, But Your Mileage May Vary

Since you're dealing with String objects instead of regular expressions, this solution is likely to be faster at scale, although I didn't bother to benchmark all solutions to compare. If you care about performance, you can measure larger samples using [IRB's new `measure` command](https://www.ruby-lang.org/en/news/2020/12/25/ruby-3-0-0-released/), it took only 0.000062s to process the strings this way on my system, and String methods generally work faster than regular expressions at large scales. You'll need to do more extensive [benchmarking](https://ruby-doc.org/stdlib-3.0.2/libdoc/benchmark/rdoc/Benchmark.html/rdoc/Benchmark.html#method-c-measure) if performance is a core concern, though.

### Making the Call Shorter

You can even make the call shorter if you want. I left it a bit verbose above so you could see what the intent was at each step, but you can trim this to a single one-liner with the following block:

```rb
# one method chain, just wrapped to prevent scrolling
[email1, email2, email3].
  map { _1.delete_suffix! "-215000695716b.ct.domain.com.br" }
```

Caveats
-------

### You Need Fixed-String Suffixes

The main caveat here is that this solution will only work when you know the suffix (or set of suffixes) you want to remove. If you can't rely on the suffixes to be fixed, then you'll likely need to pursue a regex solution in one way or another, even if it's just to collect a set of suffixes.

### Dealing with Frozen Strings

Another caveat is that if you've created your code with frozen string literals, you'll need to adjust your code to avoid attempting in-place changes to frozen strings. There's more than one way to do this, but a simple destructuring assignment is probably the easiest to follow given your small code sample. Consider the following:

```rb
# assume that the strings in email1 etc. are frozen, but the array
# itself is not; you can't change the strings in-place, but you can
# re-assign new strings to the same variables or the same array
emails = [email1, email2, email3]
email1, email2, email3 = emails.map { _1.delete_suffix "-215000695716b.ct.domain.com.br" }
```

There are certainly other ways to work around frozen strings, but the point is that while the now-common use of the `# frozen_string_literal: true` magic comment can improve VM performance or memory usage in large programs, it isn't always the best option for string-mangling code. Just keep that in mind, as tools like RuboCop love to enforce frozen strings, and not everyone stops to consider the consequences of such generic advice to the given problem domain.
69,398,944
I have this easy code to connect to download some data using `GRPC` ``` creds = grpc.ssl_channel_credentials() channel = grpc.secure_channel(f'{HOST}:{PORT}', credentials=creds) stub = liveops_pb2_grpc.LiveOpsStub(channel=channel) request = project_pb2.ListProjectsRequest(organization=ORGANIZATION) projects = stub.ListProjects(request=request) print(projects) ``` This worked fine on wednesday. It runs in a docker container with `Python 3.8.10` and `protobuf==3.18.0`,`grpcio==1.40.0`, `grpcio-tools==1.40.0`. Today I updated `MAC OS Big Sur to 11.6` and after finishing some extra features on the code I see that it returns: ``` E0930 21:12:04.108551900 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. E0930 21:12:04.194319000 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. E0930 21:12:04.286163700 1 ssl_transport_security.cc:1468] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. Traceback (most recent call last): File "", line 302, in <module> projects = liveops_stub.ListProjects(request=request) File "/home/airflow/.local/lib/python3.8/site-packages/grpc/_channel.py", line 946, in __call__ return _end_unary_response_blocking(state, call, False, None) File "/home/airflow/.local/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses" debug_error_string = "{"created":"@1633036324.286560700","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3186,"referenced_errors":[{"created":"@1633036324.286548700","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":146,"grpc_status":14}]}" > ``` Seems to be something related to SSL Certificates. If I check `/etc/ssl/certs` folder it is empty, so could be that SSL SO certificate has been erased? How can I fix it?
2021/09/30
[ "https://Stackoverflow.com/questions/69398944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5556466/" ]
You can do that with the method [String#[]](https://ruby-doc.org/core-2.7.0/String.html#method-i-5B-5D) with an argument that is a regular expression. ``` r = /.*?\.(?:rb|com|net|br)(?!\.br)/ 'giovanna.macedo@lojas100.com.br-215000695716b.ct.domain.com.br'[r] #=> "giovanna.macedo@lojas100.com.br" 'alvaro-neves@stockshop.com-215000695716b.ct.domain.com.br'[r] #=> "alvaro-neves@stockshop.com" 'filiallojas123@filiallojas.net-215000695716b.ct.domain.com.br'[r] #=> "filiallojas123@filiallojas.net" ``` The regular expression reads as follows: "Match zero or more characters non-greedily (`?`), follow by a period, followed by `'rb'` or `'com'` or `'net'` or `'br'`, which is not followed by `.br`. `(?!\.br)` is a *negative lookahead*. Alternatively the regular expression can be written in *free-spacing mode* to make it self-documenting: ``` r = / .*? # match zero or more characters non-greedily \. # match '.' (?: # begin a non-capture group rb # match 'rb' | # or com # match 'com' | # or net # match 'net' | # or br # match 'br' ) # end non-capture group (?! # begin a negative lookahead \.br # match '.br' ) # end negative lookahead /x # invoke free-spacing regex definition mode ```
I would just use the `chomp(string)` method like so: ``` mask = "-215000695716b.ct.domain.com.br" email1.chomp(mask) #=> "giovanna.macedo@lojas100.com.br" email2.chomp(mask) #=> "alvaro-neves@stockshop.com" email3.chomp(mask) #=> "filiallojas123@filiallojas.net" ```
50,598,438
I am tracing a python script like this: ``` python -m trace --ignore-dir=$HOME/lib64:$HOME/lib:/usr -t bin/myscript.py ``` Some lines look like this: ``` --- modulename: __init__, funcname: getEffectiveLevel __init__.py(1325): logger = self __init__.py(1326): while logger: __init__.py(1327): if logger.level: __init__.py(1329): logger = logger.parent __init__.py(1326): while logger: __init__.py(1327): if logger.level: __init__.py(1328): return logger.level ``` Unfortunately I have no clue where this code comes from. Is there a way to see the file name (including the path) of `getEffectiveLevel()`? Of course I could search through all installed python code for a method with this name, but I would like to see the file path immediately. In this context **Python 2.7** gets used. > > I am not fixed to the standard library "trace". I could use a different library, if it provides the needed feature. > > >
2018/05/30
[ "https://Stackoverflow.com/questions/50598438", "https://Stackoverflow.com", "https://Stackoverflow.com/users/633961/" ]
if the purpose is finding the full path, then check [hunter](https://python-hunter.readthedocs.io/en/latest/readme.html#id1) project, it even has support for [query-style](https://python-hunter.readthedocs.io/en/latest/cookbook.html) tracing. ``` # a modified example from docs # do check the documentation it is easy to start with from hunter import trace, Q, Debugger from pdb import Pdb trace( # drop into a Pdb session on``myscript.mainMethod()`` call Q(module="myscript", function="getEffectiveLevel", kind="call", action=Debugger(klass=Pdb))) import myscript myscript.mainMethod() ```
Unfortunately there is no flag/command-line option to enable that. So the immediate (and probably correct) answer is: **No**. If you're okay with messing with the built-in libraries you can easily make it possible by changing the line that reads: ``` print (" --- modulename: %s, funcname: %s" % (modulename, code.co_name)) ``` with: ``` print ("filename: %s, modulename: %s, funcname: %s" % (filename, modulename, code.co_name)) ``` in the `trace.py` file of your Python installation. You can find the path to that file with: ``` >>> import trace >>> trace.__file__ ``` But I really don't want to suggest that modifying libraries that way is something I would recommend. Before you do this (if you decide you really want to do that) create a backup of the file and if possible replace it again after you're done. A better way (although still messy) is to copy the above mentioned `trace.py` file (for example into the current working directory) and modify the copied file. Then you can run the modified version: ``` python path_to_modified_trace_file your_options_for_trace ``` Without the `-m` option and with the modified path, but otherwise identical to your original.
50,598,438
I am tracing a python script like this: ``` python -m trace --ignore-dir=$HOME/lib64:$HOME/lib:/usr -t bin/myscript.py ``` Some lines look like this: ``` --- modulename: __init__, funcname: getEffectiveLevel __init__.py(1325): logger = self __init__.py(1326): while logger: __init__.py(1327): if logger.level: __init__.py(1329): logger = logger.parent __init__.py(1326): while logger: __init__.py(1327): if logger.level: __init__.py(1328): return logger.level ``` Unfortunately I have no clue where this code comes from. Is there a way to see the file name (including the path) of `getEffectiveLevel()`? Of course I could search through all installed python code for a method with this name, but I would like to see the file path immediately. In this context **Python 2.7** gets used. > > I am not fixed to the standard library "trace". I could use a different library, if it provides the needed feature. > > >
2018/05/30
[ "https://Stackoverflow.com/questions/50598438", "https://Stackoverflow.com", "https://Stackoverflow.com/users/633961/" ]
if the purpose is finding the full path, then check [hunter](https://python-hunter.readthedocs.io/en/latest/readme.html#id1) project, it even has support for [query-style](https://python-hunter.readthedocs.io/en/latest/cookbook.html) tracing. ``` # a modified example from docs # do check the documentation it is easy to start with from hunter import trace, Q, Debugger from pdb import Pdb trace( # drop into a Pdb session on``myscript.mainMethod()`` call Q(module="myscript", function="getEffectiveLevel", kind="call", action=Debugger(klass=Pdb))) import myscript myscript.mainMethod() ```
You can create a `trace2.py` file with below content ``` from trace import Trace, main original_globaltrace_lt = Trace.globaltrace_lt def patch_Trace_globaltrace_lt(self, frame, why, arg): value = original_globaltrace_lt(self, frame, why, arg) if value: filename = frame.f_globals.get('__file__', "") print("The code is in file - %s" % (filename)) return value Trace.globaltrace_lt = patch_Trace_globaltrace_lt if __name__ == "__main__": main() ``` and then run it like below ``` python -m trace2 --ignore-dir=$HOME/lib64:$HOME/lib:/usr -t bin/myscript.py ``` The output is verbose but you can alter the function `patch_Trace_globaltrace_lt` as per your needs
69,271,213
There are several ways in python to generate a greyscale image from an RGB version. One of those is just to read an image as greyscale using OpenCV. ``` im = cv2.imread(img, 0) ``` While `0` equals `cv2.IMREAD_GRAYSCALE` There are many different algorithms to handle this operation [well explained here.](https://www.dynamsoft.com/blog/insights/image-processing/image-processing-101-color-space-conversion/) I'm wondering how OpenCV handles this task and which algorithm stands behind `cv2.IMREAD_GRAYSCALE` but could neither find any documentation nor reference. Does someone have any idea? A paper reference would be great. Thanks in advance p.s. I'm working with jpg and png.
2021/09/21
[ "https://Stackoverflow.com/questions/69271213", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5152497/" ]
I think basically @Dan Mašek already answered the question in the comment section. I will try to summarize the findings for jpg files as an answer and I am glad about any improvements. CMYK to Grayscale ----------------- If you want to convert your jpg file from CMYK we have to look into [grfmt\_jpeg.cpp](https://github.com/opencv/opencv/blob/master/modules/imgcodecs/src/grfmt_jpeg.cpp#L433-L458). There exist other files like this for different image codes. Depending on the numbers of color channels `cinfo` is assigned. For CMYK images the `cinfo` is set to `4` and the function on [line 504](https://github.com/opencv/opencv/blob/master/modules/imgcodecs/src/grfmt_jpeg.cpp#L504) `icvCvt_CMYK2Gray_8u_C4C1R` is called. This function can be found in [utils.cpp](https://github.com/opencv/opencv/blob/2558ab3de7cdd57c91935eb64755afb2afd05f00/modules/imgcodecs/src/utils.cpp): ``` void icvCvt_CMYK2Gray_8u_C4C1R( const uchar* cmyk, int cmyk_step, uchar* gray, int gray_step, Size size ) { int i; for( ; size.height--; ) { for( i = 0; i < size.width; i++, cmyk += 4 ) { int c = cmyk[0], m = cmyk[1], y = cmyk[2], k = cmyk[3]; c = k - ((255 - c)*k>>8); m = k - ((255 - m)*k>>8); y = k - ((255 - y)*k>>8); int t = descale( y*cB + m*cG + c*cR, SCALE ); gray[i] = (uchar)t; } gray += gray_step; cmyk += cmyk_step - size.width*4; } } ``` and uses fixed variables for the conversion: ``` #define SCALE 14 #define cR (int)(0.299*(1 << SCALE) + 0.5) #define cG (int)(0.587*(1 << SCALE) + 0.5) #define cB ((1 << SCALE) - cR - cG) ``` RGB/BGR to Grayscale -------------------- If your image only contains three color channels it seems that [libjpeg](https://github.com/opencv/opencv/blob/master/3rdparty/libjpeg/jdcolor.c) is used for the conversion. This can be seen in [line 717](https://github.com/opencv/opencv/blob/master/modules/imgcodecs/src/grfmt_jpeg.cpp#L717). (I am not 100% sure if this is the correct line). In [jdcolor.c](https://github.com/opencv/opencv/blob/master/3rdparty/libjpeg/jdcolor.c) it can be seen that there a definitions and standards for converting color channels starting from [line 41](https://github.com/opencv/opencv/blob/master/3rdparty/libjpeg/jdcolor.c#L41). The most important part for your specific question is: ``` the conversion equations to be implemented are therefore R = Y + 1.402 * Cr G = Y - 0.344136286 * Cb - 0.714136286 * Cr B = Y + 1.772 * Cb Y = 0.299 * R + 0.587 * G + 0.114 * B ``` which relate to standards of the [ITU-R](https://en.wikipedia.org/wiki/ITU-R) and are used in many other sources I found. More detailed information can be found [here](https://en.wikipedia.org/wiki/Luma_(video)) and [here](https://stackoverflow.com/questions/687261/converting-rgb-to-grayscale-intensity). The second source relating to a StackOverflow question makes it clear that the conversion does not only depend on the pure RGB values but also on other parameters as gamma value. The standard OpenCV uses seems to be [Rec. 601](https://en.wikipedia.org/wiki/Rec._601).
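To see those Rec. 601 weights in action from Python, a small sketch comparing OpenCV's conversion with a manual weighted sum (the file name is a placeholder; note that OpenCV loads channels in BGR order):

```
# Compare cv2's grayscale conversion with the Rec. 601 luma formula.
import cv2
import numpy as np

img = cv2.imread("some_image.jpg")            # BGR image, placeholder name
gray_cv = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Y = 0.299 R + 0.587 G + 0.114 B, with channels taken in BGR order.
weights = np.array([0.114, 0.587, 0.299])
gray_manual = (img @ weights).round().astype(np.uint8)

# Small differences are expected: codec-internal conversion and rounding
# need not match cvtColor exactly, as the documentation notes.
print(np.abs(gray_cv.astype(int) - gray_manual.astype(int)).max())
```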
in OpenCV [documentation](https://github.com/opencv/opencv/blob/master/modules/imgcodecs/include/opencv2/imgcodecs.hpp) you can find: ``` IMREAD_GRAYSCALE = 0, //!< If set, always convert image to the single channel grayscale image (codec internal conversion). ``` Also > > When using IMREAD\_GRAYSCALE, the codec's internal grayscale conversion > will be used, if available. > Results may differ to the output of cvtColor() > > > So it depends on codec's internal grayscale conversion. **More Info:** from [OpenCV documentation](https://docs.opencv.org/3.4.13/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56) > > When using IMREAD\_GRAYSCALE, the codec's internal grayscale conversion will be used, if available. Results may differ to the output of cvtColor() > On Microsoft Windows\* OS and MacOSX\*, the codecs shipped with an OpenCV image (libjpeg, libpng, libtiff, and libjasper) are used by default. So, OpenCV can always read JPEGs, PNGs, and TIFFs. On MacOSX, there is also an option to use native MacOSX image readers. But beware that currently these native image loaders give images with different pixel values because of the color management embedded into MacOSX. > On Linux\*, BSD flavors and other Unix-like open-source operating systems, OpenCV looks for codecs supplied with an OS image. Install the relevant packages (do not forget the development files, for example, "libjpeg-dev", in Debian\* and Ubuntu\*) to get the codec support or turn on the OPENCV\_BUILD\_3RDPARTY\_LIBS flag in CMake. > > >
30,196,585
I've been struggling for hours on a problem that is making me insane. I installed Python 2.7 with Cygwin and added Scipy, Numpy, Matplotlib (1.4.3) and Ipython. When I decided to run `ipython --pylab` I get the following error: ``` /usr/lib/python2.7/site-packages/matplotlib/transforms.py in <module>() 37 import numpy as np 38 from numpy import ma ----> 39 from matplotlib._path import (affine_transform, count_bboxes_overlapping_bbox, 40 update_path_extents) 41 from numpy.linalg import inv ImportError: No module named _path ``` I spent hours on the internet, looking for a solution but nothing worked. I did notice that I am missing \_path.so files in the matplotlib directory that everybody seems to have. Instead, I have two files: path.py and path.pyc. But I installed matplotlib directly from the official website using `pip install` and reinstalling it didn't make any difference. Does anyone have a little clue on what is going wrong? I would be incredibly grateful !!!
2015/05/12
[ "https://Stackoverflow.com/questions/30196585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4892337/" ]
For others having this problem: in my case, the solution was simple. The problem was caused by having the wrong matplotlib installed for your Python interpreter, which creates an error when finding the compiled matplotlib `_path` module. In my case, I had installed matplotlib for a different version of Python. Simply update matplotlib so that it is compatible with your current version of Python:

```
pip install --upgrade matplotlib
```

As for the original post, I am unsure of what caused those bigger issues. Hope this tip helps anyone else stumbling upon this issue!
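To double-check which matplotlib your interpreter is actually importing before and after the upgrade (a small diagnostic snippet, assuming `import matplotlib` itself succeeds):

```
import sys
import matplotlib

print(sys.executable)          # the Python that is running
print(matplotlib.__version__)  # the matplotlib version it picked up
print(matplotlib.__file__)     # where that matplotlib installation lives
```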
I doubt that most of you brought here by Google have the problem I had, but just in case: I got the above "ImportError: No module named \_path" (on Fedora 17) because I was trying to make use of matplotlib by just setting sys.path to point to where I had built the latest version (1.5.1 at the time). Don't do that. Once I ran "python setup.py install" (as root) to do a proper install (and got rid of my sys.path hack), the error was fixed.
30,196,585
I've been struggling for hours on a problem that is making me insane. I installed Python 2.7 with Cygwin and added Scipy, Numpy, Matplotlib (1.4.3) and Ipython. When I decided to run `ipython --pylab` I get the following error: ``` /usr/lib/python2.7/site-packages/matplotlib/transforms.py in <module>() 37 import numpy as np 38 from numpy import ma ----> 39 from matplotlib._path import (affine_transform, count_bboxes_overlapping_bbox, 40 update_path_extents) 41 from numpy.linalg import inv ImportError: No module named _path ``` I spent hours on the internet, looking for a solution but nothing worked. I did notice that I am missing \_path.so files in the matplotlib directory that everybody seems to have. Instead, I have two files: path.py and path.pyc. But I installed matplotlib directly from the official website using `pip install` and reinstalling it didn't make any difference. Does anyone have a little clue on what is going wrong? I would be incredibly grateful !!!
2015/05/12
[ "https://Stackoverflow.com/questions/30196585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4892337/" ]
For others having this problem: in my case, the solution was simple. The problem was caused by having the wrong matplotlib installed for your Python interpreter, which creates an error when finding the compiled matplotlib `_path` module. In my case, I had installed matplotlib for a different version of Python. Simply update matplotlib so that it is compatible with your current version of Python:

```
pip install --upgrade matplotlib
```

As for the original post, I am unsure of what caused those bigger issues. Hope this tip helps anyone else stumbling upon this issue!
The matplotlib package requires multiple dependencies (see them [here](https://matplotlib.org/users/installing.html)). For me, the missing dependencies included pyparsing and kiwisolver, but your results may vary. Before you try any of the other suggestions (reinstalling Python or the library, etc.), make sure you have installed (`pip install ...`) all the libraries in that list.
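One way to see which of those dependencies are actually importable before reinstalling anything (a rough check; the module names below are taken from the install guide linked above and may differ between matplotlib versions):

```
# probe matplotlib's common runtime dependencies one by one
for name in ("numpy", "six", "dateutil", "pyparsing", "cycler", "kiwisolver"):
    try:
        __import__(name)
        print("ok:     ", name)
    except ImportError:
        print("missing:", name)
```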
46,368,931
Here is my main.py: ``` #!/usr/bin/env python3 from kivy.app import App from kivy.lang import Builder from kivy.metrics import dp from kivy.properties import ObjectProperty from kivy.uix.image import Image from kivy.uix.widget import Widget from kivy.uix.boxlayout import BoxLayout from kivymd.bottomsheet import MDListBottomSheet, MDGridBottomSheet from kivymd.button import MDIconButton from kivymd.date_picker import MDDatePicker from kivymd.dialog import MDDialog from kivymd.label import MDLabel from kivymd.list import ILeftBody, ILeftBodyTouch, IRightBodyTouch, BaseListItem from kivymd.material_resources import DEVICE_TYPE from kivymd.navigationdrawer import MDNavigationDrawer, NavigationDrawerHeaderBase from kivymd.selectioncontrols import MDCheckbox from kivymd.snackbar import Snackbar from kivymd.theming import ThemeManager from kivymd.time_picker import MDTimePicker Builder.load_file('main.kv') class BoxTopLevel(BoxLayout): def on_release_next_button(self): # self.ids['sm'].current="mainscreen" if not self.ids['username'].text: Snackbar(text="Please enter a username first").show() return 1 elif ' ' in self.ids['username'].text: Snackbar(text="Invalid username").show() return 1 elif '\\' in self.ids['username'].text: Snackbar(text="Invalid username").show() return 1 # elif '\\' in self.ids['username'].text: # Snackbar(text="No slashes please").show() if self.check_if_user_exists(): Snackbar(text="Welcome %s!" % (self.ids['username'].text)).show() self.ids['sm'].current='mainscreen' return 0 def check_if_user_exists(self): return True def set_previous_date(self, date_obj): self.previous_date = date_obj # self.root.ids.date_picker_label.text = str(date_obj) def show_date_picker(self): self.date_dialog = MDDatePicker(self.set_previous_date) self.date_dialog.open() def show_time_picker(self): self.time_dialog = MDTimePicker() self.time_dialog.open() def show_send_error_dialog(self): content = MDLabel(font_style='Body1', theme_text_color='Secondary', text="This is a dialog with a title and some text. 
That's pretty awesome right!", size_hint_y=None, valign='top') content.bind(texture_size=content.setter('size')) self.dialog = MDDialog(title="This is a test dialog", content=content, size_hint=(.8, None), height=dp(200), auto_dismiss=False) self.dialog.add_action_button("Dismiss", action=lambda *x: self.dialog.dismiss()) self.dialog.open() def stop_record(self): print("[INFO] Recording Stopped") Snackbar(text="Recording stopped").show() self.stop_record_button = self.ids['stop_record_button'] self.stop_record_button.disabled = True self.ids['record_button'].disabled = False rec_file_path = '' def record(self): print("[INFO] Recording") Snackbar(text="Recording started").show() self.record_button = self.ids['record_button'] self.record_button.disabled = True self.ids['stop_record_button'].disabled = False def export(self, username, date, time, *args, **kwargs): username, date, time = str(username), str(date), str(time) file_path = '/home/cocoa/KES/' file_name = username+'_'+date+'_'+time+'.csv' csv_string = username+','+date+','+time for arg in args: if type(arg) == str: csv_string += ','+arg f = open(file_path+file_name, 'w') f.write(csv_string+'\n') f.close return True, file_path+file_name def upload(self, csv_file_path, recording_file_path): print(csv_file_path, recording_file_path) def submit(self): try: date = str(self.date_dialog.day)+'-'+str(self.date_dialog.month)+'-'+str(self.date_dialog.year) print(date) if self.export(self.ids['username'].text, date, self.time_dialog.time, 'answer1', 'answer2')[0]: Snackbar(text="Woo hoo! It worked!").show() else: self.show_send_error_dialog() except: Snackbar(text="Please enter the date and time and try again").show() class SamplerApp(App): theme_cls = ThemeManager() def build(self, *args, **kwargs): return BoxTopLevel() SamplerApp().run() ``` And here is my main.kv: ``` #:import Toolbar kivymd.toolbar.Toolbar #:import ThemeManager kivymd.theming.ThemeManager #:import MDNavigationDrawer kivymd.navigationdrawer.MDNavigationDrawer #:import NavigationLayout kivymd.navigationdrawer.NavigationLayout #:import NavigationDrawerDivider kivymd.navigationdrawer.NavigationDrawerDivider #:import NavigationDrawerToolbar kivymd.navigationdrawer.NavigationDrawerToolbar #:import NavigationDrawerSubheader kivymd.navigationdrawer.NavigationDrawerSubheader #:import MDCheckbox kivymd.selectioncontrols.MDCheckbox #:import MDSwitch kivymd.selectioncontrols.MDSwitch #:import MDList kivymd.list.MDList #:import OneLineListItem kivymd.list.OneLineListItem #:import TwoLineListItem kivymd.list.TwoLineListItem #:import ThreeLineListItem kivymd.list.ThreeLineListItem #:import OneLineAvatarListItem kivymd.list.OneLineAvatarListItem #:import OneLineIconListItem kivymd.list.OneLineIconListItem #:import OneLineAvatarIconListItem kivymd.list.OneLineAvatarIconListItem #:import MDTextField kivymd.textfields.MDTextField #:import MDSpinner kivymd.spinner.MDSpinner #:import MDCard kivymd.card.MDCard #:import MDSeparator kivymd.card.MDSeparator #:import MDDropdownMenu kivymd.menu.MDDropdownMenu #:import get_color_from_hex kivy.utils.get_color_from_hex #:import colors kivymd.color_definitions.colors #:import SmartTile kivymd.grid.SmartTile #:import MDSlider kivymd.slider.MDSlider #:import MDTabbedPanel kivymd.tabs.MDTabbedPanel #:import MDTab kivymd.tabs.MDTab #:import MDProgressBar kivymd.progressbar.MDProgressBar #:import MDAccordion kivymd.accordion.MDAccordion #:import MDAccordionItem kivymd.accordion.MDAccordionItem #:import MDAccordionSubItem kivymd.accordion.MDAccordionSubItem 
#:import MDThemePicker kivymd.theme_picker.MDThemePicker #:import MDBottomNavigation kivymd.tabs.MDBottomNavigation #:import MDBottomNavigationItem kivymd.tabs.MDBottomNavigationItem <BoxTopLevel>: orientation: 'vertical' Toolbar: id: toolbar title: 'Sampler' md_bg_color: app.theme_cls.primary_color background_palette: 'Primary' background_hue: '500' #left_action_items: [['menu', lambda x: app.root.toggle_nav_drawer()]] #right_action_items: [['dots-vertical', lambda x: app.root.toggle_nav_drawer()]] ScreenManager: id: sm Screen: name: "loginscreen" BoxLayout: spacing: 20 padding: 20 orientation: 'vertical' Widget: BoxLayout: orientation: 'vertical' padding: 10 spacing: 10 MDTextField: id: username hint_text: "Please enter your unique username" MDCard: size_hint_x: 1 BoxLayout: padding: 10 spacing: 10 orientation: 'vertical' MDLabel: text: 'Please don\'t share this username' theme_text_color: 'Secondary' font_style: "Title" size_hint_y: None height: dp(36) MDSeparator: height: dp(1) MDLabel: text: "This application was developed in a hurry, So I didn't have the time to implement a proper login system. This system is temporary And I will impliment proper logins at later stages of development" theme_text_color: "Primary" MDRaisedButton: size_hint_x: 1 text: "Next ->" on_release: root.on_release_next_button() Screen: name: "mainscreen" MDBottomNavigation: id: bottom_navigation_demo MDBottomNavigationItem: name: 'record_page' text: "Record" icon: "microphone" BoxLayout: orientation: 'vertical' padding: 10 spacing: 10 MDCard: size_hint: 1, 0.2 BoxLayout: padding: 10 spacing: 10 orientation: 'vertical' MDLabel: text: 'Hello!' theme_text_color: 'Secondary' font_style: "Title" size_hint_y: None height: dp(36) MDSeparator: height: dp(1) MDLabel: text: "Since the buzzer went off, now is the time when you freely record your thought through this app. I want you to be as free as possible, without having to worry about whether or not anyone else will find any meaning in what you're saying. You can go on for as long as you like, but please try and go on for three minutes. You don't have to be talking throughout, it's okay to fill the time with silence if you can't freely associate in that moment. There isn't any right or wrong here, it's not possible for there to be any right or wrong here. Do log in your stats before you start here:" theme_text_color: "Primary" Widget: size_hint_y: 0.02 BoxLayout: padding: 10 spacing: 10 MDRaisedButton: id: record_button text: "Start Recording" on_press: root.record() MDRaisedButton: id: stop_record_button text: "Stop Recording" on_press: root.stop_record() disabled: True MDBottomNavigationItem: name: 'questions' text: "Questions" icon: "help" GridLayout: rows: 7 cols: 1 padding: dp(48) spacing: 10 MDTextField: id: location multiline: True hint_text: "Where are you?" MDTextField: id: task multiline: True hint_text: "What were you doing?" MDTextField: id: person_with multiline: True hint_text: "Who are you with" MDTextField: id: special_circumstances multiline: True hint_text: "Are there any special circumstances? (Inebriated, very sad, something big happened)" MDRaisedButton: id: date size_hint: None, None size: 3 * dp(48), dp(48) on_press: root.show_date_picker() text: "What date is it?" MDRaisedButton: text: "What time is it?" size_hint: None, None size: 3 * dp(48), dp(48) on_press: root.show_time_picker() MDRaisedButton: id: submit_button disabled: False text: "Submit!" 
size_hint: None, None size: 3 * dp(48), dp(48) on_press: root.submit() MDBottomNavigationItem: name: 'info' text: "Info" icon: "information" GridLayout: spacing: 20 padding: 20 rows: 4 cols: 1 MDRaisedButton: size_hint_x: 1 MDRaisedButton: size_hint_x: 1 MDRaisedButton: size_hint_x: 1 MDRaisedButton: size_hint_x: 1 ``` I copied a lot of the kv code from the kitchen sink that kivymd provides. I found another answer on Stack Overflow but didn't quite understand what was causing the error. And since my code seems worse, I'd really appreciate it if someone could explain what exactly is causing the error and why. In my python file, I've just use iteration once in the `export` function. Also, here is the output of the code: ``` [Command: /usr/bin/env -u /home/cocoa/KES/main.py] [INFO ] [Logger ] Record log in /home/cocoa/.kivy/logs/kivy_17-09-22_83.txt [INFO ] [Kivy ] v1.10.0 [INFO ] [Python ] v3.6.2 (default, Jul 20 2017, 03:52:27) [GCC 7.1.1 20170630] [INFO ] [Factory ] 194 symbols loaded [INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_gif (img_pil, img_ffpyplayer ignored) [INFO ] [KivyMD ] KivyMD version: 0.1.2 [INFO ] [Text ] Provider: sdl2 [INFO ] [OSC ] using <multiprocessing> for socket [INFO ] [Window ] Provider: sdl2(['window_egl_rpi'] ignored) [INFO ] [GL ] Using the "OpenGL" graphics system [INFO ] [GL ] Backend used <gl> [INFO ] [GL ] OpenGL version <b'4.5.0 NVIDIA 384.69'> [INFO ] [GL ] OpenGL vendor <b'NVIDIA Corporation'> [INFO ] [GL ] OpenGL renderer <b'GeForce GT 705/PCIe/SSE2'> [INFO ] [GL ] OpenGL parsed version: 4, 5 [INFO ] [GL ] Shading version <b'4.50 NVIDIA'> [INFO ] [GL ] Texture max size <16384> [INFO ] [GL ] Texture max units <32> [INFO ] [Window ] auto add sdl2 input provider [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked [INFO ] [GL ] NPOT texture support is available [INFO ] [Clipboard ] Provider: sdl2(['clipboard_dbusklipper', 'clipboard_gtk3', 'clipboard_xclip', 'clipboard_xsel'] ignored) [CRITICAL] [Cutbuffer ] Unable to find any valuable Cutbuffer provider. xclip - FileNotFoundError: [Errno 2] No such file or directory: 'xclip' File "/usr/lib/python3.6/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/usr/lib/python3.6/site-packages/kivy/core/clipboard/clipboard_xclip.py", line 17, in <module> p = subprocess.Popen(['xclip', '-version'], stdout=subprocess.PIPE) File "/usr/lib/python3.6/subprocess.py", line 707, in __init__ restore_signals, start_new_session) File "/usr/lib/python3.6/subprocess.py", line 1333, in _execute_child raise child_exception_type(errno_num, err_msg) xsel - FileNotFoundError: [Errno 2] No such file or directory: 'xsel' File "/usr/lib/python3.6/site-packages/kivy/core/__init__.py", line 59, in core_select_lib fromlist=[modulename], level=0) File "/usr/lib/python3.6/site-packages/kivy/core/clipboard/clipboard_xsel.py", line 16, in <module> p = subprocess.Popen(['xsel'], stdout=subprocess.PIPE) File "/usr/lib/python3.6/subprocess.py", line 707, in __init__ restore_signals, start_new_session) File "/usr/lib/python3.6/subprocess.py", line 1333, in _execute_child raise child_exception_type(errno_num, err_msg) [WARNING] [MDBottomNavigation] 50.0dp is less than the minimum size of 80dp for a MDBottomNavigationItem. We must now expand to 168dp. [WARNING] [MDBottomNavigation] 33.333333333333336dp is less than the minimum size of 80dp for a MDBottomNavigationItem. We must now expand to 168dp. 
[INFO ] [Base ] Start application main loop [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [CRITICAL] [Clock ] Warning, too much iteration done before the next frame. Check your code, or increase the Clock.max_iteration attribute [INFO ] [Base ] Leaving application in progress... [Finished in 7.724s] ```
2017/09/22
[ "https://Stackoverflow.com/questions/46368931", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4085793/" ]
This works for me: do not change the `MDRaisedButton` `size_hint` to 1, since that is what raised this clock error. My suggestion is not to change the `size_hint` of any KivyMD button (it is `None` by default); rather, set an explicit `size` in dp.
This bug still persists in `MDRaisedButton` of KivyMD. A simple workaround to solve it is using `size_hint` instead of `size_hint_x`. For example in your case, replace ``` MDRaisedButton: size_hint_x: 1 ``` by ``` MDRaisedButton: size_hint: 1., None ```
14,206,637
I am really new to the use of Python and the associated packages that can be installed. As a biologist I am looking for a lot of new packages that would help me model species systems, ecological change etc.. and after a lot of "Google-ing" I came across scikit-learn. However, I am having trouble installing it. And I will take this moment now to apologise for the length of this post. I am using 64bit Python 3.3 and have the associated NumPy (MKL 1.7.0) and SciPy. I installed distribute which worked fine and allows me to use easy\_install. So to install scikit-learn, I tried using the cmd prompt (Windows) run in administrator mode, and then also through Python command line. I placed the downloaded and extracted tar.gz file in the Lib\site-packages folder. When I run the command `easy_install scikit-learn` in cmd prompt. Then this is the following output: ``` C:\Python33\Lib\site-packages>easy_install -U scikit-learn Searching for scikit-learn Reading http://pypi.python.org/simple/scikit-learn/ Reading http://scikit-learn.org Reading http://sourceforge.net/projects/scikit-learn/files/ Reading http://scikit-learn.sourceforge.net Best match: scikit-learn 0.12.1 Downloading http://pypi.python.org/packages/source/s/scikit-learn/scikit-learn-0 .12.1.tar.gz#md5=7e8b3434f9e8198b82dc3774f8bc9394 Processing scikit-learn-0.12.1.tar.gz Writing c:\users\nuvraj~1\appdata\local\temp\easy_install-kvr2q0\scikit-learn-0. 12.1\setup.cfg Running scikit-learn-0.12.1\setup.py -q bdist_egg --dist-dir c:\users\nuvraj~1\a ppdata\local\temp\easy_install-kvr2q0\scikit-learn-0.12.1\egg-dist-tmp-l618ie Traceback (most recent call last): File "C:\Python33\Scripts\easy_install-script.py", line 9, in <module> load_entry_point('distribute==0.6.33', 'console_scripts', 'easy_install')() File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1937, in main with_ei_usage(lambda: File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1918, in with_ei_usage return f() File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1941, in <lambda> distclass=DistributionWithoutHelpCommands, **kw File "C:\Python33\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Python33\lib\distutils\dist.py", line 917, in run_commands self.run_command(cmd) File "C:\Python33\lib\distutils\dist.py", line 936, in run_command cmd_obj.run() File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 358, in run self.easy_install(spec, not self.no_deps) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 598, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 628, in install_item dists = self.install_eggs(spec, download, tmpdir) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 823, in install_eggs return self.build_and_install(setup_script, setup_base) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1103, in build_and_install self.run_setup(setup_script, setup_base, args) File "C:\Python33\lib\site-packages\setuptools\command\easy_install.py", line 1089, in run_setup run_setup(setup_script, args) File "C:\Python33\lib\site-packages\setuptools\sandbox.py", line 34, in run_se tup lambda: exec(compile(open( File "C:\Python33\lib\site-packages\setuptools\sandbox.py", line 82, in run return func() File "C:\Python33\lib\site-packages\setuptools\sandbox.py", line 37, in 
<lambd a> {'__file__':setup_script, '__name__':'__main__'}) File "setup.py", line 33, in <module> File "c:\users\nuvraj~1\appdata\local\temp\easy_install-kvr2q0\scikit-learn-0. 12.1\sklearn\__init__.py", line 86 print "I: Seeding RNGs with %r" % _random_seed ^ SyntaxError: invalid syntax C:\Python33\Lib\site-packages> ``` So the little ^ seems to be pointing at the " after RNGS with %r. Which from what I can tell is in the '*init*' file from the sklearn folder found in the .tar.gz file. I also get the same result when running it in the Python GUI and command line. How can I install scikit-learn with Python 3.3? Is there some way of building it or editing the file to get past this invalid syntax error? Any help would be greatly appreciated. And I am very sorry for it being such a long post. I was just trying to get all the details in there. Thanks Simon
2013/01/08
[ "https://Stackoverflow.com/questions/14206637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1956404/" ]
scikit-learn does not support Python 3 yet. For now you need Python 2.7. Proper support for Python 3 is expected for the 0.14 release scheduled for Q2-2013.
I am no expert, but in my understanding the print statement in Python 3.\* is now a function, called like: print(). So, a quick solution in this case is to change ``` print "I: Seeding RNGs with %r" % _random_seed ``` to ``` print("I: Seeding RNGs with %r" % _random_seed) ```
9,331,000
I'm trying to remove large blocks of text from a file using python. Each block of text begins with /translation="SOMETEXT" Ending with the second quote. Can anyone give me some advice on how to accomplish this? Thank you
2012/02/17
[ "https://Stackoverflow.com/questions/9331000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1216584/" ]
You can use re.sub like this: ``` import re re.sub("/translation=\".*?\" ", "", s) ```
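For example, applied to a small made-up string (a raw-string variant of the same pattern, without the trailing space; if a block spans several lines you may also want `flags=re.DOTALL` so that `.` matches newlines):

```
import re

s = 'ORIGIN foo /translation="MKVL LLAV" bar /translation="GGHH" baz'
cleaned = re.sub(r'/translation=".*?"', "", s, flags=re.DOTALL)
print(cleaned)   # ORIGIN foo  bar  baz
```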
If performance doesn't matter, you could do something like this. Regular expressions would probably be faster, but this is simpler.

```
def remtxt(s, startstr, endstr):
    # repeatedly cut out everything from startstr up to and including endstr
    while startstr in s:
        startpos = s.index(startstr)
        try:
            endpos = s.index(endstr, startpos + len(startstr)) + len(endstr)
        except ValueError:
            # no closing delimiter found; return what has been cleaned so far
            return s
        s = s[:startpos] + s[endpos:]
    return s

new_string = remtxt(my_string, '/translation="', '"')
```
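For instance, with some made-up data (just to show the effect; the double spaces are where the removed blocks used to sit):

```
my_string = 'gene1 /translation="MKV" gene2 /translation="GH" gene3'
print(remtxt(my_string, '/translation="', '"'))
# gene1  gene2  gene3
```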
71,155,282
Below is the html tag. I want to return the value in span as an integer in python selenium. Can you help me out? ```html <span class="pendingCount"> <img src="/static/media/sandPot.a436d753.svg" alt="sandPot"> <span>2</span> </span> ```
2022/02/17
[ "https://Stackoverflow.com/questions/71155282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14902563/" ]
You could use the dates for the x-axis, the 'constant' column for the y-axis, and the Cluster id for the coloring. You can create a custom legend using a list of colored rectangles. ```py import matplotlib.pyplot as plt from matplotlib.ticker import MaxNLocator import pandas as pd import numpy as np N = 100 df = pd.DataFrame({'Date': pd.date_range('2020-08-07', periods=N, freq='D'), 'order': np.arange(1, N + 1), 'ClusterNo2': np.random.randint(0, 4, N).astype(float), 'constant': 1}) df['ClusterNo2'] = df['ClusterNo2'].astype(int) # convert to integers fig, ax = plt.subplots(figsize=(15, 3)) num_clusters = df['ClusterNo2'].max() + 1 colors = plt.cm.Set2.colors ax.bar(x=range(len(df)), height=df['constant'], width=1, color=[colors[i] for i in df['ClusterNo2']], edgecolor='none') ax.set_xticks(range(len(df))) labels = ['' if i % 3 != 0 else day.strftime('%d\n%b %Y') if i == 0 or day.day <= 3 else day.strftime('%d') for i, day in enumerate(df['Date'])] ax.set_xticklabels(labels) ax.margins(x=0, y=0) ax.yaxis.set_major_locator(MaxNLocator(integer=True)) legend_handles = [plt.Rectangle((0, 0), 0, 0, color=colors[i], label=f'{i}') for i in range(num_clusters)] ax.legend(handles=legend_handles, title='Clusters', bbox_to_anchor=(1.01, 1.01), loc='upper left') fig.tight_layout() plt.show() ``` [![bar plot for clusters](https://i.stack.imgur.com/1034J.png)](https://i.stack.imgur.com/1034J.png)
You could just plot a normal bar graph, with 1 bar corresponding to 1 day. If you make the width also 1, it will look as if the patches are contiguous. [![enter image description here](https://i.stack.imgur.com/jtFlV.png)](https://i.stack.imgur.com/jtFlV.png) ``` import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import BoundaryNorm # simulate data total_datapoints = 16 total_clusters = 4 order = np.arange(total_datapoints) clusters = np.random.randint(0, total_clusters, size=total_datapoints) # map clusters to colors cmap = plt.cm.tab10 bounds = np.arange(total_clusters + 1) norm = BoundaryNorm(bounds, cmap.N) colors = [cmap(norm(cluster)) for cluster in clusters] # plot fig, ax = plt.subplots() ax.bar(order, np.ones_like(order), width=1, color=colors, align='edge') # xticks change_points = np.where(np.diff(clusters) != 0)[0] + 1 change_points = np.unique([0] + change_points.tolist() + [total_datapoints]) ax.set_xticks(change_points) # annotate clusters for ii, dx in enumerate(np.diff(change_points)): xx = change_points[ii] + dx/2 ax.text(xx, 0.5, str(clusters[int(xx)]), ha='center', va='center') ax.set_xlabel('Time (days)') plt.show() ```
48,655,638
I think what I am trying to do is pretty much like [github issue in zeep repo](https://github.com/mvantellingen/python-zeep/issues/412) --- but sadly there is no response to this issue yet. I researched suds and installed and tried -- did not even get sending parameter to work and thought zeep seems better maintained? Edit 1: For sure I am not talking about [this](http://docs.python-zeep.org/en/latest/client.html#creating-the-raw-xml-documents)
2018/02/07
[ "https://Stackoverflow.com/questions/48655638", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2379736/" ]
You can use a Plugin for editing the xml as a plain string. I used this plugin for keeping the characters '<' and '>' in a CDATA element.

```
from lxml import etree  # zeep passes lxml elements to plugins
from zeep import Plugin

class my_plugin(Plugin):

    def egress(self, envelope, http_headers, operation, binding_options):
        # serialize the envelope, un-escape the angle brackets, then re-parse it
        xml_string = etree.tostring(envelope, encoding="unicode")
        xml_string = xml_string.replace("&lt;", "<")
        xml_string = xml_string.replace("&gt;", ">")
        parser = etree.XMLParser(strip_cdata=False)
        new_envelope = etree.XML(xml_string, parser=parser)
        return new_envelope, http_headers
```

Then just import the plugin on the client:

```
client = Client(wsdl='url', transport=transport, plugins=[my_plugin()])
```

Take a look at the docs: <http://docs.python-zeep.org/en/master/plugins.html>
On Python 3.9, @David Ortiz answer didn't work for me, maybe something has changed. The `etree_to_string` was failing to convert the XML to string. What worked for me, instead of a plugin, I created a custom transport, that replaced the stripped tags with the correct characters, just like David's code, before the post was sent. ``` import zeep from zeep.transports import Transport from xml.etree import ElementTree class CustomTransport(Transport): def post_xml(self, address, envelope, headers): message = ElementTree.tostring(envelope, encoding="unicode") message = message.replace("&lt;", "<") message = message.replace("&gt;", ">") return self.post(address, message, headers) client = zeep.Client('wsdl_url', transport=CustomTransport()) ```
56,316,244
I have some strange behavior on python 3.7 with a nested list comprehension that involves a generator. **This works:** ``` i = range(20) n = [1, 2, 3] result = [min(x + y for x in i) for y in n] ``` It does **not work** if `i` is a generator: ``` i = (p for p in range(20)) n = [1, 2, 3] result = [min(x + y for x in i) for y in n] ``` This raises a `ValueError: min() arg is an empty sequence` Now even if the generator `i` is wrapped with `list` it still creates the same error: ``` i = (p for p in range(20)) n = [1, 2, 3] result = [min(x + y for x in list(i)) for y in n] ``` Is this a python bug or is it expected behavior? If it is expected behavior, can you explain why this does not work?
2019/05/26
[ "https://Stackoverflow.com/questions/56316244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51627/" ]
In both of your last examples, you try to iterate on the generator again after it got exhausted. In your last example, `list(i)` is evaluated again for each value of `y`, so `i` will be exhausted after the first run. You have to make a list of the values it yields once before, as in: ``` i = (p for p in range(20)) n = [1, 2, 3] list_i = list(i) result = [min(x + y for x in list_i) for y in n] ```
The generator is emptied after the first for loop, whether you write `for x in i` or `for x in list(i)`. Instead, you need to convert the generator to a list beforehand (which essentially iterates over the generator and empties it) and use that list. Note that this essentially defeats the purpose of a generator, since it becomes the same as the first approach

```
In [14]: list(range(20)) == list(p for p in range(20))
Out[14]: True
```

Hence the updated code will be

```
#Create generator and convert to list
i = list(p for p in range(20))
n = [1, 2, 3]

#Use that list in the list comprehension
result = [min(x + y for x in i) for y in n]

print(result)
```

The output will be

```
[1, 2, 3]
```

Hence the better approach is to stick with the first approach itself, or you can have the generator inline, which, again, is the same as the first approach with range

```
n = [1, 2, 3]
result = [min(x + y for x in (p for p in range(20))) for y in n]
print(result)
#[1, 2, 3]
```
56,316,244
I have some strange behavior on python 3.7 with a nested list comprehension that involves a generator. **This works:** ``` i = range(20) n = [1, 2, 3] result = [min(x + y for x in i) for y in n] ``` It does **not work** if `i` is a generator: ``` i = (p for p in range(20)) n = [1, 2, 3] result = [min(x + y for x in i) for y in n] ``` This raises a `ValueError: min() arg is an empty sequence` Now even if the generator `i` is wrapped with `list` it still creates the same error: ``` i = (p for p in range(20)) n = [1, 2, 3] result = [min(x + y for x in list(i)) for y in n] ``` Is this a python bug or is it expected behavior? If it is expected behavior, can you explain why this does not work?
2019/05/26
[ "https://Stackoverflow.com/questions/56316244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51627/" ]
In `i = range(20)` the `range(20)` is a promise to generate a generator. While `i = (p for p in range(20))` is already a generator. Now write your list expression as: ``` for y in [1, 2, 3]: print(min(x + y for x in i)) ## 1 ## ... ## ValueError: min() arg is an empty sequence ``` You get a `1` printed, but (the generator is exhausted in the first call) and then you get in the next round a `ValueError: min() arg is an empty sequence` because the generator `i` was already consumed in the first for-loop call for y as 1. While if `i` is defined as `range(20)`, everytime the `for x in i` is called, the generator is re-created again and again. You can imitate what `range(20)` is doing by: ``` def gen(): return (p for p in range(20)) for y in [1, 2, 3]: print(min(x + y for x in gen())) # range() like gen() is a promise to generate the generator ## 1 ## 2 ## 3 ``` Now the generator is created everytime anew. But in fact, `range` is even cooler, if you do: ``` i = range(20) for y in [1, 2, 3]: print(min(x + y for x in i)) ## 1 ## 2 ## 3 ``` The `i` inside the innerst generator is not a function call. But despite of that it creates - when evaluted - a new generator - at least when used as an iterable within a for loop. This is actually implemented in Python using a class and by defining the `__iter__()` method. Which defines the behaviour in interators - here especiall a lazy behavior. To imitate this behavior, we can generate a lazy generator (`lazy_gen`). ``` class lazy_gen: def __init__(self): pass def __iter__(self): # everytime when used as an iterator return self.gen() # recreate the generator # real lazy behavior def gen(self): return (p for p in range(20)) ``` Which we can use like: ``` i = lazy_gen() for y in [1, 2, 3]: print(min(x + y for x in i)) ## 1 ## 2 ## 3 ``` So this reflects even better the `range()` behavior. Other languages (functional languages) like `Lisp` family languages (common-lisp, Racket, Scheme, Clojure), `R`, or `Haskell` have a better control over evaluation - thus over lazy evaluation and promises. But in Python, for such implementations and fine grained control, one has to take resort in OOP. **My range function and class** Finally, I figured out how the range function must have been realized roughly. (For fun, though I could have looked it up in the source code of Python I know - but sometimes reasoning is fun.) ``` class Myrange: def __init__(self, start, end, step): self.start = start self.end = end self.step = step def __iter__(self): return self.generate_range() def generate_range(self): x = self.start - self.step while x + self.step < self.end: x = x + self.step yield x def __repr__(self): return "myrange({}, {})".format(self.start, self.end) def myrange(start=None, end=None, step=1): if start is None and end is None: raise "Please provide at least one number for the range-limits." elif start is not None and end is None: _start = 0 _end = start elif start is not None and end is not None: _start = start _end = end else: _start = 0 _end = end _step = step return Myrange(_start, _end, _step) ``` One can use it exactly like the range function. ``` i = myrange(20) n = [1, 2, 3] result = [min(x + y for x in i) for y in n] result ## [1, 2, 3] i ## myrange(0, 20) # representation of a Myrange object. myrange(20) ## myrange(0, 20) list(myrange(3, 10)) ## [3, 4, 5, 6, 7, 8, 9] list(myrange(0, 10)) ## [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] list(myrange(10)) ## [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] list(myrange(0, 10, 2)) ## [0, 2, 4, 6, 8] list(myrange(3, 10, 2)) ## [3, 5, 7, 9] ```
The generator is emptied after the first for loop, whether you write `for x in i` or `for x in list(i)`. Instead, you need to convert the generator to a list beforehand (which essentially iterates over the generator and empties it) and use that list. Note that this essentially defeats the purpose of a generator, since it becomes the same as the first approach

```
In [14]: list(range(20)) == list(p for p in range(20))
Out[14]: True
```

Hence the updated code will be

```
#Create generator and convert to list
i = list(p for p in range(20))
n = [1, 2, 3]

#Use that list in the list comprehension
result = [min(x + y for x in i) for y in n]

print(result)
```

The output will be

```
[1, 2, 3]
```

Hence the better approach is to stick with the first approach itself, or you can have the generator inline, which, again, is the same as the first approach with range

```
n = [1, 2, 3]
result = [min(x + y for x in (p for p in range(20))) for y in n]
print(result)
#[1, 2, 3]
```
56,316,244
I have some strange behavior on python 3.7 with a nested list comprehension that involves a generator. **This works:** ``` i = range(20) n = [1, 2, 3] result = [min(x + y for x in i) for y in n] ``` It does **not work** if `i` is a generator: ``` i = (p for p in range(20)) n = [1, 2, 3] result = [min(x + y for x in i) for y in n] ``` This raises a `ValueError: min() arg is an empty sequence` Now even if the generator `i` is wrapped with `list` it still creates the same error: ``` i = (p for p in range(20)) n = [1, 2, 3] result = [min(x + y for x in list(i)) for y in n] ``` Is this a python bug or is it expected behavior? If it is expected behavior, can you explain why this does not work?
2019/05/26
[ "https://Stackoverflow.com/questions/56316244", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51627/" ]
In `i = range(20)` the `range(20)` is a promise to generate a generator. While `i = (p for p in range(20))` is already a generator. Now write your list expression as: ``` for y in [1, 2, 3]: print(min(x + y for x in i)) ## 1 ## ... ## ValueError: min() arg is an empty sequence ``` You get a `1` printed, but (the generator is exhausted in the first call) and then you get in the next round a `ValueError: min() arg is an empty sequence` because the generator `i` was already consumed in the first for-loop call for y as 1. While if `i` is defined as `range(20)`, everytime the `for x in i` is called, the generator is re-created again and again. You can imitate what `range(20)` is doing by: ``` def gen(): return (p for p in range(20)) for y in [1, 2, 3]: print(min(x + y for x in gen())) # range() like gen() is a promise to generate the generator ## 1 ## 2 ## 3 ``` Now the generator is created everytime anew. But in fact, `range` is even cooler, if you do: ``` i = range(20) for y in [1, 2, 3]: print(min(x + y for x in i)) ## 1 ## 2 ## 3 ``` The `i` inside the innerst generator is not a function call. But despite of that it creates - when evaluted - a new generator - at least when used as an iterable within a for loop. This is actually implemented in Python using a class and by defining the `__iter__()` method. Which defines the behaviour in interators - here especiall a lazy behavior. To imitate this behavior, we can generate a lazy generator (`lazy_gen`). ``` class lazy_gen: def __init__(self): pass def __iter__(self): # everytime when used as an iterator return self.gen() # recreate the generator # real lazy behavior def gen(self): return (p for p in range(20)) ``` Which we can use like: ``` i = lazy_gen() for y in [1, 2, 3]: print(min(x + y for x in i)) ## 1 ## 2 ## 3 ``` So this reflects even better the `range()` behavior. Other languages (functional languages) like `Lisp` family languages (common-lisp, Racket, Scheme, Clojure), `R`, or `Haskell` have a better control over evaluation - thus over lazy evaluation and promises. But in Python, for such implementations and fine grained control, one has to take resort in OOP. **My range function and class** Finally, I figured out how the range function must have been realized roughly. (For fun, though I could have looked it up in the source code of Python I know - but sometimes reasoning is fun.) ``` class Myrange: def __init__(self, start, end, step): self.start = start self.end = end self.step = step def __iter__(self): return self.generate_range() def generate_range(self): x = self.start - self.step while x + self.step < self.end: x = x + self.step yield x def __repr__(self): return "myrange({}, {})".format(self.start, self.end) def myrange(start=None, end=None, step=1): if start is None and end is None: raise "Please provide at least one number for the range-limits." elif start is not None and end is None: _start = 0 _end = start elif start is not None and end is not None: _start = start _end = end else: _start = 0 _end = end _step = step return Myrange(_start, _end, _step) ``` One can use it exactly like the range function. ``` i = myrange(20) n = [1, 2, 3] result = [min(x + y for x in i) for y in n] result ## [1, 2, 3] i ## myrange(0, 20) # representation of a Myrange object. myrange(20) ## myrange(0, 20) list(myrange(3, 10)) ## [3, 4, 5, 6, 7, 8, 9] list(myrange(0, 10)) ## [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] list(myrange(10)) ## [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] list(myrange(0, 10, 2)) ## [0, 2, 4, 6, 8] list(myrange(3, 10, 2)) ## [3, 5, 7, 9] ```
In both of your last examples, you try to iterate on the generator again after it got exhausted. In your last example, `list(i)` is evaluated again for each value of `y`, so `i` will be exhausted after the first run. You have to make a list of the values it yields once before, as in: ``` i = (p for p in range(20)) n = [1, 2, 3] list_i = list(i) result = [min(x + y for x in list_i) for y in n] ```
70,969,920
I am looping over a list of dictionaries and I have to drop/ignore either one or more keys of the each dictionary in the list and write it to a MongoDB. What is the efficient pythonic way of doing this ? **Example:** ``` employees = [ {'name': "Tom", 'age': 10, 'salary': 10000, 'floor': 10}, {'name': "Mark", 'age': 5, 'salary': 12000, 'floor': 11}, {'name': "Pam", 'age': 7, 'salary': 9500, 'floor': 9} ] ``` Let's say I want to drop key = 'floor' or keys = ['floor', 'salary']. Currently I am using del employees['floor'] inside the loop to delete the key and my\_collection.insert\_one() to simply write the dictionary into my MongoDB. **My code:** ``` for d in employees: del d['floor'] my_collection.insert_one(d) ```
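A common way to drop one or more keys in a single pass is a dictionary comprehension, which also leaves the original dictionaries untouched (a small sketch reusing the `employees` list and `my_collection` from above):

```
drop_keys = {"floor", "salary"}

for d in employees:
    doc = {k: v for k, v in d.items() if k not in drop_keys}
    my_collection.insert_one(doc)
```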
2022/02/03
[ "https://Stackoverflow.com/questions/70969920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14066217/" ]
In the context you ask about you can think that closure is a function that references to some variables that are defined in its outer scope (for other cases see the answer by @phipsgabler). Here is a minimal example: ``` julia> function est_mean(x) function fun(m) return m - mean(x) end val = find_zero(fun, 0.0) @show val, mean(x) return fun # explicitly return the inner function to inspect it end est_mean (generic function with 1 method) julia> x = rand(10) 10-element Vector{Float64}: 0.6699650145575134 0.8208379672036165 0.4299946498764684 0.1321653923513042 0.5552854476018734 0.8729613266067378 0.5423030870674236 0.15751882823315777 0.4227087678654101 0.8594042895489912 julia> fun = est_mean(x) (val, mean(x)) = (0.5463144770912497, 0.5463144770912497) fun (generic function with 1 method) julia> dump(fun) fun (function of type var"#fun#3"{Vector{Float64}}) x: Array{Float64}((10,)) [0.6699650145575134, 0.8208379672036165, 0.4299946498764684, 0.1321653923513042, 0.5552854476018734, 0.8729613266067378, 0.5423030870674236, 0.15751882823315777, 0.4227087678654101, 0.8594042895489912] julia> fun.x 10-element Vector{Float64}: 0.6699650145575134 0.8208379672036165 0.4299946498764684 0.1321653923513042 0.5552854476018734 0.8729613266067378 0.5423030870674236 0.15751882823315777 0.4227087678654101 0.8594042895489912 julia> fun(10) 9.453685522908751 ``` As you can see `fun` holds the reference to the `x` variable from the outer scope (in this case the scope introduced by the `est_mean` function). Moreover, I have shown you that you can even retrieve this value from outside of `fun` as its field (this is typically not recommended but I show this to you to prove that indeed `fun` stores a reference to the object `x` defined in its outer scope; it needs to store this reference as the variabe `x` is used inside the body of the `fun` function). In the context of estimation, as you have noted, this is useful because `find_zero` in my case requires the function to take only one argument - the `m` variable in my case, while you want the return value to depend both on passed `m` and on `x`. What is important that once `x` is captured in the `fun` closure it does not have to be in current scope. For instance when I call `fun(10)` the code executes correctly although we are outside of the scope of function `est_mean`. But this is not a problem because `fun` function has captured `x` variable. Let me give one more example: ``` julia> function gen() x = [] return v -> push!(x, v) end gen (generic function with 1 method) julia> fun2 = gen() #4 (generic function with 1 method) julia> fun2.x Any[] julia> fun2(1) 1-element Vector{Any}: 1 julia> fun2.x 1-element Vector{Any}: 1 julia> fun2(100) 2-element Vector{Any}: 1 100 julia> fun2.x 2-element Vector{Any}: 1 100 ``` Here you see that the `x` variable defined within `gen` function is captured by the anonymous function `v -> push!(x, v)` that I bind to the `fun2` variable. Later when you call `fun2` the object bound to the `x` variable gets updated (and can be referenced to) although it was defined in the `gen` function scope. Although we left the `gen` scope the object bound to the `x` variable outlives the scope because it is captured by the anonymous function we defined. If something is unclear please comment.
I'm going to complement Bogumił's answer by showing you what he has deliberately left out: a closure does not have to be a function in the strict sense. In fact, you could write them on your own, if nested functions were disallowed in Julia: ``` struct LikelihoodClosure X y end (l::LikelihoodClosure)(β) = -log_likelihood(l.X, l.y, β) make_closures(X, y) = LikelihoodClosure(X, y) nll = make_closures(X, y) ``` Now you are allowed to call `nll(β₀)`, which is an object of type `LikelihoodClosure` with a defined application method. And that's really all to it. Anonymous functions are just syntactic sugar for creating instances of objects storing the "fixed variables" from a context. ``` julia> f(x) = y -> x + y f (generic function with 1 method) julia> f(1) # that's the closure value #1 (generic function with 1 method) julia> typeof(f(1)) # that's the closure type var"#1#2"{Int64} julia> f(1).x 1 julia> propertynames(f(1)) # behold, it has a field `x`! (:x,) ``` And we can even cheat a bit and construct an instance: ``` julia> eval(Expr(:new, var"#1#2"{Int64}, 22)) #1 (generic function with 1 method) julia> eval(Expr(:new, var"#1#2"{Int64}, 22))(2) 24 ```
21,811,851
This question has been troubling me for some days now and I've tried asking in many places for advice, but it seems that nobody can answer it clearly or even provide a reference to an answer. I've also tried searching for tutorials, but I just cannot find any type of tutorial that explains how you would use a reusable third-party django app (most tutorials explain how to write them, none explain how to use them). Also, I've taken a look here: [How to re-use a reusable app in Django](https://stackoverflow.com/questions/557171/how-to-re-use-a-reusable-app-in-django) - it doesn't explain how to actually use it IN a project itself and here: [How to bind multiple reusable Django apps together?](https://stackoverflow.com/questions/11579232/how-to-bind-multiple-reusable-django-apps-together) - the answer by aquaplanet kind of makes sense, but I thought I would ask this question to solve the mental block I am facing in trying to understand this. --- **In order to best explain this, let me do so by example (note, it is not something I am actually building).** I am creating a project that acts like Reddit. I will have users, links and voting/points. Based on this crude example, I will want to reuse 3 (arbitrary) third-party apps: user, voting/points and links. I decide to use each of them as any other python package (meaning that they will be treated as a package and none of their code should be touched) [would this method actually work? Or do you have to be able to edit third-party apps to build a project??) With these apps now within my project, I will use a main app to handle all the template stuff (meaning everything I see on the frontend will be in a single app). I will then either use that same main app for custom logic (in views.py) or I will break up that logic among different apps (but will still use a single frontend-only app). From the 3 paragraphs above, is this structure applicable (or can it work) ? --- Now lets say that this structure **is applicable** and I am using a single main app for the frontend and custom logic. What would I write in models.py? How would I integrate things from the 3 reusable apps into the *main* models.py file? How would I reference the reusable apps in views.py? Lets take the example of contrib.auth With this built-in app, for logging out I would write: ```py from django.contrib.auth import logout from django.contrib.auth.decorators import login_required from django.shortcuts import redirect @login_required def user_logout(request): logout(request) return redirect('/home/') ``` Although the above code is simple, is that basically how things would be done with any reusable app? My question is very long, but I think that this reusable app issue is something a lot of developers aren't quite clear about themselves and maybe this answer will help a lot of others who have heard about the promises of reusable apps, but fail to understand how to *actually* use them.
2014/02/16
[ "https://Stackoverflow.com/questions/21811851", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1663535/" ]
TL;DR:
------

Nope & it depends...

Some (Very) Common Reusable Apps
--------------------------------

* [django.contrib.admin](https://docs.djangoproject.com/en/dev/ref/contrib/admin/)
* [django.contrib.auth](https://docs.djangoproject.com/en/dev/ref/contrib/auth/)
* [django.contrib.staticfiles](https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/)

... those are all reusable Django apps that happen to be shipped with Django (most of them were not, at some point in time).

Ok, some other reusable apps that don't ship with Django:

* [django-rest-framework](http://www.django-rest-framework.org/)
* [django-registration](https://github.com/ubernostrum/django-registration)
* [South](http://south.aeracode.org/)

Those are all truly reusable apps, and nothing less. There are very many more apps like that.

How do they do it?
------------------

To me your question looks more like "how do I build reusable apps" than "how to use them". Actually using them is very different from app to app, because they do very different things. There is only one rule: [RTFM](http://en.wikipedia.org/wiki/RTFM). No way around that either.

Often, they rely on one or more of the following:

* additional value(s) in `settings.py`
* addition (usually one `include` statement) to `urls.py`
* subclassing and/or mixins for Models, Forms, Fields, Views etc.
* template tags and/or filters
* management commands
* ...

Those are all powerful ways through which your app can **provide** functionality to other apps. There is no recipe (AFAIK) for making a reusable app, because there are so many different scenarios to consider. It all depends on what exactly your app should do.

Reusable apps **provide** functionalities
-----------------------------------------

I'd argue that it's important not to think of reusable apps as "working together" with other apps, but instead to recognize that they "provide functionality."

The details of the functionality provided should dictate the way the target developer is supposed to use your library.

Not everything should be reusable
---------------------------------

Obviously enough, even though many apps can "in principle" be reusable, it often makes little sense to do so, because it is way faster to clump things together (and make them just "work together").
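As a concrete illustration of the first two bullet points above, wiring a reusable app into a project usually looks something like this (a generic sketch: the app name `registration` is only an example, and the exact `urls.py` syntax depends on your Django version; this is the pre-2.0 style):

```
# settings.py -- tell Django the app is installed
INSTALLED_APPS = (
    # ... django.contrib.* and your own apps ...
    'registration',
)

# urls.py -- mount the app's URLconf under a prefix of your choice
from django.conf.urls import include, url

urlpatterns = [
    url(r'^accounts/', include('registration.urls')),
]
```

Everything beyond that (settings the app reads, templates it expects, models or views you subclass) is app-specific, which is exactly why the answer keeps pointing back to each app's own documentation.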
I'm not sure why you think you need a main app for the "frontend" stuff. The point of a reusable app is that it takes care of everything, you just add (usually) a single URL to include the urls.py of the app, plus your own templates and styling as required. And you certainly don't need to wrap the app's views in your own views, unless you specifically want to override some functionality. I don't understand at all your question about models. There's no such thing as a "main" models file, and using a reusable app's models is just the same as using models from any of your own apps. Normally you would not edit a third-party app, that would make it very hard to integrate updates. Just install the app in your virtualenv (you are using virtualenv, of course!) with pip, which will put it in the lib directory, and you can reference it just like any other app. Make sure you add it to INSTALLED\_APPS.
48,642,572
I'm trying to port a custom class from Python 2 to Python 3. I can't find the right syntax to port the iterator for the class. Here is a MVCE of the real class and my attempts to solve this so far: Working Python 2 code: ``` class Temp: def __init__(self): self.d = dict() def __iter__(self): return self.d.iteritems() temp = Temp() for thing in temp: print(thing) ``` In the above code iteritems() breaks in Python 3. According to [this](https://stackoverflow.com/q/13998492/4490400) highly voted answer, "`dict.items` now does the thing `dict.iteritems` did in python 2". So I tried that next: ``` class Temp: def __init__(self): self.d = dict() def __iter__(self): return self.d.items() ``` The above code yields "`TypeError: iter() returned non-iterator of type 'dict_items'`" According to [this](https://stackoverflow.com/a/24377/4490400) answer, Python 3 requires iterable objects to provide a next() method in addition to the iter method. Well, a dictionary is also iterable, so in my use case I should be able to just pass dictionary's next and iter methods, right? ``` class Temp: def __init__(self): self.d = dict() def __iter__(self): return self.d.__iter__ def next(self): return self.d.next ``` This time it's giving me "`TypeError: iter() returned non-iterator of type 'method-wrapper'`". What am I missing here?
2018/02/06
[ "https://Stackoverflow.com/questions/48642572", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4490400/" ]
As the error message suggests, your `__iter__` function does not return an iterator, which you can easily fix using the built-in `iter` function ``` class Temp: def __init__(self): self.d = {} def __iter__(self): return iter(self.d.items()) ``` This will make your class iterable. Alternatively, you may write a generator yourself, like so: ``` def __iter__(self): for key,item in self.d.items(): yield key,item ``` If you want to be able to iterate over keys and items separately, i.e. in the form that the usual python3 dictionary can, you can provide additional functions, for example ``` class Temp: def __init__(self, dic): self.d = dic def __iter__(self): return iter(self.d) def keys(self): return self.d.keys() def items(self): return self.d.items() def values(self): return self.d.values() ``` I'm guessing from the way you phrased it that you don't actually want the `next()` method to be implemented if not needed. If you would, you would have to somehow turn your whole class into an iterator and somehow keep track of where you are momentarily in this iterator, because dictionaries themselves are not iterators. See also [this](https://stackoverflow.com/questions/38700734/how-to-implement-next-for-a-dictionary-object-to-be-iterable) answer.
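If you really did want the class itself to behave as an iterator (rather than just being iterable), a minimal sketch could delegate to an internal items iterator and keep the position there. This is illustrative only and not part of the original code; the class name and sample data are made up:

```python
class TempIter:
    """Sketch: the object is its own iterator over the wrapped dict's items."""
    def __init__(self):
        self.d = {}
        self._items = None

    def __iter__(self):
        self._items = iter(self.d.items())   # (re)start the walk
        return self

    def __next__(self):
        return next(self._items)             # raises StopIteration when exhausted

t = TempIter()
t.d = {'a': 1, 'b': 2}
for thing in t:
    print(thing)   # e.g. ('a', 1) and ('b', 2)
```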
I don't know what works in Python 2. But on Python 3 iterators can be most easily created using something called a [generator](https://www.pythoncentral.io/python-generators-and-yield-keyword/). I am providing the name and the link so that you can research further. ``` class Temp: def __init__(self): self.d = {} def __iter__(self): for thing in self.d.items(): yield thing ```
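A quick usage sketch of the generator-based version above (the dict contents here are made up):

```python
temp = Temp()
temp.d = {'x': 1, 'y': 2}
for key, value in temp:      # __iter__ yields (key, value) pairs
    print(key, value)
```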
21,179,140
Okay, so in a terminal, after importing and making the necessary objects--I typed: ``` for links in soup.find_all('a'): print(links.get('href')) ``` which gave me all the links on a wikipedia page (roughly 250). No problems. However, in a program I am coding, I only receive about 60 links (and this is scraping the same wikipedia page) and the ones I DO get are mostly not worth anything. I double checked that I initialized both exactly the same--the only difference is the names of variables. For reference, here is the function where I setup the BS4 object, and grab the desired page: ``` def get_site(hyperLink): userSite = urllib3.PoolManager() siteData = userSite.request("GET", hyperLink) bsd = BeautifulSoup(siteData.data) return bsd ``` Later, I grab the elements and append them to a list I will then manipulate: ``` def find_urls(bsd, urls, currentNetloc): for links in bsd.find_all('a'): urls.append(links.get('href')) return urls ``` Other relevant info: * I am using Python 3.3 * I am using urllib3, BeautifulSoup 4, and urlparse (from urllib) * I am working in PyCharm (for the actual program) * Using Lubuntu, if it matters. After running a command line instance of python3 and importing "sys" I typed and received: ``` $ sys.executable '/usr/bin/python3' $ sys.path ['', '/usr/local/lib/python3.3/dist-packages/setuptools-1.1.5-py3.3.egg', '/usr/local/lib/python3.3/dist-packages/pip-1.4.1-py3.3.egg', '/usr/local/lib/python3.3/dist-packages/beautifulsoup4-4.3.2-py3.3.egg', '/usr/lib/python3.3', '/usr/lib/python3.3/plat-i386-linux-gnu', '/usr/lib/python3.3/lib-dynload', '/usr/local/lib/python3.3/dist-packages', '/usr/lib/python3/dist-packages'] ``` After running these commands in a Pycharm project, I received exactly the same results, with the exception that the directories containing my pycharm projects were included in the list.
2014/01/17
[ "https://Stackoverflow.com/questions/21179140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3204909/" ]
This is not my answer. I got it from [here](http://fryandata.wordpress.com/2014/06/17/24/), which has helped me before. ``` from bs4 import BeautifulSoup import csv # Create .csv file with headers f=csv.writer(open("nyccMeetings.csv","w")) f.writerow(["Name", "Date", "Time", "Location", "Topic"]) # Use python html parser to avoid truncation htmlContent = open("nyccMeetings.html") soup = BeautifulSoup(htmlContent,"html.parser") # Find each row rows = soup.find_all('tr') for tr in rows: cols = tr.find_all('td') # Find each column try: names = cols[0].get_text().encode('utf-8') date = cols[1].get_text().encode('utf-8') time = cols[2].get_text().encode('utf-8') location = cols[3].get_text().encode('utf-8') topic = cols[4].get_text().encode('utf-8') except: continue # Write to .csv file f.writerow([names, date, time, location, topic]) ``` I think it would be useful to note some of the troubles I ran into while writing this script: Specify your parser. It is very important to specify the type of html parser that BeautifulSoup will use to parse through the html tree form. The html file that I read into Python was not formatted correctly so BeautifulSoup truncated the html and I was only able to access about a quarter of the records. By telling BeautifulSoup to explicitly use the built-in Python html parser, I was able to avoid this issue and retrieve all records. Encode to UTF-8. get\_text() had some issues with encoding the text inside the html tags. As such, I was unable to write data to the comma-delimited file. By explicitly telling the program to encode to UTF-8, we avoid this issue altogether.
I have encountered many problems in my web scraping projects; however, BeautifulSoup was never the culprit. I highly suspect you are having the same problem I had scraping Wikipedia. Wikipedia did not like my user-agent and was returning a page other than what I requested.

Try adding a user-agent in your code, e.g. `Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.146 Safari/537.36`

You mentioned you were using urllib3, so [here](http://urllib3.readthedocs.org/en/1.2.1/pools.html) is where you can read how to use a custom user-agent.

Also, if you want to diagnose your problem, try this: in the terminal where you said everything was working fine, add an extra line `print(len(html))`. Then do the same in your program to see if you are in fact getting the links from the same page.
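A minimal sketch that combines both suggestions — sending a custom User-Agent with urllib3 and checking how much HTML actually comes back. The header string and the Wikipedia URL are just examples, not taken from the original question:

```python
import urllib3
from bs4 import BeautifulSoup

http = urllib3.PoolManager(headers={
    "User-Agent": "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/33.0.1750.146 Safari/537.36"
})
response = http.request("GET", "https://en.wikipedia.org/wiki/Python_(programming_language)")

print(len(response.data))          # compare with the interactive terminal run
soup = BeautifulSoup(response.data)
print(len(soup.find_all("a")))     # how many links did we really get?
```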
13,925,355
I wanted to run the command: `repo init -u https://android.googlesource.com/platform/manifest -b android-4.1.1_r6` and got the following output: `Traceback (most recent call last): File "/home/anu/bin/repo", line 91, in <module> import readline ImportError: No module named readline` So to fix the above, I tried to install readline using commands `pip install readline` and `easy_install readline`, but both the commands outputs the following: ``` /usr/bin/ld: cannot find -lncurses collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 ``` I have ubuntu 12.04 with python 2.7.3 and want to build android source code. I searched a lot to fix it but no success... Can anybody point to me what I am missing?
2012/12/18
[ "https://Stackoverflow.com/questions/13925355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1649409/" ]
```
sudo apt-get install libncurses5-dev
```

And then rerun your command.
If you are running an 64 bit OS, you might have to install the i386 versions of the libraries. A lot (all?) of the Android host commands are 32-bit only.
13,925,355
I wanted to run the command: `repo init -u https://android.googlesource.com/platform/manifest -b android-4.1.1_r6` and got the following output: `Traceback (most recent call last): File "/home/anu/bin/repo", line 91, in <module> import readline ImportError: No module named readline` So to fix the above, I tried to install readline using commands `pip install readline` and `easy_install readline`, but both the commands outputs the following: ``` /usr/bin/ld: cannot find -lncurses collect2: ld returned 1 exit status error: command 'gcc' failed with exit status 1 ``` I have ubuntu 12.04 with python 2.7.3 and want to build android source code. I searched a lot to fix it but no success... Can anybody point to me what I am missing?
2012/12/18
[ "https://Stackoverflow.com/questions/13925355", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1649409/" ]
```
sudo apt-get install libncurses5-dev
```

And then rerun your command.
I fixed the above issue by installing Python 2.7. I guess repo works fine with a previous version of Python and not the current version 2.7.3.
43,748,464
I have a large number of files with $Log expanded-keyword text at the end that needs to be deleted. I am looking to modify an existing python 2.7 script to do this but cannot get the regex working correctly. The text to strip from the end of a file looks like this: ``` /* one or more lines of .. .. possible text $Log: oldfile.c,v $ Revision 11.4 2000/01/20 19:01:41 userid a bunch more text .. .. of unknown number of lines */ ``` I want to strip all of the text shown above, *including* the comment anchors `/*` and `*/` and everything in between. I looked at these questions/answers and a few others: [Python re.sub non-greedy mode ..](https://stackoverflow.com/questions/4273987/python-re-sub-use-non-greedy-mode-with-end-of-string-it-comes-greedy) [Python non-greedy rebexes](https://stackoverflow.com/questions/766372/python-non-greedy-regexes) The closest I have been able to get is with: ``` content = re.sub(re.compile(r'\$Log:.*', re.DOTALL), '', content) ``` Which of course leaves behind the opening `/*`. The following deleted my whole sample test file because the file opens with a matching comment (I thought the non-greedy `?` modifier would prevent this): ``` content = re.sub(re.compile(r'^/\*.*?\$Log:.*', re.DOTALL), '', content) ``` I experimented with using re.MULTILINE without success. How can a regex be defined in Python to grab the whole $Log comment -- AND none of the previous comments in the file?
2017/05/02
[ "https://Stackoverflow.com/questions/43748464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/947860/" ]
You can use: ``` result = re.sub(r"/\*\s+\*+\s+\$Log.*?\*/", "", subject, 0, re.DOTALL) ``` --- [![enter image description here](https://i.stack.imgur.com/YTKkn.jpg)](https://i.stack.imgur.com/YTKkn.jpg) --- [Regex Demo](https://regex101.com/r/6AgeQe/4) [Python Demo](https://ideone.com/QJmjUy)
It is a bit unclear what you are expecting as output. My understanding is that you are trying to extract the comment. I'm assuming that the comment appears on the 3rd line and you have to just extract the third line using regex. Regex Expression used: ``` (\$Log:.*[\r\n]*.*[\r\n])(.*) ``` After using the regex for matching, the **third group** will be the comment as demonstrated in the link and screenshot below. So `blah blah blah` can be fetched using `.group(2)`. Adding python code below: ``` matches = re.search(r"(\$Log:.*[\r\n]*.*[\r\n])(.*)", content) print matches.group(2) // Output: blah blah blah ``` [Regex101](https://regex101.com/r/TG8e1j/1): Sample code for python is available [here](https://regex101.com/r/TG8e1j/1/codegen?language=python). [Python Demo](https://ideone.com/M0OMzj) [![enter image description here](https://i.stack.imgur.com/zSHpV.png)](https://i.stack.imgur.com/zSHpV.png)
43,748,464
I have a large number of files with $Log expanded-keyword text at the end that needs to be deleted. I am looking to modify an existing python 2.7 script to do this but cannot get the regex working correctly. The text to strip from the end of a file looks like this: ``` /* one or more lines of .. .. possible text $Log: oldfile.c,v $ Revision 11.4 2000/01/20 19:01:41 userid a bunch more text .. .. of unknown number of lines */ ``` I want to strip all of the text shown above, *including* the comment anchors `/*` and `*/` and everything in between. I looked at these questions/answers and a few others: [Python re.sub non-greedy mode ..](https://stackoverflow.com/questions/4273987/python-re-sub-use-non-greedy-mode-with-end-of-string-it-comes-greedy) [Python non-greedy rebexes](https://stackoverflow.com/questions/766372/python-non-greedy-regexes) The closest I have been able to get is with: ``` content = re.sub(re.compile(r'\$Log:.*', re.DOTALL), '', content) ``` Which of course leaves behind the opening `/*`. The following deleted my whole sample test file because the file opens with a matching comment (I thought the non-greedy `?` modifier would prevent this): ``` content = re.sub(re.compile(r'^/\*.*?\$Log:.*', re.DOTALL), '', content) ``` I experimented with using re.MULTILINE without success. How can a regex be defined in Python to grab the whole $Log comment -- AND none of the previous comments in the file?
2017/05/02
[ "https://Stackoverflow.com/questions/43748464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/947860/" ]
You can use: ``` result = re.sub(r"/\*\s+\*+\s+\$Log.*?\*/", "", subject, 0, re.DOTALL) ``` --- [![enter image description here](https://i.stack.imgur.com/YTKkn.jpg)](https://i.stack.imgur.com/YTKkn.jpg) --- [Regex Demo](https://regex101.com/r/6AgeQe/4) [Python Demo](https://ideone.com/QJmjUy)
``` content = re.sub(re.compile(r'\/\*\n\**\n\$Log(?:.|[\n])*\*\/', re.DOTALL), '', content) ``` [Regex Explanation](https://i.stack.imgur.com/jyZyT.png)
62,791,323
I have recently upgraded my python/opencv for a project to python 3.7 + opencv 4.3.0 and now I have an issue with opencvs imshow. I am running Ubuntu 18.04 and am using conda venvs. I tried to rerun this piece of code multiple times and half the time it correctly displays the white image, and half the time it displays the image below ([1](https://i.stack.imgur.com/Hwxes.png)). The printed output ([2](https://i.stack.imgur.com/nwOXu.png)) is always the same. I tried changing it from waitKey(0) to waitKey(1000) but that doesn't make a difference. Still about half the time a tiny black image is all I see. Does anybody know how to debug this? I tried looking at Pycharms log files but they dont have any more details. I also tried running it straight from the command line but that gives the same issue. Also tried to remove the environment and created a fresh one, reinstalled opencv and got the same issues. When I create a 3.6 environment I don't have the issue, but that's no longer an option. I need python3.7 for some other packages that don't support certain features in 3.6. I received a warning that libcanberra gtk was missing, and found that in another post that it could cause issues. So I installed it using `sudo apt install libcanberra-gtk-module libcanberra-gtk3-module` and the warning went away. Sadly the issue did not... ``` import numpy as np import cv2 if __name__ == '__main__': img = np.ones((255, 255, 3), dtype=np.uint8)*255 print(img.shape) print(img.dtype) print(img.min()) print(img.max()) cv2.imshow("i", img) cv2.waitKey(0) ``` [screenshot of the code + result](https://i.stack.imgur.com/Hwxes.png) [console output](https://i.stack.imgur.com/nwOXu.png)
2020/07/08
[ "https://Stackoverflow.com/questions/62791323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13890473/" ]
``` django_find_project = True ``` Add this to your `pytest.ini`. **EDIT:** It looks like you have spelled `DJANGO_SETTINGS_MODULE` wrong in your `pytest.ini`. Please fix it.
Pytest has an order of precedence when choosing which settings.py is used in tests, and the setting in pytest.ini is only used as a last resort. Pytest first looks at the `--ds` option when running your tests; if that is not set, it then uses the environment variable `DJANGO_SETTINGS_MODULE`; if this is also not set, it uses the settings set in the `pytest.ini` file.

source: <https://pytest-django.readthedocs.io/en/latest/configuring_django.html>
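If you prefer to pin the settings from code rather than rely on that lookup order, one possible approach (an assumption on my part, not something stated above) is to set the environment variable in a top-level `conftest.py` so it is in place before pytest-django configures Django. The module path is a placeholder:

```python
# conftest.py (project root) -- hypothetical sketch
import os

# "myproject.settings" is a made-up path; replace it with your real settings module.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
```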
62,791,323
I have recently upgraded my python/opencv for a project to python 3.7 + opencv 4.3.0 and now I have an issue with opencvs imshow. I am running Ubuntu 18.04 and am using conda venvs. I tried to rerun this piece of code multiple times and half the time it correctly displays the white image, and half the time it displays the image below ([1](https://i.stack.imgur.com/Hwxes.png)). The printed output ([2](https://i.stack.imgur.com/nwOXu.png)) is always the same. I tried changing it from waitKey(0) to waitKey(1000) but that doesn't make a difference. Still about half the time a tiny black image is all I see. Does anybody know how to debug this? I tried looking at Pycharms log files but they dont have any more details. I also tried running it straight from the command line but that gives the same issue. Also tried to remove the environment and created a fresh one, reinstalled opencv and got the same issues. When I create a 3.6 environment I don't have the issue, but that's no longer an option. I need python3.7 for some other packages that don't support certain features in 3.6. I received a warning that libcanberra gtk was missing, and found that in another post that it could cause issues. So I installed it using `sudo apt install libcanberra-gtk-module libcanberra-gtk3-module` and the warning went away. Sadly the issue did not... ``` import numpy as np import cv2 if __name__ == '__main__': img = np.ones((255, 255, 3), dtype=np.uint8)*255 print(img.shape) print(img.dtype) print(img.min()) print(img.max()) cv2.imshow("i", img) cv2.waitKey(0) ``` [screenshot of the code + result](https://i.stack.imgur.com/Hwxes.png) [console output](https://i.stack.imgur.com/nwOXu.png)
2020/07/08
[ "https://Stackoverflow.com/questions/62791323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13890473/" ]
``` django_find_project = True ``` Add this to your `pytest.ini`. **EDIT:** It looks like you have spelled `DJANGO_SETTINGS_MODULE` wrong in your `pytest.ini`. Please fix it.
You have this issue because **pytest-django** is not installed in addition to your Django project.

Do `pip install pytest-django`

Execute `pytest` again
62,791,323
I have recently upgraded my python/opencv for a project to python 3.7 + opencv 4.3.0 and now I have an issue with opencvs imshow. I am running Ubuntu 18.04 and am using conda venvs. I tried to rerun this piece of code multiple times and half the time it correctly displays the white image, and half the time it displays the image below ([1](https://i.stack.imgur.com/Hwxes.png)). The printed output ([2](https://i.stack.imgur.com/nwOXu.png)) is always the same. I tried changing it from waitKey(0) to waitKey(1000) but that doesn't make a difference. Still about half the time a tiny black image is all I see. Does anybody know how to debug this? I tried looking at Pycharms log files but they dont have any more details. I also tried running it straight from the command line but that gives the same issue. Also tried to remove the environment and created a fresh one, reinstalled opencv and got the same issues. When I create a 3.6 environment I don't have the issue, but that's no longer an option. I need python3.7 for some other packages that don't support certain features in 3.6. I received a warning that libcanberra gtk was missing, and found that in another post that it could cause issues. So I installed it using `sudo apt install libcanberra-gtk-module libcanberra-gtk3-module` and the warning went away. Sadly the issue did not... ``` import numpy as np import cv2 if __name__ == '__main__': img = np.ones((255, 255, 3), dtype=np.uint8)*255 print(img.shape) print(img.dtype) print(img.min()) print(img.max()) cv2.imshow("i", img) cv2.waitKey(0) ``` [screenshot of the code + result](https://i.stack.imgur.com/Hwxes.png) [console output](https://i.stack.imgur.com/nwOXu.png)
2020/07/08
[ "https://Stackoverflow.com/questions/62791323", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13890473/" ]
You have this issue because **pytest-django** is not installed in addition to your Django project.

Do `pip install pytest-django`

Execute `pytest` again
Pytest has an order of precedence when choosing which settings.py is used in tests, and the setting in pytest.ini is only used as a last resort. Pytest first looks at the `--ds` option when running your tests; if that is not set, it then uses the environment variable `DJANGO_SETTINGS_MODULE`; if this is also not set, it uses the settings set in the `pytest.ini` file.

source: <https://pytest-django.readthedocs.io/en/latest/configuring_django.html>
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
You could `enumerate()` to get the indexes, `itertools.groupby()` to group falsy (`0`) and truthy values together, and extract the start and end indexes with `operator.itemgetter(0, -1)`: ``` from operator import truth, itemgetter from itertools import groupby [itemgetter(0,-1)([i for i,v in g]) for _, g in groupby(enumerate(a), key = lambda x: truth(x[1]))] # [(0, 9), (10, 101), (102, 128), (129, 217), (218, 252), (253, 338), (339, 362), (363, 447), (448, 490), (491, 580), (581, 581)] ```
``` import numpy as np unique, counts = np.unique(a, return_counts=True) idx = tuple(zip(unique, counts)) ``` I think this will work for you.
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
This is an answer without an external library: ``` pointer = 0 is_zero = True result = [] def finder(p, is_z): while (a[p] == 0) is is_z: p += 1 if p == len(a): return p return p while pointer < len(a): tmp = finder(pointer, is_zero) result.append((pointer, tmp - 1)) pointer = tmp is_zero = not is_zero print(result) ```
``` import numpy as np unique, counts = np.unique(a, return_counts=True) idx = tuple(zip(unique, counts)) ``` I think this will work for you.
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
Here my suggestion without external packages, short, readable and easily understandable: ``` # Compute list "b" by replacing any non-zero value of "a" with 1 b = list(map(int,[i != 0 for i in a])) #Compute ranges of 0 and ranges of 1 idx = [] # result list of tuples ind = 0 # index of first element of each range of zeros or non-zeros for n,i in enumerate(b): if (n+1 == len(b)) or (b[n] != b[n+1]): # Here: EITHER it is the last value of the list # OR a new range starts at index n+1 idx.append((ind,n)) ind = n+1 print(idx) ```
``` import numpy as np unique, counts = np.unique(a, return_counts=True) idx = tuple(zip(unique, counts)) ``` I think this will work for you.
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
No imports needed (so no need to search the library docs to work out how those imports work :-) and with comments. ``` # results is the output list, each entry is a list of startindex,stopindex results = [] # each time round the logical value is remembered in previouslvalue previouslvalue = None for i,value in enumerate(a): # get the logical value (==0/!=0) from value lvalue = True if value != 0 else False if previouslvalue is None or lvalue != previouslvalue: # this is either the first entry, or the lvalue has changed # either way, append a new entry to results, with the current index as start/finish of the run results.append([i,i]) else: # same lvalue as previous, extend the last entry to include this index results[-1][1] = i # save the logical value for next time round the loop previouslvalue = lvalue print results ``` Output is: ``` [[0, 9], [10, 101], [102, 128], [129, 217], [218, 252], [253, 338], [339, 362], [363, 447], [448, 490], [491, 580], [581,581]] ``` Responding to comment about including the logical value in the results list, this is very easy: ``` # results is the output list, each entry is a list of startindex,stopindex results = [] # each time round the previouslvalue = None for i,value in enumerate(a): # get the logical value (==0/!=0) from value lvalue = True if value != 0 else False if previouslvalue is None or lvalue != previouslvalue: # this is either the first entry, or the lvalue has changed # either way, append a new entry to results, with the current index as start/finish of the run # include the logical value in the list of results results.append([i,i,lvalue]) else: # same lvalue as previous, extend the last entry to include this index results[-1][1] = i # save the logical value for next time round the loop previouslvalue = lvalue print results ``` Output is now: ``` [[0, 9, False], [10, 101, True], [102, 128, False], [129, 217, True], [218, 252, False], [253, 338, True], [339, 362, False], [363, 447, True], [448, 490, False], [491, 580, True], [581, 581, False]] ```
``` import numpy as np unique, counts = np.unique(a, return_counts=True) idx = tuple(zip(unique, counts)) ``` I think this will work for you.
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
You could `enumerate()` to get the indexes, `itertools.groupby()` to group falsy (`0`) and truthy values together, and extract the start and end indexes with `operator.itemgetter(0, -1)`: ``` from operator import truth, itemgetter from itertools import groupby [itemgetter(0,-1)([i for i,v in g]) for _, g in groupby(enumerate(a), key = lambda x: truth(x[1]))] # [(0, 9), (10, 101), (102, 128), (129, 217), (218, 252), (253, 338), (339, 362), (363, 447), (448, 490), (491, 580), (581, 581)] ```
This is an answer without an external library: ``` pointer = 0 is_zero = True result = [] def finder(p, is_z): while (a[p] == 0) is is_z: p += 1 if p == len(a): return p return p while pointer < len(a): tmp = finder(pointer, is_zero) result.append((pointer, tmp - 1)) pointer = tmp is_zero = not is_zero print(result) ```
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
Here my suggestion without external packages, short, readable and easily understandable: ``` # Compute list "b" by replacing any non-zero value of "a" with 1 b = list(map(int,[i != 0 for i in a])) #Compute ranges of 0 and ranges of 1 idx = [] # result list of tuples ind = 0 # index of first element of each range of zeros or non-zeros for n,i in enumerate(b): if (n+1 == len(b)) or (b[n] != b[n+1]): # Here: EITHER it is the last value of the list # OR a new range starts at index n+1 idx.append((ind,n)) ind = n+1 print(idx) ```
You could `enumerate()` to get the indexes, `itertools.groupby()` to group falsy (`0`) and truthy values together, and extract the start and end indexes with `operator.itemgetter(0, -1)`: ``` from operator import truth, itemgetter from itertools import groupby [itemgetter(0,-1)([i for i,v in g]) for _, g in groupby(enumerate(a), key = lambda x: truth(x[1]))] # [(0, 9), (10, 101), (102, 128), (129, 217), (218, 252), (253, 338), (339, 362), (363, 447), (448, 490), (491, 580), (581, 581)] ```
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
You could `enumerate()` to get the indexes, `itertools.groupby()` to group falsy (`0`) and truthy values together, and extract the start and end indexes with `operator.itemgetter(0, -1)`: ``` from operator import truth, itemgetter from itertools import groupby [itemgetter(0,-1)([i for i,v in g]) for _, g in groupby(enumerate(a), key = lambda x: truth(x[1]))] # [(0, 9), (10, 101), (102, 128), (129, 217), (218, 252), (253, 338), (339, 362), (363, 447), (448, 490), (491, 580), (581, 581)] ```
No imports needed (so no need to search the library docs to work out how those imports work :-) and with comments. ``` # results is the output list, each entry is a list of startindex,stopindex results = [] # each time round the logical value is remembered in previouslvalue previouslvalue = None for i,value in enumerate(a): # get the logical value (==0/!=0) from value lvalue = True if value != 0 else False if previouslvalue is None or lvalue != previouslvalue: # this is either the first entry, or the lvalue has changed # either way, append a new entry to results, with the current index as start/finish of the run results.append([i,i]) else: # same lvalue as previous, extend the last entry to include this index results[-1][1] = i # save the logical value for next time round the loop previouslvalue = lvalue print results ``` Output is: ``` [[0, 9], [10, 101], [102, 128], [129, 217], [218, 252], [253, 338], [339, 362], [363, 447], [448, 490], [491, 580], [581,581]] ``` Responding to comment about including the logical value in the results list, this is very easy: ``` # results is the output list, each entry is a list of startindex,stopindex results = [] # each time round the previouslvalue = None for i,value in enumerate(a): # get the logical value (==0/!=0) from value lvalue = True if value != 0 else False if previouslvalue is None or lvalue != previouslvalue: # this is either the first entry, or the lvalue has changed # either way, append a new entry to results, with the current index as start/finish of the run # include the logical value in the list of results results.append([i,i,lvalue]) else: # same lvalue as previous, extend the last entry to include this index results[-1][1] = i # save the logical value for next time round the loop previouslvalue = lvalue print results ``` Output is now: ``` [[0, 9, False], [10, 101, True], [102, 128, False], [129, 217, True], [218, 252, False], [253, 338, True], [339, 362, False], [363, 447, True], [448, 490, False], [491, 580, True], [581, 581, False]] ```
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
Here my suggestion without external packages, short, readable and easily understandable: ``` # Compute list "b" by replacing any non-zero value of "a" with 1 b = list(map(int,[i != 0 for i in a])) #Compute ranges of 0 and ranges of 1 idx = [] # result list of tuples ind = 0 # index of first element of each range of zeros or non-zeros for n,i in enumerate(b): if (n+1 == len(b)) or (b[n] != b[n+1]): # Here: EITHER it is the last value of the list # OR a new range starts at index n+1 idx.append((ind,n)) ind = n+1 print(idx) ```
This is an answer without an external library: ``` pointer = 0 is_zero = True result = [] def finder(p, is_z): while (a[p] == 0) is is_z: p += 1 if p == len(a): return p return p while pointer < len(a): tmp = finder(pointer, is_zero) result.append((pointer, tmp - 1)) pointer = tmp is_zero = not is_zero print(result) ```
48,986,755
I have a list which contains zeros and non-zero values. I want to find the range of zeros and non-zero values in terms of tuple inside the list. I am looking for package free solution with pythonic way. E.g. ``` a = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 11, 12, 12, 12, 13, 13, 17, 17, 17, 17, 17, 17, 17, 17, 17, 25, 42, 54, 61, 61, 68, 73, 103, 115, 138, 147, 170, 187, 192, 197, 201, 208, 210, 214, 216, 217, 217, 218, 219, 220, 221, 222, 222, 219, 220, 220, 221, 220, 216, 216, 217, 217, 217, 217, 216, 216, 216, 209, 204, 193, 185, 177, 161, 156, 143, 110, 103, 89, 82, 62, 62, 62, 60, 56, 55, 50, 49, 48, 47, 47, 45, 44, 43, 42, 40, 37, 23, 22, 14, 12, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 6, 7, 7, 7, 13, 29, 31, 32, 33, 41, 42, 43, 43, 44, 44, 44, 44, 44, 60, 70, 71, 72, 88, 95, 104, 105, 111, 124, 125, 131, 145, 157, 169, 174, 186, 190, 190, 191, 192, 192, 193, 193, 193, 194, 198, 201, 202, 203, 202, 203, 203, 203, 203, 203, 203, 197, 195, 186, 177, 171, 154, 153, 148, 141, 140, 135, 132, 120, 108, 94, 86, 78, 73, 60, 53, 46, 46, 45, 44, 43, 37, 35, 29, 26, 19, 11, 0]] ``` **Output: idx = [(0,9),(10,101),(102,128),...]**
2018/02/26
[ "https://Stackoverflow.com/questions/48986755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7819943/" ]
Here my suggestion without external packages, short, readable and easily understandable: ``` # Compute list "b" by replacing any non-zero value of "a" with 1 b = list(map(int,[i != 0 for i in a])) #Compute ranges of 0 and ranges of 1 idx = [] # result list of tuples ind = 0 # index of first element of each range of zeros or non-zeros for n,i in enumerate(b): if (n+1 == len(b)) or (b[n] != b[n+1]): # Here: EITHER it is the last value of the list # OR a new range starts at index n+1 idx.append((ind,n)) ind = n+1 print(idx) ```
No imports needed (so no need to search the library docs to work out how those imports work :-) and with comments. ``` # results is the output list, each entry is a list of startindex,stopindex results = [] # each time round the logical value is remembered in previouslvalue previouslvalue = None for i,value in enumerate(a): # get the logical value (==0/!=0) from value lvalue = True if value != 0 else False if previouslvalue is None or lvalue != previouslvalue: # this is either the first entry, or the lvalue has changed # either way, append a new entry to results, with the current index as start/finish of the run results.append([i,i]) else: # same lvalue as previous, extend the last entry to include this index results[-1][1] = i # save the logical value for next time round the loop previouslvalue = lvalue print results ``` Output is: ``` [[0, 9], [10, 101], [102, 128], [129, 217], [218, 252], [253, 338], [339, 362], [363, 447], [448, 490], [491, 580], [581,581]] ``` Responding to comment about including the logical value in the results list, this is very easy: ``` # results is the output list, each entry is a list of startindex,stopindex results = [] # each time round the previouslvalue = None for i,value in enumerate(a): # get the logical value (==0/!=0) from value lvalue = True if value != 0 else False if previouslvalue is None or lvalue != previouslvalue: # this is either the first entry, or the lvalue has changed # either way, append a new entry to results, with the current index as start/finish of the run # include the logical value in the list of results results.append([i,i,lvalue]) else: # same lvalue as previous, extend the last entry to include this index results[-1][1] = i # save the logical value for next time round the loop previouslvalue = lvalue print results ``` Output is now: ``` [[0, 9, False], [10, 101, True], [102, 128, False], [129, 217, True], [218, 252, False], [253, 338, True], [339, 362, False], [363, 447, True], [448, 490, False], [491, 580, True], [581, 581, False]] ```
72,404,096
Trying to run examples or telegram bots from official site - <https://github.com/python-telegram-bot/python-telegram-bot/tree/master/examples> Installed : ``` pip install python-telegram-bot ``` and when i run the example, i got error back that version is not compatible. ``` if __version_info__ < (20, 0, 0, "alpha", 1): raise RuntimeError( f"This example is not compatible with your current PTB version {TG_VER}. To view the " f"{TG_VER} version of this example, " f"visit https://github.com/python-telegram-bot/python-telegram-bot/tree/v{TG_VER}/examples" ) ``` It installs PyPi version 13.12 but example checks different version v20. So, the error is reasonable. How can I let example working?
2022/05/27
[ "https://Stackoverflow.com/questions/72404096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14333315/" ]
Assuming you have only one non-NaN per row, you can `stack`: ```py df.stack().droplevel(1).to_frame(name='Fruits') ``` Output: ``` Fruits 0 Apple 1 Pear 2 Orange 3 Mango 4 banana ``` #### Handling rows with only NaNs: ```py df.stack().droplevel(1).to_frame(name='Fruits').reindex(df.index) ``` Output assuming banana is a NaN: ``` Fruits 0 Apple 1 Pear 2 Orange 3 Mango 4 NaN ```
I think this should give the desired output - `df['Fruit1'].fillna(df['Fruit2'])`
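A self-contained sketch of that one-liner; the column names and data are made up to match the idea of two mutually exclusive fruit columns:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Fruit1": ["Apple", np.nan, "Pear"],
                   "Fruit2": [np.nan, "Mango", np.nan]})

df["Fruits"] = df["Fruit1"].fillna(df["Fruit2"])   # take Fruit1, fall back to Fruit2
print(df["Fruits"].tolist())                       # ['Apple', 'Mango', 'Pear']
```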
72,404,096
Trying to run examples or telegram bots from official site - <https://github.com/python-telegram-bot/python-telegram-bot/tree/master/examples> Installed : ``` pip install python-telegram-bot ``` and when i run the example, i got error back that version is not compatible. ``` if __version_info__ < (20, 0, 0, "alpha", 1): raise RuntimeError( f"This example is not compatible with your current PTB version {TG_VER}. To view the " f"{TG_VER} version of this example, " f"visit https://github.com/python-telegram-bot/python-telegram-bot/tree/v{TG_VER}/examples" ) ``` It installs PyPi version 13.12 but example checks different version v20. So, the error is reasonable. How can I let example working?
2022/05/27
[ "https://Stackoverflow.com/questions/72404096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14333315/" ]
I would use [`bfill()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.bfill.html): ``` df = pd.DataFrame({ 'fruit_1': [None, 'Pear', None, None], 'fruit_2': ['Apple', None, None, None], 'fruit_3': [None, None, 'Orange', None]}) df.bfill(axis=1).iloc[:,0].rename('fruits') # returns ``` ``` 0 Apple 1 Pear 2 Orange 3 None Name: fruits, dtype: object ``` (or [`ffill()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ffill.html) and use the last column) It also works for rows containing `None` only.
I think this should give the desired output - `df['Fruit1'].fillna(df['Fruit2'])`
72,404,096
Trying to run examples or telegram bots from official site - <https://github.com/python-telegram-bot/python-telegram-bot/tree/master/examples> Installed : ``` pip install python-telegram-bot ``` and when i run the example, i got error back that version is not compatible. ``` if __version_info__ < (20, 0, 0, "alpha", 1): raise RuntimeError( f"This example is not compatible with your current PTB version {TG_VER}. To view the " f"{TG_VER} version of this example, " f"visit https://github.com/python-telegram-bot/python-telegram-bot/tree/v{TG_VER}/examples" ) ``` It installs PyPi version 13.12 but example checks different version v20. So, the error is reasonable. How can I let example working?
2022/05/27
[ "https://Stackoverflow.com/questions/72404096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14333315/" ]
Assuming you have only one non-NaN per row, you can `stack`: ```py df.stack().droplevel(1).to_frame(name='Fruits') ``` Output: ``` Fruits 0 Apple 1 Pear 2 Orange 3 Mango 4 banana ``` #### Handling rows with only NaNs: ```py df.stack().droplevel(1).to_frame(name='Fruits').reindex(df.index) ``` Output assuming banana is a NaN: ``` Fruits 0 Apple 1 Pear 2 Orange 3 Mango 4 NaN ```
We can use `combine_first` here: ```py df["Fruits"] = df["Fruit1"].combine_first(df["Fruit2"]) ``` We can also use `np.where`: ```py df["Fruits"] = np.where(df["Fruit1"].isnull(), df["Fruit2"], df["Fruit1"]) ```
72,404,096
Trying to run examples or telegram bots from official site - <https://github.com/python-telegram-bot/python-telegram-bot/tree/master/examples> Installed : ``` pip install python-telegram-bot ``` and when i run the example, i got error back that version is not compatible. ``` if __version_info__ < (20, 0, 0, "alpha", 1): raise RuntimeError( f"This example is not compatible with your current PTB version {TG_VER}. To view the " f"{TG_VER} version of this example, " f"visit https://github.com/python-telegram-bot/python-telegram-bot/tree/v{TG_VER}/examples" ) ``` It installs PyPi version 13.12 but example checks different version v20. So, the error is reasonable. How can I let example working?
2022/05/27
[ "https://Stackoverflow.com/questions/72404096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14333315/" ]
I would use [`bfill()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.bfill.html): ``` df = pd.DataFrame({ 'fruit_1': [None, 'Pear', None, None], 'fruit_2': ['Apple', None, None, None], 'fruit_3': [None, None, 'Orange', None]}) df.bfill(axis=1).iloc[:,0].rename('fruits') # returns ``` ``` 0 Apple 1 Pear 2 Orange 3 None Name: fruits, dtype: object ``` (or [`ffill()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ffill.html) and use the last column) It also works for rows containing `None` only.
We can use `combine_first` here: ```py df["Fruits"] = df["Fruit1"].combine_first(df["Fruit2"]) ``` We can also use `np.where`: ```py df["Fruits"] = np.where(df["Fruit1"].isnull(), df["Fruit2"], df["Fruit1"]) ```
72,404,096
Trying to run examples or telegram bots from official site - <https://github.com/python-telegram-bot/python-telegram-bot/tree/master/examples> Installed : ``` pip install python-telegram-bot ``` and when i run the example, i got error back that version is not compatible. ``` if __version_info__ < (20, 0, 0, "alpha", 1): raise RuntimeError( f"This example is not compatible with your current PTB version {TG_VER}. To view the " f"{TG_VER} version of this example, " f"visit https://github.com/python-telegram-bot/python-telegram-bot/tree/v{TG_VER}/examples" ) ``` It installs PyPi version 13.12 but example checks different version v20. So, the error is reasonable. How can I let example working?
2022/05/27
[ "https://Stackoverflow.com/questions/72404096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14333315/" ]
I would use [`bfill()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.bfill.html): ``` df = pd.DataFrame({ 'fruit_1': [None, 'Pear', None, None], 'fruit_2': ['Apple', None, None, None], 'fruit_3': [None, None, 'Orange', None]}) df.bfill(axis=1).iloc[:,0].rename('fruits') # returns ``` ``` 0 Apple 1 Pear 2 Orange 3 None Name: fruits, dtype: object ``` (or [`ffill()`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ffill.html) and use the last column) It also works for rows containing `None` only.
Assuming you have only one non-NaN per row, you can `stack`: ```py df.stack().droplevel(1).to_frame(name='Fruits') ``` Output: ``` Fruits 0 Apple 1 Pear 2 Orange 3 Mango 4 banana ``` #### Handling rows with only NaNs: ```py df.stack().droplevel(1).to_frame(name='Fruits').reindex(df.index) ``` Output assuming banana is a NaN: ``` Fruits 0 Apple 1 Pear 2 Orange 3 Mango 4 NaN ```