Dataset schema:
- qid: int64 (values 46k to 74.7M)
- question: string (length 54 to 37.8k)
- date: string (length 10)
- metadata: list (length 3)
- response_j: string (length 17 to 26k)
- response_k: string (length 26 to 26k)
52,318,106
HTML: I have a 'sign-up' form in a modal (index.html) JS: The form data is posted to a python flask function: /signup\_user ``` $(function () { $('#signupButton').click(function () { $.ajax({ url: '/signup_user', method: 'POST', data: $('#signupForm').serialize() }) .done(function (data) { console.log('success callback 1', data) }) .fail(function (xhr) { console.log('error callback 1', xhr); }) }) }); ``` Python/Flask: ``` @app.route('/signup_user', methods=['POST']) def signup_user(): #Request data from form and send to database #Check that username isn't already taken #if the username is not already taken if new_user: users.insert_one(new_user) message = "Success" return message #else if username is taken, send message to user viewable in the modal else: message = "Failure" return message return redirect(url_for('index')) ``` I cannot figure out how to get the flask function to return the "Failure" message to the form in the modal so that the user can change the username. Right now, as soon as I click the submit button the form/modal disappears and the 'Failure' message refreshes the entire page to a blank page with the word Failure. How do I get the error message back to display in the form/modal?
2018/09/13
[ "https://Stackoverflow.com/questions/52318106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5632508/" ]
Another way could be ``` student_marks.each.with_object(Hash.new([])){ |(k,v), h| h[v] += [k] } #=> {50=>["Alex", "Matt"], 54=>["Beth"]} ```
Another easy way ``` student_marks.keys.group_by{ |v| student_marks[v] } {50=>["Alex", "Matt"], 54=>["Beth"]} ```
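Separately from the snippets above, the Flask/AJAX problem in this row's question usually reduces to returning a status code the jQuery `.fail()` handler can react to, rather than a bare string. A framework-free Python sketch of that branching (all names hypothetical; a real Flask view would wrap the returned pair with `jsonify`):

```python
# Hypothetical, framework-free sketch of the signup check the question
# describes; Flask itself would return this pair as (jsonify(payload), status).
def signup_user(username, taken_usernames):
    """Return a (payload, status) pair a Flask view could return."""
    if username in taken_usernames:
        # A 409 status makes the jQuery .fail() callback fire, so the
        # modal can stay open and show the "Failure" message.
        return {"message": "Failure"}, 409
    taken_usernames.add(username)
    return {"message": "Success"}, 200

users = {"alice"}
print(signup_user("alice", users))  # conflict branch
print(signup_user("bob", users))    # success branch
```

Because the AJAX call never navigates away, the page and modal stay put; only the callbacks run.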
52,318,106
HTML: I have a 'sign-up' form in a modal (index.html) JS: The form data is posted to a python flask function: /signup\_user ``` $(function () { $('#signupButton').click(function () { $.ajax({ url: '/signup_user', method: 'POST', data: $('#signupForm').serialize() }) .done(function (data) { console.log('success callback 1', data) }) .fail(function (xhr) { console.log('error callback 1', xhr); }) }) }); ``` Python/Flask: ``` @app.route('/signup_user', methods=['POST']) def signup_user(): #Request data from form and send to database #Check that username isn't already taken #if the username is not already taken if new_user: users.insert_one(new_user) message = "Success" return message #else if username is taken, send message to user viewable in the modal else: message = "Failure" return message return redirect(url_for('index')) ``` I cannot figure out how to get the flask function to return the "Failure" message to the form in the modal so that the user can change the username. Right now, as soon as I click the submit button the form/modal disappears and the 'Failure' message refreshes the entire page to a blank page with the word Failure. How do I get the error message back to display in the form/modal?
2018/09/13
[ "https://Stackoverflow.com/questions/52318106", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5632508/" ]
``` student_marks.group_by(&:last).transform_values { |v| v.map(&:first) } #=> {50=>["Alex", "Matt"], 54=>["Beth"]} ``` [Hash#transform\_values](https://ruby-doc.org/core-2.4.0/Hash.html) made its debut in Ruby MRI v2.4.0.
Another easy way ``` student_marks.keys.group_by{ |v| student_marks[v] } {50=>["Alex", "Matt"], 54=>["Beth"]} ```
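For readers more at home in Python, the grouping both Ruby snippets perform maps directly onto `collections.defaultdict`:

```python
from collections import defaultdict

# Sample data mirroring the Ruby snippets above.
student_marks = {"Alex": 50, "Matt": 50, "Beth": 54}

# Equivalent of Hash#group_by / each.with_object(Hash.new([])):
# invert the mapping, collecting names under each mark.
by_mark = defaultdict(list)
for name, mark in student_marks.items():
    by_mark[mark].append(name)

print(dict(by_mark))  # {50: ['Alex', 'Matt'], 54: ['Beth']}
```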
34,734,436
I am using pip on EC2 now, python version is 2.7. 'sudo pip' suddenly doesn't work anymore. ```none [ec2-user@ip-172-31-17-194 ~]$ sudo pip install validate_email Traceback (most recent call last): File "/usr/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3138, in <module> @_call_aside File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3124, in _call_aside f(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3151, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 663, in _build_master return cls._build_from_requirements(__requires__) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 676, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 849, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'pip==6.1.1' distribution was not found and is required by the application [ec2-user@ip-172-31-17-194 ~]$ which pip /usr/local/bin/pip ```
2016/01/12
[ "https://Stackoverflow.com/questions/34734436", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5326788/" ]
first, `which pip` is not going to return the same result as `sudo which pip`, so you should check that out first. you may also consider not running pip as sudo at all. [Is it acceptable & safe to run pip install under sudo?](https://stackoverflow.com/questions/15028648/is-it-acceptable-safe-to-run-pip-install-under-sudo) second, can you try this: ``` easy_install --upgrade pip ``` if you get an error here (regarding pip's wheel support), try this, then run the above command again: ``` easy_install -U setuptools ```
I fixed the same error ("The 'pip==6.1.1' distribution was not found") by using the tip of Wesm: ``` $> which pip && sudo which pip /usr/local/bin/pip /usr/bin/pip ``` So, it seems that the "pip" of the average user and of root are not the same. Will fix it later. Then I ran "sudo easy\_install --upgrade pip" => succeeded. Then I used "sudo /usr/local/bin/pip install " and it works.
34,734,436
I am using pip on EC2 now, python version is 2.7. 'sudo pip' suddenly doesn't work anymore. ```none [ec2-user@ip-172-31-17-194 ~]$ sudo pip install validate_email Traceback (most recent call last): File "/usr/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3138, in <module> @_call_aside File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3124, in _call_aside f(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3151, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 663, in _build_master return cls._build_from_requirements(__requires__) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 676, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 849, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'pip==6.1.1' distribution was not found and is required by the application [ec2-user@ip-172-31-17-194 ~]$ which pip /usr/local/bin/pip ```
2016/01/12
[ "https://Stackoverflow.com/questions/34734436", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5326788/" ]
first, `which pip` is not going to return the same result as `sudo which pip`, so you should check that out first. you may also consider not running pip as sudo at all. [Is it acceptable & safe to run pip install under sudo?](https://stackoverflow.com/questions/15028648/is-it-acceptable-safe-to-run-pip-install-under-sudo) second, can you try this: ``` easy_install --upgrade pip ``` if you get an error here (regarding pip's wheel support), try this, then run the above command again: ``` easy_install -U setuptools ```
I tried a few of these solutions without much success. In the end I just created a new instance using Ubuntu as the operating system; it was already set up properly for Python. If that is not possible, you can try linking the user pip into a folder on root's (sudo) path.
34,734,436
I am using pip on EC2 now, python version is 2.7. 'sudo pip' suddenly doesn't work anymore. ```none [ec2-user@ip-172-31-17-194 ~]$ sudo pip install validate_email Traceback (most recent call last): File "/usr/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3138, in <module> @_call_aside File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3124, in _call_aside f(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3151, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 663, in _build_master return cls._build_from_requirements(__requires__) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 676, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 849, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'pip==6.1.1' distribution was not found and is required by the application [ec2-user@ip-172-31-17-194 ~]$ which pip /usr/local/bin/pip ```
2016/01/12
[ "https://Stackoverflow.com/questions/34734436", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5326788/" ]
first, `which pip` is not going to return the same result as `sudo which pip`, so you should check that out first. you may also consider not running pip as sudo at all. [Is it acceptable & safe to run pip install under sudo?](https://stackoverflow.com/questions/15028648/is-it-acceptable-safe-to-run-pip-install-under-sudo) second, can you try this: ``` easy_install --upgrade pip ``` if you get an error here (regarding pip's wheel support), try this, then run the above command again: ``` easy_install -U setuptools ```
Some additional information for anyone who is also stuck on the same issue: running commands with `sudo` searches for the command in the `/usr/bin` directory. One way to solve this is to specify the complete path to the command while using `sudo`, as commented by @Cissoid in the question's comment section. Or you can create a **symbolic link** (symlink) to that command in the `/usr/bin` directory using the `ln` command. ``` $> ln -s /usr/local/bin/pip /usr/bin/pip ``` The syntax is: ``` $> ln -s /path/to/file /path/to/link ```
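The root cause in this thread, `which pip` and `sudo which pip` resolving differently, is plain PATH lookup; it can be illustrated from Python with `shutil.which` (the PATH values below are illustrative, not the values on any particular box):

```python
import shutil

# sudo typically resets PATH (secure_path), so the same command name can
# resolve to a different binary for root than for the login user.
user_path = "/usr/local/bin:/usr/bin:/bin"  # illustrative user PATH
sudo_path = "/usr/bin:/bin"                 # illustrative sudo PATH

def resolve(cmd, path):
    """Resolve cmd against an explicit PATH, like the shell's lookup does."""
    return shutil.which(cmd, path=path)

# 'sh' lives under /bin on virtually every Unix, so both lookups succeed
# here; a pip installed only under /usr/local/bin would resolve for the
# user PATH but come back None (or different) for the sudo PATH.
print(resolve("sh", user_path))
print(resolve("sh", sudo_path))
```

The symlink answer above works by making the two lookups land on the same file.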
34,734,436
I am using pip on EC2 now, python version is 2.7. 'sudo pip' suddenly doesn't work anymore. ```none [ec2-user@ip-172-31-17-194 ~]$ sudo pip install validate_email Traceback (most recent call last): File "/usr/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3138, in <module> @_call_aside File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3124, in _call_aside f(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3151, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 663, in _build_master return cls._build_from_requirements(__requires__) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 676, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 849, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'pip==6.1.1' distribution was not found and is required by the application [ec2-user@ip-172-31-17-194 ~]$ which pip /usr/local/bin/pip ```
2016/01/12
[ "https://Stackoverflow.com/questions/34734436", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5326788/" ]
I fixed the same error ("The 'pip==6.1.1' distribution was not found") by using the tip of Wesm: ``` $> which pip && sudo which pip /usr/local/bin/pip /usr/bin/pip ``` So, it seems that the "pip" of the average user and of root are not the same. Will fix it later. Then I ran "sudo easy\_install --upgrade pip" => succeeded. Then I used "sudo /usr/local/bin/pip install " and it works.
I tried a few of these solutions without much success. In the end I just created a new instance using Ubuntu as the operating system; it was already set up properly for Python. If that is not possible, you can try linking the user pip into a folder on root's (sudo) path.
34,734,436
I am using pip on EC2 now, python version is 2.7. 'sudo pip' suddenly doesn't work anymore. ```none [ec2-user@ip-172-31-17-194 ~]$ sudo pip install validate_email Traceback (most recent call last): File "/usr/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3138, in <module> @_call_aside File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3124, in _call_aside f(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3151, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 663, in _build_master return cls._build_from_requirements(__requires__) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 676, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 849, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'pip==6.1.1' distribution was not found and is required by the application [ec2-user@ip-172-31-17-194 ~]$ which pip /usr/local/bin/pip ```
2016/01/12
[ "https://Stackoverflow.com/questions/34734436", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5326788/" ]
I fixed the same error ("The 'pip==6.1.1' distribution was not found") by using the tip of Wesm: ``` $> which pip && sudo which pip /usr/local/bin/pip /usr/bin/pip ``` So, it seems that the "pip" of the average user and of root are not the same. Will fix it later. Then I ran "sudo easy\_install --upgrade pip" => succeeded. Then I used "sudo /usr/local/bin/pip install " and it works.
Some additional information for anyone who is also stuck on the same issue: running commands with `sudo` searches for the command in the `/usr/bin` directory. One way to solve this is to specify the complete path to the command while using `sudo`, as commented by @Cissoid in the question's comment section. Or you can create a **symbolic link** (symlink) to that command in the `/usr/bin` directory using the `ln` command. ``` $> ln -s /usr/local/bin/pip /usr/bin/pip ``` The syntax is: ``` $> ln -s /path/to/file /path/to/link ```
34,734,436
I am using pip on EC2 now, python version is 2.7. 'sudo pip' suddenly doesn't work anymore. ```none [ec2-user@ip-172-31-17-194 ~]$ sudo pip install validate_email Traceback (most recent call last): File "/usr/bin/pip", line 5, in <module> from pkg_resources import load_entry_point File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3138, in <module> @_call_aside File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3124, in _call_aside f(*args, **kwargs) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 3151, in _initialize_master_working_set working_set = WorkingSet._build_master() File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 663, in _build_master return cls._build_from_requirements(__requires__) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 676, in _build_from_requirements dists = ws.resolve(reqs, Environment()) File "/usr/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 849, in resolve raise DistributionNotFound(req, requirers) pkg_resources.DistributionNotFound: The 'pip==6.1.1' distribution was not found and is required by the application [ec2-user@ip-172-31-17-194 ~]$ which pip /usr/local/bin/pip ```
2016/01/12
[ "https://Stackoverflow.com/questions/34734436", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5326788/" ]
Some additional information for anyone who is also stuck on the same issue: running commands with `sudo` searches for the command in the `/usr/bin` directory. One way to solve this is to specify the complete path to the command while using `sudo`, as commented by @Cissoid in the question's comment section. Or you can create a **symbolic link** (symlink) to that command in the `/usr/bin` directory using the `ln` command. ``` $> ln -s /usr/local/bin/pip /usr/bin/pip ``` The syntax is: ``` $> ln -s /path/to/file /path/to/link ```
I tried a few of these solutions without much success. In the end I just created a new instance using Ubuntu as the operating system; it was already set up properly for Python. If that is not possible, you can try linking the user pip into a folder on root's (sudo) path.
66,559,129
I am writing table to mysql from python using pymysql to\_sql function. I am having 1000 rows with 200 columns. Query to connect to mysql is below: ``` engine = create_engine("mysql://hostname:password#@localhostname/dbname") conn = engine.connect() writing query: df.to_sql('data'.lower(),schema=schema,conn,'replace',index=False) ``` I am getting below error: ``` OperationalError: (pymysql.err.OperationalError) (1118, 'Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.') ``` I have changed column dtypes to string still am getting above error. Please, help me to solve this error. I am trying to save the table like below. Here, I am providing few columns with create table query.I am getting error while creating the table while saving. CREATE TABLE dbname.table name( `08:00:00` TEXT, `08:08:00` TEXT, `08:16:00` TEXT, `08:24:00` TEXT, `08:32:00` TEXT, `08:40:00` TEXT, `08:48:00` TEXT, `08:56:00` TEXT, `09:04:00` TEXT, `09:12:00` TEXT, `09:20:00` TEXT, `09:28:00` TEXT)
2021/03/10
[ "https://Stackoverflow.com/questions/66559129", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12249443/" ]
You can store the data with 3 methods: 1. localStorage: if you are using a JSON object you can use localStorage.setItem("data123", JSON.stringify(data)) and fetch the data using JSON.parse(localStorage.getItem("data123")). 2. sessionStorage: the syntax is the same as localStorage; just replace local with session. Difference: local is persistent, session gets deleted when the page is closed. 3. React state (preferred method if persistent storage is not required): you can use the useState hook (for functional components) or state = {} (for class components). Usage and examples are readily available one search away. Note: if using React state and the component that takes in the input data is inside a parent component, define the state in the parent and pass the hook definitions down to the child component; otherwise, when the child component changes, the state will be lost if defined in the child.
I think the best way to save form data locally is to preserve it in your form handler class: create a new Handler class with a setter and getter that store the form fields' values as object key/value pairs, updating them whenever the fields update.
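The `JSON.stringify`/`JSON.parse` round-trip the first answer relies on is plain JSON serialization; the same idea in Python, for reference (the field names are illustrative):

```python
import json

form_data = {"username": "alice", "age": 30}  # illustrative form fields

stored = json.dumps(form_data)    # JSON.stringify equivalent
restored = json.loads(stored)     # JSON.parse equivalent

print(restored == form_data)  # True
```

Anything that survives this round-trip unchanged is safe to park in localStorage/sessionStorage as a string.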
12,028,496
How do I pass a query string to a HTML frame? I have the following HTML in index.html: ``` <HTML> <FRAMESET rows="200, 200" border=0> <FRAMESET> <FRAME name=top scrolling=no src="top.html"> <FRAME name=main scrolling=yes src="/cgi-bin/main.py"> </FRAMESET> </FRAMESET> </HTML> ``` The frame src is main.py which is a python cgi script. This is in main.py: ``` import cgi form = cgi.FieldStorage() test = form.getvalue('test') ``` Suppose I call the url using index.html?test=abcde. How do I pass the query string to main.py? Is this possible using javascript?
2012/08/19
[ "https://Stackoverflow.com/questions/12028496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/65406/" ]
[This](https://stackoverflow.com/a/2880929/1273830) should help you get variables from the query string, which you can use to build your custom queryString. Or if you want to pass the query string as it is to the frame, then you could get it a simpler fashion. `var queryString = window.location.href.split('index.html?')[1];` Now, as for passing it to the frame, it should be easy because you'd just be appending the query string to the frame element's `src` attribute.
Not sure if this will work (untested), but perhaps you can load the query parameters onload using jQuery? Here is a proof of concept: ``` <html> <head> //Load jquery here, then do the following: <script type="text/javascript"> $(document).ready(function(){ // window.location.href holds the current window's query params // you will have to write this function (fairly easy). var q = get_query_parameters(window.location.href); var top = $("<frame name=top scrolling=no src='top.html'></frame>"), main = $("<frame name=main scrolling=yes></frame>"); main.attr("src", "/cgi-bin/main.py?" + q); $("#myframeset").append(top).append(main); }); </script> </head> <body> <FRAMESET rows="200, 200" border=0> <FRAMESET id='myframeset'> </FRAMESET> </FRAMESET> </body> </html> ``` Hope that helps.
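On the server side of this question, the query string handed to the frame's `src` is what `cgi.FieldStorage` parses; the same parsing is available directly from `urllib.parse`, shown here on an illustrative URL:

```python
from urllib.parse import urlsplit, parse_qs

# Illustrative URL matching the question's index.html?test=abcde.
url = "http://example.com/index.html?test=abcde"

query = urlsplit(url).query   # the part after '?'
params = parse_qs(query)      # {'test': ['abcde']} (values are lists)

print(params["test"][0])  # abcde
```

So once the JavaScript forwards `?test=abcde` onto `/cgi-bin/main.py`, `form.getvalue('test')` sees the same `'abcde'`.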
22,447,986
I have the following list of string ``` mystring = [ 'FOO_LG_06.ip', 'FOO_LV_06.ip', 'FOO_SP_06.ip', 'FOO_LN_06.id', 'FOO_LV_06.id', 'FOO_SP_06.id'] ``` What I want to do is to print it out so that it gives this: ``` LG.ip LV.ip SP.ip LN.id LV.id SP.id ``` How can I do that in python? I'm stuck with this code: ``` for soth in mystring: print soth ``` In Perl we can do something like this for regex capture: ``` my ($part1,$part2) = $soth =~ /FOO_(\w+)_06(\.\w{2})/; print "$part1"; print "$part2\n"; ```
2014/03/17
[ "https://Stackoverflow.com/questions/22447986", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1380929/" ]
If you want to do this in a manner similar to the one you know in perl, you can use `re.search`: ``` import re mystring = [ 'FOO_LG_06.ip', 'FOO_LV_06.ip', 'FOO_SP_06.ip', 'FOO_LN_06.id', 'FOO_LV_06.id', 'FOO_SP_06.id'] for soth in mystring: matches = re.search(r'FOO_(\w+)_06(\.\w{2})', soth) print(matches.group(1) + matches.group(2)) ``` `matches.group(1)` contains the first capture, `matches.group(2)` contains the second capture. [ideone demo](http://ideone.com/UjiNFm).
A different regex: ``` p = r'[^_]+_([A-Z]+)[^.]+(\..*)' >>> for soth in mystring: ... match = re.search(p, soth) ... print ''.join([match.group(1), match.group(2)]) ``` Output: LG.ip LV.ip SP.ip LN.id LV.id SP.id
1,942,295
Noob @ programming with python and pygtk. I'm creating an application which includes a couple of dialogs for user interaction. ``` #!usr/bin/env python import gtk info = gtk.MessageDialog(type=gtk.DIALOG_INFO, buttons=gtk.BUTTONS_OK) info.set_property('title', 'Test info message') info.set_property('text', 'Message to be displayed in the messagebox goes here') if info.run() == gtk.RESPONSE_OK: info.destroy() ``` This displays my message dialog, however, when you click on the 'OK' button presented in the dialog, nothing happens, the box just freezes. What am I doing wrong here?
2009/12/21
[ "https://Stackoverflow.com/questions/1942295", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234654/" ]
can you give me a last chance? ;) there are some errors in your code: * you did not close a bracket * your syntax in `.set_property` is wrong: use: `.set_property('property', 'value')` but i think they are copy/paste errors. try this code, it works for me. maybe did you forget the `gtk.main()`? ``` import gtk info = gtk.MessageDialog(buttons=gtk.BUTTONS_OK) info.set_property('title', 'Test info message') info.set_property('text', 'Message to be displayed in the messagebox goes here') response = info.run() if response == gtk.RESPONSE_OK: print 'ok' else: print response info.destroy() gtk.main() ```
@mg My bad. Your code is correct (and I guess my initial code was too) The reason my dialog was remaining on the screen is because my gtk.main loop is running on a separate thread. So all I had to was enclose your code (corrected version of mine) in between a ``` gtk.gdk.threads_enter() ``` and a ``` gtk.gdk.threads_leave() ``` and there it was. Thanks for your response.
15,852,455
I need scipy on cygwin, so I figured the quickest way to make it work would have been installing enthought python. However, I then realized I have to make cygwin aware of enthought before I can use it, e.g. so that calling Python from the cygwin shell I get the enthought python (with scipy) rather than the cygwin one. How do I do that? I can guess my question is easy, but I'm just learning about all of this and so please be patient :-)
2013/04/06
[ "https://Stackoverflow.com/questions/15852455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1714385/" ]
There are better options than periodically polling the value of the variable. Polling could miss a variable change, and it requires computational resources even if nothing is happening. You could wrap the variable in a wrapper class and change it only through a setter. If you're using Eclipse, you can ask the debugger to stop whenever the value changes.
Using a wrapper class for your variable like: ``` class VarWrapper{ private Object myVar; public Object getMyVar() { return myVar; } public void setMyVar(Object myVar) { //[1] Here you'll know myVar changed this.myVar = myVar; } } ```
15,852,455
I need scipy on cygwin, so I figured the quickest way to make it work would have been installing enthought python. However, I then realized I have to make cygwin aware of enthought before I can use it, e.g. so that calling Python from the cygwin shell I get the enthought python (with scipy) rather than the cygwin one. How do I do that? I can guess my question is easy, but I'm just learning about all of this and so please be patient :-)
2013/04/06
[ "https://Stackoverflow.com/questions/15852455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1714385/" ]
There are better options than periodically polling the value of the variable. Polling could miss a variable change, and it requires computational resources even if nothing is happening. You could wrap the variable in a wrapper class and change it only through a setter. If you're using Eclipse, you can ask the debugger to stop whenever the value changes.
``` class Test { private int var; public void setVariable(int var){ this.var = var; varChangedCallBack(); } public void varChangedCallBack(){ //Do something on var changed } } ``` As the variable is private above, it can only be changed through the setter. After changing it, we call the callback method, in which you can do whatever you need to do when the variable changes.
15,852,455
I need scipy on cygwin, so I figured the quickest way to make it work would have been installing enthought python. However, I then realized I have to make cygwin aware of enthought before I can use it, e.g. so that calling Python from the cygwin shell I get the enthought python (with scipy) rather than the cygwin one. How do I do that? I can guess my question is easy, but I'm just learning about all of this and so please be patient :-)
2013/04/06
[ "https://Stackoverflow.com/questions/15852455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1714385/" ]
There are better options than periodically polling the value of the variable. Polling could miss a variable change, and it requires computational resources even if nothing is happening. You could wrap the variable in a wrapper class and change it only through a setter. If you're using Eclipse, you can ask the debugger to stop whenever the value changes.
If you have a dedicated thread to check the variable state, then it should be woken with synchronized wait/notify on a shared object: the waiting thread waits on that object, and the writer notifies it after updating. If you do not have a dedicated thread, then this is the Observer pattern. <http://en.wikipedia.org/wiki/Observer_pattern>
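A minimal, hypothetical sketch of the observer/setter approach the answers above describe, written in Python rather than Java for brevity (all names are made up for illustration):

```python
class ObservableVar:
    """Wrap a value and notify registered callbacks when it changes."""

    def __init__(self, value=None):
        self._value = value
        self._observers = []

    def observe(self, callback):
        """Register a callback invoked with each new value."""
        self._observers.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        # The setter is the single choke point: every change is seen here,
        # so no polling is needed and no change can be missed.
        self._value = new_value
        for callback in self._observers:
            callback(new_value)

changes = []
var = ObservableVar(0)
var.observe(changes.append)
var.value = 42
print(changes)  # [42]
```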
30,023,898
I'm creating a little calculator as a project, and I want it to restart when I type yes when it's done. Problem is, I can't seem to figure out how. I'm not a whiz when it comes to python. ``` import sys OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"] def userinput(): while True: try: number = int(input("Number: ")) break except ValueError: print("NOPE...") return number def operation(): while True: operation = input("Multiply/Divide/Add: ") if operation in OPTIONS: break else: print("Not an option.") return operation def playagain(): while True: again = input("Again? Yes/No: ") if again == "Yes" or again == "yes": break elif again == "No" or again == "no": sys.exit(0) else: print("Nope..") def multiply(x,y): z = x * y print(z) def divide(x,y): z = x / y print(z) def add(x,y): z = x + y print(z) def subtract(x,y): z = x - y print(z) while True: operation = operation() x = userinput() y = userinput() if operation == "add" or operation == "Add": add(x,y) elif operation == "divide" or operation == "Divide": divide(x,y) elif operation == "multiply" or operation == "Multiply": multiply(x,y) elif operation == "subtract" or operation == "Subtract": subtract(x,y) playagain() ``` I currently have a break in line 28 because I can't find out how to restart it. If anyone could help me, THANKS!
2015/05/04
[ "https://Stackoverflow.com/questions/30023898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4861144/" ]
You don't need to restart your script, just have a little bit of thought about the design before you code. Taking the script you provided, there are two alterations for this issue: ``` def playagain(): while True: again = input("Again? Yes/No: ") if again == "Yes" or again == "yes": return True elif again == "No" or again == "no": return False else: print("Nope..") ``` Then, where you call `playagain()`, change that to: ``` if not playagain(): break ``` I think I know why you want to restart the script: you have a bug. Python functions are like any other object. When you say: ``` operation = operation() ``` that reassigns the reference to the `operation` function to the string returned by the function. So the second time you call it on restart it fails with: ``` TypeError: 'str' object is not callable ``` RENAME your `operation` function something like `foperation`: ``` def foperation(): ``` then: ``` operation = foperation() ``` So, the complete code becomes: ``` import sys OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"] def userinput(): while True: try: number = int(input("Number: ")) break except ValueError: print("NOPE...") return number def foperation(): while True: operation = input("Multiply/Divide/Add: ") if operation in OPTIONS: break else: print("Not an option.") return operation def playagain(): while True: again = input("Again? Yes/No: ") if again == "Yes" or again == "yes": return True elif again == "No" or again == "no": return False else: print("Nope..") def multiply(x,y): z = x * y print(z) def divide(x,y): z = x / y print(z) def add(x,y): z = x + y print(z) def subtract(x,y): z = x - y print(z) while True: operation = foperation() x = userinput() y = userinput() if operation == "add" or operation == "Add": add(x,y) elif operation == "divide" or operation == "Divide": divide(x,y) elif operation == "multiply" or operation == "Multiply": multiply(x,y) elif operation == "subtract" or operation == "Subtract": subtract(x,y) if not playagain(): break ``` There are many other improvements to this code that I could make, but let's just get this working first.
Use os.execv().... [Restarting a Python Script Within Itself](http://blog.petrzemek.net/2014/03/23/restarting-a-python-script-within-itself/)
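A minimal sketch of what the linked `os.execv()` approach looks like in practice (the `build_restart_argv` and `restart_program` names are hypothetical helpers, not from the linked post):

```python
import os
import sys

def build_restart_argv():
    # Relaunch with the same interpreter and the same script arguments.
    return [sys.executable] + sys.argv

def restart_program():
    # os.execv replaces the current process image in place;
    # code after this call never runs in the old process.
    os.execv(sys.executable, build_restart_argv())
```

Calling `restart_program()` where the script currently breaks out of its loop would start it over from scratch with the same command line.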
30,023,898
I'm creating a little calculator as a project And I want it to restart when it type yes when it's done. Problem is, I can't seem to figure out how. I'm not a whiz when it comes to python. ``` import sys OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"] def userinput(): while True: try: number = int(input("Number: ")) break except ValueError: print("NOPE...") return number def operation(): while True: operation = input("Multiply/Divide/Add: ") if operation in OPTIONS: break else: print("Not an option.") return operation def playagain(): while True: again = input("Again? Yes/No: ") if again == "Yes" or again == "yes": break elif again == "No" or again == "no": sys.exit(0) else: print("Nope..") def multiply(x,y): z = x * y print(z) def divide(x,y): z = x / y print(z) def add(x,y): z = x + y print(z) def subtract(x,y): z = x - y print(z) while True: operation = operation() x = userinput() y = userinput() if operation == "add" or operation == "Add": add(x,y) elif operation == "divide" or operation == "Divide": divide(x,y) elif operation == "multiply" or operation == "Multiply": multiply(x,y) elif operation == "subtract" or operation == "Subtract": subtract(x,y) playagain() ``` I currently have a break in line 28 because I can't find out how to restart it. If anyone could help me, THANKS!
2015/05/04
[ "https://Stackoverflow.com/questions/30023898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4861144/" ]
Instead of restarting the script I made it so you could use it forever and only the user himself can exit it. I only changed the playagain() and the while loop at the end, read the comments for the explanation: ``` import sys OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"] # python2 compatibility, you don't need to care about this ;-) try: input = raw_input except: pass def userinput(): while True: try: number = int(input("Number: ")) break except ValueError: print("NOPE...") return number def operation(): while True: operation = input("Multiply/Divide/Add: ") if operation in OPTIONS: break else: print("Not an option.") return operation def playagain(): """ return True if user input is "yes" and False if user input is "no" does this until user input is yes or no """ again = input("Again? Yes/No: ") if again.lower() == "yes": return True elif again.lower() == "no": return False else: # reruns the code --> until user input is 'yes' or 'no' return playagain() def multiply(x,y): z = x * y print(z) def divide(x,y): z = x / y print(z) def add(x,y): z = x + y print(z) def subtract(x,y): z = x - y print(z) # a main in python: this will be executed when run as a script # but not when you import something from this if __name__ == '__main__': play = True while play: # assign to a new name so the operation() function is not shadowed chosen = operation() x = userinput() y = userinput() if chosen == "add" or chosen == "Add": add(x,y) elif chosen == "divide" or chosen == "Divide": divide(x,y) elif chosen == "multiply" or chosen == "Multiply": multiply(x,y) elif chosen == "subtract" or chosen == "Subtract": subtract(x,y) # player/user can exit the loop if he enters "no" and therefore end the loop play = playagain() ```
Use os.execv().... [Restarting a Python Script Within Itself](http://blog.petrzemek.net/2014/03/23/restarting-a-python-script-within-itself/)
30,023,898
I'm creating a little calculator as a project And I want it to restart when it type yes when it's done. Problem is, I can't seem to figure out how. I'm not a whiz when it comes to python. ``` import sys OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"] def userinput(): while True: try: number = int(input("Number: ")) break except ValueError: print("NOPE...") return number def operation(): while True: operation = input("Multiply/Divide/Add: ") if operation in OPTIONS: break else: print("Not an option.") return operation def playagain(): while True: again = input("Again? Yes/No: ") if again == "Yes" or again == "yes": break elif again == "No" or again == "no": sys.exit(0) else: print("Nope..") def multiply(x,y): z = x * y print(z) def divide(x,y): z = x / y print(z) def add(x,y): z = x + y print(z) def subtract(x,y): z = x - y print(z) while True: operation = operation() x = userinput() y = userinput() if operation == "add" or operation == "Add": add(x,y) elif operation == "divide" or operation == "Divide": divide(x,y) elif operation == "multiply" or operation == "Multiply": multiply(x,y) elif operation == "subtract" or operation == "Subtract": subtract(x,y) playagain() ``` I currently have a break in line 28 because I can't find out how to restart it. If anyone could help me, THANKS!
2015/05/04
[ "https://Stackoverflow.com/questions/30023898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4861144/" ]
You don't need to restart your script, just have a little bit of thought about the design before you code. Taking the script you provided, there are two alterations for this issue: ``` def playagain(): while True: again = input("Again? Yes/No: ") if again == "Yes" or again == "yes": return True elif again == "No" or again == "no": return False else: print("Nope..") ``` Then, where you call `playagain()`, change that to: ``` if not playagain(): break ``` I think I know why you want to restart the script: you have a bug. Python functions are like any other object. When you say: ``` operation = operation() ``` that reassigns the reference to the `operation` function to the string returned by the function. So the second time you call it on restart it fails with: ``` TypeError: 'str' object is not callable ``` RENAME your `operation` function something like `foperation`: ``` def foperation(): ``` then: ``` operation = foperation() ``` So, the complete code becomes: ``` import sys OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"] def userinput(): while True: try: number = int(input("Number: ")) break except ValueError: print("NOPE...") return number def foperation(): while True: operation = input("Multiply/Divide/Add: ") if operation in OPTIONS: break else: print("Not an option.") return operation def playagain(): while True: again = input("Again? Yes/No: ") if again == "Yes" or again == "yes": return True elif again == "No" or again == "no": return False else: print("Nope..") def multiply(x,y): z = x * y print(z) def divide(x,y): z = x / y print(z) def add(x,y): z = x + y print(z) def subtract(x,y): z = x - y print(z) while True: operation = foperation() x = userinput() y = userinput() if operation == "add" or operation == "Add": add(x,y) elif operation == "divide" or operation == "Divide": divide(x,y) elif operation == "multiply" or operation == "Multiply": multiply(x,y) elif operation == "subtract" or operation == "Subtract": subtract(x,y) if not playagain(): break ``` There are many other improvements to this code that I could make, but let's just get this working first.
Instead of restarting the script I made it so you could use it forever and only the user himself can exit it. I only changed the playagain() and the while loop at the end, read the comments for the explanation: ``` import sys OPTIONS = ["Divide", "divide", "Multiply", "multiply", "Add", "add", "Subtract", "subtract"] # python2 compatibility, you don't need to care about this ;-) try: input = raw_input except: pass def userinput(): while True: try: number = int(input("Number: ")) break except ValueError: print("NOPE...") return number def operation(): while True: operation = input("Multiply/Divide/Add: ") if operation in OPTIONS: break else: print("Not an option.") return operation def playagain(): """ return True if user input is "yes" and False if user input is "no" does this until user input is yes or no """ again = input("Again? Yes/No: ") if again.lower() == "yes": return True elif again.lower() == "no": return False else: # reruns the code --> until user input is 'yes' or 'no' return playagain() def multiply(x,y): z = x * y print(z) def divide(x,y): z = x / y print(z) def add(x,y): z = x + y print(z) def subtract(x,y): z = x - y print(z) # a main in python: this will be executed when run as a script # but not when you import something from this if __name__ == '__main__': play = True while play: # assign to a new name so the operation() function is not shadowed chosen = operation() x = userinput() y = userinput() if chosen == "add" or chosen == "Add": add(x,y) elif chosen == "divide" or chosen == "Divide": divide(x,y) elif chosen == "multiply" or chosen == "Multiply": multiply(x,y) elif chosen == "subtract" or chosen == "Subtract": subtract(x,y) # player/user can exit the loop if he enters "no" and therefore end the loop play = playagain() ```
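As a side note, the long if/elif chains over operation names in the answers above can be collapsed with a dispatch dictionary. This is a sketch of that idea, not code from either poster:

```python
def add(x, y): return x + y
def subtract(x, y): return x - y
def multiply(x, y): return x * y
def divide(x, y): return x / y

# Map the lower-cased operation name to its function.
OPERATIONS = {"add": add, "subtract": subtract,
              "multiply": multiply, "divide": divide}

def calculate(name, x, y):
    # Normalising with .lower() removes the need to list both
    # "Add" and "add" spellings in the dispatch table.
    return OPERATIONS[name.lower()](x, y)
```

The main loop then becomes a single `calculate(operation, x, y)` call, and adding a new operation means adding one dictionary entry.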
15,612,362
Here is python code: ``` def is_palindrome(s): return revers(s) == s def revers(s): ret = '' for ch in s: ret = ch + ret return ret print is_palindrome('RACECAR') # that will print true ``` when i convert that function to php. ``` function is_palindrome($string){ if (strrev($string) == $string) return true; return false; } $word = "RACECAR"; var_dump(is_palindrome($word)); // true ``` Both functions works fine but, how can i revers string with php in loop ?? ``` $string = str_split(hello); $output = ''; foreach($string as $c){ $output .= $c; } print $output; // output hello //i did this, ``` that's work find but is there any way to do that in better way ? $string = "hello"; $lent = strlen($string); ``` $ret = ''; for($i = $lent; ($i > 0) or ($i == 0); $i--) { $ret .= $string[$i]; #$lent = $lent - 1; } print $output; //output olleh ```
2013/03/25
[ "https://Stackoverflow.com/questions/15612362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/740182/" ]
Replace ``` $output .= $c; ``` with ``` $output = $c . $output; ```
strrev() is a function that reverses a string in PHP. <http://php.net/manual/en/function.strrev.php> ``` $s = "foobar"; echo strrev($s); //raboof ``` If you want to check if a word is a palindrome: ``` function is_palindrome($word){ return strrev($word) == $word } $s = "RACECAR"; echo $s." is ".((is_palindrome($s))?"":"NOT ")."a palindrome"; ```
15,612,362
Here is python code: ``` def is_palindrome(s): return revers(s) == s def revers(s): ret = '' for ch in s: ret = ch + ret return ret print is_palindrome('RACECAR') # that will print true ``` when i convert that function to php. ``` function is_palindrome($string){ if (strrev($string) == $string) return true; return false; } $word = "RACECAR"; var_dump(is_palindrome($word)); // true ``` Both functions works fine but, how can i revers string with php in loop ?? ``` $string = str_split(hello); $output = ''; foreach($string as $c){ $output .= $c; } print $output; // output hello //i did this, ``` that's work find but is there any way to do that in better way ? $string = "hello"; $lent = strlen($string); ``` $ret = ''; for($i = $lent; ($i > 0) or ($i == 0); $i--) { $ret .= $string[$i]; #$lent = $lent - 1; } print $output; //output olleh ```
2013/03/25
[ "https://Stackoverflow.com/questions/15612362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/740182/" ]
I haven't tried that code, but I think it should work: ``` $string = "hello"; $output = ""; $arr = array_reverse(str_split($string)); // Split the string into an array of characters, then reverse => ["o","l","l","e","h"] foreach($arr as $char) { $output .= $char; } echo $output; ``` Another way: ``` $string = "hello"; $output = ""; for($i = strlen($string) - 1; $i >= 0; $i--) { $output .= substr($string, $i, 1); } echo $output; ```
strrev() is a function that reverses a string in PHP. <http://php.net/manual/en/function.strrev.php> ``` $s = "foobar"; echo strrev($s); //raboof ``` If you want to check if a word is a palindrome: ``` function is_palindrome($word){ return strrev($word) == $word } $s = "RACECAR"; echo $s." is ".((is_palindrome($s))?"":"NOT ")."a palindrome"; ```
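For completeness on the Python side (the question starts from a Python `revers` function), the loop-free Python idiom that mirrors PHP's `strrev` is extended slicing with a step of -1:

```python
def revers(s):
    # s[::-1] walks the string from the end to the start,
    # producing the reversed copy without an explicit loop.
    return s[::-1]

def is_palindrome(s):
    return revers(s) == s
```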
15,612,362
Here is python code: ``` def is_palindrome(s): return revers(s) == s def revers(s): ret = '' for ch in s: ret = ch + ret return ret print is_palindrome('RACECAR') # that will print true ``` when i convert that function to php. ``` function is_palindrome($string){ if (strrev($string) == $string) return true; return false; } $word = "RACECAR"; var_dump(is_palindrome($word)); // true ``` Both functions works fine but, how can i revers string with php in loop ?? ``` $string = str_split(hello); $output = ''; foreach($string as $c){ $output .= $c; } print $output; // output hello //i did this, ``` that's work find but is there any way to do that in better way ? $string = "hello"; $lent = strlen($string); ``` $ret = ''; for($i = $lent; ($i > 0) or ($i == 0); $i--) { $ret .= $string[$i]; #$lent = $lent - 1; } print $output; //output olleh ```
2013/03/25
[ "https://Stackoverflow.com/questions/15612362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/740182/" ]
Can't be shorter I guess. With a loop :) ``` $word = "Hello"; $result = ''; foreach(str_split($word) as $letter) $result = $letter . $result; echo $result; ``` (Note: `foreach` cannot iterate over a string directly in PHP, so the string is split into an array of characters first.)
strrev() is a function that reverses a string in PHP. <http://php.net/manual/en/function.strrev.php> ``` $s = "foobar"; echo strrev($s); //raboof ``` If you want to check if a word is a palindrome: ``` function is_palindrome($word){ return strrev($word) == $word } $s = "RACECAR"; echo $s." is ".((is_palindrome($s))?"":"NOT ")."a palindrome"; ```
15,612,362
Here is python code: ``` def is_palindrome(s): return revers(s) == s def revers(s): ret = '' for ch in s: ret = ch + ret return ret print is_palindrome('RACECAR') # that will print true ``` when i convert that function to php. ``` function is_palindrome($string){ if (strrev($string) == $string) return true; return false; } $word = "RACECAR"; var_dump(is_palindrome($word)); // true ``` Both functions works fine but, how can i revers string with php in loop ?? ``` $string = str_split(hello); $output = ''; foreach($string as $c){ $output .= $c; } print $output; // output hello //i did this, ``` that's work find but is there any way to do that in better way ? $string = "hello"; $lent = strlen($string); ``` $ret = ''; for($i = $lent; ($i > 0) or ($i == 0); $i--) { $ret .= $string[$i]; #$lent = $lent - 1; } print $output; //output olleh ```
2013/03/25
[ "https://Stackoverflow.com/questions/15612362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/740182/" ]
Replace ``` $output .= $c; ``` with ``` $output = $c . $output; ```
I haven't tried that code, but I think it should work: ``` $string = "hello"; $output = ""; $arr = array_reverse(str_split($string)); // Split the string into an array of characters, then reverse => ["o","l","l","e","h"] foreach($arr as $char) { $output .= $char; } echo $output; ``` Another way: ``` $string = "hello"; $output = ""; for($i = strlen($string) - 1; $i >= 0; $i--) { $output .= substr($string, $i, 1); } echo $output; ```
15,612,362
Here is python code: ``` def is_palindrome(s): return revers(s) == s def revers(s): ret = '' for ch in s: ret = ch + ret return ret print is_palindrome('RACECAR') # that will print true ``` when i convert that function to php. ``` function is_palindrome($string){ if (strrev($string) == $string) return true; return false; } $word = "RACECAR"; var_dump(is_palindrome($word)); // true ``` Both functions works fine but, how can i revers string with php in loop ?? ``` $string = str_split(hello); $output = ''; foreach($string as $c){ $output .= $c; } print $output; // output hello //i did this, ``` that's work find but is there any way to do that in better way ? $string = "hello"; $lent = strlen($string); ``` $ret = ''; for($i = $lent; ($i > 0) or ($i == 0); $i--) { $ret .= $string[$i]; #$lent = $lent - 1; } print $output; //output olleh ```
2013/03/25
[ "https://Stackoverflow.com/questions/15612362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/740182/" ]
Replace ``` $output .= $c; ``` with ``` $output = $c . $output; ```
Can't be shorter I guess. With a loop :) ``` $word = "Hello"; $result = ''; foreach(str_split($word) as $letter) $result = $letter . $result; echo $result; ``` (Note: `foreach` cannot iterate over a string directly in PHP, so the string is split into an array of characters first.)
15,612,362
Here is python code: ``` def is_palindrome(s): return revers(s) == s def revers(s): ret = '' for ch in s: ret = ch + ret return ret print is_palindrome('RACECAR') # that will print true ``` when i convert that function to php. ``` function is_palindrome($string){ if (strrev($string) == $string) return true; return false; } $word = "RACECAR"; var_dump(is_palindrome($word)); // true ``` Both functions works fine but, how can i revers string with php in loop ?? ``` $string = str_split(hello); $output = ''; foreach($string as $c){ $output .= $c; } print $output; // output hello //i did this, ``` that's work find but is there any way to do that in better way ? $string = "hello"; $lent = strlen($string); ``` $ret = ''; for($i = $lent; ($i > 0) or ($i == 0); $i--) { $ret .= $string[$i]; #$lent = $lent - 1; } print $output; //output olleh ```
2013/03/25
[ "https://Stackoverflow.com/questions/15612362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/740182/" ]
Can't be shorter I guess. With a loop :) ``` $word = "Hello"; $result = ''; foreach(str_split($word) as $letter) $result = $letter . $result; echo $result; ``` (Note: `foreach` cannot iterate over a string directly in PHP, so the string is split into an array of characters first.)
I haven't tried that code, but I think it should work: ``` $string = "hello"; $output = ""; $arr = array_reverse(str_split($string)); // Split the string into an array of characters, then reverse => ["o","l","l","e","h"] foreach($arr as $char) { $output .= $char; } echo $output; ``` Another way: ``` $string = "hello"; $output = ""; for($i = strlen($string) - 1; $i >= 0; $i--) { $output .= substr($string, $i, 1); } echo $output; ```
24,931,465
Hi I am very new to python, here i m trying to open a xls file in python code but it is showing me some error as below. Code: ``` from xlrd import open_workbook import os.path wb = open_workbook('C:\Users\xxxx\Desktop\a.xlsx') Error:Traceback (most recent call last): File "C:\Python27\1.py", line 3, in <module> wb = open_workbook('C:\Users\xxxx\Desktop\a.xlsx') File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 429, in open_workbook biff_version = bk.getbof(XL_WORKBOOK_GLOBALS) File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 1545, in getbof bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8]) File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 1539, in bof_error raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found 'PK\x03\x04\x14\x00\x06\x00' ``` need help guyz
2014/07/24
[ "https://Stackoverflow.com/questions/24931465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3872486/" ]
This is a version conflict issue. Your Excel sheet format and the format that xlrd expects are different. You could try to save the Excel sheet in a different format until you find what xlrd expects.
Not familiar with xlrd, but nothing wrong appears on my Mac. Following @jewirth's suggestion, you can try renaming the suffix to `.xls` (the old format), then reopening the file or converting it back to `.xlsx`.
24,931,465
Hi I am very new to python, here i m trying to open a xls file in python code but it is showing me some error as below. Code: ``` from xlrd import open_workbook import os.path wb = open_workbook('C:\Users\xxxx\Desktop\a.xlsx') Error:Traceback (most recent call last): File "C:\Python27\1.py", line 3, in <module> wb = open_workbook('C:\Users\xxxx\Desktop\a.xlsx') File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 429, in open_workbook biff_version = bk.getbof(XL_WORKBOOK_GLOBALS) File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 1545, in getbof bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8]) File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 1539, in bof_error raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found 'PK\x03\x04\x14\x00\x06\x00' ``` need help guyz
2014/07/24
[ "https://Stackoverflow.com/questions/24931465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3872486/" ]
This is a version conflict issue. Your Excel sheet format and the format that xlrd expects are different. You could try to save the Excel sheet in a different format until you find what xlrd expects.
``` from xlrd import open_workbook import os.path wb = open_workbook(r'C:\Users\XXXX\Desktop\a.xlsx') print wb Output : <xlrd.book.Book object at 0x0260E490> ``` Opened the Excel file using a raw-string path (the `r''` prefix) and it shows the workbook object, so it's working normally here. Try checking your xlrd version and updating it, or change the Excel file format from '.xlsx' to '.xls' and try again.
24,931,465
Hi I am very new to python, here i m trying to open a xls file in python code but it is showing me some error as below. Code: ``` from xlrd import open_workbook import os.path wb = open_workbook('C:\Users\xxxx\Desktop\a.xlsx') Error:Traceback (most recent call last): File "C:\Python27\1.py", line 3, in <module> wb = open_workbook('C:\Users\xxxx\Desktop\a.xlsx') File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 429, in open_workbook biff_version = bk.getbof(XL_WORKBOOK_GLOBALS) File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 1545, in getbof bof_error('Expected BOF record; found %r' % self.mem[savpos:savpos+8]) File "C:\Python27\lib\site-packages\xlrd\__init__.py", line 1539, in bof_error raise XLRDError('Unsupported format, or corrupt file: ' + msg) xlrd.biffh.XLRDError: Unsupported format, or corrupt file: Expected BOF record; found 'PK\x03\x04\x14\x00\x06\x00' ``` need help guyz
2014/07/24
[ "https://Stackoverflow.com/questions/24931465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3872486/" ]
This is a version conflict issue. Your Excel sheet format and the format that xlrd expects are different. You could try to save the Excel sheet in a different format until you find what xlrd expects.
You are getting that error because you are using an old version of [xlrd](https://pypi.python.org/pypi/xlrd) which doesn't support xlsx. You need to upgrade to a recent version of xlrd.
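The `'PK\x03\x04...'` bytes in the traceback are the ZIP signature that begins every `.xlsx` file, which is exactly why an old `xlrd` build rejects it as "Unsupported format". A sketch that sniffs the signature before picking a reader (`sniff_excel_format` is a hypothetical helper, not part of xlrd):

```python
def sniff_excel_format(path):
    # .xlsx files are ZIP archives and start with b'PK\x03\x04';
    # legacy .xls files start with the OLE2 signature b'\xd0\xcf\x11\xe0'.
    with open(path, "rb") as f:
        head = f.read(4)
    if head == b"PK\x03\x04":
        return "xlsx"
    if head == b"\xd0\xcf\x11\xe0":
        return "xls"
    return "unknown"
```

On `"xlsx"` one could dispatch to a reader that supports the newer format, on `"xls"` to old xlrd, instead of letting the open fail with a cryptic BOF error.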
33,686,880
I have a `libpython27.a` file: how to know whether it is 32-bit or 64-bit, on Windows 7 x64?
2015/11/13
[ "https://Stackoverflow.com/questions/33686880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/395857/" ]
Try `dumpbin /headers "libpython27.a"`. ([dumpbin reference](https://msdn.microsoft.com/en-us/library/c1h23y6c.aspx)) The output will contain `FILE HEADER VALUES 14C machine (x86)` or `FILE HEADER VALUES 8664 machine (x64)` --- Note that if you get an error message like: ``` E:\temp>dumpbin /headers "libpython27.a" LINK: extra operand `libpython27.a' Try `LINK --help' for more information. ``` It means there is a copy of the GNU link utility somewhere in the search path. Make sure you use the correct `link.exe` (e.g. the one provided in `C:\Program Files (x86)\Common Files\Microsoft\Visual C++ for Python\9.0\VC\bin`). It also requires `mspdb80.dll`, which is in the same folder or something in PATH, otherwise you'll get the error message: [![enter image description here](https://i.stack.imgur.com/84mPY.png)](https://i.stack.imgur.com/84mPY.png)
When starting the Python interpreter in the terminal/command line you may also see a line like: > > Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit > (AMD64)] on win32 > > > Where [MSC v.1500 64 bit (AMD64)] means 64-bit Python. Or Try using ctypes to get the size of a void pointer: ``` import ctypes print ctypes.sizeof(ctypes.c_voidp) ``` It'll be 4 for 32 bit or 8 for 64 bit.
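Two more interpreter-side checks in the same spirit as the `ctypes` one above: `struct.calcsize('P')` and `sys.maxsize` both reflect the interpreter's pointer width, so either can distinguish a 32-bit from a 64-bit Python:

```python
import struct
import sys

def interpreter_bits():
    # 'P' is the struct format code for a C void*; its size is 4 or 8 bytes.
    return struct.calcsize("P") * 8

def is_64bit():
    # sys.maxsize is 2**31 - 1 on 32-bit builds and 2**63 - 1 on 64-bit ones.
    return sys.maxsize > 2**32
```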
33,686,880
I have a `libpython27.a` file: how to know whether it is 32-bit or 64-bit, on Windows 7 x64?
2015/11/13
[ "https://Stackoverflow.com/questions/33686880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/395857/" ]
When starting the Python interpreter in the terminal/command line you may also see a line like: > > Python 2.7.2 (default, Jun 12 2011, 14:24:46) [MSC v.1500 64 bit > (AMD64)] on win32 > > > Where [MSC v.1500 64 bit (AMD64)] means 64-bit Python. Or Try using ctypes to get the size of a void pointer: ``` import ctypes print ctypes.sizeof(ctypes.c_voidp) ``` It'll be 4 for 32 bit or 8 for 64 bit.
On Linux you can use: `objdump -a libpython27.a|grep 'file format'`. Example: ``` f@f-VirtualBox:/media/code$ objdump -a libpython27.a|grep 'file format' dywkt.o: file format pe-i386 dywkh.o: file format pe-i386 dywks01051.o: file format pe-i386 dywks01050.o: file format pe-i386 dywks01049.o: file format pe-i386 dywks01048.o: file format pe-i386 [...] ```
33,686,880
I have a `libpython27.a` file: how to know whether it is 32-bit or 64-bit, on Windows 7 x64?
2015/11/13
[ "https://Stackoverflow.com/questions/33686880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/395857/" ]
Try `dumpbin /headers "libpython27.a"`. ([dumpbin reference](https://msdn.microsoft.com/en-us/library/c1h23y6c.aspx)) The output will contain `FILE HEADER VALUES 14C machine (x86)` or `FILE HEADER VALUES 8664 machine (x64)` --- Note that if you get an error message like: ``` E:\temp>dumpbin /headers "libpython27.a" LINK: extra operand `libpython27.a' Try `LINK --help' for more information. ``` It means there is a copy of the GNU link utility somewhere in the search path. Make sure you use the correct `link.exe` (e.g. the one provided in `C:\Program Files (x86)\Common Files\Microsoft\Visual C++ for Python\9.0\VC\bin`). It also requires `mspdb80.dll`, which is in the same folder or something in PATH, otherwise you'll get the error message: [![enter image description here](https://i.stack.imgur.com/84mPY.png)](https://i.stack.imgur.com/84mPY.png)
On Linux you can use: `objdump -a libpython27.a|grep 'file format'`. Example: ``` f@f-VirtualBox:/media/code$ objdump -a libpython27.a|grep 'file format' dywkt.o: file format pe-i386 dywkh.o: file format pe-i386 dywks01051.o: file format pe-i386 dywks01050.o: file format pe-i386 dywks01049.o: file format pe-i386 dywks01048.o: file format pe-i386 [...] ```
68,764,541
I was reading through the [PEP 526](https://www.python.org/dev/peps/pep-0526/) documentation and I was wondering what is the proper way to annotate a class instance. I have not found the answer in the documentation. I have the following module: ```py class global_variables: # Class body global_variables_dictionary: global_variables = global_variables("application.yaml") ``` Is `something: <class_name> = class_name()` the correct way to do this? Thanks
2021/08/12
[ "https://Stackoverflow.com/questions/68764541", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10588212/" ]
**Note**: "Best Practice" for something like this is difficult to define, since everyone's situation is likely different. That being said, one of our projects has a similar situation to yours: we use Git Flow, and our `develop` branch build numbers are always different than the `release` branch build numbers. Our potentially ideal solution has not been implemented, but would likely be similar to your suggested Approach 8, where we would inject the version into the build pipeline without it being hard-coded in a commit (i.e. don't even modify the version file at all). The downside of this though is you can't know what version is represented by a specific commit based on code alone. But you could tag the commit with a specific version, which is probably what we would do if we implemented that. We could also bake the commit ID along with version info into the artifact meta data for easy lookup. The solution we currently use is a combination of Approaches 4, 5, and 7. We separate version files (your Approach 5), and every time we create a `release` (or `hotfix`) branch, the first commit only changes the version file to the upcoming release version (your Approach 7). We make sure that `release` *always* has the tip of `main` in it, so that anytime we deploy `release` to production we can cleanly merge `release` to `main`. (Note we still use `--no-ff` as suggested by Git Flow but the point is we *could* fast-forward if we wanted to.) Now, after you complete the `release` branch into `main`, Git Flow suggests merging `release` back to `develop`, but we find merging `main` back to `develop` slightly more efficient so that the tip of `main` is also on `develop`, but occasionally we also merge `release` back into `develop` before deployment if important bug fixes appear on `release`. Either way, both of those merges back to `develop` will always have conflicts with the version files on `develop`, and we use your Approach 4 to automate choosing the `develop` version of those files. This enables the merge back to be fully automated, however, sometimes there are still other conflicts that have to be resolved manually, just as a course of normal development happening on `develop` and `release` simultaneously. But at least it's usually clean. Note that a side effect of our approach is that our version files are *always* different on `develop` and `main`, and that's fine with us.
What about using an external tool to manage the version? We use [GitVersion](https://github.com/GitTools/GitVersion) for this. Now I am not sure if there is a smarter way, but a brute-force one is to have something like this `<version>${env.GitVersion_SemVer}</version>` in your pom.xml, where `env.GitVersion_SemVer` is an output from GitVersion.
26,472,868
I have a python script (analyze.py) which takes a filename as a parameter and analyzes it. When it is done with analysis, it waits for another file name. What I want to do is: 1. Send file name as a parameter from PHP to Python. 2. Run analyze.py in the background as a daemon with the filename that came from PHP. I can post the parameter from PHP as a command line argument to Python but I cannot send parameter to python script that already runs at the background. Any ideas?
2014/10/20
[ "https://Stackoverflow.com/questions/26472868", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1430739/" ]
The obvious answer here is to either: 1. Run `analyze.py` once per filename, instead of running it as a daemon. 2. Pass `analyze.py` a whole slew of filenames at startup, instead of passing them one at a time. But there may be a reason neither obvious answer will work in your case. If so, then you need some form of *inter-process communication*. There are a few alternatives: * Use the Python script's standard input to pass it data, by writing to it from the (PHP) parent process. (I'm not sure how to do this from PHP, or even if it's possible, but it's pretty simple from Python, sh, and many other languages, so …) * Open a TCP socket, Unix socket, named pipe, anonymous pipe, etc., giving one end to the Python child and keeping the other in the PHP parent. (Note that the first one is really just a special case of this oneβ€”under the covers, standard input is basically just an anonymous pipe between the child and parent.) * Open a region of shared memory, or an `mmap`-ed file, or similar in both parent and child. This probably also requires sharing a semaphore that you can use to build a condition or event, so the child has some way to wait on the next input. * Use some higher-level API that wraps up one of the aboveβ€”e.g., write the Python child as a simple HTTP service (or JSON-RPC or ZeroMQ or pretty much anything you can find good libraries for in both languages); have the PHP code start that service and make requests as a client.
Here is what I did. PHP Part: ``` <?php $param1 = "filename"; $command = "python analyze.py "; $command .= " $param1"; $pid = popen( $command,"r"); echo "<body><pre>"; while( !feof( $pid ) ) { echo fread($pid, 256); flush(); ob_flush(); } pclose($pid); ?> ``` Python Part: 1. I used [JSON-RPC](https://github.com/gerold-penz/python-jsonrpc) to create an HTTP service that wraps my Python script (runs forever). 2. Created an HTTP client that calls the method of the HTTP service. 3. Printed the results in JSON format. Works like a charm.
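A minimal sketch of the standard-input alternative from the first answer: the Python side reads one filename per line, so a single long-lived process can keep receiving names that the PHP parent writes to its pipe. The `analyze` body here is a hypothetical placeholder for the real analysis:

```python
import sys

def analyze(filename):
    # Placeholder for the real analysis step (hypothetical stand-in).
    return "analyzed %s" % filename

def serve(stream):
    # Blocks on the stream until the parent closes the pipe;
    # each non-empty line is treated as one filename.
    results = []
    for line in stream:
        name = line.strip()
        if name:
            results.append(analyze(name))
    return results

if __name__ == "__main__":
    for result in serve(sys.stdin):
        print(result)
```

On the PHP side this would pair with `proc_open` rather than one `popen` per file, so the Python process is started once and fed filenames over time.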
46,877,384
I am reading a text file in Python (500 rows) and it looks like: ``` File Input: 0082335401 0094446049 01008544409 01037792084 01040763890 ``` I wanted to ask whether it is possible to insert one space after the 5th character in each line: ``` Desired Output: 00823 35401 00944 46049 01008 544409 01037 792084 01040 763890 ``` I have tried the code below ``` st = " ".join(st[i:i + 5] for i in range(0, len(st), 5)) ``` but the following output was returned on executing it: ``` 00823 35401 0094 44604 9 010 08544 409 0 10377 92084 0104 07638 90 ``` I am a novice in Python. Any help would make a difference.
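For reference, the chunking expression above works when applied to each line separately; applied to the whole file contents at once it runs across newlines, which is why the output drifts out of alignment. A per-line sketch (sample values taken from the question):

```python
lines = ["0082335401", "0094446049", "01008544409"]

# Split each line after its fifth character, keeping the remainder whole,
# instead of chunking the entire file contents in one pass.
formatted = [line[:5] + " " + line[5:] for line in lines]
print(formatted)  # ['00823 35401', '00944 46049', '01008 544409']
```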
2017/10/22
[ "https://Stackoverflow.com/questions/46877384", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Inside your k6 script use the url `host.docker.internal` to access something running on the host machine. For example to access a service running on the host at `http://localhost:8080` ```js // script.js import http from "k6/http"; import { sleep } from "k6"; export default function () { http.get("http://host.docker.internal:8080"); sleep(1); } ``` Then on windows or mac this can be run with: ```sh $ docker run -i loadimpact/k6 run - <script.js ``` for linux you need an extra flag ```sh $ docker run --add-host=host.docker.internal:host-gateway -i loadimpact/k6 run - <script.js ``` References: * Mac: <https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds> * Windows: <https://docs.docker.com/docker-for-windows/networking/#known-limitations-use-cases-and-workarounds> * Linux: <https://stackoverflow.com/a/61424570/3757139>
k6 inside the docker instance should be able to connect to the "public" IP on your host machine - the IP that is configured on your ethernet or Wifi interface. You can do a `ipconfig /all` to see all your interfaces and their IPs. On my Mac I can do this: `$ python httpserv.py & [1] 7824 serving at port 8000 $ ifconfig en1 en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether b8:09:8a:bb:f7:ed inet6 fe80::148f:5671:5297:fc24%en1 prefixlen 64 secured scopeid 0x5 inet 192.168.0.107 netmask 0xffffff00 broadcast 192.168.0.255 nd6 options=201<PERFORMNUD,DAD> media: autoselect status: active $ echo 'import http from "k6/http"; export default function() { let res = http.get("http://192.168.0.107:8000"); console.log(res.status); };' |docker run -i loadimpact/k6 run -` I.e. I start a simple HTTP server on port 8000 of the host machine, then executes the k6 docker image and tells it to access a URL based on the IP address of the physical, outward-facing en1 interface on the host machine. In your case, on Windows, you can use `ipconfig` to find out your external-facing IP.
36,711,810
I'm going to come out with a disclaimer and say this is my homework problem. So I don't necessarily want you to solve it, I just want some clarification. The exact problem is this: > > Write a function to swap odd and even bits in an integer with as few > instructions as possible (e.g., bit 0 and bit 1 are swapped, bit 2 and > bit 3 are swapped, and so on). > > > It also hints that no conditional statements are required. I kind of looked into it and I discovered if I somehow separate the even and odd bits I can use shifts to accomplish this. What I don't understand is how to manipulate individual bits. In python (programming language I'm used to) it's easy with the index operator as you can just do number[0] for example and you can get the first bit. How do you do something like this for assembly? EDIT: So @jotik, thanks for your help. I implemented something like this: ``` mov edi, ebx and edi, 0x5555555555555555 shl edi, 1 mov esi, ebx and esi, 0xAAAAAAAAAAAAAAAA shr esi, 1 or edi, esi mov eax, edi ``` And when I saw the | operator, I was thinking OR was ||. Silly mistake.
2016/04/19
[ "https://Stackoverflow.com/questions/36711810", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1157549/" ]
In assembly one can use [bit masks](https://en.wikipedia.org/wiki/Mask_%28computing%29) together with other [bitwise operations](https://en.wikipedia.org/wiki/Bitwise_operation) to achieve your result. ``` result = ((odd-bit-mask & input) << 1) | ((even-bit-mask & input) >> 1) ``` where `odd-bit-mask` is a value with all odd bits set (`1`) and even bits unset (`0`); and `even-bit-mask` is a value with all even bits set (`1`) and odd bits unset. For 64-bit values, the odd and even bit masks would be (in hexadecimal notation) `0x5555555555555555` and `0xAAAAAAAAAAAAAAAA` respectively. So the pseudocode of your assembly algorithm would probably look similar to the following: ``` oddbits = input & 0x5555555555555555 oddbits = oddbits << 1 evenbits = input & 0xAAAAAAAAAAAAAAAA evenbits = evenbits >> 1 result = oddbits | evenbits ``` where `&` is a bitwise AND operation, `|` is a bitwise OR operation, `<<` and `>>` are the bitwise shift left and bitwise shift right operations respectively. PS: You can find some other useful bit manipulation tricks on Sean Eron Anderson's [Bit Twiddling Hacks](https://graphics.stanford.edu/~seander/bithacks.html) webpage.
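The pseudocode above is easy to sanity-check in a high-level language before writing the assembly; a Python sketch of the same mask-shift-OR sequence:

```python
def swap_odd_even_bits(x, width=64):
    # The two masks from the answer, truncated to the requested width.
    mask_even_pos = 0x5555555555555555 & ((1 << width) - 1)  # bits 0, 2, 4, ...
    mask_odd_pos = 0xAAAAAAAAAAAAAAAA & ((1 << width) - 1)   # bits 1, 3, 5, ...
    # Shift the even-position bits up, the odd-position bits down, and merge.
    return ((x & mask_even_pos) << 1) | ((x & mask_odd_pos) >> 1)

print(bin(swap_odd_even_bits(0b0110)))  # 0b1001: each adjacent pair swapped
```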
Here are a few hints: * Bitwise boolean operations (these usually have 1:1 counterparts in assembly, but if everything else fails, you can construct them by cleverly combining several XOR calls) + bitwise AND: 0b10110101 & 0b00011000 → 0b00010000 + bitwise OR: 0b10110101 | 0b00011000 → 0b10111101 + bitwise XOR: 0b10110101 ^ 0b00011000 → 0b10101101 * Bit shifts + shift x by n bits to the left (x << n): 0b00100001 << 3 → 0b00001000 (in an 8-bit register; the top bit is shifted out) + shift x by n bits to the right (x >> n): 0b00100001 >> 3 → 0b00000100 There's also rotating bit shift, where bits "shifted out" appear on the other side, but this is not as widely supported in hardware.
41,132,864
I have been trying to install OpenCV for ages now and finally I succeeded using this tutorial: <http://www.pyimagesearch.com/2016/12/05/macos-install-opencv-3-and-python-3-5/>. However, whenever I try to import cv2 in IDLE, it is not found but I am certain I installed OpenCV. The cv2.so file exists at: /usr/local/lib/python3.5/site-packages/cv2.so I believe it may have something to do with the interpreter but I am not sure how to fix it. In terminal, when I try importing it, it works. I included the terminal message to prove it. [![Terminal Message](https://i.stack.imgur.com/xRQ12.png)](https://i.stack.imgur.com/xRQ12.png) Any help is appreciated. Thank you.
2016/12/14
[ "https://Stackoverflow.com/questions/41132864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7293747/" ]
ok i found the answer! after you activate the virtual environment with: ``` workon cv ``` type this on the terminal to open the IDLE with the current virtual environment ``` python -c "from idlelib.PyShell import main; main()" ``` or ``` python -m idlelib ``` and it will do the trick!
Dumb approach but does your IDLE run the same python environment as your terminal?
12,990,462
This is a repost of an issue I posted on the berkelium project on github (<https://github.com/sirikata/berkelium/issues/19>). My question: During chromium compilation on Linux (Debian testing, 64bit, gcc 4.7.1, cmake 2.8.9), the python script `action_makenames.py` fails with the following error: ``` ... ACTION webcore_bindings_sources_HTMLNames out/Release/obj/gen/webkit/HTMLNames.cpp ACTION webcore_bindings_sources_SVGNames out/Release/obj/gen/webkit/SVGNames.cpp ACTION webcore_bindings_sources_MathMLNames out/Release/obj/gen/webkit/MathMLNames.cpp ACTION webcore_bindings_sources_XLinkNames out/Release/obj/gen/webkit/XLinkNames.cpp ACTION webcore_bindings_sources_XMLNSNames out/Release/obj/gen/webkit/XMLNSNames.cpp Unknown parameter math for tags/attrs Traceback (most recent call last): File "scripts/action_makenames.py", line 174, in <module> sys.exit(main(sys.argv)) File "scripts/action_makenames.py", line 156, in main assert returnCode == 0 AssertionError make: *** [out/Release/obj/gen/webkit/MathMLNames.cpp] Error 1 make: *** Waiting for unfinished jobs.... 
Unknown parameter a for tags/attrs Traceback (most recent call last): File "scripts/action_makenames.py", line 174, in <module> sys.exit(main(sys.argv)) File "scripts/action_makenames.py", line 156, in main assert returnCode == 0 AssertionError Unknown parameter a interfaceName for tags/attrs make: *** [out/Release/obj/gen/webkit/SVGNames.cpp] Error 1 Traceback (most recent call last): File "scripts/action_makenames.py", line 174, in <module> sys.exit(main(sys.argv)) File "scripts/action_makenames.py", line 156, in main assert returnCode == 0 AssertionError make: *** [out/Release/obj/gen/webkit/HTMLNames.cpp] Error 1 Unknown parameter actuate for tags/attrs Traceback (most recent call last): File "scripts/action_makenames.py", line 174, in <module> sys.exit(main(sys.argv)) File "scripts/action_makenames.py", line 156, in main assert returnCode == 0 AssertionError make: *** [out/Release/obj/gen/webkit/XLinkNames.cpp] Error 1 Unknown parameter xmlns for tags/attrs Traceback (most recent call last): File "scripts/action_makenames.py", line 174, in <module> sys.exit(main(sys.argv)) File "scripts/action_makenames.py", line 156, in main assert returnCode == 0 AssertionError make: *** [out/Release/obj/gen/webkit/XMLNSNames.cpp] Error 1 Failed to install: chromium ``` It looks like the python script is calling a perl script, and the perl script is dying on line 209: ``` die "Unknown parameter $parameter for tags/attrs\n" if !defined($parameters{$parameter}); ``` The 'unknown parameter's are: * math * a * a interfaceName * actuate * xmlns I'm not sure where these parameters are coming from. Anyone have any idea how to correct this?
2012/10/20
[ "https://Stackoverflow.com/questions/12990462", "https://Stackoverflow.com", "https://Stackoverflow.com/users/780281/" ]
Turns out to be a preprocessor bug for gcc 4.6. As a fix, you have to remove the `-P` parameter of the gcc preprocessor command in `make_names.pl`. **Bug report**: <http://code.google.com/p/chromium/issues/detail?id=46411> **Bug fix**: <http://trac.webkit.org/changeset/84123>
sounds like you may be missing a directory, a la <http://aur.archlinux.org/packages.php?ID=45713>
53,066,830
I have a python program which I have made work in both Python 2 and 3, and it has more functionality in Python 3 (using new Python 3 features). My script currently starts `#!/usr/bin/env python`, as that seems to be the mostly likely name for a python executable. However, what I'd like to do is "if python3 exists, use that, if not use python". I would prefer not to have to distribute multiple files / and extra script (at present my program is a single distributed python file). Is there an easy way to run the current script in python3, if it exists?
2018/10/30
[ "https://Stackoverflow.com/questions/53066830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/27074/" ]
Another better method modified from [this question](https://stackoverflow.com/questions/12070516/conditional-shebang-line-for-different-versions-of-python) is to check `sys.version_info`, whose first element is the major version as an integer: ``` import sys py_ver = sys.version_info[0] ``` Original answer: May not be the best method, but one way to do it is to test against a function that only exists in one version of Python to know what you are running on. ``` try: raw_input py_ver = 2 except NameError: py_ver = 3 if py_ver==2: ... Python 2 stuff elif py_ver==3: ... Python 3 stuff ```
Try with version\_info from sys package
53,066,830
I have a python program which I have made work in both Python 2 and 3, and it has more functionality in Python 3 (using new Python 3 features). My script currently starts `#!/usr/bin/env python`, as that seems to be the mostly likely name for a python executable. However, what I'd like to do is "if python3 exists, use that, if not use python". I would prefer not to have to distribute multiple files / and extra script (at present my program is a single distributed python file). Is there an easy way to run the current script in python3, if it exists?
2018/10/30
[ "https://Stackoverflow.com/questions/53066830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/27074/" ]
Another better method modified from [this question](https://stackoverflow.com/questions/12070516/conditional-shebang-line-for-different-versions-of-python) is to check `sys.version_info`, whose first element is the major version as an integer: ``` import sys py_ver = sys.version_info[0] ``` Original answer: May not be the best method, but one way to do it is to test against a function that only exists in one version of Python to know what you are running on. ``` try: raw_input py_ver = 2 except NameError: py_ver = 3 if py_ver==2: ... Python 2 stuff elif py_ver==3: ... Python 3 stuff ```
Maybe I didn't quite get what you want. I understood: You want to look if there is Python 3 installed on the Computer and if so, use it. Inside the script you can check the version with `sys.version` as Idlehands mentioned. To get the latest version you might want to use a small bash script like this ``` py_versions=($(ls /usr/bin | grep 'python[0-9]\.[0-9]$')) ${py_versions[-1]} your_script.py ``` This searches the output of `ls` for all python versions and stores them in `py_versions`. Thankfully the output is already sorted, so the last element in the array will be the latest version.
58,192,211
I'm trying to do something simple in Python. I'm a little rusty so I'm not sure what I'm doing wrong. I want to give random values to dictionary items. Each loop I want to subtract from the original value so if a house has 5 rooms then the total doesn't ever go over 5 for the combined items in the dictionary. This is not homework. I work IT and I'm trying to practice my Python which is one of my weaker known scripting languages. Simplified in the terminal it appears to work but when I put it in the code I get an error. Terminal: ``` $ python3 Python 3.6.8 (default, Apr 25 2019, 21:02:35) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import random >>> h1 = random.randint(1,5) >>> size = random.randint(1, h1) >>> h1 = h1 - size >>> print(h1) 2 ``` Script: ``` import random h1 = random.randint(1,5) rooms = { "bed" : 0, "bath": 0, "study": 0 } for z in rooms: size = random.randint(1, h1) room_types[z] = size if h1_size != 0: h1 = h1 - size for x, y in rooms.items(): print(x, y) ``` I get the following error: ``` $ ./two.py Traceback (most recent call last): File "./two.py", line 13, in <module> size = random.randint(1, h1) File "/usr/lib64/python3.6/random.py", line 221, in randint return self.randrange(a, b+1) File "/usr/lib64/python3.6/random.py", line 199, in randrange raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width)) ValueError: empty range for randrange() (1,1, 0) ```
2019/10/01
[ "https://Stackoverflow.com/questions/58192211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1112733/" ]
In the script you're re-assigning `h1` with `h1 - size` in each iteration of the `for` loop, and if `size` happens to be `h1` as it is the upper bound passed to `randint`, `h1` would become `0` after the assignment, so that in the next iteration you would be effectively calling `random.randint(1, 0)`, where the upper bound is less than the lower bound, which is disallowed and therefore produces the said error.
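One way to remove that failure mode, assuming the intent is that the allocated rooms never exceed the starting total (a sketch, not the only possible fix):

```python
import random

h1 = random.randint(1, 5)                 # total rooms available
rooms = {"bed": 0, "bath": 0, "study": 0}

for name in rooms:
    if h1 <= 0:                           # nothing left to allocate
        break
    size = random.randint(1, h1)          # upper bound is always >= lower bound
    rooms[name] = size
    h1 -= size                            # shrink the remaining budget

print(rooms)
```

Guarding on the remaining budget before calling `randint` is what prevents the `empty range for randrange()` error.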
Let's consider a simple example. You randomly select a 4-room house. Your random numbers give you 3 bedrooms and one bath. Your loop continues to "study" and tries to generate a random number from 1 to 0. You neglected to reserve a room for that requirement. Python considers the inverted range to be an error. If you truly require at least one room of each type, then allocate those before you choose random numbers, allowing a choice of 0 thereafter. If you don't require at least one room, then reduce the lower bound of your `randrange`.
33,306,221
I need to execute this command in Python and enter the password from the keyboard; this works: ``` import os cmd = "cat /home/user1/.ssh/id_rsa.pub | ssh user2@host.net \'cat >> .ssh/authorized_keys\' > /dev/null 2>&1" os.system(cmd) ``` As you can see, I want to append a public key to a remote host via ssh. See here: [equivalent-of-ftp-put-and-append-in-scp](https://stackoverflow.com/questions/9971490/equivalent-of-ftp-put-and-append-in-scp) and here: [copy-and-append-files-to-a-remote-machine-cat-error](https://stackoverflow.com/questions/13650312/copy-and-append-files-to-a-remote-machine-cat-error) Of course I want to do it without user input. I've tried pexpect and I think the command is too weird for it: ``` import pexpect child = pexpect.spawn(command=cmd, timeout=10, logfile=open('debug.txt', 'a+')) matched = child.expect(['Password:', pexpect.EOF, pexpect.TIMEOUT]) if matched == 0: child.sendline(passwd) ``` in debug.txt: ``` ssh-rsa AAAA..........vcxv233x5v3543sfsfvsv user1@host1 /bin/cat: |: No such file or directory /bin/cat: ssh: No such file or directory /bin/cat: user2@host.net: No such file or directory /bin/cat: cat >> .ssh/authorized_keys: No such file or directory /bin/cat: >: No such file or directory /bin/cat: 2>&1: No such file or directory ``` I see two solutions: 1. fix the command for pexpect so that it recognizes the whole string as one command, or 2. inject/write the password to stdin as a fake user, but how!?!?
2015/10/23
[ "https://Stackoverflow.com/questions/33306221", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2595216/" ]
From [the `pexpect` docs](http://pexpect.readthedocs.org/en/stable/api/pexpect.html#spawn-class): > > Remember that Pexpect does NOT interpret shell meta characters such as > redirect, pipe, or wild cards (`>`, `|`, or `*`). This is a > common mistake. If you want to run a command and pipe it through > another command then you must also start a shell. For example:: > > > > ``` > child = pexpect.spawn('/bin/bash -c "ls -l | grep LOG > logs.txt"') > child.expect(pexpect.EOF) > > ``` > >
That worked for me: ``` command = "/bin/bash -c \"cat /home/user1/.ssh/id_rsa.pub | ssh user2@host.net \'cat >> ~/.ssh/authorized_keys\' > /dev/null 2>&1\"" child = spawn(command=command, timeout=5) ```
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
None of the answers shows how to delete just one table in an app. It's not too difficult. The [`dbshell`](https://docs.djangoproject.com/en/1.7/ref/django-admin/#django-admin-dbshell) command logs the user into the sqlite3 shell. ``` python manage.py dbshell ``` When you are in the shell, type the following command to see the structure of your database. This will show you all the table names in the database (and also the column names within tables). ``` SELECT * FROM sqlite_master WHERE type='table'; ``` In general, Django names tables according to the following convention: "appname\_modelname". Therefore, SQL query that accomplishes your goal will look similar to the following: ``` DROP TABLE appname_modelname; ``` This should be sufficient, even if the table had relationships with other tables. Now you can log out of SQLITE shell by executing: ``` .exit ``` If you run syncdb again, Django will rebuild the table according to your model. This way, you can update your database tables without losing all of the app data. If you are running into this problem a lot, consider using South - a django app that will migrate your tables for you.
In Django 2.1.7, I've opened the `db.sqlite3` file in [SQLite browser](https://sqlitebrowser.org/) (there is also a Python package on [Pypi](https://pypi.org/project/sqlite_bro/)) and deleted the table using command ``` DROP TABLE appname_tablename; ``` and then ``` DELETE FROM django_migrations WHERE App='appname'; ``` Then run again ``` python manage.py makemigrations appname python manage.py migrate appname ```
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
get the DROP statements with `python manage.py sqlclear app_name` then try `python manage.py dbshell` and execute the DROP statement check out <http://docs.djangoproject.com/en/dev/ref/django-admin/>
In Django 1.9 I had to do Kat Russo's steps, but the second migration was a little bit tricky. You have to run ``` ./manage.py migrate --run-syncdb ```
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
get the DROP statements with `python manage.py sqlclear app_name` then try `python manage.py dbshell` and execute the DROP statement check out <http://docs.djangoproject.com/en/dev/ref/django-admin/>
In Django 2.1.7, I've opened the `db.sqlite3` file in [SQLite browser](https://sqlitebrowser.org/) (there is also a Python package on [Pypi](https://pypi.org/project/sqlite_bro/)) and deleted the table using command ``` DROP TABLE appname_tablename; ``` and then ``` DELETE FROM django_migrations WHERE App='appname'; ``` Then run again ``` python manage.py makemigrations appname python manage.py migrate appname ```
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
to clear out an application is as simple as writing: ``` ./manage.py sqlclear app_name | ./manage.py dbshell ``` then in order to rebuild your tables just type: ``` ./manage.py syncdb ```
I had the same problem. For a quick resolve (if you don't care about losing your tables/data), correct your models.py file with the desired data types, delete the Migration folder and db.SQLite3 file, then re-run the following commands: 1. python manage.py migrate 2. python manage.py makemigrations 3. python manage.py migrate 4. python manage.py createsuperuser (to create an admin user/pswd to manage admin page) 5. python manage.py runserver
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
to clear out an application is as simple as writing: ``` ./manage.py sqlclear app_name | ./manage.py dbshell ``` then in order to rebuild your tables just type: ``` ./manage.py syncdb ```
get the DROP statements with `python manage.py sqlclear app_name` then try `python manage.py dbshell` and execute the DROP statement check out <http://docs.djangoproject.com/en/dev/ref/django-admin/>
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
Might be worth expanding on a few answers here: So, you can get access to the `dbshell` with the following command: ``` python manage.py dbshell ``` It is then preferential to use the following: ``` DROP TABLE appname_tablename CASCADE; ``` To drop any related tables, effectively mirroring the "on\_delete=CASCADE" direction on a field.
In Django 1.9 I had to do Kat Russo's steps, but the second migration was a little bit tricky. You have to run ``` ./manage.py migrate --run-syncdb ```
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
to clear out an application is as simple as writing: ``` ./manage.py sqlclear app_name | ./manage.py dbshell ``` then in order to rebuild your tables just type: ``` ./manage.py syncdb ```
In Django 2.1.7, I've opened the `db.sqlite3` file in [SQLite browser](https://sqlitebrowser.org/) (there is also a Python package on [Pypi](https://pypi.org/project/sqlite_bro/)) and deleted the table using command ``` DROP TABLE appname_tablename; ``` and then ``` DELETE FROM django_migrations WHERE App='appname'; ``` Then run again ``` python manage.py makemigrations appname python manage.py migrate appname ```
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
get the DROP statements with `python manage.py sqlclear app_name` then try `python manage.py dbshell` and execute the DROP statement check out <http://docs.djangoproject.com/en/dev/ref/django-admin/>
I had the same problem. For a quick resolve (if you don't care about losing your tables/data), correct your models.py file with the desired data types, delete the Migration folder and db.SQLite3 file, then re-run the following commands: 1. python manage.py migrate 2. python manage.py makemigrations 3. python manage.py migrate 4. python manage.py createsuperuser (to create an admin user/pswd to manage admin page) 5. python manage.py runserver
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
I had the same problem. For a quick resolve (if you don't care about losing your tables/data), correct your models.py file with the desired data types, delete the Migration folder and db.SQLite3 file, then re-run the following commands: 1. python manage.py migrate 2. python manage.py makemigrations 3. python manage.py migrate 4. python manage.py createsuperuser (to create an admin user/pswd to manage admin page) 5. python manage.py runserver
Might be worth expanding on a few answers here: So, you can get access to the `dbshell` with the following command: ``` python manage.py dbshell ``` It is then preferential to use the following: ``` DROP TABLE appname_tablename CASCADE; ``` To drop any related tables, effectively mirroring the "on\_delete=CASCADE" direction on a field.
2,286,276
I made a model, and ran python manage.py syncdb. I think that created a table in the db. Then I realized that I had made a column incorrectly, so I changed it, and ran the same command, thinking that it would drop the old table, and add a new one. Then I went to python manage.py shell, and tried to run .objects.all(), and it failed, saying that column doesn't exist. I want to clear out the old table, and then run syncdb again, but I can't figure out how to do that.
2010/02/18
[ "https://Stackoverflow.com/questions/2286276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/275779/" ]
to clear out an application is as simple as writing: ``` ./manage.py sqlclear app_name | ./manage.py dbshell ``` then in order to rebuild your tables just type: ``` ./manage.py syncdb ```
In Django 1.9 I had to do Kat Russo's steps, but the second migration was a little bit tricky. You have to run ``` ./manage.py migrate --run-syncdb ```
59,039,858
I am importing a large number of dates in the form DD/MM/YYYY from a csv file into python and want to group them by just MM-YYYY. One method I have tried is the following: ``` str=date.iloc[2] ``` which results in str=7/18/2019. But what I want to do is convert it to Jul 2019 to make groupings by month and year. I have tried doing this ``` datetime.datetime.strptime("str","%m/%d/%Y").strptime("%b %Y") ``` and get the following error "time data 'str' does not match format '%m/%d/%Y'. I have also tried str.replace("/","-") but that did not help either but when I manually type in the date `datetime.datetime.strptime("7/18/2019","%m/%d/%Y").strptime("%b %Y")` it works exactly as I'd like it to. This is an easy fix but my date dataframe contains hundreds of dates and would ultimately like to run it in a loop to automate it. I cannot seem to find why format does not match. I've done research but no one seems to be having the same issue. Any help is appreciated.
2019/11/25
[ "https://Stackoverflow.com/questions/59039858", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12432098/" ]
``` datetime.datetime.strptime("str","%m/%d/%Y").strptime("%b %Y") ``` tries to parse the literal string `"str"` as a date, rather than the contents of the variable `str`. Instead, you should do ``` datetime.datetime.strptime(str,"%m/%d/%Y").strftime("%b %Y") ``` (note `strftime`, not a second `strptime`, for the output formatting).
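Applied across many dates, the corrected call can drive the month-year grouping the question is after; a stdlib-only sketch (the sample strings are illustrative, in the question's M/D/YYYY shape):

```python
from datetime import datetime

dates = ["7/18/2019", "8/02/2019", "7/01/2018"]

# Bucket each date string under its "Mon YYYY" key.
groups = {}
for s in dates:
    key = datetime.strptime(s, "%m/%d/%Y").strftime("%b %Y")  # e.g. "Jul 2019"
    groups.setdefault(key, []).append(s)

print(groups)
```

With a pandas DataFrame the same formatting can be vectorized via `pd.to_datetime(...)` followed by `.dt.strftime("%b %Y")`.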
In this line ``` datetime.datetime.strptime("str","%m/%d/%Y").strptime("%b %Y") ``` `"str"` is a string literal. You want the variable `str` (and the second call should be `strftime`, which formats a parsed date, not `strptime`): ``` datetime.datetime.strptime(str,"%m/%d/%Y").strftime("%b %Y") ```
59,039,858
I am importing a large number of dates in the form DD/MM/YYYY from a csv file into python and want to group them by just MM-YYYY. One method I have tried is the following: ``` str=date.iloc[2] ``` which results in str=7/18/2019. But what I want to do is convert it to Jul 2019 to make groupings by month and year. I have tried doing this ``` datetime.datetime.strptime("str","%m/%d/%Y").strptime("%b %Y") ``` and get the following error: "time data 'str' does not match format '%m/%d/%Y'". I have also tried str.replace("/","-") but that did not help either, yet when I manually type in the date `datetime.datetime.strptime("7/18/2019","%m/%d/%Y").strptime("%b %Y")` it works exactly as I'd like it to. This is an easy fix, but my date dataframe contains hundreds of dates and I would ultimately like to run it in a loop to automate it. I cannot seem to find why the format does not match. I've done research but no one seems to be having the same issue. Any help is appreciated.
2019/11/25
[ "https://Stackoverflow.com/questions/59039858", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12432098/" ]
You can use datetime's `strftime()`: ``` import datetime a = '07/22/1990' d = datetime.datetime.strptime(a, "%m/%d/%Y") dd = d.strftime("%m/%Y") print('d: ', d) print('dd: ', dd) # or in one line ddd = datetime.datetime.strptime(a, "%m/%d/%Y").strftime("%m/%Y") print('ddd: ', ddd) ``` The output: ``` $ python p.py d: 1990-07-22 00:00:00 dd: 07/1990 ddd: 07/1990 ```
In this line ``` datetime.datetime.strptime("str","%m/%d/%Y").strptime("%b %Y") ``` `"str"` is a string literal. You want the variable `str` (and the second call should be `strftime`, which formats a parsed date, not `strptime`): ``` datetime.datetime.strptime(str,"%m/%d/%Y").strftime("%b %Y") ```
4,707,941
I have seen several Questions comparing different ECommerce CMS's: 1. [Prestashop compared to Zen-Cart and osCommerce](https://stackoverflow.com/questions/2040472/prestashop-compared-to-zen-cart-and-oscommerce) 2. [Magento or Prestashop, which is better?](https://stackoverflow.com/search?q=prestashop) 3. [Best php/ruby/python e-commerce solution](https://stackoverflow.com/questions/76420/best-php-ruby-python-e-commerce-solution) I was hoping to get some people to weigh in with which they prefer for a relatively small E-shop. I am now primarily looking at [PrestaShop](http://www.prestashop.com/) and [Shopify](http://www.shopify.com/). I really like that Shopify does the hosting, has quality service, and is simple to understand and theme. However PrestaShop is **free** and seems to be able to do just as much if not more than Shopify. I have decided that Magento is too clunky for the project, and have read that many other solutions (osCommerce, ZenCart, OpenCart) are outdated, buggy, or just inferior.
2011/01/16
[ "https://Stackoverflow.com/questions/4707941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/363701/" ]
"Free" in the e-commerce industry usually works out to a few thousand dollars a month of real cost. E-commerce stores power the livelihood of businesses, so there is no way to go with a value hosting company. Additionally, security is a huge concern, so updates are incredibly important. So this leaves you with a server configuration of at least 2 servers set up in an HA environment and a part-time operations person performing the maintenance. So once you ensure that you can keep your site up, you then have to invest in things that most people don't think of: * Email service that guarantees delivery * CDN, your store needs to be fast or you won't sell. * Fraud protection services ($$$) Anyways, you get all of the above for 30 bucks a month from a hosted service. Full disclosure: I'm the founder of Shopify. I used to host my own e-commerce store before I started Shopify. 95% of our customers recover the monthly Shopify bill in the first few hours of the first day of each month.
The price is not the main difference between Shopify and PrestaShop. Talking about scope, I think both will suit you. 1. Technical Level The choice of hosted Shopify or open-source PrestaShop may depend on your level of technical skill. Without doubt it is easier to maintain your store with a hosted shopping cart which supplies you with regular updates and takes care of bug fixes. PrestaShop administration is quite intuitive as well. Its interface, with numerous themes, modern designs and easily navigated catalogues, is attractive to customers. It goes without saying that such details help you raise your traffic rates. 2. Performance Non-hosted installed platforms provide you with control over speed. As the store is located on your server, you can maintain the necessary connection speed. Unfortunately, this is not the case with hosted solutions, so using Shopify you can face issues with low connection speed.
12,624,324
I am facing difficulty in getting the xml structure listing all the directories/sub-directories inside a given directory. I got that working using the recursion in the [given post](https://stackoverflow.com/questions/2104997/os-walk-python-xml-representation-of-a-directory-structure-recursion) My problem is a little bit tougher than usual. I have directories that may have 10000s of files in them, so checking every entry to see if it's a directory or not is going to be costly, and it's already taking too long to build the xml. I want to build the xml for directories only. I know linux has a command like **find . -type d** to list the directories present (not the files). How can I achieve this in python? Thanks in advance.
2012/09/27
[ "https://Stackoverflow.com/questions/12624324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1364646/" ]
[`os.walk`](http://docs.python.org/library/os.html#os.walk) already distinguishes between files and directories: ``` def find_all_dirs(root='.'): for path,dirs,files in os.walk(root): for d in dirs: yield os.path.join(path, d) ```
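A quick self-contained check of the generator above, using a throwaway directory tree (the names `a`, `b`, `file.txt` are arbitrary):

```python
import os
import tempfile

def find_all_dirs(root='.'):
    # os.walk yields (path, dirnames, filenames); only the dirnames are used,
    # so plain files are never touched individually
    for path, dirs, files in os.walk(root):
        for d in dirs:
            yield os.path.join(path, d)

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b"))
open(os.path.join(root, "a", "file.txt"), "w").close()  # files are never yielded

found = sorted(os.path.relpath(p, root) for p in find_all_dirs(root))
print(found)
```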
For just one directory... ``` import os def get_dirs(p): p = os.path.abspath(p) return [n for n in os.listdir(p) if os.path.isdir(os.path.join(p, n))] print "\n".join(get_dirs(".")) ```
12,624,324
I am facing difficulty in getting the xml structure listing all the directories/sub-directories inside a given directory. I got that working using the recursion in the [given post](https://stackoverflow.com/questions/2104997/os-walk-python-xml-representation-of-a-directory-structure-recursion) My problem is a little bit tougher than usual. I have directories that may have 10000s of files in them, so checking every entry to see if it's a directory or not is going to be costly, and it's already taking too long to build the xml. I want to build the xml for directories only. I know linux has a command like **find . -type d** to list the directories present (not the files). How can I achieve this in python? Thanks in advance.
2012/09/27
[ "https://Stackoverflow.com/questions/12624324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1364646/" ]
[`os.walk`](http://docs.python.org/library/os.html#os.walk) already distinguishes between files and directories: ``` def find_all_dirs(root='.'): for path,dirs,files in os.walk(root): for d in dirs: yield os.path.join(path, d) ```
Here is the solution I got after searching and trying different things. I am not saying that this is faster than the method of checking every item in a directory, but it actually produces the result much quicker (the difference is visible when a directory contains thousands of files) ``` import os import subprocess from xml.sax.saxutils import quoteattr as xml_quoteattr def DirAsLessXML(path): result = '<dir type ={0} name={1} path={2}>\n'.format(xml_quoteattr('dir'),xml_quoteattr(os.path.basename(path)),xml_quoteattr(path)) list = subprocess.Popen(['find', path,'-maxdepth', '1', '-type', 'd'],stdout=subprocess.PIPE, shell=False).communicate()[0] output_list = list.splitlines() if len(output_list) == 1: result = '<dir type ={0} name={1} path={2}>\n'.format(xml_quoteattr('leaf_dir'),xml_quoteattr(os.path.basename(path)),xml_quoteattr(path)) for item in output_list[1:]: result += '\n'.join(' ' + line for line in DirAsLessXML(item).split('\n')) result += '</dir>\n' return result ```
12,624,324
I am facing difficulty in getting the xml structure listing all the directories/sub-directories inside a given directory. I got that working using the recursion in the [given post](https://stackoverflow.com/questions/2104997/os-walk-python-xml-representation-of-a-directory-structure-recursion) My problem is a little bit tougher than usual. I have directories that may have 10000s of files in them, so checking every entry to see if it's a directory or not is going to be costly, and it's already taking too long to build the xml. I want to build the xml for directories only. I know linux has a command like **find . -type d** to list the directories present (not the files). How can I achieve this in python? Thanks in advance.
2012/09/27
[ "https://Stackoverflow.com/questions/12624324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1364646/" ]
For just one directory... ``` import os def get_dirs(p): p = os.path.abspath(p) return [n for n in os.listdir(p) if os.path.isdir(os.path.join(p, n))] print "\n".join(get_dirs(".")) ```
Here is the solution I got after searching and trying different things. I am not saying that this is faster than the method of checking every item in a directory, but it actually produces the result much quicker (the difference is visible when a directory contains thousands of files) ``` import os import subprocess from xml.sax.saxutils import quoteattr as xml_quoteattr def DirAsLessXML(path): result = '<dir type ={0} name={1} path={2}>\n'.format(xml_quoteattr('dir'),xml_quoteattr(os.path.basename(path)),xml_quoteattr(path)) list = subprocess.Popen(['find', path,'-maxdepth', '1', '-type', 'd'],stdout=subprocess.PIPE, shell=False).communicate()[0] output_list = list.splitlines() if len(output_list) == 1: result = '<dir type ={0} name={1} path={2}>\n'.format(xml_quoteattr('leaf_dir'),xml_quoteattr(os.path.basename(path)),xml_quoteattr(path)) for item in output_list[1:]: result += '\n'.join(' ' + line for line in DirAsLessXML(item).split('\n')) result += '</dir>\n' return result ```
46,552,178
I have two files. `functions.py` has a function and creates a pyspark udf from that function. `main.py` attempts to import the udf. However, `main.py` seems to have trouble accessing the function in `functions.py`. functions.py: ``` from pyspark.sql.functions import udf from pyspark.sql.types import StringType def do_something(x): return x + 'hello' sample_udf = udf(lambda x: do_something(x), StringType()) ``` main.py: ``` from functions import sample_udf, do_something df = spark.read.load(file) df.withColumn("sample",sample_udf(col("text"))) ``` This results in an error: ``` 17/10/03 19:35:29 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 6, ip-10-223-181-5.ec2.internal, executor 3): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "/usr/lib/spark/python/pyspark/worker.py", line 164, in main func, profiler, deserializer, serializer = read_udfs(pickleSer, infile) File "/usr/lib/spark/python/pyspark/worker.py", line 93, in read_udfs arg_offsets, udf = read_single_udf(pickleSer, infile) File "/usr/lib/spark/python/pyspark/worker.py", line 79, in read_single_udf f, return_type = read_command(pickleSer, infile) File "/usr/lib/spark/python/pyspark/worker.py", line 55, in read_command command = serializer._read_with_length(file) File "/usr/lib/spark/python/pyspark/serializers.py", line 169, in _read_with_length return self.loads(obj) File "/usr/lib/spark/python/pyspark/serializers.py", line 454, in loads return pickle.loads(obj) AttributeError: 'module' object has no attribute 'do_something' ``` If I bypass the `do_something` function and just put it inside the udf, eg: `udf(lambda x: x + ' hello', StringType())`, the UDF imports fine - but my function is a little longer and it would be nice to have it encapsulated in a separate function. What's the correct way to achieve this?
2017/10/03
[ "https://Stackoverflow.com/questions/46552178", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5617110/" ]
Just adding this as an answer: add your .py file to the SparkContext in order to make it available to your executors. ``` sc.addPyFile("functions.py") from functions import sample_udf ``` Here is my test notebook <https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/3669221609244155/3140647912908320/868274901052987/latest.html> Thanks, Charles.
I think a cleaner solution would be to use the udf decorator to define your udf function: ``` import pyspark.sql.functions as F from pyspark.sql.types import StringType @F.udf def sample_udf(x): return x + 'hello' ``` With this solution, the udf does not reference any other function and you don't need the `sc.addPyFile` in your main code. ``` from functions import sample_udf, do_something df = spark.read.load(file) df.withColumn("sample",sample_udf(col("text"))) # It works :) ``` For some older versions of spark, the decorator doesn't support typed udfs, so you might have to define a custom decorator as follows: ``` import pyspark.sql.functions as F import pyspark.sql.types as t # Custom udf decorator which accepts a return type def udf_typed(returntype=t.StringType()): def _typed_udf_wrapper(func): return F.udf(func, returntype) return _typed_udf_wrapper @udf_typed(t.IntegerType()) def my_udf(x): return int(x) ```
21,226,366
I have a script to get and set up the latest NodeJS on my .deb system: ``` echo "Downloading, building and installing latest NodeJS" sudo apt-get install python g++ make checkinstall mkdir /tmp/node_build && cd $_ curl -O "http://nodejs.org/dist/node-latest.tar.gz" tar xf node-latest.tar.gz && cd node-v* NODE_VERSION="${PWD#*v}" #NODE_VERSION=python -c "print '$PWD'.split('-')[-1][1:]" echo "Installing NodeJS" $NODE_VERSION ./configure sudo checkinstall -y --install=no --pkgversion NODE_VERSION sudo dpkg -i node_$NODE_VERSION ``` Unfortunately it doesn't work, as the `echo` line outputs: > > Installing NodeJS i8/dir-where-runnning-script-from/node-v0.10.24 > > > It does work from the shell though: ``` $ cd /tmp/node_build/node-v0.10.24 && echo "${PWD#*v}" 0.10.24 ```
2014/01/20
[ "https://Stackoverflow.com/questions/21226366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/587021/" ]
Is there another "v" in the path, like right before the "i8/"? `#*v` will remove through the *first* "v" in the variable; I'm pretty sure you want `##*v` which'll remove through the *last* "v" in the variable. (Technically, `#` removes the shortest matching prefix, and `##` removes the longest match). Thus: ``` NODE_VERSION="${PWD##*v}" ``` Should work.
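To illustrate the difference between the two expansions, a quick sketch with a made-up path that contains two "v"s:

```shell
# A hypothetical path with two "v"s: one in "dev", one in the version tag
P="/home/dev/node_build/node-v0.10.24"

echo "${P#*v}"    # shortest prefix match removed: prints /node_build/node-v0.10.24
echo "${P##*v}"   # longest prefix match removed: prints 0.10.24
```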
Try this ``` sudo checkinstall -y --install=no --pkgversion "${NODE_VERSION##*v}" ```
37,445,901
This question comes from [this one](https://stackoverflow.com/questions/37399965/refresh-web-page-using-a-cgi-python-script). What I want is to be able to return the `HTTP 303` header from my python script, when the user clicks on a button. My script is very simple and as far as output is concerned, it *only* prints the following two lines: ``` print "HTTP/1.1 303 See Other\n\n" print "Location: http://192.168.1.109\n\n" ``` I have also tried many different variants of the above (with a different number of `\r` and `\n` at the end of the lines), but without success; so far I always get `Internal Server Error`. Are the above two lines enough for sending a `HTTP 303` response? Should there be something else?
2016/05/25
[ "https://Stackoverflow.com/questions/37445901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/751115/" ]
Assuming you are using cgi ([2.7](https://docs.python.org/2/library/cgi.html))([3.5](https://docs.python.org/3.5/library/cgi.html)) The example below should redirect to the same page. The example doesn't attempt to parse headers or check what POST was sent; it simply redirects to the page `'/'` when a POST is detected. ``` # python 3 import below: # from http.server import HTTPServer, BaseHTTPRequestHandler # python 2 import below: from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer import cgi #stuff ... class WebServerHandler(BaseHTTPRequestHandler): def do_GET(self): try: if self.path.endswith("/"): self.send_response(200) self.send_header('Content-type', 'text/html') self.end_headers() page ='''<html> <body> <form action="/" method="POST"> <input type="submit" value="Reload" > </form> </body> </html>''' self.wfile.write(page) except IOError: self.send_error(404, "File Not Found {}".format(self.path)) def do_POST(self): self.send_response(303) self.send_header('Content-type', 'text/html') self.send_header('Location', '/') # This will navigate to the original page self.end_headers() def main(): try: port = 8080 server = HTTPServer(('', port), WebServerHandler) print("Web server is running on port {}".format(port)) server.serve_forever() except KeyboardInterrupt: print("^C entered, stopping web server...") server.socket.close() if __name__ == '__main__': main() ```
Typically browsers like to see `\r\n\r\n` at the end of an HTTP response's header block.
37,445,901
This question comes from [this one](https://stackoverflow.com/questions/37399965/refresh-web-page-using-a-cgi-python-script). What I want is to be able to return the `HTTP 303` header from my python script, when the user clicks on a button. My script is very simple and as far as output is concerned, it *only* prints the following two lines: ``` print "HTTP/1.1 303 See Other\n\n" print "Location: http://192.168.1.109\n\n" ``` I have also tried many different variants of the above (with a different number of `\r` and `\n` at the end of the lines), but without success; so far I always get `Internal Server Error`. Are the above two lines enough for sending a `HTTP 303` response? Should there be something else?
2016/05/25
[ "https://Stackoverflow.com/questions/37445901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/751115/" ]
Assuming you are using cgi ([2.7](https://docs.python.org/2/library/cgi.html))([3.5](https://docs.python.org/3.5/library/cgi.html)) The example below should redirect to the same page. The example doesn't attempt to parse headers or check what POST was sent; it simply redirects to the page `'/'` when a POST is detected. ``` # python 3 import below: # from http.server import HTTPServer, BaseHTTPRequestHandler # python 2 import below: from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer import cgi #stuff ... class WebServerHandler(BaseHTTPRequestHandler): def do_GET(self): try: if self.path.endswith("/"): self.send_response(200) self.send_header('Content-type', 'text/html') self.end_headers() page ='''<html> <body> <form action="/" method="POST"> <input type="submit" value="Reload" > </form> </body> </html>''' self.wfile.write(page) except IOError: self.send_error(404, "File Not Found {}".format(self.path)) def do_POST(self): self.send_response(303) self.send_header('Content-type', 'text/html') self.send_header('Location', '/') # This will navigate to the original page self.end_headers() def main(): try: port = 8080 server = HTTPServer(('', port), WebServerHandler) print("Web server is running on port {}".format(port)) server.serve_forever() except KeyboardInterrupt: print("^C entered, stopping web server...") server.socket.close() if __name__ == '__main__': main() ```
Be very careful about what Python automatically does. For example, in Python 3, the print function adds line endings to each print, which can mess with HTTP's very specific number of line endings between each message. You also still need a content type header, for some reason. This worked for me in Python 3 on Apache 2: ``` print('Status: 303 See Other') print('Location: /foo') print('Content-type:text/plain') print() ```
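One rough way to sanity-check the header block without running a web server is to capture what the script would print (the `/foo` location is just a placeholder):

```python
import io
import contextlib

def cgi_redirect(location):
    # A CGI script prints headers on stdout; the server turns the
    # "Status:" header into the real HTTP status line.
    print('Status: 303 See Other')
    print('Location: %s' % location)
    print('Content-type: text/plain')
    print()  # the blank line terminates the header block

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    cgi_redirect('/foo')

lines = buf.getvalue().splitlines()
print(lines)
```

This makes it easy to confirm that exactly one blank line follows the headers before deploying the script under Apache.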
29,848,351
I have the following list of dicts in python. ``` [{'country': None, 'percent': 100.0}, {'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 2.0202}, {'country': 'JP', 'percent': 11.1111}, {'country': 'US', 'percent': 6.9767}, {'country': 'SG', 'percent': 99.8482}, {'country': 'US', 'percent': 1.9127}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, {'country': None, 'percent': 100.0}, {'country': None, 'percent': 100.0}] ``` So I need to add all the percentages for the same country and remove any country that is `None`. Ideally the output would be: ``` [{'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 10.9096}, {'country': 'JP', 'percent': 11.1111}, {'country': 'SG', 'percent': 99.8482}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, ] ``` I tried the following: ``` for i, v in enumerate(response): for j in response[i:]: if v['country'] == j['country']: response[i]['percent'] = i['percent'] + j['percent'] ``` But I could not succeed and am struggling. Could someone please point me in the right direction.
2015/04/24
[ "https://Stackoverflow.com/questions/29848351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/567797/" ]
``` result_map = {} for item in response: if item['country'] is None: continue if item['country'] not in result_map: result_map[item['country']] = item['percent'] else: result_map[item['country']] += item['percent'] results = [ {'country': country, 'percent': percent} for country, percent in result_map.items() ] ```
Change the condition of the if to: ``` if response.index(v) != response.index(j) and v['country'] == j['country']: ``` You're adding the elements twice.
29,848,351
I have the following list of dicts in python. ``` [{'country': None, 'percent': 100.0}, {'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 2.0202}, {'country': 'JP', 'percent': 11.1111}, {'country': 'US', 'percent': 6.9767}, {'country': 'SG', 'percent': 99.8482}, {'country': 'US', 'percent': 1.9127}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, {'country': None, 'percent': 100.0}, {'country': None, 'percent': 100.0}] ``` So I need to add all the percentages for the same country and remove any country that is `None`. Ideally the output would be: ``` [{'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 10.9096}, {'country': 'JP', 'percent': 11.1111}, {'country': 'SG', 'percent': 99.8482}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, ] ``` I tried the following: ``` for i, v in enumerate(response): for j in response[i:]: if v['country'] == j['country']: response[i]['percent'] = i['percent'] + j['percent'] ``` But I could not succeed and am struggling. Could someone please point me in the right direction.
2015/04/24
[ "https://Stackoverflow.com/questions/29848351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/567797/" ]
``` result_map = {} for item in response: if item['country'] is None: continue if item['country'] not in result_map: result_map[item['country']] = item['percent'] else: result_map[item['country']] += item['percent'] results = [ {'country': country, 'percent': percent} for country, percent in result_map.items() ] ```
A solution using `defaultdict` and filtering out the `None` country: ``` from collections import defaultdict data = [{'country': None, 'percent': 100.0}, {'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 2.0202}, {'country': 'JP', 'percent': 11.1111}, {'country': 'US', 'percent': 6.9767}, {'country': 'SG', 'percent': 99.8482}, {'country': 'US', 'percent': 1.9127}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, {'country': None, 'percent': 100.0}, {'country': None, 'percent': 100.0}] combined_percentages = defaultdict(float) for country_data in data: country, percentage = country_data['country'], country_data['percent'] if country: combined_percentages[country] += percentage output = [{'country': country, 'percent': percentage} for country, percentage in combined_percentages.items()] ``` The [defaultdict](https://docs.python.org/2/library/collections.html#defaultdict-examples) creates a float with value 0.0 if the key doesn't exist so we can directly add to it. I think this is a pythonic solution to the problem at hand.
29,848,351
I have the following list of dicts in python. ``` [{'country': None, 'percent': 100.0}, {'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 2.0202}, {'country': 'JP', 'percent': 11.1111}, {'country': 'US', 'percent': 6.9767}, {'country': 'SG', 'percent': 99.8482}, {'country': 'US', 'percent': 1.9127}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, {'country': None, 'percent': 100.0}, {'country': None, 'percent': 100.0}] ``` So I need to add all the percentages for the same country and remove any country that is `None`. Ideally the output would be: ``` [{'country': 'IL', 'percent': 100.0}, {'country': 'IT', 'percent': 100.0}, {'country': 'US', 'percent': 10.9096}, {'country': 'JP', 'percent': 11.1111}, {'country': 'SG', 'percent': 99.8482}, {'country': 'BR', 'percent': 95.1724}, {'country': 'IE', 'percent': 5.9041}, ] ``` I tried the following: ``` for i, v in enumerate(response): for j in response[i:]: if v['country'] == j['country']: response[i]['percent'] = i['percent'] + j['percent'] ``` But I could not succeed and am struggling. Could someone please point me in the right direction.
2015/04/24
[ "https://Stackoverflow.com/questions/29848351", "https://Stackoverflow.com", "https://Stackoverflow.com/users/567797/" ]
``` result_map = {} for item in response: if item['country'] is None: continue if item['country'] not in result_map: result_map[item['country']] = item['percent'] else: result_map[item['country']] += item['percent'] results = [ {'country': country, 'percent': percent} for country, percent in result_map.items() ] ```
A solution using [`itertools.groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby): ``` from itertools import groupby new_response = [] def get_country(dct): return dct['country'] sorted_response = sorted(response, key=get_country) # data needs to be sorted for groupby for country, group in groupby(sorted_response, key=get_country): if country is not None: percentages = sum(float(g['percent']) for g in group) new_response.append({'country': country, 'percentage': percentages}) print new_response ```
19,847,275
My function is like ``` def calResult(w,t,l,team): wDict={} for item in team: for x in w: wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1]) for x in t: wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1]) return wDict ``` Say I create the empty dict, then I use `wDict[item]` to assign a value for each key (these are from a team list; we have teams like a, b, c, d...). The `x[item.index(" "):item.index(" ")+1]` part will return a value after the int method has run. But the python shell returned this: ``` Traceback (most recent call last): File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 66, in <module> File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 59, in calResult builtins.KeyError: 'Torino' ``` I can't understand what exactly the error in my code is.
2013/11/07
[ "https://Stackoverflow.com/questions/19847275", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2844097/" ]
You cannot access `wDict[item]` the first time, since your dict is empty. This would be ok: ``` wDict[item] = 1 ``` But you cannot do this: ``` wDict[item] = wDict[item] + 1 ``` Maybe you want to use this syntax: ``` wDict[item] = int(wDict.get(item, 0)) + int(x[item.index(" "):item.index(" ") + 1]) ```
Looks like you are trying to use wDict[item] as the rvalue and the lvalue in the same assignment statement, when wDict[item] is not yet initialized. ``` wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1]) ``` You are trying to access the "value" of the key item, but there is no key value pair initialized.
19,847,275
My function is like ``` def calResult(w,t,l,team): wDict={} for item in team: for x in w: wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1]) for x in t: wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1]) return wDict ``` Say I create the empty dict, then I use `wDict[item]` to assign a value for each key (these are from a team list; we have teams like a, b, c, d...). The `x[item.index(" "):item.index(" ")+1]` part will return a value after the int method has run. But the python shell returned this: ``` Traceback (most recent call last): File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 66, in <module> File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 59, in calResult builtins.KeyError: 'Torino' ``` I can't understand what exactly the error in my code is.
2013/11/07
[ "https://Stackoverflow.com/questions/19847275", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2844097/" ]
I'm not quite sure what you're trying to do here (consider using more descriptive variable names than `x`, for starters), but here is the problem: ``` wDict[item]=int(wDict[item])+... ``` The first time you do this, `wDict[item]` doesn't exist, hence the `KeyError`. What you want, I think, is: ``` wDict[item] = wDict.get(item, 0) + int(x[item.index(" "):item.index(" ")+1]) ``` `.get()` takes a key and a default value to use if that key doesn't exist. You might also want to use the `Counter` class in `collections`, which is designed to default nonexistent keys to zero for just this sort of situation.
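For completeness, a tiny sketch of both approaches on toy data (the team names are invented):

```python
from collections import Counter

items = ["Torino", "Milan", "Torino"]

# Approach 1: dict.get with a default of 0 for missing keys
wDict = {}
for item in items:
    wDict[item] = wDict.get(item, 0) + 1

# Approach 2: Counter, which defaults missing keys to zero
counts = Counter()
for item in items:
    counts[item] += 1

print(wDict)          # {'Torino': 2, 'Milan': 1}
print(dict(counts))   # same totals
```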
You cannot access `wDict[item]` the first time, since your dict is empty. This would be ok: ``` wDict[item] = 1 ``` But you cannot do this: ``` wDict[item] = wDict[item] + 1 ``` Maybe you want to use this syntax: ``` wDict[item] = int(wDict.get(item, 0)) + int(x[item.index(" "):item.index(" ") + 1]) ```
19,847,275
My function is like ``` def calResult(w,t,l,team): wDict={} for item in team: for x in w: wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1]) for x in t: wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1]) return wDict ``` Say I create the empty dict, then I use `wDict[item]` to assign a value for each key (these are from a team list; we have teams like a, b, c, d...). The `x[item.index(" "):item.index(" ")+1]` part will return a value after the int method has run. But the python shell returned this: ``` Traceback (most recent call last): File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 66, in <module> File "C:\Program Files (x86)\Wing IDE 101 4.1\src\debug\tserver\_sandbox.py", line 59, in calResult builtins.KeyError: 'Torino' ``` I can't understand what exactly the error in my code is.
2013/11/07
[ "https://Stackoverflow.com/questions/19847275", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2844097/" ]
I'm not quite sure what you're trying to do here (consider using more descriptive variable names than `x`, for starters), but here is the problem: ``` wDict[item]=int(wDict[item])+... ``` The first time you do this, `wDict[item]` doesn't exist, hence the `KeyError`. What you want, I think, is: ``` wDict[item] = wDict.get(item, 0) + int(x[item.index(" "):item.index(" ")+1]) ``` `.get()` takes a key and a default value to use if that key doesn't exist. You might also want to use the `Counter` class in `collections`, which is designed to default nonexistent keys to zero for just this sort of situation.
Looks like you are trying to use `wDict[item]` as both the rvalue and the lvalue in the same assignment statement, when `wDict[item]` is not yet initialized.

```
wDict[item]=int(wDict[item])+int(x[item.index(" "):item.index(" ")+1])
```

You are trying to access the "value" of the key `item`, but no such key-value pair has been initialized.
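The `collections.Counter` class mentioned in the answer above is the other idiomatic fix: missing keys default to zero, so the read-modify-write update just works. A minimal sketch with invented team data:

```python
from collections import Counter

# Counter defaults missing keys to 0, so no KeyError on first update
wDict = Counter()
for team, goals in [("Torino", 3), ("Milan", 1), ("Torino", 2)]:
    wDict[team] += goals

print(dict(wDict))  # {'Torino': 5, 'Milan': 1}
```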
58,117,763
First of all, sorry for any newbie mistakes that I've made. But I couldn't figure out and couldn't find a source specifically for the [deeppavlov (NER)](http://docs.deeppavlov.ai/en/master/features/models/ner.html) library. I'm trying to train ner\_ontonotes\_bert\_mult as described [here](http://docs.deeppavlov.ai/en/master/features/models/ner.html#train-and-use-the-model). I guess it can be trained from its checkpoint to make it recognize some specific patterns like: ``` "Round 23/22; 24,9 x 12,2 x 12,3" ``` as ``` [[['Round', '23/22', ';', '24,9 x 12,2 x 12,3']], [['B-PRODUCT', 'I-PRODUCT', 'B-QUANTITY']]] ``` My questions are (before I dig into details): 1. ~~Is it possible?~~ And I realized I can't use samples like " Round 23/22; 24,9 x 12,2 x 12,3 ". I need them to be in full sentences. 2. Where can I find more info about it specifically related to deeppavlov's model(s)? 3. How can I train the pre-trained deeppavlov model to recognize my custom patterns? I don't even understand if it is possible, but I've decided to give it a go and prepared 3 `.txt` files as `"train.txt"`, `"test.txt"` and `"validation.txt"` as [described on the deeppavlov web page](http://docs.deeppavlov.ai/en/master/features/models/ner.html#training-data). And I put them under the folder `'~/.deeppavlov/downloads/ontonotes/ner_ontonotes_bert_mult'`. My dataset looks like this: ``` Round B-PRODUCT 23/22 I-PRODUCT 24,9 x 12,2 x 12,3 B-QUANTITY Ring B-PRODUCT HDFAA I-PRODUCT 12,7 x 10 B-QUANTITY ``` and so on... This is the code I am using to train it: ``` import os # Force tensorflow to use CPU instead of GPU.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1' from deeppavlov import configs, train_model from deeppavlov.core.commands.utils import parse_config config_dict = parse_config(configs.ner.ner_ontonotes_bert_mult) print(config_dict['dataset_reader']['data_path']) from deeppavlov import configs, train_model ner_model = train_model(configs.ner.ner_ontonotes_bert_mult) ``` But I am getting this error: ``` tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3] rhs shape= [37] [[{{node save/Assign_280}}]] ``` Full traceback: ``` 2019-09-26 15:50:27.63 ERROR in 'deeppavlov.core.common.params'['params'] at line 110: Exception in <class 'deeppavlov.models.bert.bert_ner.BertNerModel'> Traceback (most recent call last): File "/home/custom_user/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call return fn(*args) File "/home/custom_user/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "/home/custom_user/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [3] rhs shape= [37] [[{{node save/Assign_280}}]] ``` **UPDATE 2:** ============= And I realized I can't use samples like " Round 23/22; 24,9 x 12,2 x 12,3 ". I need them to be in full sentences. **UPDATE:** =========== It seems like this is happening due to my dataset. My custom dataset only has 3 tags (`B-PRODUCT`, `I-PRODUCT` and `B-QUANTITY`) but the pre-trained model has 37 of them. All available tags can be found [here](http://docs.deeppavlov.ai/en/master/features/models/ner.html#multilingual-bert-zero-shot-transfer) under the sentence of `"The list of available tags and their descriptions are presented below."`. 
18 main tags (with `B` and `I` prefixes, 36 tags), plus the `O` tag ("O" means the absence of an entity). **All of the 37 tags need to be present in the dataset.** I was able to get past that error by adding dummy sentences and tagging them with the missing tags. This is a terrible workaround since I'm willingly disrupting my own data-set. I'm still looking for a 'logical' way to train... PS: Now I am getting this error. ``` Traceback (most recent call last): File "/home/custom_user/.PyCharm2019.2/config/scratches/scratch_9.py", line 13, in <module> ner_model = train_model(configs.ner.ner_ontonotes_bert_mult) File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/__init__.py", line 31, in train_model train_evaluate_model_from_config(config, download=download, recursive=recursive) File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/core/commands/train.py", line 121, in train_evaluate_model_from_config trainer.train(iterator) File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/core/trainers/nn_trainer.py", line 294, in train self.train_on_batches(iterator) File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/core/trainers/nn_trainer.py", line 234, in train_on_batches self._validate(iterator) File "/home/custom_user/.local/lib/python3.6/site-packages/deeppavlov/core/trainers/nn_trainer.py", line 150, in _validate metrics = list(report['metrics'].items()) AttributeError: 'NoneType' object has no attribute 'items' ```
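As an aside, the two-column training format described above is easy to sanity-check before handing it to DeepPavlov. The sketch below is not DeepPavlov code; it assumes token and tag are tab-separated (adjust the split if your files use spaces) and simply collects the tag set, which makes missing tags visible at a glance:

```python
# Parse a "token<TAB>tag" BIO file: blank lines separate sentences.
sample = (
    "Round\tB-PRODUCT\n"
    "23/22\tI-PRODUCT\n"
    "24,9 x 12,2 x 12,3\tB-QUANTITY\n"
    "\n"
    "Ring\tB-PRODUCT\n"
    "HDFAA\tI-PRODUCT\n"
    "12,7 x 10\tB-QUANTITY\n"
)

def read_bio(text):
    sentences, current = [], []
    for line in text.splitlines():
        if not line.strip():           # blank line ends a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, tag = line.rsplit("\t", 1)
        current.append((token, tag))
    if current:
        sentences.append(current)
    return sentences

sents = read_bio(sample)
tags = {tag for sent in sents for _, tag in sent}
print(len(sents), sorted(tags))  # 2 ['B-PRODUCT', 'B-QUANTITY', 'I-PRODUCT']
```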
2019/09/26
[ "https://Stackoverflow.com/questions/58117763", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10183880/" ]
There are at least two problems here: 1. instead of `validation.txt` there should be a `valid.txt` file; 2. you are trying to retrain a model that was pretrained on a different dataset with a different set of tags; that's not necessary. To train your model from scratch you can do something like: ```py import json from deeppavlov import configs, build_model, train_model with configs.ner.ner_ontonotes_bert_mult.open(encoding='utf8') as f: ner_config = json.load(f) ner_config['dataset_reader']['data_path'] = '~/my_data_dir/' # directory with train.txt, valid.txt and test.txt files ner_config['metadata']['variables']['NER_PATH'] = '~/where_to_save_the_model/' ner_config['metadata']['download'] = [ner_config['metadata']['download'][-1]] # do not download the pretrained ontonotes model ner_model = train_model(ner_config, download=True) ``` You can also pass texts that you have tokenized beforehand: ```py ner_model([['Round', '23/22', ';', '24,9 x 12,2 x 12,3']]) ```
I tried DeepPavlov training and successfully trained the 'ner' model. I also got the same error at first while training, then overcame it by researching more about it. Things to know before training:

-> You can find the 'ner\_ontonotes\_bert\_multi.json' config file link in the DeepPavlov docs, which gives the dataset path, pretrained model path, dataset\_reader and chain pipe to train.

-> There is a pretrained model in the directory mentioned in the config; by default 'C:/users/{user\_name}/.deeppavlov/' is the root directory and pretrained models are stored in its 'models' subdirectory.

-> When you start training, the already trained model gets modified, which means training just tries to improve the pre-trained model. So to train and build your own model (from scratch), simply delete the 'models' subdirectory from the '.deeppavlov' path and run the training again.
33,114,202
I'm currently testing docker on a Debian 8.2 server and I'm seeking help from more experienced people. I've followed the official documentation to install docker (<http://docs.docker.com/installation/debian/>) and I'm now trying docker compose (<https://docs.docker.com/compose/>). I've installed compose using pip as described in the official documentation ("pip install -U docker-compose"). Running "docker-compose" gives me the help screen, but "docker-compose up" doesn't work and gives me a lot of errors. Any idea on how I can make this work? Am I missing something? A pre-requisite maybe? ``` root@server:~/dockerfiles/compose-test# docker-compose up Traceback (most recent call last): File "/usr/local/bin/docker-compose", line 11, in <module> sys.exit(main()) File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 39, in main command.sys_dispatch() File "/usr/local/lib/python2.7/dist-packages/compose/cli/docopt_command.py", line 21, in sys_dispatch self.dispatch(sys.argv[1:], None) File "/usr/local/lib/python2.7/dist-packages/compose/cli/command.py", line 27, in dispatch super(Command, self).dispatch(*args, **kwargs) File "/usr/local/lib/python2.7/dist-packages/compose/cli/docopt_command.py", line 24, in dispatch self.perform_command(*self.parse(argv, global_options)) File "/usr/local/lib/python2.7/dist-packages/compose/cli/command.py", line 57, in perform_command verbose=options.get('--verbose')) File "/usr/local/lib/python2.7/dist-packages/compose/cli/command.py", line 73, in get_project config_details = config.find(self.base_dir, config_path) File "/usr/local/lib/python2.7/dist-packages/compose/config.py", line 107, in find return ConfigDetails(load_yaml(filename), os.path.dirname(filename), filename) File "/usr/local/lib/python2.7/dist-packages/compose/config.py", line 558, in load_yaml return yaml.safe_load(fh) File "/usr/local/lib/python2.7/dist-packages/yaml/__init__.py", line 93, in safe_load return load(stream, SafeLoader) File
"/usr/local/lib/python2.7/dist-packages/yaml/__init__.py", line 71, in load return loader.get_single_data() File "/usr/local/lib/python2.7/dist-packages/yaml/constructor.py", line 37, in get_single_data node = self.get_single_node() File "/usr/local/lib/python2.7/dist-packages/yaml/composer.py", line 36, in get_single_node document = self.compose_document() File "/usr/local/lib/python2.7/dist-packages/yaml/composer.py", line 55, in compose_document node = self.compose_node(None, None) File "/usr/local/lib/python2.7/dist-packages/yaml/composer.py", line 84, in compose_node node = self.compose_mapping_node(anchor) File "/usr/local/lib/python2.7/dist-packages/yaml/composer.py", line 127, in compose_mapping_node while not self.check_event(MappingEndEvent): File "/usr/local/lib/python2.7/dist-packages/yaml/parser.py", line 98, in check_event self.current_event = self.state() File "/usr/local/lib/python2.7/dist-packages/yaml/parser.py", line 428, in parse_block_mapping_key if self.check_token(KeyToken): File "/usr/local/lib/python2.7/dist-packages/yaml/scanner.py", line 116, in check_token self.fetch_more_tokens() File "/usr/local/lib/python2.7/dist-packages/yaml/scanner.py", line 220, in fetch_more_tokens return self.fetch_value() File "/usr/local/lib/python2.7/dist-packages/yaml/scanner.py", line 580, in fetch_value self.get_mark()) yaml.scanner.ScannerError: mapping values are not allowed here in "./docker-compose.yml", line 3, column 8 root@server:~/dockerfiles/compose-test# ``` I'm running docker 1.8.2 and compose 1.4.2
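The traceback above bottoms out in PyYAML: `mapping values are not allowed here` is a YAML syntax error in `docker-compose.yml` itself (line 3, column 8), not a Docker problem; a common cause is a missing space after a colon or inconsistent indentation. For comparison, a minimal well-formed Compose v1 file looks like the following (the service name and image are placeholders, not taken from the question):

```yaml
web:            # service name (placeholder)
  image: nginx  # note the space after each colon
  ports:
    - "8080:80"
```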
2015/10/13
[ "https://Stackoverflow.com/questions/33114202", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5243755/" ]
You are on the right track. The first approach just needs two things: * a dot at the beginning to make it [context-specific](http://doc.scrapy.org/en/latest/topics/selectors.html#working-with-relative-xpaths) * `text()` at the end Fixed version: ``` selector.xpath('.//div[@class="score unvoted"]/text()').extract() ``` And, FYI, you can make the second option work too by using the [`::text` pseudo-element](http://doc.scrapy.org/en/latest/topics/selectors.html#id1): ``` response.css('div.score.unvoted::text').extract() ```
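The context-relative point made above (`.//div` versus `//div`) can be reproduced outside Scrapy with the standard library's ElementTree on a toy document (the HTML below is invented for illustration):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<html><body>"
    "<section><div>12</div></section>"
    "<div>99</div>"
    "</body></html>"
)
section = doc.find("body/section")

# ".//div" searches relative to `section`, so only its own
# descendant div is found, not the sibling div elsewhere in body
rel = [d.text for d in section.findall(".//div")]
print(rel)  # ['12']
```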
this should work - ``` selector.xpath('//div[contains(@class, "score unvoted")]/text()').extract() ```
59,711,699
When I run my Python Scrapy project it shows the error `no module named 'requests'`. So I typed `pip install requests` and got this terminal output:

```
Requirement already satisfied: requests in ./Library/Python/2.7/lib/python/site-packages (2.22.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in ./Library/Python/2.7/lib/python/site-packages (from requests) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in ./Library/Python/2.7/lib/python/site-packages (from requests) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in ./Library/Python/2.7/lib/python/site-packages (from requests) (1.25.7)
Requirement already satisfied: certifi>=2017.4.17 in ./Library/Python/2.7/lib/python/site-packages (from requests) (2019.11.28)
```

Typing the command `pip list` also shows `requests 2.22.0`. I typed the command `python --version` to check the Python version:

```
python 2.7.16
```

Finally, I ran my Scrapy project again and still see the same error `no module named 'requests'`. I have no idea how to fix the error now; any help would be appreciated. Thanks.
2020/01/13
[ "https://Stackoverflow.com/questions/59711699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6902961/" ]
Install `python3` and `pip3`, then run `pip3 install requests`. If you are on Ubuntu, `python3` is installed by default; you should first install `pip3` with `apt install python3-pip` and then run `pip3 install requests`.
If you are using two different versions of Python, that would explain why you can't use your module. To install the module on Python 3, try:

```
pip3 install requests
```

And make sure you are using the correct version.
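When juggling Python 2 and 3 like this, it helps to check from inside the interpreter itself which executable is running and whether a module is importable for it. A small standard-library sketch (Python 3; it probes a stdlib module so it runs anywhere):

```python
import sys
import importlib.util

# Which interpreter is actually executing this script?
print(sys.executable, sys.version_info[:2])

# Is a given module importable for *this* interpreter?
# ("json" is used here so the sketch works without extra installs;
# replace it with "requests" to diagnose the question's setup)
spec = importlib.util.find_spec("json")
print("importable" if spec is not None else "missing")
```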
59,711,699
When I run my Python Scrapy project it shows the error `no module named 'requests'`. So I typed `pip install requests` and got this terminal output:

```
Requirement already satisfied: requests in ./Library/Python/2.7/lib/python/site-packages (2.22.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in ./Library/Python/2.7/lib/python/site-packages (from requests) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in ./Library/Python/2.7/lib/python/site-packages (from requests) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in ./Library/Python/2.7/lib/python/site-packages (from requests) (1.25.7)
Requirement already satisfied: certifi>=2017.4.17 in ./Library/Python/2.7/lib/python/site-packages (from requests) (2019.11.28)
```

Typing the command `pip list` also shows `requests 2.22.0`. I typed the command `python --version` to check the Python version:

```
python 2.7.16
```

Finally, I ran my Scrapy project again and still see the same error `no module named 'requests'`. I have no idea how to fix the error now; any help would be appreciated. Thanks.
2020/01/13
[ "https://Stackoverflow.com/questions/59711699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6902961/" ]
Install `python3` and `pip3`, then run `pip3 install requests`. If you are on Ubuntu, `python3` is installed by default; you should first install `pip3` with `apt install python3-pip` and then run `pip3 install requests`.
Check if you are using the same interpreter as the one you installed the package into with `pip install`. As a best practice, to avoid this type of issue when you have multiple versions of Python, invoke pip as a module instead of calling pip directly, e.g.: `python -m pip install requests` `python3 -m pip install requests`
59,711,699
When I run my Python Scrapy project it shows the error `no module named 'requests'`. So I typed `pip install requests` and got this terminal output:

```
Requirement already satisfied: requests in ./Library/Python/2.7/lib/python/site-packages (2.22.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in ./Library/Python/2.7/lib/python/site-packages (from requests) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in ./Library/Python/2.7/lib/python/site-packages (from requests) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in ./Library/Python/2.7/lib/python/site-packages (from requests) (1.25.7)
Requirement already satisfied: certifi>=2017.4.17 in ./Library/Python/2.7/lib/python/site-packages (from requests) (2019.11.28)
```

Typing the command `pip list` also shows `requests 2.22.0`. I typed the command `python --version` to check the Python version:

```
python 2.7.16
```

Finally, I ran my Scrapy project again and still see the same error `no module named 'requests'`. I have no idea how to fix the error now; any help would be appreciated. Thanks.
2020/01/13
[ "https://Stackoverflow.com/questions/59711699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6902961/" ]
If you are using two different versions of Python, that would explain why you can't use your module. To install the module on Python 3, try:

```
pip3 install requests
```

And make sure you are using the correct version.
Check if you are using the same interpreter as the one you installed the package into with `pip install`. As a best practice, to avoid this type of issue when you have multiple versions of Python, invoke pip as a module instead of calling pip directly, e.g.: `python -m pip install requests` `python3 -m pip install requests`
70,192,924
I have a discord bot running on a python script, and its token is stored in a `.txt` file. If I read from the file using: ``` with open('Stored Discord Token.txt') as storedToken: TOKEN = storedToken.readlines() ``` I can get the discord bot token. The problem is that the discord bot token looks like this: `[' <token> ']` This causes an error when trying to run the script, and the bot fails to connect, as it is an invalid token: ``` discord.errors.LoginFailure: Improper token has been passed. ``` How do I remove the square brackets, `'`s and spaces from the list containing the token? --- **TL;DR**: How to remove `[`, `]`, `'`, and `spaces` from a single item list?
2021/12/02
[ "https://Stackoverflow.com/questions/70192924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12966704/" ]
First of all, `read()` will just return the whole file contents as a string, so you could use `TOKEN = storedToken.read().strip()` (the `.strip()` removes the surrounding spaces and trailing newline, which would otherwise still make the token invalid). Lists in Python can be accessed using `[index]`, so to access the first line in the file you can do `TOKEN = storedToken.readlines()[0].strip()`. If, say, you wanted to access the `n`th line, you could do `storedToken.readlines()[n]`, where `n` is an `int`.
As @Brian suggested, indexing into the list solves the problem. If we simply change the code like this:

```
with open('Stored Discord Token.txt') as file:
    fileContents = file.readlines()
TOKEN = fileContents[-1].strip()
```

we get the token as a plain string (the `[`, `]`, and `'` characters were never in the file; they only appeared when printing the list), and `.strip()` removes the surrounding spaces and newline, so we can now successfully pass that string as the token.
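Note that the surrounding whitespace from `' <token> '` survives both `read()` and `readlines()`, so stripping is the part that actually matters before handing the value to discord.py. A self-contained sketch using a temporary file and a fake token string (not a real Discord token):

```python
import os
import tempfile

# Write a file that mimics the stored token: surrounding spaces + newline
fake = "NzA4-not-a-real-token"
path = os.path.join(tempfile.mkdtemp(), "token.txt")
with open(path, "w") as f:
    f.write(" " + fake + " \n")

# read() returns the whole file as one string; strip() removes the
# surrounding spaces and the trailing newline
with open(path) as f:
    token = f.read().strip()

print(repr(token))  # 'NzA4-not-a-real-token'
```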
70,192,924
I have a discord bot running on a python script, and its token is stored in a `.txt` file. If I read from the file using: ``` with open('Stored Discord Token.txt') as storedToken: TOKEN = storedToken.readlines() ``` I can get the discord bot token. The problem is that the discord bot token looks like this: `[' <token> ']` This causes an error when trying to run the script, and the bot fails to connect, as it is an invalid token: ``` discord.errors.LoginFailure: Improper token has been passed. ``` How do I remove the square brackets, `'`s and spaces from the list containing the token? --- **TL;DR**: How to remove `[`, `]`, `'`, and `spaces` from a single item list?
2021/12/02
[ "https://Stackoverflow.com/questions/70192924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12966704/" ]
First of all, `read()` will just return the whole file contents as a string, so you could use `TOKEN = storedToken.read()`. Lists in Python can be accessed using `[index]` so to access the first line in the file you can do `TOKEN = storedToken.readlines()[0]`. If say you wanted to access the `n`th line you could do `storedToken.readlines()[n]`. Where `n` is an `int`.
Try this one: ``` import re a=[' <token123> '] re.sub(r'[^a-zA-Z0-9]', '', str(a)) ```
56,009,890
Authors of an xml document did not include all the text inside an element that will be converted to a hyperlink. I would like to process or pre-process the xml to include the necessary text. I find this hard to describe, but a simple example should show what I'm attempting. I'm using XSLT 2.0. I already do regular expression processing for various situations but can't figure this out. I know how to do this with perl/python regular expressions but I can't figure out how to approach this with XSLT. Here is 'very' simplified xml from an author, in which they left out the ' (Sheet 3)' from the glink element:

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<para>
Go look at figure <glink refid="1">Figure 22</glink> (Sheet 3). Then go do something else.
</para>
</root>
```

Here is what I'd like it to convert to, where the ' (Sheet 3)' is now inside the glink tag:

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<para>
Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else.
</para>
</root>
```

The case when this conversion should happen is when there is a glink element followed by (this regular expression):

```
\s\(Sheet \d\)
```

I currently have 2 XSLTs. The first pre-processes the XML to convert a number of other situations (using regular expression/xsl:analyze-string). The second XSLT converts the pre-processed xml to HTML. The second XSLT has a template to handle glink elements and turn each into a hyperlink, but the hyperlink should be including the Sheet information. I would assume that it is easier to pre-process this first and leave the 2nd XSLT alone, but I always appreciate better ways. Thank you for your time.
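Before wiring the trigger into XSLT, the condition can be prototyped with an ordinary regex engine. This Python sketch just checks the question's pattern against the text that would follow a `glink`; the sample string is taken from the example above:

```python
import re

# The question's trigger pattern, anchored to the start of the tail text
pattern = re.compile(r"^\s\(Sheet \d\)")

tail_after_glink = " (Sheet 3). Then go do something else."
m = pattern.match(tail_after_glink)
print(m.group(0) if m else None)  # ' (Sheet 3)'
```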
2019/05/06
[ "https://Stackoverflow.com/questions/56009890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/107690/" ]
The existing answer has the right approach but I would sharpen the regular expression pattern and the match patterns: ``` <xsl:param name="pattern" as="xs:string">\s\(Sheet \d\)</xsl:param> <xsl:variable name="pattern2" as="xs:string" select="'^' || $pattern"/> <xsl:variable name="pattern3" as="xs:string" select="'^(' || $pattern || ')(.*)'"/> <xsl:template match="glink[@refid][following-sibling::node()[1][self::text()[matches(., $pattern2)]]]"> <xsl:copy> <xsl:apply-templates select="@*"/> <xsl:value-of select=". || replace(following-sibling::node()[1], $pattern3, '$1', 's')"/> </xsl:copy> </xsl:template> <xsl:template match="text()[preceding-sibling::node()[1][self::glink[@refid]]][matches(., $pattern2)]"> <xsl:value-of select="replace(., $pattern3, '$2', 's')"/> </xsl:template> ``` <https://xsltfiddle.liberty-development.net/bFN1y9z/1> Otherwise I think the matches and replacements happen for more than a `glink` followed (directly?) by that pattern, as you can see in <https://xsltfiddle.liberty-development.net/bFN1y9z/2>. The code I posted uses XPath 3.1's `||` string concatenation operator but if an XSLT 2 processor is the target that could of course be replaced with a normal `concat` function call.
You can use these two templates in combination with the *Identity template*: ``` <xsl:template match="glink"> <xsl:copy> <xsl:copy-of select="@*|text()" /> <xsl:text> </xsl:text> <xsl:value-of select="normalize-space(replace(following::text()[1],'\s(\(Sheet \d\)).*',' $1'))" /> </xsl:copy> </xsl:template> <xsl:template match="text()[preceding-sibling::glink]"> <xsl:value-of select="normalize-space(replace(.,'\s\(Sheet \d\)(.*)',' $1'))" /> </xsl:template> ``` The first one includes the `(Sheet 3)` string into `glink` and the second one excludes `(Sheet 3)` from the following `text()` node. **The result is:** ``` <root> <para> Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else.</para> </root> ```
56,009,890
Authors of an xml document did not include all the text inside an element that will be converted to a hyperlink. I would like to process or pre-process the xml to include the necessary text. I find this hard to describe, but a simple example should show what I'm attempting. I'm using XSLT 2.0. I already do regular expression processing for various situations but can't figure this out. I know how to do this with perl/python regular expressions but I can't figure out how to approach this with XSLT. Here is 'very' simplified xml from an author, in which they left out the ' (Sheet 3)' from the glink element:

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<para>
Go look at figure <glink refid="1">Figure 22</glink> (Sheet 3). Then go do something else.
</para>
</root>
```

Here is what I'd like it to convert to, where the ' (Sheet 3)' is now inside the glink tag:

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<para>
Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else.
</para>
</root>
```

The case when this conversion should happen is when there is a glink element followed by (this regular expression):

```
\s\(Sheet \d\)
```

I currently have 2 XSLTs. The first pre-processes the XML to convert a number of other situations (using regular expression/xsl:analyze-string). The second XSLT converts the pre-processed xml to HTML. The second XSLT has a template to handle glink elements and turn each into a hyperlink, but the hyperlink should be including the Sheet information. I would assume that it is easier to pre-process this first and leave the 2nd XSLT alone, but I always appreciate better ways. Thank you for your time.
2019/05/06
[ "https://Stackoverflow.com/questions/56009890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/107690/" ]
In order to reduce the use of regex functions, I would use this approach:

```
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
    <xsl:template match="node()|@*">
        <xsl:copy>
            <xsl:apply-templates select="node()|@*"/>
        </xsl:copy>
    </xsl:template>
    <xsl:template match="glink">
        <xsl:variable name="vAnalyzedString">
            <xsl:analyze-string select="following-sibling::node()[1][self::text()]" regex="^\s*\(Sheet\s+\d+\)">
                <xsl:matching-substring>
                    <match>
                        <xsl:value-of select="."/>
                    </match>
                </xsl:matching-substring>
                <xsl:non-matching-substring>
                    <no-match>
                        <xsl:value-of select="."/>
                    </no-match>
                </xsl:non-matching-substring>
            </xsl:analyze-string>
        </xsl:variable>
        <xsl:copy>
            <xsl:apply-templates select="node()|@*"/>
            <xsl:apply-templates select="$vAnalyzedString/match/text()"/>
        </xsl:copy>
        <xsl:apply-templates select="$vAnalyzedString/no-match/text()"/>
    </xsl:template>
    <xsl:template match="text()[preceding-sibling::node()[1][self::glink]]"/>
</xsl:stylesheet>
```

Output:

```
<root>
    <para> Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else. </para>
</root>
```

**Do note**: all `glink` elements are processed, but none of the text nodes that immediately follow them (those are consumed inside the `glink` template). It is possible to use the `xsl:analyze-string` instruction, but you will need to declare a variable with partial results and then navigate those results. Also, this approach easily lets you further process those (now) text nodes, and it has **only one regex processing** step.
You can use these two templates in combination with the *Identity template*: ``` <xsl:template match="glink"> <xsl:copy> <xsl:copy-of select="@*|text()" /> <xsl:text> </xsl:text> <xsl:value-of select="normalize-space(replace(following::text()[1],'\s(\(Sheet \d\)).*',' $1'))" /> </xsl:copy> </xsl:template> <xsl:template match="text()[preceding-sibling::glink]"> <xsl:value-of select="normalize-space(replace(.,'\s\(Sheet \d\)(.*)',' $1'))" /> </xsl:template> ``` The first one includes the `(Sheet 3)` string into `glink` and the second one excludes `(Sheet 3)` from the following `text()` node. **The result is:** ``` <root> <para> Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else.</para> </root> ```
56,009,890
Authors of an xml document did not include all the text inside an element that will be converted to a hyperlink. I would like to process or pre-process the xml to include the necessary text. I find this hard to describe, but a simple example should show what I'm attempting. I'm using XSLT 2.0. I already do regular expression processing for various situations but can't figure this out. I know how to do this with perl/python regular expressions but I can't figure out how to approach this with XSLT. Here is 'very' simplified xml from an author, in which they left out the ' (Sheet 3)' from the glink element:

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<para>
Go look at figure <glink refid="1">Figure 22</glink> (Sheet 3). Then go do something else.
</para>
</root>
```

Here is what I'd like it to convert to, where the ' (Sheet 3)' is now inside the glink tag:

```
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<para>
Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else.
</para>
</root>
```

The case when this conversion should happen is when there is a glink element followed by (this regular expression):

```
\s\(Sheet \d\)
```

I currently have 2 XSLTs. The first pre-processes the XML to convert a number of other situations (using regular expression/xsl:analyze-string). The second XSLT converts the pre-processed xml to HTML. The second XSLT has a template to handle glink elements and turn each into a hyperlink, but the hyperlink should be including the Sheet information. I would assume that it is easier to pre-process this first and leave the 2nd XSLT alone, but I always appreciate better ways. Thank you for your time.
2019/05/06
[ "https://Stackoverflow.com/questions/56009890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/107690/" ]
In order to reduce the use of regex functions, I would use this approach:

```
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
    <xsl:template match="node()|@*">
        <xsl:copy>
            <xsl:apply-templates select="node()|@*"/>
        </xsl:copy>
    </xsl:template>
    <xsl:template match="glink">
        <xsl:variable name="vAnalyzedString">
            <xsl:analyze-string select="following-sibling::node()[1][self::text()]" regex="^\s*\(Sheet\s+\d+\)">
                <xsl:matching-substring>
                    <match>
                        <xsl:value-of select="."/>
                    </match>
                </xsl:matching-substring>
                <xsl:non-matching-substring>
                    <no-match>
                        <xsl:value-of select="."/>
                    </no-match>
                </xsl:non-matching-substring>
            </xsl:analyze-string>
        </xsl:variable>
        <xsl:copy>
            <xsl:apply-templates select="node()|@*"/>
            <xsl:apply-templates select="$vAnalyzedString/match/text()"/>
        </xsl:copy>
        <xsl:apply-templates select="$vAnalyzedString/no-match/text()"/>
    </xsl:template>
    <xsl:template match="text()[preceding-sibling::node()[1][self::glink]]"/>
</xsl:stylesheet>
```

Output:

```
<root>
    <para> Go look at figure <glink refid="1">Figure 22 (Sheet 3)</glink>. Then go do something else. </para>
</root>
```

**Do note**: all `glink` elements are processed, but none of the text nodes that immediately follow them (those are consumed inside the `glink` template). It is possible to use the `xsl:analyze-string` instruction, but you will need to declare a variable with partial results and then navigate those results. Also, this approach easily lets you further process those (now) text nodes, and it has **only one regex processing** step.
The existing answer has the right approach but I would sharpen the regular expression pattern and the match patterns: ``` <xsl:param name="pattern" as="xs:string">\s\(Sheet \d\)</xsl:param> <xsl:variable name="pattern2" as="xs:string" select="'^' || $pattern"/> <xsl:variable name="pattern3" as="xs:string" select="'^(' || $pattern || ')(.*)'"/> <xsl:template match="glink[@refid][following-sibling::node()[1][self::text()[matches(., $pattern2)]]]"> <xsl:copy> <xsl:apply-templates select="@*"/> <xsl:value-of select=". || replace(following-sibling::node()[1], $pattern3, '$1', 's')"/> </xsl:copy> </xsl:template> <xsl:template match="text()[preceding-sibling::node()[1][self::glink[@refid]]][matches(., $pattern2)]"> <xsl:value-of select="replace(., $pattern3, '$2', 's')"/> </xsl:template> ``` <https://xsltfiddle.liberty-development.net/bFN1y9z/1> Otherwise I think the matches and replacements happen for more than a `glink` followed (directly?) by that pattern, as you can see in <https://xsltfiddle.liberty-development.net/bFN1y9z/2>. The code I posted uses XPath 3.1's `||` string concatenation operator but if an XSLT 2 processor is the target that could of course be replaced with a normal `concat` function call.
64,026,529
I'm trying to accomplish a basic image-processing task. Here is my algorithm: find the RGB values of the n-th, (n+1)-th and (n+2)-th pixels in a row and create a new image from these values. [![Algorithm](https://i.stack.imgur.com/uFwT2.png)](https://i.stack.imgur.com/uFwT2.png) Here is my example code in Python: ``` import glob import ntpath import time from multiprocessing.pool import ThreadPool as Pool import matplotlib.pyplot as plt import numpy as np from PIL import Image images = glob.glob('model/*.png') pool_size = 17 def worker(image_file): try: new_image = np.zeros((2400, 1280, 3), dtype=np.uint8) image_name = ntpath.basename(image_file) print(f'Processing [{image_name}]') image = Image.open(image_file) data = np.asarray(image) for i in range(0, 2399): for j in range(0, 1279): pix_x = j * 3 + 1 red = data[i, pix_x - 1][0] green = data[i, pix_x][1] blue = data[i, pix_x + 1][2] new_image[i, j] = [red, green, blue] im = Image.fromarray(new_image) im.save(f'export/{image_name}') except: print('error with item') pool = Pool(pool_size) for image_file in images: pool.apply_async(worker, (image_file,)) pool.close() pool.join() ``` My input and output images are in RGB format. My code takes 5 seconds for every image. I'm open to any ideas for optimizing this task. Here are example input and output images: [Input Image](https://i.stack.imgur.com/84jvb.png)[2](https://i.stack.imgur.com/84jvb.png) [ 3840 x 2400 ] [Output Image](https://i.stack.imgur.com/kGn4L.png)[3](https://i.stack.imgur.com/kGn4L.png) [ 1280 x 2400 ]
2020/09/23
[ "https://Stackoverflow.com/questions/64026529", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14326860/" ]
Here is an approach: ``` import cv2 import numpy as np # Load input image im = cv2.imread('input.png') # Calculate new first layer - it is every 3rd pixel of the first layer of im n1 = im[:, ::3, 0] # Calculate new second layer - it is every 3rd pixel of the second layer of im, starting with an offset of 1 pixel n2 = im[:, 1::3, 1] # Calculate new third layer - it is every 3rd pixel of the third layer of im, starting with an offset of 2 pixels n3 = im[:, 2::3, 2] # Now stack the three new layers to make a new output image res = np.dstack((n1,n2,n3)) ```
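The slicing above can also be checked against a tiny pure-Python version of the same stride-with-offset rule; the 1×6 "row" of pixel tuples below is made up purely for illustration (each output pixel takes R from column 3j, G from column 3j+1 and B from column 3j+2):

```python
# One row of 6 pixels, each pixel is (R, G, B); the values are arbitrary.
row = [(10, 11, 12), (20, 21, 22), (30, 31, 32),
       (40, 41, 42), (50, 51, 52), (60, 61, 62)]

# Output width is input width // 3; output pixel j combines three source pixels.
out = [(row[3 * j][0], row[3 * j + 1][1], row[3 * j + 2][2])
       for j in range(len(row) // 3)]

print(out)  # [(10, 21, 32), (40, 51, 62)]
```

The NumPy slices `[:, ::3, 0]`, `[:, 1::3, 1]` and `[:, 2::3, 2]` do exactly this, just vectorized over all rows at once.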
As far as I understood from the question, you want to shift the pixel values of each channel of the input image in the output image. So, here is my approach. ``` import cv2 import numpy as np im = cv2.cvtColor(cv2.imread('my_image.jpg'), cv2.COLOR_BGR2RGB) im = np.pad(im, [(3, 3),(3,3),(0,0)], mode='constant', constant_values=0) # Add padding for enabling the shifting process later r= im[:,:,0] g= im[:,:,1] g = np.delete(g,np.s_[-1],axis=1) # remove the last column temp_pad = np.zeros(shape=(g.shape[0],1)) # removed part g = np.concatenate((temp_pad,g),axis=1) # put the removed part back b = im[:,:,2] b = np.delete(b,np.s_[-2::],axis=1) # remove the last columns temp_pad = np.zeros(shape=(b.shape[0],2)) # removed parts b = np.concatenate((temp_pad,b),axis=1) # put the removed parts back new_im = np.dstack((r,g,b)) # Merge the channels new_im = new_im[3:-3,3:-3,:]/np.amax(new_im)#*255 # Remove the padding ``` Basically, I achieved the shifting by padding & merging the green and blue channels. Let me know if this is what you are looking for. Good luck :)
62,281,696
I have been doing some googling but I can't really find a good python3 solution to my problem. Given the following HTML code, how do I extract 2019, 0.7 and 4.50% using python3? ``` <td rowspan='2' style='vertical-align:middle'>2019</td><td rowspan='2' style='vertical-align:middle;font-weight:bold;'>4.50%</td><td rowspan='2' style='vertical-align:middle;font-weight:bold;'>SGD 0.7</td> <td>SGD0.2 </td> ```
2020/06/09
[ "https://Stackoverflow.com/questions/62281696", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3702643/" ]
A solution using [`BeautifulSoup`](https://www.crummy.com/software/BeautifulSoup/bs4/doc/): ``` from bs4 import BeautifulSoup txt = '''<td rowspan='2' style='vertical-align:middle'>2019</td><td rowspan='2' style='vertical-align:middle;font-weight:bold;'>4.50%</td><td rowspan='2' style='vertical-align:middle;font-weight:bold;'>SGD 0.7</td> <td>SGD0.2 </td>''' soup = BeautifulSoup(txt, 'html.parser') info_1, info_2, info_3, *_ = soup.select('td') info_1 = info_1.get_text(strip=True) info_2 = info_2.get_text(strip=True) info_3 = info_3.get_text(strip=True).split()[-1] print(info_1, info_2, info_3) ``` Prints: ``` 2019 4.50% 0.7 ```
I think this might be helpful even if it does not exactly answer your question: ``` from html.parser import HTMLParser class MyHTMLParser(HTMLParser): def handle_data(self, data): print(data) parser = MyHTMLParser() parser.feed("<Your HTML here>") ``` For your particular case this will return: 2019 4.50% SGD 0.7 SGD0.2
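If the values are needed in variables rather than printed, the same stdlib parser can collect the non-blank text nodes into a list; the class name `CellCollector` is just an illustrative name, and the HTML fed in below is a shortened version of the question's snippet:

```python
from html.parser import HTMLParser

class CellCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.cells = []

    def handle_data(self, data):
        text = data.strip()
        if text:                      # skip whitespace-only text nodes
            self.cells.append(text)

parser = CellCollector()
parser.feed("<td>2019</td><td>4.50%</td><td>SGD 0.7</td>")
print(parser.cells)  # ['2019', '4.50%', 'SGD 0.7']
```

From that list you can strip the `SGD ` prefix or `%` suffix with ordinary string methods.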
58,736,295
I have the following `boto3` draft script ```py #!/usr/bin/env python3 import boto3 client = boto3.client('athena') BUCKETS='buckets.txt' DATABASE='some_db' QUERY_STR="""CREATE EXTERNAL TABLE IF NOT EXISTS some_db.{}( BucketOwner STRING, Bucket STRING, RequestDateTime STRING, RemoteIP STRING, Requester STRING, RequestID STRING, Operation STRING, Key STRING, RequestURI_operation STRING, RequestURI_key STRING, RequestURI_httpProtoversion STRING, HTTPstatus STRING, ErrorCode STRING, BytesSent BIGINT, ObjectSize BIGINT, TotalTime STRING, TurnAroundTime STRING, Referrer STRING, UserAgent STRING, VersionId STRING, HostId STRING, SigV STRING, CipherSuite STRING, AuthType STRING, EndPoint STRING, TLSVersion STRING ) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe' WITH SERDEPROPERTIES ( 'serialization.format' = '1', 'input.regex' = '([^ ]*) ([^ ]*) \\[(.*?)\\] ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) \\\"([^ ]*) ([^ ]*) (- |[^ ]*)\\\" (-|[0-9]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) (\"[^\"]*\") ([^ ]*)(?: ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*))?.*$' ) LOCATION 's3://my-bucket/{}'""" with open(BUCKETS, 'r') as f: lines = f.readlines() for line in lines: query_string = QUERY_STR.format(line, line) response = client.create_named_query( Name=line, Database=DATABASE, QueryString=QUERY_STR ) print(response) ``` When executed, all responses come back with status code `200`. Why am I not able to see the corresponding tables that should have been created? Shouldn't I be able to (at least) see somewhere those queries stored? 
**update1**: I am now trying to actually create the tables via the above queries as follows: ```py for line in lines: query_string = QUERY_STR.format(DATABASE, line[:-1].replace('-', '_'), line[:-1]) try: response1 = client.start_query_execution( QueryString=query_string, WorkGroup=WORKGROUP, QueryExecutionContext={ 'Database': DATABASE }, ResultConfiguration={ 'OutputLocation': OUTPUT_BUCKET, }, ) query_execution_id = response1['ResponseMetadata']['RequestId'] print(query_execution_id) except Exception as e1: print(query_string) raise(e1) ``` Once again, the script does output some query ids (no error seems to take place), nonetheless no table is created. I have also followed the advice of @John Rotenstein and initialised my `boto3` client as follows: ``` client = boto3.client('athena', region_name='us-east-1') ```
2019/11/06
[ "https://Stackoverflow.com/questions/58736295", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2409793/" ]
First of all, `response` simply tells you that your request has been successfully submitted. Method `create_named_query()` creates a snippet of your query, which can then be seen/accessed in the AWS Athena console in the **Saved Queries** tab. [![enter image description here](https://i.stack.imgur.com/3rZ5I.png)](https://i.stack.imgur.com/3rZ5I.png) It seems to me that you want to create a table using `boto3`. If that is the case, you need to use the [`start_query_execution()`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.start_query_execution) method. > > Runs the SQL query statements contained in the Query . Requires you to have access to the workgroup in which the query ran. > > > Getting response 200 out of `start_query_execution` doesn't guarantee that your query will get executed successfully. As I understand, this method does some simple pre-execution checks to validate the syntax of the query. However, there are other things that could fail your query at run time, for example if you try to create a table in a database that doesn't exist, or if you try to create a table definition in a database to which you don't have access. Here is an example, when I used your query string, formatted with some random name for the table. [![enter image description here](https://i.stack.imgur.com/mRHFb.png)](https://i.stack.imgur.com/mRHFb.png) I got response 200 and got some value in `response1['ResponseMetadata']['RequestId']`. However, since I don't have `some_db` in the AWS Glue catalog, this query failed at run time, thus no table was created.
Here is how you can track query execution within boto3 ```py import time response1 = client.start_query_execution( QueryString=query_string, WorkGroup=WORKGROUP, QueryExecutionContext={ 'Database': DATABASE }, ResultConfiguration={ 'OutputLocation': OUTPUT_BUCKET, }, ) # The execution id is returned in the response body itself query_execution_id = response1['QueryExecutionId'] while True: time.sleep(1) response_2 = client.get_query_execution( QueryExecutionId=query_execution_id ) # The state string lives under QueryExecution -> Status -> State query_status = response_2['QueryExecution']['Status']['State'] print(query_status) if query_status not in ["QUEUED", "RUNNING"]: # SUCCEEDED, FAILED and CANCELLED are terminal break ```
To reproduce your situation, I did the following: * In the Athena console, I ran: ```sql CREATE DATABASE foo ``` * In the Athena console, I selected `foo` in the Database drop-down * To start things simple, I ran this Python code: ```py import boto3 athena_client = boto3.client('athena', region_name='ap-southeast-2') # Change as necessary QUERY_STR=""" CREATE EXTERNAL TABLE IF NOT EXISTS foo.bar(id INT) LOCATION 's3://my-bucket/input-files/' """ response = athena_client.start_query_execution( QueryString=QUERY_STR, QueryExecutionContext={'Database': 'foo'}, ResultConfiguration={'OutputLocation': 's3://my-bucket/athena-out/'} ) ``` * I then went to the Athena console, did a refresh, and confirmed that the `bar` table was created **Suggestion:** Try the above to confirm that it works for you, too! I then ran your code, using the `start_query_execution` version of your code (shown in your second code block). I had to make some changes: * I didn't have a `buckets.txt` file, so I just provided a list of names * Your code doesn't show the content of `OUTPUT_BUCKET`, so I used `s3://my-bucket/athena-output/` (Does that match the format that *you* used?) * Your code uses `QUERY_STR.format(DATABASE...` but there was no `{}` in the `QUERY_STR` where the database name would be inserted, so I removed `DATABASE` as an input to the format variable * I did *not* provide a value for `WORKGROUP` **It all ran fine**, creating multiple tables. So, check the above bullet-points to see if it caused a problem for you (such as replacing the Database name in the `format()` statement).
32,954,110
I have the following string in python: ``` foo = 'a_b_c' ``` How do I split the string into 2 parts: `'a_b'` and `'c'`? I.e., I want to split at the second `'_'`. `str.split('_')` splits into 3 parts: `'a'`, `'b'` and `'c'`.
2015/10/05
[ "https://Stackoverflow.com/questions/32954110", "https://Stackoverflow.com", "https://Stackoverflow.com/users/308827/" ]
Use the [`str.rsplit()` method](https://docs.python.org/2/library/stdtypes.html#str.rsplit) with a limit: ``` part1, part2 = foo.rsplit('_', 1) ``` `str.rsplit()` splits from the right-hand-side, and the limit (second argument) tells it to only split once. Alternatively, use [`str.rpartition()`](https://docs.python.org/2/library/stdtypes.html#str.rpartition): ``` part1, delimiter, part2 = foo.rpartition('_') ``` This includes the delimiter as a return value. Demo: ``` >>> foo = 'a_b_c' >>> foo.rsplit('_', 1) ['a_b', 'c'] >>> foo.rpartition('_') ('a_b', '_', 'c') ```
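One edge case worth noting: when the separator does not occur at all, `rsplit` returns a single-element list (so the two-variable unpacking above would raise `ValueError`), while `rpartition` still returns a 3-tuple with empty strings on the left:

```python
foo = 'abc'  # no underscore at all

print(foo.rsplit('_', 1))   # ['abc'] -- unpacking into two names would fail
print(foo.rpartition('_'))  # ('', '', 'abc') -- unpacking stays safe
```

So `rpartition` is the safer choice if the separator might be missing.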
``` import re x = "a_b_c" print(re.split(r"_(?!.*_)", x)) ``` You can do it with `re`. Here, with the use of a negative `lookahead`, we split on the `_` after which there is no further `_`, i.e. the last one.
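The lookahead can be verified quickly, and it generalizes to strings with more underscores, always splitting at the last one:

```python
import re

# The underscore matched is the one not followed by any other underscore.
print(re.split(r"_(?!.*_)", "a_b_c"))    # ['a_b', 'c']
print(re.split(r"_(?!.*_)", "a_b_c_d"))  # ['a_b_c', 'd']
```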
19,390,828
I am working on a tree-based program in python. I need to rewrite this function using recursion and eliminate all of these for-loops: Example of my function: ``` def items_on_level(full_tree, level): for key0, value0 in full_tree.items(): for key1, value1 in value0.items(): for key2, value2 in value1.items(): for key3, value3 in value2.items(): print(key3) ``` Input: - level - level of my recursion tree - full\_tree - dict with parents and children ``` {<Category: test>: {<Category: dkddk>: {}, <Category: test2>: {<Category: test3>: {}, <Category: test5>: {<Category: kfpokpok>: {}}}}} ``` The function should return all the objects on the current level. Help! Thanks!
2013/10/15
[ "https://Stackoverflow.com/questions/19390828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2863834/" ]
``` import itertools def itemsOnLevel(root, level): if not level: return list(root.keys()) else: return list(itertools.chain.from_iterable([itemsOnLevel(v, level-1) for k,v in root.items()])) ```
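For example, with plain strings standing in for the `Category` objects from the question (and the `itertools` import included so the snippet is self-contained), the function returns the keys at the requested depth:

```python
import itertools

def itemsOnLevel(root, level):
    # Level 0 is the root's own keys; deeper levels recurse into the values.
    if not level:
        return list(root.keys())
    return list(itertools.chain.from_iterable(
        itemsOnLevel(v, level - 1) for v in root.values()))

tree = {'test': {'dkddk': {},
                 'test2': {'test3': {},
                           'test5': {'kfpokpok': {}}}}}

print(itemsOnLevel(tree, 0))  # ['test']
print(itemsOnLevel(tree, 1))  # ['dkddk', 'test2']
print(itemsOnLevel(tree, 2))  # ['test3', 'test5']
```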
``` itemsOnLevel = lambda r, l: ( lambda f, r, l: f (f, r, l) ) ( lambda f, r, l: [_ for _ in r.keys () ] if not l else [i for k in r.values () for i in f (f, k, l - 1) ], r, l) ```
19,390,828
I am working on a tree-based program in python. I need to rewrite this function using recursion and eliminate all of these for-loops: Example of my function: ``` def items_on_level(full_tree, level): for key0, value0 in full_tree.items(): for key1, value1 in value0.items(): for key2, value2 in value1.items(): for key3, value3 in value2.items(): print(key3) ``` Input: - level - level of my recursion tree - full\_tree - dict with parents and children ``` {<Category: test>: {<Category: dkddk>: {}, <Category: test2>: {<Category: test3>: {}, <Category: test5>: {<Category: kfpokpok>: {}}}}} ``` The function should return all the objects on the current level. Help! Thanks!
2013/10/15
[ "https://Stackoverflow.com/questions/19390828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2863834/" ]
``` import itertools def itemsOnLevel(root, level): if not level: return list(root.keys()) else: return list(itertools.chain.from_iterable([itemsOnLevel(v, level-1) for k,v in root.items()])) ```
I guess this will help a bit ``` if(full_tree != {}): for node in return_list_current_level(full_tree): items_on_level(node, level+1) else: return None ```
19,390,828
I am working on a tree-based program in python. I need to rewrite this function using recursion and eliminate all of these for-loops: Example of my function: ``` def items_on_level(full_tree, level): for key0, value0 in full_tree.items(): for key1, value1 in value0.items(): for key2, value2 in value1.items(): for key3, value3 in value2.items(): print(key3) ``` Input: - level - level of my recursion tree - full\_tree - dict with parents and children ``` {<Category: test>: {<Category: dkddk>: {}, <Category: test2>: {<Category: test3>: {}, <Category: test5>: {<Category: kfpokpok>: {}}}}} ``` The function should return all the objects on the current level. Help! Thanks!
2013/10/15
[ "https://Stackoverflow.com/questions/19390828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2863834/" ]
``` import itertools def itemsOnLevel(root, level): if not level: return list(root.keys()) else: return list(itertools.chain.from_iterable([itemsOnLevel(v, level-1) for k,v in root.items()])) ```
``` def onl(r,l): if l: obj=[] for i in [onl(v,l-1) for v in r.values()]: obj.extend(i) return obj return r.keys() ```
19,390,828
I am working on a tree-based program in python. I need to rewrite this function using recursion and eliminate all of these for-loops: Example of my function: ``` def items_on_level(full_tree, level): for key0, value0 in full_tree.items(): for key1, value1 in value0.items(): for key2, value2 in value1.items(): for key3, value3 in value2.items(): print(key3) ``` Input: - level - level of my recursion tree - full\_tree - dict with parents and children ``` {<Category: test>: {<Category: dkddk>: {}, <Category: test2>: {<Category: test3>: {}, <Category: test5>: {<Category: kfpokpok>: {}}}}} ``` The function should return all the objects on the current level. Help! Thanks!
2013/10/15
[ "https://Stackoverflow.com/questions/19390828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2863834/" ]
``` itemsOnLevel = lambda r, l: ( lambda f, r, l: f (f, r, l) ) ( lambda f, r, l: [_ for _ in r.keys () ] if not l else [i for k in r.values () for i in f (f, k, l - 1) ], r, l) ```
I guess this will help a bit ``` if(full_tree != {}): for node in return_list_current_level(full_tree): items_on_level(node, level+1) else: return None ```
19,390,828
I am working on a tree-based program in python. I need to rewrite this function using recursion and eliminate all of these for-loops: Example of my function: ``` def items_on_level(full_tree, level): for key0, value0 in full_tree.items(): for key1, value1 in value0.items(): for key2, value2 in value1.items(): for key3, value3 in value2.items(): print(key3) ``` Input: - level - level of my recursion tree - full\_tree - dict with parents and children ``` {<Category: test>: {<Category: dkddk>: {}, <Category: test2>: {<Category: test3>: {}, <Category: test5>: {<Category: kfpokpok>: {}}}}} ``` The function should return all the objects on the current level. Help! Thanks!
2013/10/15
[ "https://Stackoverflow.com/questions/19390828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2863834/" ]
``` itemsOnLevel = lambda r, l: ( lambda f, r, l: f (f, r, l) ) ( lambda f, r, l: [_ for _ in r.keys () ] if not l else [i for k in r.values () for i in f (f, k, l - 1) ], r, l) ```
``` def onl(r,l): if l: obj=[] for i in [onl(v,l-1) for v in r.values()]: obj.extend(i) return obj return r.keys() ```
45,650,904
I am using celery to run a long-running task. The task creates a subprocess using `subprocess.Popen`. To make the task abortable, I wrote the code below: ``` from celery.contrib import abortable @task(bind=True, base=abortable.AbortableTask) def my_task(self, *args): p = subprocess.Popen([...]) while True: try: p.wait(1) except subprocess.TimeoutExpired: if self.is_aborted(): p.terminate() return else: break # Other codes... ``` I tried it in my console and it works well. But when I decide to close the worker by pressing `Ctrl+C`, the program prints out `'worker: Warm shutdown (MainProcess)'` and blocks for a long time, which is not what I expect. **It seems that task abortion doesn't happen when a worker is about to shut down.** From the documentation I know that **if I want to abort a task, I should manually instantiate an `AbortableAsyncResult` using a task id and call its `.abort()` method.** But I can find nowhere to place this code, because it requires the ids of all running tasks, which I have no way to access. So, how do I invoke `.abort()` for all running tasks when workers are about to shut down? Or is there any alternative? I am using celery 4.1.0 with python 3.6.2.
2017/08/12
[ "https://Stackoverflow.com/questions/45650904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278171/" ]
You are passing `Int` though the actual type required is `CustomSegmentedControl`. To solve this problem, simply create an `IBOutlet` for your `CustomSegmentedControl` and pass it as the parameter to the `Button_CustomSegmentValueChanged` method. ``` func SwipedRight(swipe : UISwipeGestureRecognizer){ if currentSelectedView == 1 { customSegmentOutlet.selectedSegmentIndex = 0 Button_CustomSegmentValueChanged(customSegmentOutlet) //LoadLoginView() } } ```
Assuming your `CustomSegmentedControl` is a subclass of `UISegmentedControl`, I have modified a few lines of code ``` @IBAction func button_CustomSegmentValueChanged(_ sender: UISegmentedControl?) { // guard sender for nil before use } ``` and when calling this ``` func swipedRight(swipe : UISwipeGestureRecognizer){ if currentSelectedView == 1 { let seg = UISegmentedControl() seg.selectedSegmentIndex = 0 Button_CustomSegmentValueChanged(seg) } } ``` If `CustomSegmentedControl` is not a subclass of `UISegmentedControl`, change it so that it is.
45,650,904
I am using celery to run a long-running task. The task creates a subprocess using `subprocess.Popen`. To make the task abortable, I wrote the code below: ``` from celery.contrib import abortable @task(bind=True, base=abortable.AbortableTask) def my_task(self, *args): p = subprocess.Popen([...]) while True: try: p.wait(1) except subprocess.TimeoutExpired: if self.is_aborted(): p.terminate() return else: break # Other codes... ``` I tried it in my console and it works well. But when I decide to close the worker by pressing `Ctrl+C`, the program prints out `'worker: Warm shutdown (MainProcess)'` and blocks for a long time, which is not what I expect. **It seems that task abortion doesn't happen when a worker is about to shut down.** From the documentation I know that **if I want to abort a task, I should manually instantiate an `AbortableAsyncResult` using a task id and call its `.abort()` method.** But I can find nowhere to place this code, because it requires the ids of all running tasks, which I have no way to access. So, how do I invoke `.abort()` for all running tasks when workers are about to shut down? Or is there any alternative? I am using celery 4.1.0 with python 3.6.2.
2017/08/12
[ "https://Stackoverflow.com/questions/45650904", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278171/" ]
I would suggest using the buttons array on CredentialSegmentedController to mimic the behaviour of changing the segment 1. Make an outlet from your custom segment control 2. Then modify your code using this segment control like below ``` func SwipedRight(swipe : UISwipeGestureRecognizer) { if currentSelectedView == 1 { //Button_CustomSegmentValueChanged(0) //LoadLoginView() let btn = customSegmentOutlet.buttons[0] customSegmentOutlet.buttonTapped(btn) } } func SwipedLeft(swipe : UISwipeGestureRecognizer){ if currentSelectedView == 0 { //LoadSignupView() let btn = customSegmentOutlet.buttons[1] customSegmentOutlet.buttonTapped(btn) } } ```
Assuming your `CustomSegmentedControl` is a subclass of `UISegmentedControl`, I have modified a few lines of code ``` @IBAction func button_CustomSegmentValueChanged(_ sender: UISegmentedControl?) { // guard sender for nil before use } ``` and when calling this ``` func swipedRight(swipe : UISwipeGestureRecognizer){ if currentSelectedView == 1 { let seg = UISegmentedControl() seg.selectedSegmentIndex = 0 Button_CustomSegmentValueChanged(seg) } } ``` If `CustomSegmentedControl` is not a subclass of `UISegmentedControl`, change it so that it is.
1,081,698
I have a problem of upgrading python from 2.4 to 2.6: I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum"). So what is the right way to migrate/install modules to python2.6?
2009/07/04
[ "https://Stackoverflow.com/questions/1081698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133068/" ]
They are not broken, they are simply not installed. The solution to that is to install them under 2.6. But first we should see if you really should do that... Yes, Python will, when installed, replace the python command with the version installed (unless you run it with --alt-install). You don't exactly state what your problem is, so I'm going to guess. Your problem is that many local commands using Python now fail, because they get executed with Python 2.6, and not with Python 2.4. Is that correct? If that is so, then simply delete /usr/local/bin/python, and make sure /usr/bin/python is a symbolic link to /usr/bin/python2.4. Then you would have to type python2.6 to run python2.6, but that's OK. That's the best way to do it. Then you only need to install the packages **you** need in 2.6. But if my guess is wrong, and you really need to install all those packages under 2.6, then don't worry too much. First of all, install setuptools. It includes an easy\_install script, and you can then install modules with ``` easy_install <modulename> ``` It will download the module from pypi.python.org and install it. And it will also install any module that is a dependency. easy\_install can install any module that is using distutils as an installer, and not many don't. This will make installing 90% of those modules a breeze. If the module has a C component, it will compile it, and then you need the library headers too, and that will be more work, and all you can do there is install them the standard CentOS way. You shouldn't use symbolic links between versions, because libraries are generally for a particular version. For 2.4 and 2.6 I think the .pyc files are compatible (but I'm not 100% sure), so that may work, but any module that uses C *will* break. And other versions of Python will have incompatible .pyc files as well. And I'm sure that if you do that, most Python people are not going to help you. ;-) In general, I try to keep the system python "clean", i.e. I don't install anything there that isn't installed with the packaging tools. Instead I use virtualenv or buildout to let every application have its own python path where its dependencies live. So every single project I have basically has its own set of libraries. It gets easier that way.
There are a couple of options... 1. If the modules will run under Python 2.6, you can simply create symbolic links to them from the 2.6 site-packages directory to the 2.4 site-packages directory. 2. If they will not run under 2.6, then you may need to re-compile them against 2.6, or install up-to-date versions of them. Just make sure you are using 2.6 when calling `"python setup.py"` ... You may want to post this on serverfault.com, if you run into additional challenges.
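Option 1 (symbolic links) can be sketched as follows; the package name `somepkg` and the directory layout under a temp dir are made up to stand in for the real `/usr/lib/python2.x` paths, so nothing on the system is touched:

```python
import os
import tempfile

# Simulated site-packages layout (hypothetical paths under a temp dir).
root = tempfile.mkdtemp()
src = os.path.join(root, "python2.4", "site-packages", "somepkg")
dst_dir = os.path.join(root, "python2.6", "site-packages")
os.makedirs(src)
os.makedirs(dst_dir)

# Link the 2.4 package into the 2.6 tree so the 2.6 interpreter can find it.
dst = os.path.join(dst_dir, "somepkg")
os.symlink(src, dst)

print(os.path.islink(dst), os.path.isdir(dst))  # True True
```

Remember this only helps for pure-Python modules that work under both versions; C extensions must be rebuilt against 2.6.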
1,081,698
I have a problem of upgrading python from 2.4 to 2.6: I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum"). So what is the right way to migrate/install modules to python2.6?
2009/07/04
[ "https://Stackoverflow.com/questions/1081698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133068/" ]
They are not broken, they are simply not installed. The solution to that is to install them under 2.6. But first we should see if you really should do that... Yes, Python will, when installed, replace the python command with the version installed (unless you run it with --alt-install). You don't exactly state what your problem is, so I'm going to guess. Your problem is that many local commands using Python now fail, because they get executed with Python 2.6, and not with Python 2.4. Is that correct? If that is so, then simply delete /usr/local/bin/python, and make sure /usr/bin/python is a symbolic link to /usr/bin/python2.4. Then you would have to type python2.6 to run python2.6, but that's OK. That's the best way to do it. Then you only need to install the packages **you** need in 2.6. But if my guess is wrong, and you really need to install all those packages under 2.6, then don't worry too much. First of all, install setuptools. It includes an easy\_install script, and you can then install modules with ``` easy_install <modulename> ``` It will download the module from pypi.python.org and install it. And it will also install any module that is a dependency. easy\_install can install any module that is using distutils as an installer, and not many don't. This will make installing 90% of those modules a breeze. If the module has a C component, it will compile it, and then you need the library headers too, and that will be more work, and all you can do there is install them the standard CentOS way. You shouldn't use symbolic links between versions, because libraries are generally for a particular version. For 2.4 and 2.6 I think the .pyc files are compatible (but I'm not 100% sure), so that may work, but any module that uses C *will* break. And other versions of Python will have incompatible .pyc files as well. And I'm sure that if you do that, most Python people are not going to help you. ;-) In general, I try to keep the system python "clean", i.e. I don't install anything there that isn't installed with the packaging tools. Instead I use virtualenv or buildout to let every application have its own python path where its dependencies live. So every single project I have basically has its own set of libraries. It gets easier that way.
Some Python libs may still not be accessible, as with Python 2.6 `site-packages` is changed to `dist-packages`. The only way in that case is to move all the stuff generated in site-packages (e.g. by `make install`) to dist-packages and create a sym-link.
1,081,698
I have a problem of upgrading python from 2.4 to 2.6: I have CentOS 5 (Full). It has python 2.4 living in /usr/lib/python2.4/ . Additional modules are living in /usr/lib/python2.4/site-packages/ . I've built python 2.6 from sources at /usr/local/lib/python2.6/ . I've set default python to python2.6 . Now old modules for 2.4 are out of pythonpath and are "lost". In particular, yum is broken ("no module named yum"). So what is the right way to migrate/install modules to python2.6?
2009/07/04
[ "https://Stackoverflow.com/questions/1081698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/133068/" ]
They are not broken, they are simply not installed. The solution to that is to install them under 2.6. But first we should see if you really should do that... Yes, Python will, when installed, replace the python command with the version installed (unless you run it with --alt-install). You don't exactly state what your problem is, so I'm going to guess. Your problem is that many local commands using Python now fail, because they get executed with Python 2.6, and not with Python 2.4. Is that correct? If that is so, then simply delete /usr/local/bin/python, and make sure /usr/bin/python is a symbolic link to /usr/bin/python2.4. Then you would have to type python2.6 to run python2.6, but that's OK. That's the best way to do it. Then you only need to install the packages **you** need in 2.6. But if my guess is wrong, and you really need to install all those packages under 2.6, then don't worry too much. First of all, install setuptools. It includes an easy\_install script, and you can then install modules with ``` easy_install <modulename> ``` It will download the module from pypi.python.org and install it. And it will also install any module that is a dependency. easy\_install can install any module that is using distutils as an installer, and not many don't. This will make installing 90% of those modules a breeze. If the module has a C component, it will compile it, and then you need the library headers too, and that will be more work, and all you can do there is install them the standard CentOS way. You shouldn't use symbolic links between versions, because libraries are generally for a particular version. For 2.4 and 2.6 I think the .pyc files are compatible (but I'm not 100% sure), so that may work, but any module that uses C *will* break. And other versions of Python will have incompatible .pyc files as well. And I'm sure that if you do that, most Python people are not going to help you. ;-) In general, I try to keep the system python "clean", i.e. I don't install anything there that isn't installed with the packaging tools. Instead I use virtualenv or buildout to let every application have its own python path where its dependencies live. So every single project I have basically has its own set of libraries. It gets easier that way.
easy\_install is a good option, but there is also a lower-level way to install a module: 1. unpack the module source to some directory 2. type "python setup.py install" Of course you should do this with the required Python interpreter version; to check which one is active, type: python -V
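The `python -V` check mentioned above can also be done programmatically, which lets an install script refuse to run under the wrong interpreter. A minimal sketch; the helper name is mine, not from any library:

```python
# Minimal sketch: check the running interpreter version programmatically,
# mirroring the `python -V` advice above. The helper name is illustrative.
import sys

def require_version(major, minor):
    """Return True if the running interpreter is at least major.minor."""
    return sys.version_info[:2] >= (major, minor)

print(sys.version.split()[0])  # the same version string `python -V` reports
```

A setup script could call `require_version(2, 6)` near the top and abort early instead of failing halfway through an install.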
46,121,057
I'm new to bash and was tasked with scripting a check for a compliance process. From bash (or if python is better), I need to script an ssh connection from within the host running the script. For example: ssh -l testaccount localhost But I need to run this 52 times so that it is trapped by an IPS. When running this string I am prompted for a password and I have to hit enter in order to make the script complete. Is there a way to include a password or carriage return to act as manual intervention so that I do not have to hit enter each time? Here's a sample of what I was able to get working, but it only sequenced 30 attempts: ``` #!/bin/bash i=0 while [$i -lt 52] do echo | ssh -l testaccount localhost& i=$[$i+1] done ```
2017/09/08
[ "https://Stackoverflow.com/questions/46121057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8581142/" ]
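As a side note on the shell snippet in the question above: `[$i -lt 52]` needs spaces around the brackets, and `$[...]` arithmetic is deprecated in favor of `$(( ))`. A corrected sketch of the loop, with the `ssh` call replaced by a placeholder `echo` so it runs anywhere (key-based auth or a tool like `sshpass` would be needed to suppress the real password prompt):

```shell
#!/bin/bash
# Corrected loop sketch; `echo "attempt $i"` stands in for the
# `ssh -l testaccount localhost` call from the question.
i=0
while [ "$i" -lt 52 ]    # spaces inside [ ] are required
do
  echo "attempt $i"
  i=$((i + 1))           # POSIX arithmetic instead of deprecated $[...]
done
```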
In contrast to CSS, JS and HTML files which can be [gzipped using dispatcher](https://docs.adobe.com/content/docs/en/dispatcher/disp-config.html), images can be compressed only by reducing quality or resizing them. It is a quite common case for AEM projects and there are a couple of options to do that, some of them are coming out-of-the-box and do not even require programming: * You can extend `DAM Update Asset` with [CreateWebEnabledImageProcess](https://docs.adobe.com/docs/en/aem/6-3/develop/ref/javadoc/com/day/cq/dam/core/process/CreateWebEnabledImageProcess.html) Workflow Process Step. It allows you to generate new image rendition with parameters like size, quality, mime-type. Depending on workflow launcher configuration, this rendition can be generated during creation or modification of assets. You can also trigger the workflow to be run on chosen or all assets. * In case that `CreateWebEnabledImageProcess` configuration is not sufficient for your requirements, you can implement your own Workflow Process Step and generate proper rendition programmatically, using for example [ImageHelper](https://docs.adobe.com/docs/en/aem/6-3/develop/ref/javadoc/com/day/cq/commons/ImageHelper.html#saveLayer(com.day.image.Layer,%20java.lang.String,%20double,%20Node,%20java.lang.String,%20boolean)) or some Java framework for images transformation. That might be also needed if you want to generate the compressed images *on the fly*, for example, instead of generating rendition for each uploaded image, you can implement servlet attached to proper selectors and image extensions (i.e. `imageName.mobile.png`) which return the compressed image. * Eventually, **integration with ImageMagick is possible**, [Adobe documentation](https://docs.adobe.com/docs/en/aem/6-3/develop/extending/assets/best-practices-for-imagemagick.html) describes how it can be achieved using `CommandLineProcess` Workflow Process Step. 
However, you need to be aware of the security vulnerabilities related to this that are mentioned in the documentation. It is also worth mentioning that if your client needs more advanced image-transformation solutions in the future, [integration with Dynamic Media](https://docs.adobe.com/docs/en/aem/6-3/administer/content/dynamic-media/image-presets.html) can also be considered; however, this is the most costly solution.
AEM offers options for "image optimisation" but this is a broad topic so there is no "magic" switch you can turn to "optimise" your images. It all boils down to the amount of kilo- or megabytes that are transferred from AEM to the user's browser. The size of an asset is influenced by two things: 1. Asset dimensions (width and height). 2. Compression. The biggest gains can be achieved by simply reducing the asset's dimensions. AEM does that already. If you have a look at your asset's renditions you will notice that there is not just the so-called *original* rendition but several other renditions with different dimensions. ``` MyImage.jpg └── jcr:content └── renditions/ β”œβ”€β”€ cq5dam.thumbnail.140.100.png β”œβ”€β”€ cq5dam.thumbnail.319.319.png β”œβ”€β”€ cq5dam.thumbnail.48.48.png └── original ``` The numbers in the rendition's name are the width and height of the rendition. So there is a version of `MyImage.jpg` that has a width of 140px and a height of 100px and so on. This is all done by the `DAM Update Asset` workflow when the image is uploaded and can be modified to generate more renditions with different dimensions. But generating images with different dimensions is only half of the story. AEM has to select the rendition with the right dimensions at the right moment. This is commonly referred to as "responsive images". The AEM image component does not support "responsive" images out of the box and there are several ways to implement this feature. The gist of it is that your image component has to contain a list of URLs for different sized renditions. When the page is rendered, client-side JavaScript determines which rendition is best for the current screen size and adds its URL to the `img` tag's `src` attribute. I would recommend that you have a look at the fairly new AEM Core Components which are not included with AEM. Those core components contain an image component that supports responsive images. You can read more about those here: 1. 
[AEM Core Components Image Component (GitHub)](https://github.com/Adobe-Marketing-Cloud/aem-core-wcm-components/tree/master/content/src/content/jcr_root/apps/core/wcm/components/image/v1/image) 2. [AEM Core Components Documentation](https://docs.adobe.com/docs/en/aem/6-3/develop/components/core-components.html) Usually, components like that will not use "static" renditions that were already generated by the *DAM Update Asset* workflow but will rely on a Adaptive Image Servlet. This servlet basically gets the asset path and the target width and will return the asset in the requested width. To avoid doing this over and over you should allow the Dispatcher to cache the resulting image. Those are just the basic things you can do. There are a lot of other things that can be done but all of them with less and less gains in terms of "optimisation".
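The client-side selection step described above can be sketched as follows. The function and the candidate widths (taken from the `cq5dam.thumbnail.*` rendition names earlier in the answer) are illustrative, not the actual core-component code:

```javascript
// Illustrative sketch: pick the smallest rendition that still covers the
// current screen width, falling back to the largest available one.
function pickRenditionWidth(renditionWidths, screenWidth) {
  const sorted = [...renditionWidths].sort((a, b) => a - b);
  for (const w of sorted) {
    if (w >= screenWidth) return w;
  }
  return sorted[sorted.length - 1];
}

// e.g. widths parsed from the rendition names above: 48, 140, 319
```

A real component would run this on load and resize, then set the chosen rendition's URL on the `img` tag's `src` attribute.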
There are many ways to optimise images in AEM. Here I will go through 3 of those ways. 1) Using the DAM Update Asset workflow. This is an out-of-the-box workflow in AEM, where renditions get created on upload of an image. You can use those rendition paths in the img src attribute. 2) Using the ACS Commons Image Transformer. Install the ACS Commons package and use the Image Transformer servlet config to generate optimised or transformed images according to your requirements. For more info on this check [ACS AEM commons](https://adobe-consulting-services.github.io/acs-aem-commons/features/named-image-transform/index.html). 3) Using Google PageSpeed at the dispatcher level. If you want to reduce the size of images, Google PageSpeed is an option to consider. Install PageSpeed at the dispatcher level and add image optimisation rules to achieve your requirement. PageSpeed Insights detects the images on the page that can be optimized to reduce their file size without significantly impacting their visual quality. For more info check here: [Optimising Images](https://developers.google.com/speed/docs/insights/OptimizeImages)
I had the same need, and I looked at ImageMagick too and researched various options. Ultimately I customized the workflows that we use to create our image renditions to integrate with another tool. I modified them to use the [Kraken.io](https://kraken.io/) API to automatically send the rendition images AEM produced to Kraken where they would be fully web-optimized (using the default Kraken settings). I used their [Java integration library](https://kraken.io/docs/integration-libraries) to get the basic code for this integration. So eventually I ended up with properly web-optimized images for all the generated renditions (and the same could be done to the original) that were automatically optimized during a workflow without authors having to manually re-upload images. This API usage required a Kraken license. So I believe the answer is that at this time AEM does not provide a feature to achieve this, and your best bet is to integrate with another tool that does it (custom code). [TinyPng.com](https://tinypng.com/) was another image optimization service that looked like it would be good for this need and that also had an API. And for the record, I also submitted this as a feature request to our AEM rep. It seems like a glaring product gap to me, and something I am surprised hasn't been built into the product yet to allow customers to make images fully web-optimized.
60,751,007
I am trying to build a simple dictionary of all us english vs uk english differences for a web application I am working on. Is there a non-hacky way to build a dictionary where both the value and key can be looked up in python as efficiently as possible? I'd prefer not to loop through the dict by values for us spelling. For example: ``` baz = {'foo', 'bar'} # baz['foo'] => 'bar' # baz['bar'] => 'foo' ```
2020/03/19
[ "https://Stackoverflow.com/questions/60751007", "https://Stackoverflow.com", "https://Stackoverflow.com/users/872097/" ]
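One common, non-hacky approach to the two-way lookup asked about above is to store each pair under both keys, trading a bit of memory for O(1) lookups in both directions. A minimal sketch; the example words are mine:

```python
# Minimal sketch: a symmetric lookup table built from one-directional
# US -> UK pairs. Each spelling maps to its counterpart.
us_to_uk = {"color": "colour", "gray": "grey", "center": "centre"}

two_way = dict(us_to_uk)
two_way.update({uk: us for us, uk in us_to_uk.items()})

print(two_way["color"])   # -> colour
print(two_way["colour"])  # -> color
```

This assumes no word is its own counterpart and no key appears on both sides; for spelling variants that holds, but a general bidirectional map would need a dedicated structure (or two dicts kept in sync).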
You have a raw `@JoinColumn` in `RolePrivilege`; change it so that the name of the column is configured: `@JoinColumn(name = "roleId")`. Also you're saving `RolePrivilege`, but the changes are not cascading; change the mapping to: ``` @ManyToOne(cascade = CascadeType.ALL) ``` P.S.: Prefer `List`s over `Set`s in -to-many mappings for [performance reasons](https://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html/ch20.html#performance-collections-mostefficentinverse).
Firstly, do not return a plain String; wrap it in a class (for example a `RolePriviligueResponse` with a `String status` field) and use that as the response body. Secondly, you don't need the `@ResponseBody` annotation; your `@PostMapping` annotation already has it. Third, don't use `Integer` for the ID; better to use the `Long` type. And you did not provide the column name in `@JoinColumn`; it should be `@JoinColumn(name="roleId")`.
14,657,433
How do I calculate a correlation matrix in python? I have an n-dimensional vector in which each element has 5 dimensions. For example my vector looks like ``` [ [0.1, .32, .2, 0.4, 0.8], [.23, .18, .56, .61, .12], [.9, .3, .6, .5, .3], [.34, .75, .91, .19, .21] ] ``` In this case the dimension of the vector is 4 and each element of this vector has 5 dimensions. How do I construct the matrix in the easiest way? Thanks
2013/02/02
[ "https://Stackoverflow.com/questions/14657433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1964587/" ]
Using [numpy](http://www.numpy.org/), you could use [np.corrcoef](http://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html): ``` In [88]: import numpy as np In [89]: np.corrcoef([[0.1, .32, .2, 0.4, 0.8], [.23, .18, .56, .61, .12], [.9, .3, .6, .5, .3], [.34, .75, .91, .19, .21]]) Out[89]: array([[ 1. , -0.35153114, -0.74736506, -0.48917666], [-0.35153114, 1. , 0.23810227, 0.15958285], [-0.74736506, 0.23810227, 1. , -0.03960706], [-0.48917666, 0.15958285, -0.03960706, 1. ]]) ```
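As a sanity check on one entry of the matrix above, the Pearson correlation for a single pair of rows can be computed with only the standard library. A minimal sketch; the helper name is mine:

```python
# Minimal sketch: Pearson correlation of two equal-length sequences,
# matching what np.corrcoef computes for one pair of rows.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Rows 0 and 1 of the example data give roughly -0.3515, matching the
# [0][1] entry of the np.corrcoef output above.
```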
Here is a [pretty good example](http://www.tradinggeeks.net/2015/08/calculating-correlation-in-python/) of calculating a correlation matrix from multiple time series using Python. The included source code calculates a correlation matrix for a set of Forex currency pairs using Pandas, NumPy, and matplotlib to produce a graph of correlations. The sample data is a set of historical data files, and the output is a single correlation matrix and a plot. The code is very well documented.
You can also use np.array if you don't want to write your matrix all over again. ``` import numpy as np a = np.array([ [0.1, .32, .2, 0.4, 0.8], [.23, .18, .56, .61, .12], [.9, .3, .6, .5, .3], [.34, .75, .91, .19, .21]]) b = np.corrcoef(a) print b ```
As I almost missed that comment by @Anton Tarasenko, I'll provide a new answer. So given your array: ``` a = np.array([[0.1, .32, .2, 0.4, 0.8], [.23, .18, .56, .61, .12], [.9, .3, .6, .5, .3], [.34, .75, .91, .19, .21]]) ``` If you want the correlation matrix of your dimensions (columns), which I assume, you can use numpy (note the transpose!): ``` import numpy as np print(np.corrcoef(a.T)) ``` Or if you have it in Pandas anyhow: ``` import pandas as pd print(pd.DataFrame(a).corr()) ``` Both print ``` array([[ 1. , -0.03783885, 0.34905716, 0.14648975, -0.34945863], [-0.03783885, 1. , 0.67888519, -0.96102583, -0.12757741], [ 0.34905716, 0.67888519, 1. , -0.45104803, -0.80429469], [ 0.14648975, -0.96102583, -0.45104803, 1. , -0.15132323], [-0.34945863, -0.12757741, -0.80429469, -0.15132323, 1. ]]) ```
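The transpose step above is what switches the correlation from rows to columns; in pure Python the same reshaping can be sketched with `zip`:

```python
# Minimal sketch: transposing the example data with zip, so that each
# inner list holds one column (dimension) instead of one row.
a = [[0.1, .32, .2, 0.4, 0.8],
     [.23, .18, .56, .61, .12],
     [.9, .3, .6, .5, .3],
     [.34, .75, .91, .19, .21]]

columns = [list(col) for col in zip(*a)]
print(len(columns), len(columns[0]))  # 5 columns of 4 values each
```

Correlating `columns` pairwise then gives the 5x5 matrix shown above, whereas correlating the rows of `a` directly gives the 4x4 matrix from the earlier answer.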