7,538,628
So, once again, I wrote a nice Python program that makes my life much easier and saves a lot of time. Of course, this involves a virtualenv, made with the `mkvirtualenv` function of virtualenvwrapper. The project has a requirements.txt file with a few required libraries (requests too :D), and the program won't run without them. I am trying to add a `bin/run-app` executable shell script which would be in my path (a symlink, actually). Inside this script, I need to switch to the virtualenv before I can run the program. So I put this in it:

```
#!/bin/bash
# cd into the project directory
workon "$(cat .venv)"
python main.py
```

A file `.venv` contains the virtualenv name. But when I run this script, I get a `workon: command not found` error. Of course, I have virtualenvwrapper.sh sourced in my bashrc, but it doesn't seem to be available in this shell script. So, how can I access those virtualenvwrapper functions here? Or am I doing this the wrong way? How do you launch your Python tools, each of which has its own virtualenv?
2011/09/24
[ "https://Stackoverflow.com/questions/7538628", "https://Stackoverflow.com", "https://Stackoverflow.com/users/151048/" ]
Just source the `virtualenvwrapper.sh` script in your script to import virtualenvwrapper's functions. You should then be able to use the `workon` function in your script. Better yet, you could create a shell script (named `venv-run.sh`, for example) to run any Python script in a given virtualenv, and place it in `/usr/bin`, `/usr/local/bin`, or any directory in your `PATH`. Such a script could look like this:

```
#!/bin/sh
# if virtualenvwrapper.sh is in your PATH (i.e. installed with pip)
source "$(which virtualenvwrapper.sh)"
#source /path/to/virtualenvwrapper.sh  # if it's not in your PATH

workon "$1"
python "$2"
deactivate
```

It could then be used simply as `venv-run.sh my_virtualenv /path/to/script.py`.
I couldn't find a way to trigger the `virtualenvwrapper` commands from a shell script, but this trick can help: assuming your env name is `myenv`, put the following lines at the beginning of the script:

```
ENV=myenv
source "$WORKON_HOME/$ENV/bin/activate"
```
This is a super old thread and I had a similar issue. I started digging for a simpler solution out of curiosity.

```
gnome-terminal --working-directory='/home/exact/path/here' --tab --title="API" -- bash -ci "workon aaapi && python manage.py runserver 8001; exec bash;"
```

The `--working-directory` flag forces the tab to open in that directory by default, and `-ci` makes bash behave like an interactive shell, which gets around virtualenvwrapper's functions not being available in non-interactive scripts. You can run as many of these in sequence as you like; each opens a tab, gives it a title, and runs the script you want. Personally, I dropped an alias into my bashrc so this runs when I type `startdev` in my terminal. I like this because it's easy, simple to replicate, flexible, and doesn't require any fiddling with variables.
Apparently, I was doing this the wrong way. Instead of saving the virtualenv's name in the `.venv` file, I should be saving the virtualenv's directory path:

```
(cdvirtualenv && pwd) > .venv
```

and in `bin/run-app` I put:

```
source "$(cat .venv)/bin/activate"
python main.py
```

And yay!
It's a [known issue](https://bitbucket.org/dhellmann/virtualenvwrapper/issue/219/cant-deactivate-active-virtualenv-from). As a workaround, you can turn the content of the script into a function and place it in either `~/.bashrc` or `~/.profile`:

```
function run-app() {
    workon "$(cat .venv)"
    python main.py
}
```
69,624,176
I need to make a letter "C" print using Python. The code I currently have is down below, but I'm not sure how to add 2 stars to the ends of the letter.

**Here is my current output:**

```
Enter an odd number 5 or greater: 5
 ***
*
*
*
*
 ***
```

**Here is my needed output:**

```
Enter an odd number 5 or greater: 5
 ***
*  *
*
*
*  *
 ***
```

**Current code:**

```
import math

# Purpose: Draw one symbol on either side of the line
# leaving (width-2) spaces between the symbols
def draw_side(width, symbol, height):
    print(symbol + " "*(width))

# Purpose: Produce a letter 'A' drawn with the symbol
# provided using the given width and height
def draw_A(width, height, symbol):
    print(" ")
    mid = math.floor(height/2)
    print(" " + symbol*(width))
    for i in range(1, height):
        if i == mid:
            print(symbol)
        else:
            draw_side(width, symbol, height)
    print(" " + symbol*(width))

# Purpose: Prompt the user for an integer 5 or greater
# and return valid user input
def get_height():
    height = int(input("Enter an odd number 5 or greater: "))
    while 1:
        if (height % 2) == 0 or (height < 5):
            height = int(input("-> Error! Try again: "))
        else:
            break
    return height

# Purpose: Calls helper function to get height, and calculates width.
# Finally calls draw_A() to draw letter 'A' with given symbol
def draw_letter(symbol):
    height = get_height()
    width = height - 2
    draw_A(width, height, symbol)

draw_letter('*')
```
2021/10/19
[ "https://Stackoverflow.com/questions/69624176", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17187287/" ]
You can decouple authentication from authorization to allow more flexible connections between all three entities: browser, HTTP server, and DB. To make your second example work, you could do the following:

* The HTTP server (US) asynchronously submits the query to the DB (Asia) and requests an auth token for it.
* The HTTP server (US) sends the auth token back to the browser (Europe) while the query is running.
* The browser (Europe) then initiates a second HTTP call against the DB (Asia) using the auth token, and perhaps the query ID as well.
* The DB will probably need to implement a simple token-auth protocol. It should:
  + Authenticate the incoming auth token.
  + Retrieve the session.
  + Start streaming the query result set back to the caller.

For the DB server, there are plenty of out-of-the-box slim Docker images you can spin up in seconds that implement an authorization server and can listen to the browser behind nginx. As you can see, the architecture can be made to work. However, the DB server in Asia will need to be revamped to implement some kind of token authorization. The simplest and most widespread strategy is OAuth2, which is all the rage nowadays.
Building on @TheImpalers answer: how about adding another table to your remote DB that exists only for retrieving query results? When the client asks the backend service for a database query, the backend service generates a UUID or other secure token and tells the DB to run the query and store the result under that UUID. The backend service also returns the UUID to the client, which can then retrieve the associated data from the DB directly.
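A minimal sketch of that token-based handoff, using an in-memory dict to stand in for the extra DB table (`QueryResultStore` and its method names are hypothetical, not from any framework):

```python
import uuid

class QueryResultStore:
    """In-memory stand-in for a DB table like query_results(token, payload)."""

    def __init__(self):
        self._results = {}

    def store(self, payload):
        """Run by the backend: file the finished result under a fresh token."""
        token = str(uuid.uuid4())  # unguessable handle handed to the client
        self._results[token] = payload
        return token

    def fetch(self, token):
        """Run on the client's direct request to the DB.

        Popping the entry makes retrieval one-shot, so a leaked token
        cannot be replayed."""
        return self._results.pop(token, None)

store = QueryResultStore()
token = store.store({"rows": [1, 2, 3]})  # backend stores the query result
print(token)                              # UUID the backend returns to the client
```

The client would then hit the DB-side endpoint with this token instead of authenticating through the backend again.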
TLDR:

```
Europe (Client) -> US (Server) -> Asia (Server) -> Asia (DB)
```

Open an HTTP server in Asia (if you don't have access to the same DC/server, rent a different one), then redirect requests from the US HTTP server to the Asia HTTP server, which connects to the local DB and streams the response. The redirect can be either public (302) or private proxying over a VPN if you care about latency and have that option. The frontend talking to the DB directly is not a very good pattern, because you can't do any of the middleware operations you'll need in the long term (breaking changes, analytics, authorization, redirects, rate limiting, scalability...). If your SQL is very heavy and you can't do synchronous requests with long-lasting TCP connections, set up streaming over a WebSocket server (also in Asia).
43,556,353
I have started learning Python and am using an online interpreter for Python 2 (pythontutor).

```
x=5,6
if x==5:
    print "5"
else:
    print "not"
```

It goes into the else branch and prints "not". Why is that? What exactly does `x=5,6` mean?
2017/04/22
[ "https://Stackoverflow.com/questions/43556353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7747088/" ]
`,` is the tuple constructor: `x,y` evaluates to the tuple `(x,y)`, so the expression `5,6` evaluates to the tuple `(5,6)`. `x` is neither `5` nor `6` but a tuple.
When you declared `x = 5, 6`, you made it a tuple. Later, when you do `x == 5`, this translates to `(5, 6) == 5`, which is not true, so the else branch runs. If instead you did `x[0] == 5`, that would be true and print 5, because it accesses index 0 of the tuple, which is equal to 5. Check out [some tutorials on tuples](https://www.tutorialspoint.com/python/python_tuples.htm) for more info.
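The behavior is easy to verify (Python 3 syntax shown here; the original question uses Python 2 print statements):

```python
x = 5, 6              # the comma, not parentheses, creates the tuple
print(type(x))        # <class 'tuple'>
print(x == 5)         # False: a tuple never equals an int
print(x == (5, 6))    # True: comparing tuple to tuple
print(x[0] == 5)      # True: indexing recovers the first element
```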
In Python, when you write `x = 4, 5`, it is the same as declaring a tuple with `x = (4, 5)`. In the interpreter:

```
>>> x = 4, 5
>>> x
(4, 5)
```

Hence, your comparison compares a `tuple` with an `int`.
`x` here acts like a sequence (it is actually a tuple), where `x[0] = 5` and `x[1] = 6`. Execute this code, and the display will be 5:

```
x=5,6
if x[0]==5:
    print "5"
else:
    print "not"
```

See <http://www.pythontutor.com/visualize.html#mode=edit>, where you can run your Python code step by step.
8,572,830
I am building a Django application which depends on a Python module where a SIGINT signal handler has been implemented. Assuming I cannot change the module I depend on, how can I work around the "signal only works in main thread" error I get when integrating it in Django? Can I run it on Django's main thread? Is there a way to inhibit the handler so the module can run on non-main threads? Thanks!
2011/12/20
[ "https://Stackoverflow.com/questions/8572830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/898179/" ]
Django's built-in development server has an auto-reload feature enabled by default, which spawns a new thread as a means of reloading code. To work around this, you can simply do the following, although you'd obviously lose the convenience of auto-reloading:

```
python manage.py runserver --noreload
```

You'll also need to be mindful of this when choosing your production setup. At least some of the deployment options (such as threaded FastCGI) are certain to execute your code outside the main thread.
Although the question does not describe exactly the situation you are in, here is some more generic advice: the signal is only delivered to the main thread, so the signal handler should be registered in the main thread. From there, the action that the signal triggers needs to be communicated to the other threads. I usually do this using [Events](http://docs.python.org/library/threading.html#event-objects). The signal handler sets the event; the other threads read it and realize that action X has been triggered. Obviously, this implies that the event object must be shared among the threads.
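A minimal sketch of that pattern (the names `stop_event`, `handle_sigint`, and `worker` are illustrative, not from any library):

```python
import signal
import threading

stop_event = threading.Event()  # shared flag, visible to every thread

def handle_sigint(signum, frame):
    # Runs in the main thread; it only records that shutdown was requested.
    stop_event.set()

# Handler registration must happen in the main thread.
signal.signal(signal.SIGINT, handle_sigint)

def worker():
    # Worker threads never touch the signal machinery; they just poll
    # the shared event between units of work.
    while not stop_event.wait(timeout=0.05):
        pass  # do one unit of work here

t = threading.Thread(target=worker)
t.start()
```

Once the event is set, every worker notices it on its next check and winds down cleanly.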
I use Python 3.5 and Django 1.8.5 in my project, and I ran into a similar problem recently. I can easily run my `xxx.py` code with **SIGNAL** handling directly, but it can't be executed under Django as a package, precisely because of the "**signal only works in main thread**" error. First, running the dev server with `--noreload --nothreading` works, but it runs my multi-threaded code too slowly for me. Second, I found that code in my package's `__init__.py` runs in the main thread; but since only the main thread can catch the signal, code elsewhere in my package can't catch it at all. That didn't solve my problem, although it may be a solution for you. Finally, I found that there is a built-in module named `subprocess` in Python. It lets you run a complete sub-process with its own main thread, so you can run your signal-handling code there easily. Though I don't know the performance implications, it works well for me. PS: you can find all the details about `subprocess` in the Python documentation.
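A minimal illustration of the `subprocess` approach: the child process has its own main thread, so it can register signal handlers freely even when the parent code runs in a non-main thread (the child snippet here is a placeholder for the real signal-dependent module):

```python
import subprocess
import sys

# Stand-in for code that must install a signal handler; in the child
# process it runs in the main thread, so signal.signal() is allowed.
child_code = (
    "import signal\n"
    "signal.signal(signal.SIGTERM, lambda signum, frame: None)\n"
    "print('handler installed')\n"
)

# Launch a fresh interpreter as a full child process.
result = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # handler installed
```

From a Django view or worker thread, the same `subprocess.run` call works because the handler registration happens in the child's main thread, not the caller's.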
There is a cleaner way that doesn't break your ability to use threads and processes. Put your registration calls in manage.py:

```
def handleKill(signum, frame):
    print "Killing Thread."  # Or whatever code you want here
    ForceTerminate.FORCE_TERMINATE = True
    print threading.active_count()
    exit(0)

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
    from django.core.management import execute_from_command_line
    signal.signal(signal.SIGINT, handleKill)
    signal.signal(signal.SIGTERM, handleKill)
    execute_from_command_line(sys.argv)
```
40,390,705
I would like to write a list comprehension like Python does:

```
list = [1,2,3,4]
newList = [i * 2 for i in list]
```

Using the standard library, iterators, and a lambda function, it should be possible to do the same thing in one statement:

```
std::vector<int> list = {1,2,3,4};
std::vector<int> newList =
```

Could you complete it?
2016/11/02
[ "https://Stackoverflow.com/questions/40390705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2708072/" ]
[`std::transform`](http://en.cppreference.com/w/cpp/algorithm/transform) lets you transform values and put them somewhere else:

```
std::vector<int> list = {1,2,3,4};
std::vector<int> newList;
std::transform(
    list.cbegin(), list.cend(),
    back_inserter(newList),
    [](int x) { return x * 2; });
```

But really, C++ is not the language for conciseness.
I found this solution, though it's not very pretty:

```
std::vector<int> list = {1,2,3,4};
std::vector<int> newList;
std::for_each(list.begin(), list.end(),
              [&newList](int val){ newList.push_back(val*2); });
```
11,333,261
My views.py file code:

```
#!/usr/bin/python
from django.template import loader, RequestContext
from django.http import HttpResponse
#from skey import find_root_tags, count, sorting_list
from search.models import Keywords
from django.shortcuts import render_to_response as rr

def front_page(request):
    if request.method == 'POST':
        from skey import find_root_tags, count, sorting_list
        str1 = request.POST['word']
        fo = open("/home/pooja/Desktop/xml.txt", "r")
        for i in range(count.__len__()):
            file = fo.readline()
            file = file.rstrip('\n')
            find_root_tags(file, str1, i)
            list.append((file, count[i]))
        sorting_list(list)
        for name, count1 in list:
            s = Keywords(file_name=name, frequency_count=count1)
            s.save()
        fo.close()
        list1 = Keywords.objects.all()
        t = loader.get_template('search/results.html')
        c = Context({'list1': list1})
        return HttpResponse(t.render(c))
    else:
        str1 = ''
        list = []
        template = loader.get_template('search/front_page.html')
        c = RequestContext(request)
        response = template.render(c)
        return HttpResponse(response)
```

And my template:

```
<body BGCOLOR = #9ACD32">
<ul>
{ % for l in list1 %}
<li> {{l.file_name}}, {{l.frquency_count}}</li>
{ % endfor %}
</ul>
</body>
```

On running my app on the server, it asks me for the word, redirects to results.html, and gives this output:

```
{ % for l in list1 %}
- List item,
{ % endfor %}
```

Why is this happening? Where am I going wrong? The values are getting stored in the table (I checked through the admin page), so why aren't they displaying? Please help.
2012/07/04
[ "https://Stackoverflow.com/questions/11333261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1493850/" ]
``` { % for l in list1 %} ``` should be ``` {% for l in list1 %} ``` and ``` { % endfor %} ``` should be ``` {% endfor %} ```
Never put a space between '{' and '%': Django's template engine only recognizes tags written exactly as '{%' and '%}'.
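To see why the space matters, here is a minimal sketch of a `{%` tag scanner. The regex below is a simplification I wrote for illustration, not Django's actual lexer, but it shows the same behavior: a spaced-out `{ %` is never recognized as a tag and passes through as literal text.

```python
import re

# Simplified stand-in for a block-tag scanner: a tag opens with "{%",
# with no space allowed between the brace and the percent sign.
TAG_RE = re.compile(r"{%\s*(\w+)[^}]*%}")

def find_tags(source):
    """Return the names of the block tags the scanner recognizes."""
    return TAG_RE.findall(source)

good = "{% for l in list1 %} <li>{{ l.file_name }}</li> {% endfor %}"
bad = "{ % for l in list1 %} <li>{{ l.file_name }}</li> { % endfor %}"

print(find_tags(good))  # → ['for', 'endfor']
print(find_tags(bad))   # → [] : the spaced braces are treated as literal text
```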
13,515,471
I'm generating a bar-chart with matplotlib. It all works well but I can't figure out how to prevent the labels of the x-axis from overlapping each other. Here an example: ![enter image description here](https://i.stack.imgur.com/BCm0v.png) Here is some sample SQL for a postgres 9.1 database: ``` drop table if exists mytable; create table mytable(id bigint, version smallint, date_from timestamp without time zone); insert into mytable(id, version, date_from) values ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53') ; ``` And this is my python-script: ``` # -*- coding: utf-8 -*- #!/usr/bin/python2.7 import psycopg2 import matplotlib.pyplot as plt fig = plt.figure() # for savefig() import pylab ### ### Connect to database with psycopg2 ### try: conn_string="dbname='x' user='y' host='z' password='pw'" print "Connecting to database\n->%s" % (conn_string) conn = psycopg2.connect(conn_string) print "Connection to database was established succesfully" except: print "Connection to database failed" ### ### Execute SQL query ### # New cursor method for sql cur = conn.cursor() # Execute SQL query. For more than one row use three '"' try: cur.execute(""" -- In which year/month have these points been created? -- Need 'yyyymm' because I only need Months with years (values are summeed up). Without, query returns every day the db has an entry. SELECT to_char(s.day,'yyyymm') AS month ,count(t.id)::int AS count FROM ( SELECT generate_series(min(date_from)::date ,max(date_from)::date ,interval '1 day' )::date AS day FROM mytable t ) s LEFT JOIN mytable t ON t.date_from::date = s.day GROUP BY month ORDER BY month; """) # Return the results of the query. 
Fetchall() = all rows, fetchone() = first row records = cur.fetchall() cur.close() except: print "Query could not be executed" # Unzip the data from the db-query. Order is the same as db-query output year, count = zip(*records) ### ### Plot (Barchart) ### # Count the length of the range of the count-values, y-axis-values, position of axis-labels, legend-label plt.bar(range(len(count)), count, align='center', label='Amount of created/edited points') # Add database-values to the plot with an offset of 10px/10px ax = fig.add_subplot(111) for i,j in zip(year,count): ax.annotate(str(j), xy=(i,j), xytext=(10,10), textcoords='offset points') # Rotate x-labels on the x-axis fig.autofmt_xdate() # Label-values for x and y axis plt.xticks(range(len(count)), (year)) # Label x and y axis plt.xlabel('Year') plt.ylabel('Amount of created/edited points') # Locate legend on the plot (http://matplotlib.org/users/legend_guide.html#legend-location) plt.legend(loc=1) # Plot-title plt.title("Amount of created/edited points over time") # show plot pylab.show() ``` Is there a way how I can prevent the labels from overlapping each other? Ideally in an automatic way, because I can't predict the amount of bars.
2012/11/22
[ "https://Stackoverflow.com/questions/13515471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1808868/" ]
I think you're confused on a few points about how matplotlib handles dates. You're not actually plotting dates, at the moment. You're plotting things on the x-axis with `[0,1,2,...]` and then manually labeling every point with a string representation of the date. Matplotlib will automatically position ticks. However, you're over-riding matplotlib's tick positioning functionality (Using `xticks` is basically saying: "I want ticks in exactly these positions".) At the moment, you'll get ticks at `[10, 20, 30, ...]` if matplotlib automatically positions them. However, these will correspond to the values that you used to plot them, not the dates (which you didn't use when plotting). You probably want to actually plot things using dates. Currently, you're doing something like this: ``` import datetime as dt import matplotlib.dates as mdates import numpy as np import matplotlib.pyplot as plt # Generate a series of dates (these are in matplotlib's internal date format) dates = mdates.drange(dt.datetime(2010, 01, 01), dt.datetime(2012,11,01), dt.timedelta(weeks=3)) # Create some data for the y-axis counts = np.sin(np.linspace(0, np.pi, dates.size)) # Set up the axes and figure fig, ax = plt.subplots() # Make a bar plot, ignoring the date values ax.bar(np.arange(counts.size), counts, align='center', width=1.0) # Force matplotlib to place a tick at every bar and label them with the date datelabels = mdates.num2date(dates) # Go back to a sequence of datetimes... 
ax.set(xticks=np.arange(dates.size), xticklabels=datelabels) #Same as plt.xticks # Make space for and rotate the x-axis tick labels fig.autofmt_xdate() plt.show() ``` ![enter image description here](https://i.stack.imgur.com/0Qi3s.png) Instead, try something like this: ``` import datetime as dt import matplotlib.dates as mdates import numpy as np import matplotlib.pyplot as plt # Generate a series of dates (these are in matplotlib's internal date format) dates = mdates.drange(dt.datetime(2010, 01, 01), dt.datetime(2012,11,01), dt.timedelta(weeks=3)) # Create some data for the y-axis counts = np.sin(np.linspace(0, np.pi, dates.size)) # Set up the axes and figure fig, ax = plt.subplots() # By default, the bars will have a width of 0.8 (days, in this case) We want # them quite a bit wider, so we'll make them them the minimum spacing between # the dates. (To use the exact code below, you'll need to convert your sequence # of datetimes into matplotlib's float-based date format. # Use "dates = mdates.date2num(dates)" to convert them.) width = np.diff(dates).min() # Make a bar plot. Note that I'm using "dates" directly instead of plotting # "counts" against x-values of [0,1,2...] ax.bar(dates, counts, align='center', width=width) # Tell matplotlib to interpret the x-axis values as dates ax.xaxis_date() # Make space for and rotate the x-axis tick labels fig.autofmt_xdate() plt.show() ``` ![enter image description here](https://i.stack.imgur.com/jTdzq.png)
As for your question on how to show only every 4th tick (for example) on the xaxis, you can do this: ``` import matplotlib.ticker as mticker myLocator = mticker.MultipleLocator(4) ax.xaxis.set_major_locator(myLocator) ```
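The key step in the answer above, isolated: turn the 'yyyymm' strings coming back from the SQL query into real datetime objects before plotting, so the bars land on a true date axis instead of at integer positions 0, 1, 2, and so on. The helper name below is mine, not from any library:

```python
from datetime import datetime

def parse_months(labels):
    # "yyyymm" strings such as "200612" -> datetime(2006, 12, 1)
    return [datetime.strptime(s, "%Y%m") for s in labels]

months = parse_months(["200612", "200701", "200702"])
print(months[0])  # → 2006-12-01 00:00:00
```

Once the x-values are datetimes, matplotlib's own date locators and `fig.autofmt_xdate()` can space and rotate the labels for you.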
13,515,471
I'm generating a bar-chart with matplotlib. It all works well but I can't figure out how to prevent the labels of the x-axis from overlapping each other. Here an example: ![enter image description here](https://i.stack.imgur.com/BCm0v.png) Here is some sample SQL for a postgres 9.1 database: ``` drop table if exists mytable; create table mytable(id bigint, version smallint, date_from timestamp without time zone); insert into mytable(id, version, date_from) values ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53') ; ``` And this is my python-script: ``` # -*- coding: utf-8 -*- #!/usr/bin/python2.7 import psycopg2 import matplotlib.pyplot as plt fig = plt.figure() # for savefig() import pylab ### ### Connect to database with psycopg2 ### try: conn_string="dbname='x' user='y' host='z' password='pw'" print "Connecting to database\n->%s" % (conn_string) conn = psycopg2.connect(conn_string) print "Connection to database was established succesfully" except: print "Connection to database failed" ### ### Execute SQL query ### # New cursor method for sql cur = conn.cursor() # Execute SQL query. For more than one row use three '"' try: cur.execute(""" -- In which year/month have these points been created? -- Need 'yyyymm' because I only need Months with years (values are summeed up). Without, query returns every day the db has an entry. SELECT to_char(s.day,'yyyymm') AS month ,count(t.id)::int AS count FROM ( SELECT generate_series(min(date_from)::date ,max(date_from)::date ,interval '1 day' )::date AS day FROM mytable t ) s LEFT JOIN mytable t ON t.date_from::date = s.day GROUP BY month ORDER BY month; """) # Return the results of the query. 
Fetchall() = all rows, fetchone() = first row records = cur.fetchall() cur.close() except: print "Query could not be executed" # Unzip the data from the db-query. Order is the same as db-query output year, count = zip(*records) ### ### Plot (Barchart) ### # Count the length of the range of the count-values, y-axis-values, position of axis-labels, legend-label plt.bar(range(len(count)), count, align='center', label='Amount of created/edited points') # Add database-values to the plot with an offset of 10px/10px ax = fig.add_subplot(111) for i,j in zip(year,count): ax.annotate(str(j), xy=(i,j), xytext=(10,10), textcoords='offset points') # Rotate x-labels on the x-axis fig.autofmt_xdate() # Label-values for x and y axis plt.xticks(range(len(count)), (year)) # Label x and y axis plt.xlabel('Year') plt.ylabel('Amount of created/edited points') # Locate legend on the plot (http://matplotlib.org/users/legend_guide.html#legend-location) plt.legend(loc=1) # Plot-title plt.title("Amount of created/edited points over time") # show plot pylab.show() ``` Is there a way how I can prevent the labels from overlapping each other? Ideally in an automatic way, because I can't predict the amount of bars.
2012/11/22
[ "https://Stackoverflow.com/questions/13515471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1808868/" ]
I think you're confused on a few points about how matplotlib handles dates. You're not actually plotting dates, at the moment. You're plotting things on the x-axis with `[0,1,2,...]` and then manually labeling every point with a string representation of the date. Matplotlib will automatically position ticks. However, you're over-riding matplotlib's tick positioning functionality (Using `xticks` is basically saying: "I want ticks in exactly these positions".) At the moment, you'll get ticks at `[10, 20, 30, ...]` if matplotlib automatically positions them. However, these will correspond to the values that you used to plot them, not the dates (which you didn't use when plotting). You probably want to actually plot things using dates. Currently, you're doing something like this: ``` import datetime as dt import matplotlib.dates as mdates import numpy as np import matplotlib.pyplot as plt # Generate a series of dates (these are in matplotlib's internal date format) dates = mdates.drange(dt.datetime(2010, 01, 01), dt.datetime(2012,11,01), dt.timedelta(weeks=3)) # Create some data for the y-axis counts = np.sin(np.linspace(0, np.pi, dates.size)) # Set up the axes and figure fig, ax = plt.subplots() # Make a bar plot, ignoring the date values ax.bar(np.arange(counts.size), counts, align='center', width=1.0) # Force matplotlib to place a tick at every bar and label them with the date datelabels = mdates.num2date(dates) # Go back to a sequence of datetimes... 
ax.set(xticks=np.arange(dates.size), xticklabels=datelabels) #Same as plt.xticks # Make space for and rotate the x-axis tick labels fig.autofmt_xdate() plt.show() ``` ![enter image description here](https://i.stack.imgur.com/0Qi3s.png) Instead, try something like this: ``` import datetime as dt import matplotlib.dates as mdates import numpy as np import matplotlib.pyplot as plt # Generate a series of dates (these are in matplotlib's internal date format) dates = mdates.drange(dt.datetime(2010, 01, 01), dt.datetime(2012,11,01), dt.timedelta(weeks=3)) # Create some data for the y-axis counts = np.sin(np.linspace(0, np.pi, dates.size)) # Set up the axes and figure fig, ax = plt.subplots() # By default, the bars will have a width of 0.8 (days, in this case) We want # them quite a bit wider, so we'll make them them the minimum spacing between # the dates. (To use the exact code below, you'll need to convert your sequence # of datetimes into matplotlib's float-based date format. # Use "dates = mdates.date2num(dates)" to convert them.) width = np.diff(dates).min() # Make a bar plot. Note that I'm using "dates" directly instead of plotting # "counts" against x-values of [0,1,2...] ax.bar(dates, counts, align='center', width=width) # Tell matplotlib to interpret the x-axis values as dates ax.xaxis_date() # Make space for and rotate the x-axis tick labels fig.autofmt_xdate() plt.show() ``` ![enter image description here](https://i.stack.imgur.com/jTdzq.png)
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt # create a random dataframe with datetimeindex date_range = pd.date_range('1/1/2011', '4/10/2011', freq='D') df = pd.DataFrame(np.random.randint(0,10,size=(100, 1)), columns=['value'], index=date_range) ``` Date ticklabels often overlap: ``` plt.plot(df.index,df['value']) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/EQDvG.png)](https://i.stack.imgur.com/EQDvG.png) So it is useful to rotate them and right align them. ``` fig, ax = plt.subplots() ax.plot(df.index,df['value']) ax.xaxis_date() # interpret the x-axis values as dates fig.autofmt_xdate() # make space for and rotate the x-axis tick labels plt.show() ``` [![enter image description here](https://i.stack.imgur.com/2B0rs.png)](https://i.stack.imgur.com/2B0rs.png)
13,515,471
I'm generating a bar-chart with matplotlib. It all works well but I can't figure out how to prevent the labels of the x-axis from overlapping each other. Here an example: ![enter image description here](https://i.stack.imgur.com/BCm0v.png) Here is some sample SQL for a postgres 9.1 database: ``` drop table if exists mytable; create table mytable(id bigint, version smallint, date_from timestamp without time zone); insert into mytable(id, version, date_from) values ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53') ; ``` And this is my python-script: ``` # -*- coding: utf-8 -*- #!/usr/bin/python2.7 import psycopg2 import matplotlib.pyplot as plt fig = plt.figure() # for savefig() import pylab ### ### Connect to database with psycopg2 ### try: conn_string="dbname='x' user='y' host='z' password='pw'" print "Connecting to database\n->%s" % (conn_string) conn = psycopg2.connect(conn_string) print "Connection to database was established succesfully" except: print "Connection to database failed" ### ### Execute SQL query ### # New cursor method for sql cur = conn.cursor() # Execute SQL query. For more than one row use three '"' try: cur.execute(""" -- In which year/month have these points been created? -- Need 'yyyymm' because I only need Months with years (values are summeed up). Without, query returns every day the db has an entry. SELECT to_char(s.day,'yyyymm') AS month ,count(t.id)::int AS count FROM ( SELECT generate_series(min(date_from)::date ,max(date_from)::date ,interval '1 day' )::date AS day FROM mytable t ) s LEFT JOIN mytable t ON t.date_from::date = s.day GROUP BY month ORDER BY month; """) # Return the results of the query. 
Fetchall() = all rows, fetchone() = first row records = cur.fetchall() cur.close() except: print "Query could not be executed" # Unzip the data from the db-query. Order is the same as db-query output year, count = zip(*records) ### ### Plot (Barchart) ### # Count the length of the range of the count-values, y-axis-values, position of axis-labels, legend-label plt.bar(range(len(count)), count, align='center', label='Amount of created/edited points') # Add database-values to the plot with an offset of 10px/10px ax = fig.add_subplot(111) for i,j in zip(year,count): ax.annotate(str(j), xy=(i,j), xytext=(10,10), textcoords='offset points') # Rotate x-labels on the x-axis fig.autofmt_xdate() # Label-values for x and y axis plt.xticks(range(len(count)), (year)) # Label x and y axis plt.xlabel('Year') plt.ylabel('Amount of created/edited points') # Locate legend on the plot (http://matplotlib.org/users/legend_guide.html#legend-location) plt.legend(loc=1) # Plot-title plt.title("Amount of created/edited points over time") # show plot pylab.show() ``` Is there a way how I can prevent the labels from overlapping each other? Ideally in an automatic way, because I can't predict the amount of bars.
2012/11/22
[ "https://Stackoverflow.com/questions/13515471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1808868/" ]
* The issue in the OP is the dates are formatted as `string` type. `matplotlib` plots every value as a tick label with the tick location being a 0 indexed number based on the number of values. * The resolution to this issue is to convert all values to the correct `type`, `datetime` in this case. + Once the `axes` have the correct `type`, there are additional `matplotlib` [methods](https://matplotlib.org/stable/gallery/text_labels_and_annotations/date.html), which can be used to further customize the tick spacing. * The answers to [What is plotted when string data is passed to the matplotlib API?](https://stackoverflow.com/q/68873370/7758804) explain in more detail what happens when `string` values are passed to `matplotlib`. * As of 2014-09-30, pandas has a [`read_sql`](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html) function, which has a `parse_dates` parameter. You definitely want to use that instead. ### Original Answer Here's how you should convert your date string into real datetime objects: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates data_tuples = [ ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53')] datatypes = [('col1', 'i4'), ('col2', 'i4'), ('date', 'S20')] data = np.array(data_tuples, dtype=datatypes) col1 = data['col1'] # convert the dates to a datetime type dates = mdates.num2date(mdates.datestr2num(data['date'])) fig, ax1 = plt.subplots() ax1.bar(dates, col1) fig.autofmt_xdate() ``` [![enter image description here](https://i.stack.imgur.com/HsOSY.png)](https://i.stack.imgur.com/HsOSY.png) Getting a simple list of tuples out of your database cursor should be as simple as... 
``` data_tuples = [] for row in cursor: data_tuples.append(row) ``` However, I posted a version of a function that I use to take db cursors directly to record arrays or pandas dataframes here: [How to convert SQL Query result to PANDAS Data Structure?](https://stackoverflow.com/questions/12047193/how-to-convert-sql-query-result-to-pandas-data-structure/12056933#12056933) Hopefully that helps too.
As for your question on how to show only every 4th tick (for example) on the xaxis, you can do this: ``` import matplotlib.ticker as mticker myLocator = mticker.MultipleLocator(4) ax.xaxis.set_major_locator(myLocator) ```
13,515,471
I'm generating a bar-chart with matplotlib. It all works well but I can't figure out how to prevent the labels of the x-axis from overlapping each other. Here an example: ![enter image description here](https://i.stack.imgur.com/BCm0v.png) Here is some sample SQL for a postgres 9.1 database: ``` drop table if exists mytable; create table mytable(id bigint, version smallint, date_from timestamp without time zone); insert into mytable(id, version, date_from) values ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53') ; ``` And this is my python-script: ``` # -*- coding: utf-8 -*- #!/usr/bin/python2.7 import psycopg2 import matplotlib.pyplot as plt fig = plt.figure() # for savefig() import pylab ### ### Connect to database with psycopg2 ### try: conn_string="dbname='x' user='y' host='z' password='pw'" print "Connecting to database\n->%s" % (conn_string) conn = psycopg2.connect(conn_string) print "Connection to database was established succesfully" except: print "Connection to database failed" ### ### Execute SQL query ### # New cursor method for sql cur = conn.cursor() # Execute SQL query. For more than one row use three '"' try: cur.execute(""" -- In which year/month have these points been created? -- Need 'yyyymm' because I only need Months with years (values are summeed up). Without, query returns every day the db has an entry. SELECT to_char(s.day,'yyyymm') AS month ,count(t.id)::int AS count FROM ( SELECT generate_series(min(date_from)::date ,max(date_from)::date ,interval '1 day' )::date AS day FROM mytable t ) s LEFT JOIN mytable t ON t.date_from::date = s.day GROUP BY month ORDER BY month; """) # Return the results of the query. 
Fetchall() = all rows, fetchone() = first row records = cur.fetchall() cur.close() except: print "Query could not be executed" # Unzip the data from the db-query. Order is the same as db-query output year, count = zip(*records) ### ### Plot (Barchart) ### # Count the length of the range of the count-values, y-axis-values, position of axis-labels, legend-label plt.bar(range(len(count)), count, align='center', label='Amount of created/edited points') # Add database-values to the plot with an offset of 10px/10px ax = fig.add_subplot(111) for i,j in zip(year,count): ax.annotate(str(j), xy=(i,j), xytext=(10,10), textcoords='offset points') # Rotate x-labels on the x-axis fig.autofmt_xdate() # Label-values for x and y axis plt.xticks(range(len(count)), (year)) # Label x and y axis plt.xlabel('Year') plt.ylabel('Amount of created/edited points') # Locate legend on the plot (http://matplotlib.org/users/legend_guide.html#legend-location) plt.legend(loc=1) # Plot-title plt.title("Amount of created/edited points over time") # show plot pylab.show() ``` Is there a way how I can prevent the labels from overlapping each other? Ideally in an automatic way, because I can't predict the amount of bars.
2012/11/22
[ "https://Stackoverflow.com/questions/13515471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1808868/" ]
* The issue in the OP is the dates are formatted as `string` type. `matplotlib` plots every value as a tick label with the tick location being a 0 indexed number based on the number of values. * The resolution to this issue is to convert all values to the correct `type`, `datetime` in this case. + Once the `axes` have the correct `type`, there are additional `matplotlib` [methods](https://matplotlib.org/stable/gallery/text_labels_and_annotations/date.html), which can be used to further customize the tick spacing. * The answers to [What is plotted when string data is passed to the matplotlib API?](https://stackoverflow.com/q/68873370/7758804) explain in more detail what happens when `string` values are passed to `matplotlib`. * As of 2014-09-30, pandas has a [`read_sql`](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html) function, which has a `parse_dates` parameter. You definitely want to use that instead. ### Original Answer Here's how you should convert your date string into real datetime objects: ``` import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates data_tuples = [ ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53')] datatypes = [('col1', 'i4'), ('col2', 'i4'), ('date', 'S20')] data = np.array(data_tuples, dtype=datatypes) col1 = data['col1'] # convert the dates to a datetime type dates = mdates.num2date(mdates.datestr2num(data['date'])) fig, ax1 = plt.subplots() ax1.bar(dates, col1) fig.autofmt_xdate() ``` [![enter image description here](https://i.stack.imgur.com/HsOSY.png)](https://i.stack.imgur.com/HsOSY.png) Getting a simple list of tuples out of your database cursor should be as simple as... 
``` data_tuples = [] for row in cursor: data_tuples.append(row) ``` However, I posted a version of a function that I use to take db cursors directly to record arrays or pandas dataframes here: [How to convert SQL Query result to PANDAS Data Structure?](https://stackoverflow.com/questions/12047193/how-to-convert-sql-query-result-to-pandas-data-structure/12056933#12056933) Hopefully that helps too.
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt # create a random dataframe with datetimeindex date_range = pd.date_range('1/1/2011', '4/10/2011', freq='D') df = pd.DataFrame(np.random.randint(0,10,size=(100, 1)), columns=['value'], index=date_range) ``` Date ticklabels often overlap: ``` plt.plot(df.index,df['value']) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/EQDvG.png)](https://i.stack.imgur.com/EQDvG.png) So it is useful to rotate them and right align them. ``` fig, ax = plt.subplots() ax.plot(df.index,df['value']) ax.xaxis_date() # interpret the x-axis values as dates fig.autofmt_xdate() # make space for and rotate the x-axis tick labels plt.show() ``` [![enter image description here](https://i.stack.imgur.com/2B0rs.png)](https://i.stack.imgur.com/2B0rs.png)
13,515,471
I'm generating a bar-chart with matplotlib. It all works well but I can't figure out how to prevent the labels of the x-axis from overlapping each other. Here an example: ![enter image description here](https://i.stack.imgur.com/BCm0v.png) Here is some sample SQL for a postgres 9.1 database: ``` drop table if exists mytable; create table mytable(id bigint, version smallint, date_from timestamp without time zone); insert into mytable(id, version, date_from) values ('4084036', '1', '2006-12-22 22:46:35'), ('4084938', '1', '2006-12-23 16:19:13'), ('4084938', '2', '2006-12-23 16:20:23'), ('4084939', '1', '2006-12-23 16:29:14'), ('4084954', '1', '2006-12-23 16:28:28'), ('4250653', '1', '2007-02-12 21:58:53'), ('4250657', '1', '2007-03-12 21:58:53') ; ``` And this is my python-script: ``` # -*- coding: utf-8 -*- #!/usr/bin/python2.7 import psycopg2 import matplotlib.pyplot as plt fig = plt.figure() # for savefig() import pylab ### ### Connect to database with psycopg2 ### try: conn_string="dbname='x' user='y' host='z' password='pw'" print "Connecting to database\n->%s" % (conn_string) conn = psycopg2.connect(conn_string) print "Connection to database was established succesfully" except: print "Connection to database failed" ### ### Execute SQL query ### # New cursor method for sql cur = conn.cursor() # Execute SQL query. For more than one row use three '"' try: cur.execute(""" -- In which year/month have these points been created? -- Need 'yyyymm' because I only need Months with years (values are summeed up). Without, query returns every day the db has an entry. SELECT to_char(s.day,'yyyymm') AS month ,count(t.id)::int AS count FROM ( SELECT generate_series(min(date_from)::date ,max(date_from)::date ,interval '1 day' )::date AS day FROM mytable t ) s LEFT JOIN mytable t ON t.date_from::date = s.day GROUP BY month ORDER BY month; """) # Return the results of the query. 
Fetchall() = all rows, fetchone() = first row records = cur.fetchall() cur.close() except: print "Query could not be executed" # Unzip the data from the db-query. Order is the same as db-query output year, count = zip(*records) ### ### Plot (Barchart) ### # Count the length of the range of the count-values, y-axis-values, position of axis-labels, legend-label plt.bar(range(len(count)), count, align='center', label='Amount of created/edited points') # Add database-values to the plot with an offset of 10px/10px ax = fig.add_subplot(111) for i,j in zip(year,count): ax.annotate(str(j), xy=(i,j), xytext=(10,10), textcoords='offset points') # Rotate x-labels on the x-axis fig.autofmt_xdate() # Label-values for x and y axis plt.xticks(range(len(count)), (year)) # Label x and y axis plt.xlabel('Year') plt.ylabel('Amount of created/edited points') # Locate legend on the plot (http://matplotlib.org/users/legend_guide.html#legend-location) plt.legend(loc=1) # Plot-title plt.title("Amount of created/edited points over time") # show plot pylab.show() ``` Is there a way how I can prevent the labels from overlapping each other? Ideally in an automatic way, because I can't predict the amount of bars.
2012/11/22
[ "https://Stackoverflow.com/questions/13515471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1808868/" ]
As for your question on how to show only every 4th tick (for example) on the xaxis, you can do this: ``` import matplotlib.ticker as mticker myLocator = mticker.MultipleLocator(4) ax.xaxis.set_major_locator(myLocator) ```
``` import numpy as np import pandas as pd import matplotlib.pyplot as plt # create a random dataframe with datetimeindex date_range = pd.date_range('1/1/2011', '4/10/2011', freq='D') df = pd.DataFrame(np.random.randint(0,10,size=(100, 1)), columns=['value'], index=date_range) ``` Date ticklabels often overlap: ``` plt.plot(df.index,df['value']) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/EQDvG.png)](https://i.stack.imgur.com/EQDvG.png) So it is useful to rotate them and right align them. ``` fig, ax = plt.subplots() ax.plot(df.index,df['value']) ax.xaxis_date() # interpret the x-axis values as dates fig.autofmt_xdate() # make space for and rotate the x-axis tick labels plt.show() ``` [![enter image description here](https://i.stack.imgur.com/2B0rs.png)](https://i.stack.imgur.com/2B0rs.png)
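The "every 4th tick" idea from the MultipleLocator answer can also be done by hand: blank out all but every Nth label before passing the list to `plt.xticks`, so the ticks stay in place but the text no longer overlaps. A small sketch, with an illustrative function name:

```python
def thin_labels(labels, step=4):
    # Keep every `step`-th label; replace the rest with empty strings.
    return [lab if i % step == 0 else "" for i, lab in enumerate(labels)]

print(thin_labels(["200612", "200701", "200702", "200703", "200704"]))
# → ['200612', '', '', '', '200704']
```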
37,827,920
I followed [this](http://www.samontab.com/web/2014/06/installing-opencv-2-4-9-in-ubuntu-14-04-lts/#comment-72178) to install opencv. When I tested the C and Java samples, they worked fine. But the Python samples failed with: ``` import cv2 ImportError: No module named cv2 ``` How can I fix this? I am using Python 2.7 and Ubuntu 14.04.
2016/06/15
[ "https://Stackoverflow.com/questions/37827920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6417704/" ]
This is a [well-known bug](https://connect.microsoft.com/SQLServer/feedback/details/499608/ssms-can-not-paste-more-than-43679-characters-from-a-column-in-grid-mode) in SSMS: you can't copy more than 43,679 characters from a column in grid view, and unfortunately the limit can't be increased. You can work around it by returning your data as XML instead of nvarchar.
Datatypes like NCHAR, NVARCHAR, and NVARCHAR(MAX) hold half as many characters as CHAR, VARCHAR, and VARCHAR(MAX) of the same size, because they store Unicode data, and each Unicode character takes 2 bytes. Use these datatypes when you need to store data in a language other than the default one (collation). That is why the maximum length of NCHAR, NVARCHAR, and NVARCHAR(MAX) is half that of CHAR, VARCHAR, and VARCHAR(MAX).
37,827,920
I followed [this](http://www.samontab.com/web/2014/06/installing-opencv-2-4-9-in-ubuntu-14-04-lts/#comment-72178) to install opencv. When I tested the C and Java samples, they worked fine. But the Python samples failed with: ``` import cv2 ImportError: No module named cv2 ``` How can I fix this? I am using Python 2.7 and Ubuntu 14.04.
2016/06/15
[ "https://Stackoverflow.com/questions/37827920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6417704/" ]
This is a [well-known bug](https://connect.microsoft.com/SQLServer/feedback/details/499608/ssms-can-not-paste-more-than-43679-characters-from-a-column-in-grid-mode) in SSMS: you can't copy more than 43,679 characters from a column in grid view, and unfortunately the limit can't be increased. You can work around it by returning your data as XML instead of nvarchar.
SQL Server Management Studio has a character limit when printing to the messages pane, but there is a workaround. By selecting your data with FOR XML and the TYPE directive, you can specify [processing-instruction] and give it a name. Your text will then be presented as a link which you can open; the text will be wrapped with the name you specified. Here is an example. ``` declare @t table (a nvarchar(max)); insert into @t (a) values (REPLICATE(CONVERT(nvarchar(max),'a'),200000)); select LEN(a) from @t -- result 200000 SELECT a [processing-instruction(TextOutput)] from @t FOR XML PATH(''), TYPE; ```
65,716,401
I am trying to install numpy on a macOS Big Sur but got this error. I've tried update pip and setuptool, also update xcode, but the error still appears ``` ERROR: Command errored out with exit status 1: command: /Users/mac/opt/miniconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/setup.py'"'"'; __file__='"'"'/private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-wheel-hvs0y1qx cwd: /private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/ ... error: Command "gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/mac/opt/miniconda3/include -arch x86_64 -I/Users/mac/opt/miniconda3/include -arch x86_64 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/include/numpy -Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/Users/mac/opt/miniconda3/include/python3.7m -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/private -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/npymath -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/private -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/npymath -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/private -Ibuild/src.macosx-10.9-x86_64-3.7/numpy/core/src/npymath -c numpy/random/mtrand/mtrand.c -o build/temp.macosx-10.9-x86_64-3.7/numpy/random/mtrand/mtrand.o -MMD -MF 
build/temp.macosx-10.9-x86_64-3.7/numpy/random/mtrand/mtrand.o.d" failed with exit status 1 ``` Also when pip trying to reinstall numpy, error messages appears like this ``` ERROR: Command errored out with exit status 1: /Users/mac/opt/miniconda3/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/setup.py'"'"'; __file__='"'"'/private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-install-3jq2_h06/numpy_a0d6a6e5f34d4b3b887afd79c04b5c7c/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/24/2bgc31xs4w51ksff8v5kphcc0000gn/T/pip-record-4yafo15v/install-record.txt --single-version-externally-managed --compile --install-headers /Users/mac/opt/miniconda3/include/python3.7m/numpy Check the logs for full command output. ``` Any Ideas?
2021/01/14
[ "https://Stackoverflow.com/questions/65716401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15004197/" ]
Looks like a `gcc` compiler or system library dependency problem. Here is my `gcc` version (macOS Catalina) ``` $ which gcc ``` ``` /usr/bin/gcc ``` ``` $ gcc --version ``` ``` Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include/c++/4.2.1 Apple clang version 11.0.0 (clang-1100.0.33.16) Target: x86_64-apple-darwin19.6.0 Thread model: posix InstalledDir: /Library/Developer/CommandLineTools/usr/bin ``` Is your MacBook's architecture `x86_64`? If it is a new `ARM64` machine, the architecture may be the reason. Also `build/temp.macosx-10.9-x86_64-3.7/...` and `build/src.macosx-10.9-x86_64-3.7/...` look pretty old (OS X 10.9?). I'm not familiar with pip build processes, but it looks like the `python 3.7.X` libraries haven't been updated in a long time. The newer Python version is `3.9.X`; try using it, that may be the solution.
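To check the points raised above (machine architecture and Python version) from inside the interpreter itself, a small sketch like this can help; the output will of course differ per machine:

```python
import platform

# On an Intel Mac this prints "x86_64"; on Apple Silicon, "arm64".
print(platform.machine())
# The Python version the build is running under (e.g. 3.7.x vs 3.9.x).
print(platform.python_version())
```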
According to a similar problem in this link (<https://github.com/numpy/numpy/issues/12026>), your installation tries to compile numpy on your system, which is not necessary. Try to install a concrete version of numpy, e.g. `pip3 install numpy==1.19.5`
41,769,507
I have a set of strings that's JSONish, but totally non-JSON-compliant. It's also kind of CSV, but values themselves sometimes have commas. The strings look like this: ATTRIBUTE: Value of this attribute, ATTRIBUTE2: Another value, but this one has a comma in it, ATTRIBUTE3:, another value... The only two patterns I can see that would mostly work are that the attribute names are in caps and followed by a : and space. After the first attribute, the pattern is , name-in-caps : space. The data is stored in Redshift, so I was going to see if I can use regex to resolve this, but my regex knowledge is limited - where would I start? If not, I'll resort to python hacking.
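One way to exploit the two patterns the question identifies (ALL-CAPS names, and `, NAME: ` as the separator after the first attribute) is a lookahead split. This is a sketch rather than a tested answer from the thread, and the sample string is adapted from the question:

```python
import re

s = ("ATTRIBUTE: Value of this attribute, "
     "ATTRIBUTE2: Another value, but this one has a comma in it, "
     "ATTRIBUTE3: another value")

# Split only at a comma that is followed by an ALL-CAPS name and a colon,
# so commas inside values are left alone.
parts = re.split(r",\s+(?=[A-Z][A-Z0-9_]*:\s)", s)
pairs = dict(p.split(": ", 1) for p in parts)

print(pairs["ATTRIBUTE2"])  # the embedded comma survives
```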
2017/01/20
[ "https://Stackoverflow.com/questions/41769507", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3430943/" ]
**Mistake # 1** ``` if (a=0) // condition will always be FALSE ``` must be ``` if (a==0) ``` or better ``` if (0 == a) ``` **Mistake # 2** ``` scanf("%d", &b); // when b is float ``` instead of ``` scanf("%f", &b); ``` **UPDATE:** Actually, for the case of checking the result of `scanf` I personally prefer to compare with `!=` against the number of values the last `scanf` should have read. E.g. if two comma-separated integers are required to continue the calculation, the snippet can be: ``` int x, y; int check; do{ printf("Enter x,y:"); check = scanf("%d,%d", &x, &y); // enter format is x,y while(getchar()!='\n'); // clean the input buffer }while(check != 2); ``` That loop will re-ask for input if `check` is not `2`, i.e. if it is `0` (when even the first value is incorrect, e.g. `abc,12`) or if it is `1` (when the user forgot the comma or entered a non-number after it, e.g. `12,y`).
Code with corrections and comments - also available here - <http://ideone.com/eqzRQe> ``` #include <stdio.h> #include <math.h> int main(void) { float b; // printf("Eneter a float number"); printf("Enter a float number"); // Corrected typo fflush(stdout); // Send the buffer to the console so the user can see it int a=0; // a=5; -- Not required a=scanf("%f", &b); // See the manual page for reading floats if (a==0) // Need comparison operator, not assignment { printf("scanf error: (%d)\n",a); // A better error message could be placed here } else { printf("%g\n", b); // Just to check the input with ideone - debugging printf("%g %g %g",floor(b), round(b), ceil(b)); } return 0; // You need the semi-colon here } ``` For VenuKant Sahu's benefit > > Return Value > > > These functions return the number of input items successfully matched > and assigned, which can be fewer than provided for, or even zero in > the event of an early matching failure. > > > The value EOF is returned if the end of input is reached before either > the first successful conversion or a matching failure occurs. EOF is > also returned if a read error occurs, in which case the error > indicator for the stream (see ferror(3)) is set, and errno is set to > indicate the error. > > >
39,540,128
I was playing around with the `dis` library to gather information about a function (like what other functions it called). The documentation for [`dis.findlabels`](https://docs.python.org/2/library/dis.html#dis.findlabels) sounds like it would return other function calls, but I've tried it with a handful of functions and it always returns an empty list. > > > ``` > dis.findlabels(code) > > Detect all offsets in the code object code which > are jump targets, and return a list of these offsets. > > ``` > > What is this function *supposed* to do and *how* would you use it?
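To see `findlabels` return something non-empty, the code object needs a branch or loop; straight-line functions genuinely have no jump targets, which explains the empty lists. A small demonstration (exact offsets vary by Python version):

```python
import dis

def straight(x):
    return x + 1              # no branches, so no jump targets

def branchy(x):
    return 1 if x > 0 else 2  # the conditional compiles to jumps

# findlabels() takes the raw bytecode (co_code), not the function itself.
print(dis.findlabels(straight.__code__.co_code))  # []
print(dis.findlabels(branchy.__code__.co_code))   # non-empty list of offsets
```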
2016/09/16
[ "https://Stackoverflow.com/questions/39540128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1547004/" ]
No, do not explicitly assign a default value to `Freight`. The warning is legitimate, because you never really assign a value to the field. You do not assign a value, because the field gets populated by magic. (Incidentally, that's why I do not like magic; but that's a different story altogether.) So, the best approach is to acknowledge the fact that the warning is legitimate but accounted for, and to explicitly suppress it. To do that, take a look at the documentation of the `#pragma warning` directive: <https://msdn.microsoft.com/en-us/library/441722ys.aspx>
You essentially have two choices and which way to go really depends on the intent (to suggest one or the other is subjective). First, you could eliminate the warning if the design requirement of your `Orders` type dictates that it should have a null default value. ``` public string Freight = null; ``` The above merely clarifies that intent and therefore eliminates the warning. The alternative is to suppress the warning as the other answers mention. In your case, if the assumption is that the value should have been set via Reflection then this alternative seems reasonable if not preferable in such a case.
39,540,128
I was playing around with the `dis` library to gather information about a function (like what other functions it called). The documentation for [`dis.findlabels`](https://docs.python.org/2/library/dis.html#dis.findlabels) sounds like it would return other function calls, but I've tried it with a handful of functions and it always returns an empty list. > > > ``` > dis.findlabels(code) > > Detect all offsets in the code object code which > are jump targets, and return a list of these offsets. > > ``` > > What is this function *supposed* to do and *how* would you use it?
2016/09/16
[ "https://Stackoverflow.com/questions/39540128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1547004/" ]
For the sake of completeness, I'm just going to combine blins' answer and Mike's answer - nothing original, just trying to help the next person who runs across this page. Per blins: You may set the value equal to null and the first warning "Field XYZ is assigned to but never used" ``` public string Freight = null; //or = "", or = default(string) (which is null) ``` Per Mike, the "magic" he's talking about is Reflection. The variable is assigned to at runtime. This is something the compiler doesn't detect. More on Mike's answer about suppressing the warning found here: [Suppressing "is never used" and "is never assigned to" warnings in C#](https://stackoverflow.com/questions/3820985/suppressing-is-never-used-and-is-never-assigned-to-warnings-in-c-sharp) To suppress warnings for "Field XYZ is never used", you do this: ``` #pragma warning disable 0169 ... field declaration #pragma warning restore 0169 ``` To suppress warnings for "Field XYZ is never assigned to, and will always have its default value XX", you do this: ``` #pragma warning disable 0649 ... field declaration #pragma warning restore 0649 ```
You essentially have two choices and which way to go really depends on the intent (to suggest one or the other is subjective). First, you could eliminate the warning if the design requirement of your `Orders` type dictates that it should have a null default value. ``` public string Freight = null; ``` The above merely clarifies that intent and therefore eliminates the warning. The alternative is to suppress the warning as the other answers mention. In your case, if the assumption is that the value should have been set via Reflection then this alternative seems reasonable if not preferable in such a case.
39,540,128
I was playing around with the `dis` library to gather information about a function (like what other functions it called). The documentation for [`dis.findlabels`](https://docs.python.org/2/library/dis.html#dis.findlabels) sounds like it would return other function calls, but I've tried it with a handful of functions and it always returns an empty list. > > > ``` > dis.findlabels(code) > > Detect all offsets in the code object code which > are jump targets, and return a list of these offsets. > > ``` > > What is this function *supposed* to do and *how* would you use it?
2016/09/16
[ "https://Stackoverflow.com/questions/39540128", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1547004/" ]
No, do not explicitly assign a default value to `Freight`. The warning is legitimate, because you never really assign a value to the field. You do not assign a value, because the field gets populated by magic. (Incidentally, that's why I do not like magic; but that's a different story altogether.) So, the best approach is to acknowledge the fact that the warning is legitimate but accounted for, and to explicitly suppress it. To do that, take a look at the documentation of the `#pragma warning` directive: <https://msdn.microsoft.com/en-us/library/441722ys.aspx>
For the sake of completeness, I'm just going to combine blins' answer and Mike's answer - nothing original, just trying to help the next person who runs across this page. Per blins: You may set the value equal to null and the first warning "Field XYZ is assigned to but never used" ``` public string Freight = null; //or = "", or = default(string) (which is null) ``` Per Mike, the "magic" he's talking about is Reflection. The variable is assigned to at runtime. This is something the compiler doesn't detect. More on Mike's answer about suppressing the warning found here: [Suppressing "is never used" and "is never assigned to" warnings in C#](https://stackoverflow.com/questions/3820985/suppressing-is-never-used-and-is-never-assigned-to-warnings-in-c-sharp) To suppress warnings for "Field XYZ is never used", you do this: ``` #pragma warning disable 0169 ... field declaration #pragma warning restore 0169 ``` To suppress warnings for "Field XYZ is never assigned to, and will always have its default value XX", you do this: ``` #pragma warning disable 0649 ... field declaration #pragma warning restore 0649 ```
27,872,305
I have a table with 12 columns and want to select the items in the first column (`qseqid`) based on the second column (`sseqid`). Meaning that the second column (`sseqid`) is repeating with different values in the 11th and 12th columns, which are`evalue`and`bitscore`, respectively. The ones that I would like to get are having the **lowest**`evalue`and the **highest**`bitscore`(when`evalue`s are the same, the rest of the columns can be ignored and the data is down below). So, I have made a short code which uses the second columns as a key for the dictionary. I can get five different items from the second column with lists of `qseqid`+`evalue`and`qseqid`+`bitscore`. Here is the code: ``` #!usr/bin/python filename = "data.txt" readfile = open(filename,"r") d = dict() for i in readfile.readlines(): i = i.strip() i = i.split("\t") d.setdefault(i[1], []).append([i[0],i[10]]) d.setdefault(i[1], []).append([i[0],i[11]]) for x in d: print(x,d[x]) readfile.close() ``` But, I am struggling to get the `qseqid` with the lowest evalue and the highest bitscore for each sseqid. Is there any good logic to solve the problem? 
The`data.txt`file (including the header row and with`»`representing tab characters) ```none qseqid»sseqid»pident»length»mismatch»gapopen»qstart»qend»sstart»send»evalue»bitscore ACLA_022040»TBB»32.71»431»258»8»39»468»24»423»2.00E-76»240 ACLA_024600»TBB»80»435»87»0»1»435»1»435»0»729 ACLA_031860»TBB»39.74»453»251»3»1»447»1»437»1.00E-121»357 ACLA_046030»TBB»75.81»434»105»0»1»434»1»434»0»704 ACLA_072490»TBB»41.7»446»245»3»4»447»3»435»2.00E-120»353 ACLA_010400»EF1A»27.31»249»127»8»69»286»9»234»3.00E-13»61.6 ACLA_015630»EF1A»22»491»255»17»186»602»3»439»8.00E-19»78.2 ACLA_016510»EF1A»26.23»122»61»4»21»127»9»116»2.00E-08»46.2 ACLA_023300»EF1A»29.31»447»249»12»48»437»3»439»2.00E-45»155 ACLA_028450»EF1A»85.55»443»63»1»1»443»1»442»0»801 ACLA_074730»CALM»23.13»147»101»4»6»143»2»145»7.00E-08»41.2 ACLA_096170»CALM»29.33»150»96»4»34»179»2»145»1.00E-13»55.1 ACLA_016630»CALM»23.9»159»106»5»58»216»4»147»5.00E-12»51.2 ACLA_031930»RPB2»36.87»1226»633»24»121»1237»26»1219»0»734 ACLA_065630»RPB2»65.79»1257»386»14»1»1252»4»1221»0»1691 ACLA_082370»RPB2»27.69»1228»667»37»31»1132»35»1167»7.00E-110»365 ACLA_061960»ACT»28.57»147»95»5»146»284»69»213»3.00E-12»57.4 ACLA_068200»ACT»28.73»463»231»13»16»471»4»374»1.00E-53»176 ACLA_069960»ACT»24.11»141»97»4»581»718»242»375»9.00E-09»46.2 ACLA_095800»ACT»91.73»375»31»0»1»375»1»375»0»732 ``` And here's a little more readable version of the table's contents: ```none 0 1 2 3 4 5 6 7 8 9 10 11 qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore ACLA_022040 TBB 32.71 431 258 8 39 468 24 423 2.00E-76 240 ACLA_024600 TBB 80 435 87 0 1 435 1 435 0 729 ACLA_031860 TBB 39.74 453 251 3 1 447 1 437 1.00E-121 357 ACLA_046030 TBB 75.81 434 105 0 1 434 1 434 0 704 ACLA_072490 TBB 41.7 446 245 3 4 447 3 435 2.00E-120 353 ACLA_010400 EF1A 27.31 249 127 8 69 286 9 234 3.00E-13 61.6 ACLA_015630 EF1A 22 491 255 17 186 602 3 439 8.00E-19 78.2 ACLA_016510 EF1A 26.23 122 61 4 21 127 9 116 2.00E-08 46.2 ACLA_023300 EF1A 29.31 447 249 12 48 437 
3 439 2.00E-45 155 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0 801 ACLA_074730 CALM 23.13 147 101 4 6 143 2 145 7.00E-08 41.2 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1.00E-13 55.1 ACLA_016630 CALM 23.9 159 106 5 58 216 4 147 5.00E-12 51.2 ACLA_031930 RPB2 36.87 1226 633 24 121 1237 26 1219 0 734 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0 1691 ACLA_082370 RPB2 27.69 1228 667 37 31 1132 35 1167 7.00E-110 365 ACLA_061960 ACT 28.57 147 95 5 146 284 69 213 3.00E-12 57.4 ACLA_068200 ACT 28.73 463 231 13 16 471 4 374 1.00E-53 176 ACLA_069960 ACT 24.11 141 97 4 581 718 242 375 9.00E-09 46.2 ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0 732 ```
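The core of the logic the question asks for fits in one expression: group records by `sseqid`, then take `min` with a composite key of `(evalue, -bitscore)`, so ties on the lowest `evalue` are broken by the highest `bitscore`. A sketch using a few rows from the table above:

```python
from collections import defaultdict

rows = [  # (qseqid, sseqid, evalue, bitscore) -- a subset of the data
    ("ACLA_022040", "TBB", 2.00e-76, 240.0),
    ("ACLA_024600", "TBB", 0.0, 729.0),
    ("ACLA_046030", "TBB", 0.0, 704.0),
]

groups = defaultdict(list)
for qseqid, sseqid, evalue, bitscore in rows:
    groups[sseqid].append((qseqid, evalue, bitscore))

# Lowest evalue first; negate bitscore so the highest wins ties.
best = {sid: min(recs, key=lambda r: (r[1], -r[2]))
        for sid, recs in groups.items()}

print(best["TBB"][0])  # ACLA_024600 wins the evalue tie on bitscore
```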
2015/01/10
[ "https://Stackoverflow.com/questions/27872305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1918515/" ]
``` #!usr/bin/python import csv DATA = "data.txt" class Sequence: def __init__(self, row): self.qseqid = row[0] self.sseqid = row[1] self.pident = float(row[2]) self.length = int(row[3]) self.mismatch = int(row[4]) self.gapopen = int(row[5]) self.qstart = int(row[6]) self.qend = int(row[7]) self.sstart = int(row[8]) self.send = int(row[9]) self.evalue = float(row[10]) self.bitscore = float(row[11]) def __str__(self): return ( "{qseqid}\t" "{sseqid}\t" "{pident}\t" "{length}\t" "{mismatch}\t" "{gapopen}\t" "{qstart}\t" "{qend}\t" "{sstart}\t" "{send}\t" "{evalue}\t" "{bitscore}" ).format(**self.__dict__) def entries(fname, header_rows=1, dtype=list, **kwargs): with open(fname) as inf: incsv = csv.reader(inf, **kwargs) # skip header rows for i in range(header_rows): next(incsv) for row in incsv: yield dtype(row) def main(): bestseq = {} for seq in entries(DATA, dtype=Sequence, delimiter="\t"): # see if a sequence with the same sseqid already exists prev = bestseq.get(seq.sseqid, None) if ( prev is None or seq.evalue < prev.evalue or (seq.evalue == prev.evalue and seq.bitscore > prev.bitscore) ): bestseq[seq.sseqid] = seq # display selected sequences keys = sorted(bestseq) for key in keys: print(bestseq[key]) if __name__ == "__main__": main() ``` which results in ``` ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0.0 732.0 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1e-13 55.1 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0.0 801.0 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0.0 1691.0 ACLA_024600 TBB 80.0 435 87 0 1 435 1 435 0.0 729.0 ```
Note: convert `evalue` and `bitscore` to `float` before taking `min()` (otherwise the string values are compared lexicographically), and skip the header row so `'sseqid'` doesn't end up as a key: ``` filename = 'data.txt' readfile = open(filename,'r') sseqid=[] lines=[] readfile.readline() for i in readfile.readlines(): fields = i.rsplit() sseqid.append(fields[1]) lines.append(fields) sorted_sseqid = sorted(set(sseqid)) sdqDict={} for key in sorted_sseqid: evalue=[] bitscore=[] qseid=[] for line in lines: if line[1] == key: evalue.append(float(line[10])) bitscore.append(float(line[11])) qseid.append(line[0]) sdqDict[key]=[qseid,evalue,bitscore] print(sdqDict) print('TBB LOWEST EVALUE' + '---->' + str(min(sdqDict['TBB'][1]))) ##I think you can do the list manipulation below to find out the qseqid readfile.close() ```
27,872,305
I have a table with 12 columns and want to select the items in the first column (`qseqid`) based on the second column (`sseqid`). Meaning that the second column (`sseqid`) is repeating with different values in the 11th and 12th columns, which are`evalue`and`bitscore`, respectively. The ones that I would like to get are having the **lowest**`evalue`and the **highest**`bitscore`(when`evalue`s are the same, the rest of the columns can be ignored and the data is down below). So, I have made a short code which uses the second columns as a key for the dictionary. I can get five different items from the second column with lists of `qseqid`+`evalue`and`qseqid`+`bitscore`. Here is the code: ``` #!usr/bin/python filename = "data.txt" readfile = open(filename,"r") d = dict() for i in readfile.readlines(): i = i.strip() i = i.split("\t") d.setdefault(i[1], []).append([i[0],i[10]]) d.setdefault(i[1], []).append([i[0],i[11]]) for x in d: print(x,d[x]) readfile.close() ``` But, I am struggling to get the `qseqid` with the lowest evalue and the highest bitscore for each sseqid. Is there any good logic to solve the problem? 
The`data.txt`file (including the header row and with`»`representing tab characters) ```none qseqid»sseqid»pident»length»mismatch»gapopen»qstart»qend»sstart»send»evalue»bitscore ACLA_022040»TBB»32.71»431»258»8»39»468»24»423»2.00E-76»240 ACLA_024600»TBB»80»435»87»0»1»435»1»435»0»729 ACLA_031860»TBB»39.74»453»251»3»1»447»1»437»1.00E-121»357 ACLA_046030»TBB»75.81»434»105»0»1»434»1»434»0»704 ACLA_072490»TBB»41.7»446»245»3»4»447»3»435»2.00E-120»353 ACLA_010400»EF1A»27.31»249»127»8»69»286»9»234»3.00E-13»61.6 ACLA_015630»EF1A»22»491»255»17»186»602»3»439»8.00E-19»78.2 ACLA_016510»EF1A»26.23»122»61»4»21»127»9»116»2.00E-08»46.2 ACLA_023300»EF1A»29.31»447»249»12»48»437»3»439»2.00E-45»155 ACLA_028450»EF1A»85.55»443»63»1»1»443»1»442»0»801 ACLA_074730»CALM»23.13»147»101»4»6»143»2»145»7.00E-08»41.2 ACLA_096170»CALM»29.33»150»96»4»34»179»2»145»1.00E-13»55.1 ACLA_016630»CALM»23.9»159»106»5»58»216»4»147»5.00E-12»51.2 ACLA_031930»RPB2»36.87»1226»633»24»121»1237»26»1219»0»734 ACLA_065630»RPB2»65.79»1257»386»14»1»1252»4»1221»0»1691 ACLA_082370»RPB2»27.69»1228»667»37»31»1132»35»1167»7.00E-110»365 ACLA_061960»ACT»28.57»147»95»5»146»284»69»213»3.00E-12»57.4 ACLA_068200»ACT»28.73»463»231»13»16»471»4»374»1.00E-53»176 ACLA_069960»ACT»24.11»141»97»4»581»718»242»375»9.00E-09»46.2 ACLA_095800»ACT»91.73»375»31»0»1»375»1»375»0»732 ``` And here's a little more readable version of the table's contents: ```none 0 1 2 3 4 5 6 7 8 9 10 11 qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore ACLA_022040 TBB 32.71 431 258 8 39 468 24 423 2.00E-76 240 ACLA_024600 TBB 80 435 87 0 1 435 1 435 0 729 ACLA_031860 TBB 39.74 453 251 3 1 447 1 437 1.00E-121 357 ACLA_046030 TBB 75.81 434 105 0 1 434 1 434 0 704 ACLA_072490 TBB 41.7 446 245 3 4 447 3 435 2.00E-120 353 ACLA_010400 EF1A 27.31 249 127 8 69 286 9 234 3.00E-13 61.6 ACLA_015630 EF1A 22 491 255 17 186 602 3 439 8.00E-19 78.2 ACLA_016510 EF1A 26.23 122 61 4 21 127 9 116 2.00E-08 46.2 ACLA_023300 EF1A 29.31 447 249 12 48 437 
3 439 2.00E-45 155 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0 801 ACLA_074730 CALM 23.13 147 101 4 6 143 2 145 7.00E-08 41.2 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1.00E-13 55.1 ACLA_016630 CALM 23.9 159 106 5 58 216 4 147 5.00E-12 51.2 ACLA_031930 RPB2 36.87 1226 633 24 121 1237 26 1219 0 734 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0 1691 ACLA_082370 RPB2 27.69 1228 667 37 31 1132 35 1167 7.00E-110 365 ACLA_061960 ACT 28.57 147 95 5 146 284 69 213 3.00E-12 57.4 ACLA_068200 ACT 28.73 463 231 13 16 471 4 374 1.00E-53 176 ACLA_069960 ACT 24.11 141 97 4 581 718 242 375 9.00E-09 46.2 ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0 732 ```
2015/01/10
[ "https://Stackoverflow.com/questions/27872305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1918515/" ]
``` #!usr/bin/python import csv DATA = "data.txt" class Sequence: def __init__(self, row): self.qseqid = row[0] self.sseqid = row[1] self.pident = float(row[2]) self.length = int(row[3]) self.mismatch = int(row[4]) self.gapopen = int(row[5]) self.qstart = int(row[6]) self.qend = int(row[7]) self.sstart = int(row[8]) self.send = int(row[9]) self.evalue = float(row[10]) self.bitscore = float(row[11]) def __str__(self): return ( "{qseqid}\t" "{sseqid}\t" "{pident}\t" "{length}\t" "{mismatch}\t" "{gapopen}\t" "{qstart}\t" "{qend}\t" "{sstart}\t" "{send}\t" "{evalue}\t" "{bitscore}" ).format(**self.__dict__) def entries(fname, header_rows=1, dtype=list, **kwargs): with open(fname) as inf: incsv = csv.reader(inf, **kwargs) # skip header rows for i in range(header_rows): next(incsv) for row in incsv: yield dtype(row) def main(): bestseq = {} for seq in entries(DATA, dtype=Sequence, delimiter="\t"): # see if a sequence with the same sseqid already exists prev = bestseq.get(seq.sseqid, None) if ( prev is None or seq.evalue < prev.evalue or (seq.evalue == prev.evalue and seq.bitscore > prev.bitscore) ): bestseq[seq.sseqid] = seq # display selected sequences keys = sorted(bestseq) for key in keys: print(bestseq[key]) if __name__ == "__main__": main() ``` which results in ``` ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0.0 732.0 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1e-13 55.1 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0.0 801.0 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0.0 1691.0 ACLA_024600 TBB 80.0 435 87 0 1 435 1 435 0.0 729.0 ```
Since you're a Python newbie I'm glad that there are several examples of how to this manually, but for comparison I'll show how it can be done using the [`pandas`](http://pandas.pydata.org) library which makes working with tabular data much simpler. Since you didn't provide example output, I'm assuming that by "with the lowest evalue and the highest bitscore for each sseqid" you mean "the highest bitscore among the lowest evalues" for a given `sseqid`; if you want those *separately*, that's trivial too. ``` import pandas as pd df = pd.read_csv("acla1.dat", sep="\t") df = df.sort(["evalue", "bitscore"],ascending=[True, False]) df_new = df.groupby("sseqid", as_index=False).first() ``` which produces ``` >>> df_new sseqid qseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore 0 ACT ACLA_095800 91.73 375 31 0 1 375 1 375 0.000000e+00 732.0 1 CALM ACLA_096170 29.33 150 96 4 34 179 2 145 1.000000e-13 55.1 2 EF1A ACLA_028450 85.55 443 63 1 1 443 1 442 0.000000e+00 801.0 3 RPB2 ACLA_065630 65.79 1257 386 14 1 1252 4 1221 0.000000e+00 1691.0 4 TBB ACLA_024600 80.00 435 87 0 1 435 1 435 0.000000e+00 729.0 ``` Basically, first we read the data file into an object called a `DataFrame`, which is kind of like an Excel worksheet. Then we sort by `evalue` ascending (so that lower `evalue`s come first) and by `bitscore` descending (so that higher `bitscore`s come first). Then we can use `groupby` to collect the data in groups of equal `sseqid`, and take the first one in each group, which because of the sorting will be the one we want.
27,872,305
I have a table with 12 columns and want to select the items in the first column (`qseqid`) based on the second column (`sseqid`). Meaning that the second column (`sseqid`) is repeating with different values in the 11th and 12th columns, which are`evalue`and`bitscore`, respectively. The ones that I would like to get are having the **lowest**`evalue`and the **highest**`bitscore`(when`evalue`s are the same, the rest of the columns can be ignored and the data is down below). So, I have made a short code which uses the second columns as a key for the dictionary. I can get five different items from the second column with lists of `qseqid`+`evalue`and`qseqid`+`bitscore`. Here is the code: ``` #!usr/bin/python filename = "data.txt" readfile = open(filename,"r") d = dict() for i in readfile.readlines(): i = i.strip() i = i.split("\t") d.setdefault(i[1], []).append([i[0],i[10]]) d.setdefault(i[1], []).append([i[0],i[11]]) for x in d: print(x,d[x]) readfile.close() ``` But, I am struggling to get the `qseqid` with the lowest evalue and the highest bitscore for each sseqid. Is there any good logic to solve the problem? 
The`data.txt`file (including the header row and with`»`representing tab characters) ```none qseqid»sseqid»pident»length»mismatch»gapopen»qstart»qend»sstart»send»evalue»bitscore ACLA_022040»TBB»32.71»431»258»8»39»468»24»423»2.00E-76»240 ACLA_024600»TBB»80»435»87»0»1»435»1»435»0»729 ACLA_031860»TBB»39.74»453»251»3»1»447»1»437»1.00E-121»357 ACLA_046030»TBB»75.81»434»105»0»1»434»1»434»0»704 ACLA_072490»TBB»41.7»446»245»3»4»447»3»435»2.00E-120»353 ACLA_010400»EF1A»27.31»249»127»8»69»286»9»234»3.00E-13»61.6 ACLA_015630»EF1A»22»491»255»17»186»602»3»439»8.00E-19»78.2 ACLA_016510»EF1A»26.23»122»61»4»21»127»9»116»2.00E-08»46.2 ACLA_023300»EF1A»29.31»447»249»12»48»437»3»439»2.00E-45»155 ACLA_028450»EF1A»85.55»443»63»1»1»443»1»442»0»801 ACLA_074730»CALM»23.13»147»101»4»6»143»2»145»7.00E-08»41.2 ACLA_096170»CALM»29.33»150»96»4»34»179»2»145»1.00E-13»55.1 ACLA_016630»CALM»23.9»159»106»5»58»216»4»147»5.00E-12»51.2 ACLA_031930»RPB2»36.87»1226»633»24»121»1237»26»1219»0»734 ACLA_065630»RPB2»65.79»1257»386»14»1»1252»4»1221»0»1691 ACLA_082370»RPB2»27.69»1228»667»37»31»1132»35»1167»7.00E-110»365 ACLA_061960»ACT»28.57»147»95»5»146»284»69»213»3.00E-12»57.4 ACLA_068200»ACT»28.73»463»231»13»16»471»4»374»1.00E-53»176 ACLA_069960»ACT»24.11»141»97»4»581»718»242»375»9.00E-09»46.2 ACLA_095800»ACT»91.73»375»31»0»1»375»1»375»0»732 ``` And here's a little more readable version of the table's contents: ```none 0 1 2 3 4 5 6 7 8 9 10 11 qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore ACLA_022040 TBB 32.71 431 258 8 39 468 24 423 2.00E-76 240 ACLA_024600 TBB 80 435 87 0 1 435 1 435 0 729 ACLA_031860 TBB 39.74 453 251 3 1 447 1 437 1.00E-121 357 ACLA_046030 TBB 75.81 434 105 0 1 434 1 434 0 704 ACLA_072490 TBB 41.7 446 245 3 4 447 3 435 2.00E-120 353 ACLA_010400 EF1A 27.31 249 127 8 69 286 9 234 3.00E-13 61.6 ACLA_015630 EF1A 22 491 255 17 186 602 3 439 8.00E-19 78.2 ACLA_016510 EF1A 26.23 122 61 4 21 127 9 116 2.00E-08 46.2 ACLA_023300 EF1A 29.31 447 249 12 48 437 
3 439 2.00E-45 155 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0 801 ACLA_074730 CALM 23.13 147 101 4 6 143 2 145 7.00E-08 41.2 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1.00E-13 55.1 ACLA_016630 CALM 23.9 159 106 5 58 216 4 147 5.00E-12 51.2 ACLA_031930 RPB2 36.87 1226 633 24 121 1237 26 1219 0 734 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0 1691 ACLA_082370 RPB2 27.69 1228 667 37 31 1132 35 1167 7.00E-110 365 ACLA_061960 ACT 28.57 147 95 5 146 284 69 213 3.00E-12 57.4 ACLA_068200 ACT 28.73 463 231 13 16 471 4 374 1.00E-53 176 ACLA_069960 ACT 24.11 141 97 4 581 718 242 375 9.00E-09 46.2 ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0 732 ```
2015/01/10
[ "https://Stackoverflow.com/questions/27872305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1918515/" ]
``` #!usr/bin/python import csv DATA = "data.txt" class Sequence: def __init__(self, row): self.qseqid = row[0] self.sseqid = row[1] self.pident = float(row[2]) self.length = int(row[3]) self.mismatch = int(row[4]) self.gapopen = int(row[5]) self.qstart = int(row[6]) self.qend = int(row[7]) self.sstart = int(row[8]) self.send = int(row[9]) self.evalue = float(row[10]) self.bitscore = float(row[11]) def __str__(self): return ( "{qseqid}\t" "{sseqid}\t" "{pident}\t" "{length}\t" "{mismatch}\t" "{gapopen}\t" "{qstart}\t" "{qend}\t" "{sstart}\t" "{send}\t" "{evalue}\t" "{bitscore}" ).format(**self.__dict__) def entries(fname, header_rows=1, dtype=list, **kwargs): with open(fname) as inf: incsv = csv.reader(inf, **kwargs) # skip header rows for i in range(header_rows): next(incsv) for row in incsv: yield dtype(row) def main(): bestseq = {} for seq in entries(DATA, dtype=Sequence, delimiter="\t"): # see if a sequence with the same sseqid already exists prev = bestseq.get(seq.sseqid, None) if ( prev is None or seq.evalue < prev.evalue or (seq.evalue == prev.evalue and seq.bitscore > prev.bitscore) ): bestseq[seq.sseqid] = seq # display selected sequences keys = sorted(bestseq) for key in keys: print(bestseq[key]) if __name__ == "__main__": main() ``` which results in ``` ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0.0 732.0 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1e-13 55.1 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0.0 801.0 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0.0 1691.0 ACLA_024600 TBB 80.0 435 87 0 1 435 1 435 0.0 729.0 ```
While not nearly as elegant and concise as using the `pandas` library, it's quite possible to do what you want without resorting to third-party modules. The following uses the `collections.defaultdict` class to facilitate creation of dictionaries of variable-length lists of records. The use of the `AttrDict` class is optional, but it makes accessing the fields of each dictionary-based record easier and is less awkward-looking than the usual `dict['fieldname']` syntax otherwise required. ``` import csv from collections import defaultdict, namedtuple from itertools import imap from operator import itemgetter data_file_name = 'data.txt' DELIMITER = '\t' ssqeid_dict = defaultdict(list) # from http://stackoverflow.com/a/1144405/355230 def multikeysort(items, columns): comparers = [((itemgetter(col[1:].strip()), -1) if col.startswith('-') else (itemgetter(col.strip()), 1)) for col in columns] def comparer(left, right): for fn, mult in comparers: result = cmp(fn(left), fn(right)) if result: return mult * result else: return 0 return sorted(items, cmp=comparer) # from http://stackoverflow.com/a/15109345/355230 class AttrDict(dict): def __init__(self, *args, **kwargs): super(AttrDict, self).__init__(*args, **kwargs) self.__dict__ = self with open(data_file_name, 'rb') as data_file: reader = csv.DictReader(data_file, delimiter=DELIMITER) format_spec = '\t'.join([('{%s}' % field) for field in reader.fieldnames]) for rec in (AttrDict(r) for r in reader): # Convert the two sort fields to numeric values for proper ordering. rec.evalue, rec.bitscore = map(float, (rec.evalue, rec.bitscore)) ssqeid_dict[rec.sseqid].append(rec) for ssqeid in sorted(ssqeid_dict): # Sort each group of recs with same ssqeid. The first record after sorting # will be the one sought that has the lowest evalue and highest bitscore. 
selected = multikeysort(ssqeid_dict[ssqeid], ['evalue', '-bitscore'])[0] print format_spec.format(**selected) ``` Output (`»` represents tabs): ```none ACLA_095800» ACT» 91.73» 375» 31» 0» 1» 375» 1» 375» 0.0» 732.0 ACLA_096170» CALM» 29.33» 150» 96» 4» 34» 179» 2» 145» 1e-13» 55.1 ACLA_028450» EF1A» 85.55» 443» 63» 1» 1» 443» 1» 442» 0.0» 801.0 ACLA_065630» RPB2» 65.79» 1257» 386» 14» 1» 1252» 4» 1221» 0.0» 1691.0 ACLA_024600» TBB» 80» 435» 87» 0» 1» 435» 1» 435» 0.0» 729.0 ```
27,872,305
I have a table with 12 columns and want to select the items in the first column (`qseqid`) based on the second column (`sseqid`). The second column (`sseqid`) repeats with different values in the 11th and 12th columns, which are `evalue` and `bitscore`, respectively. The ones I would like to get are those with the **lowest** `evalue` and the **highest** `bitscore` (when `evalue`s are the same; the rest of the columns can be ignored, and the data is down below). So, I have made a short piece of code which uses the second column as a key for the dictionary. I can get five different items from the second column with lists of `qseqid` + `evalue` and `qseqid` + `bitscore`. Here is the code: ``` #!usr/bin/python filename = "data.txt" readfile = open(filename,"r") d = dict() for i in readfile.readlines(): i = i.strip() i = i.split("\t") d.setdefault(i[1], []).append([i[0],i[10]]) d.setdefault(i[1], []).append([i[0],i[11]]) for x in d: print(x,d[x]) readfile.close() ``` But I am struggling to get the `qseqid` with the lowest `evalue` and the highest `bitscore` for each `sseqid`. Is there any good logic to solve the problem?
The`data.txt`file (including the header row and with`»`representing tab characters) ```none qseqid»sseqid»pident»length»mismatch»gapopen»qstart»qend»sstart»send»evalue»bitscore ACLA_022040»TBB»32.71»431»258»8»39»468»24»423»2.00E-76»240 ACLA_024600»TBB»80»435»87»0»1»435»1»435»0»729 ACLA_031860»TBB»39.74»453»251»3»1»447»1»437»1.00E-121»357 ACLA_046030»TBB»75.81»434»105»0»1»434»1»434»0»704 ACLA_072490»TBB»41.7»446»245»3»4»447»3»435»2.00E-120»353 ACLA_010400»EF1A»27.31»249»127»8»69»286»9»234»3.00E-13»61.6 ACLA_015630»EF1A»22»491»255»17»186»602»3»439»8.00E-19»78.2 ACLA_016510»EF1A»26.23»122»61»4»21»127»9»116»2.00E-08»46.2 ACLA_023300»EF1A»29.31»447»249»12»48»437»3»439»2.00E-45»155 ACLA_028450»EF1A»85.55»443»63»1»1»443»1»442»0»801 ACLA_074730»CALM»23.13»147»101»4»6»143»2»145»7.00E-08»41.2 ACLA_096170»CALM»29.33»150»96»4»34»179»2»145»1.00E-13»55.1 ACLA_016630»CALM»23.9»159»106»5»58»216»4»147»5.00E-12»51.2 ACLA_031930»RPB2»36.87»1226»633»24»121»1237»26»1219»0»734 ACLA_065630»RPB2»65.79»1257»386»14»1»1252»4»1221»0»1691 ACLA_082370»RPB2»27.69»1228»667»37»31»1132»35»1167»7.00E-110»365 ACLA_061960»ACT»28.57»147»95»5»146»284»69»213»3.00E-12»57.4 ACLA_068200»ACT»28.73»463»231»13»16»471»4»374»1.00E-53»176 ACLA_069960»ACT»24.11»141»97»4»581»718»242»375»9.00E-09»46.2 ACLA_095800»ACT»91.73»375»31»0»1»375»1»375»0»732 ``` And here's a little more readable version of the table's contents: ```none 0 1 2 3 4 5 6 7 8 9 10 11 qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore ACLA_022040 TBB 32.71 431 258 8 39 468 24 423 2.00E-76 240 ACLA_024600 TBB 80 435 87 0 1 435 1 435 0 729 ACLA_031860 TBB 39.74 453 251 3 1 447 1 437 1.00E-121 357 ACLA_046030 TBB 75.81 434 105 0 1 434 1 434 0 704 ACLA_072490 TBB 41.7 446 245 3 4 447 3 435 2.00E-120 353 ACLA_010400 EF1A 27.31 249 127 8 69 286 9 234 3.00E-13 61.6 ACLA_015630 EF1A 22 491 255 17 186 602 3 439 8.00E-19 78.2 ACLA_016510 EF1A 26.23 122 61 4 21 127 9 116 2.00E-08 46.2 ACLA_023300 EF1A 29.31 447 249 12 48 437 
3 439 2.00E-45 155 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0 801 ACLA_074730 CALM 23.13 147 101 4 6 143 2 145 7.00E-08 41.2 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1.00E-13 55.1 ACLA_016630 CALM 23.9 159 106 5 58 216 4 147 5.00E-12 51.2 ACLA_031930 RPB2 36.87 1226 633 24 121 1237 26 1219 0 734 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0 1691 ACLA_082370 RPB2 27.69 1228 667 37 31 1132 35 1167 7.00E-110 365 ACLA_061960 ACT 28.57 147 95 5 146 284 69 213 3.00E-12 57.4 ACLA_068200 ACT 28.73 463 231 13 16 471 4 374 1.00E-53 176 ACLA_069960 ACT 24.11 141 97 4 581 718 242 375 9.00E-09 46.2 ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0 732 ```
2015/01/10
[ "https://Stackoverflow.com/questions/27872305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1918515/" ]
Since you're a Python newbie I'm glad that there are several examples of how to do this manually, but for comparison I'll show how it can be done using the [`pandas`](http://pandas.pydata.org) library, which makes working with tabular data much simpler. Since you didn't provide example output, I'm assuming that by "with the lowest evalue and the highest bitscore for each sseqid" you mean "the highest bitscore among the lowest evalues" for a given `sseqid`; if you want those *separately*, that's trivial too. ``` import pandas as pd df = pd.read_csv("acla1.dat", sep="\t") df = df.sort(["evalue", "bitscore"],ascending=[True, False]) df_new = df.groupby("sseqid", as_index=False).first() ``` which produces ``` >>> df_new sseqid qseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore 0 ACT ACLA_095800 91.73 375 31 0 1 375 1 375 0.000000e+00 732.0 1 CALM ACLA_096170 29.33 150 96 4 34 179 2 145 1.000000e-13 55.1 2 EF1A ACLA_028450 85.55 443 63 1 1 443 1 442 0.000000e+00 801.0 3 RPB2 ACLA_065630 65.79 1257 386 14 1 1252 4 1221 0.000000e+00 1691.0 4 TBB ACLA_024600 80.00 435 87 0 1 435 1 435 0.000000e+00 729.0 ``` Basically, first we read the data file into an object called a `DataFrame`, which is kind of like an Excel worksheet. Then we sort by `evalue` ascending (so that lower `evalue`s come first) and by `bitscore` descending (so that higher `bitscore`s come first). Then we can use `groupby` to collect the data in groups of equal `sseqid`, and take the first one in each group, which because of the sorting will be the one we want.
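One caveat about the snippet above: `DataFrame.sort` was deprecated and later removed (pandas 0.20+), so on current pandas the same idea is written with `sort_values`. A sketch on a made-up miniature frame (a column subset only — the real file has 12 columns):

```python
import pandas as pd

# Made-up miniature frame with just the columns that drive the selection.
df = pd.DataFrame({
    "sseqid": ["TBB", "TBB", "ACT"],
    "qseqid": ["A", "B", "C"],
    "evalue": [2e-76, 0.0, 0.0],
    "bitscore": [240.0, 729.0, 732.0],
})

# sort_values replaces the removed DataFrame.sort; the ordering is the same.
df = df.sort_values(["evalue", "bitscore"], ascending=[True, False])
df_new = df.groupby("sseqid", as_index=False).first()
print(df_new)
```

Everything else in the answer (the `groupby`/`first` trick) works unchanged on modern versions.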
``` filename = 'data.txt' readfile = open(filename,'r') d = dict() sseqid=[] lines=[] for i in readfile.readlines(): sseqid.append(i.rsplit()[1]) lines.append(i.rsplit()) sorted_sseqid = sorted(set(sseqid)) sdqDict={} key =None for sorted_ssqd in sorted_sseqid: key=sorted_ssqd evalue=[] bitscore=[] qseid=[] for line in lines: if key in line: evalue.append(line[10]) bitscore.append(line[11]) qseid.append(line[0]) sdqDict[key]=[qseid,evalue,bitscore] print sdqDict print 'TBB LOWEST EVALUE' + '---->' + min(sdqDict['TBB'][1]) ##I think you can do the list manipulation below to find out the qseqid readfile.close() ```
27,872,305
I have a table with 12 columns and want to select the items in the first column (`qseqid`) based on the second column (`sseqid`). The second column (`sseqid`) repeats with different values in the 11th and 12th columns, which are `evalue` and `bitscore`, respectively. The ones I would like to get are those with the **lowest** `evalue` and the **highest** `bitscore` (when `evalue`s are the same; the rest of the columns can be ignored, and the data is down below). So, I have made a short piece of code which uses the second column as a key for the dictionary. I can get five different items from the second column with lists of `qseqid` + `evalue` and `qseqid` + `bitscore`. Here is the code: ``` #!usr/bin/python filename = "data.txt" readfile = open(filename,"r") d = dict() for i in readfile.readlines(): i = i.strip() i = i.split("\t") d.setdefault(i[1], []).append([i[0],i[10]]) d.setdefault(i[1], []).append([i[0],i[11]]) for x in d: print(x,d[x]) readfile.close() ``` But I am struggling to get the `qseqid` with the lowest `evalue` and the highest `bitscore` for each `sseqid`. Is there any good logic to solve the problem?
The`data.txt`file (including the header row and with`»`representing tab characters) ```none qseqid»sseqid»pident»length»mismatch»gapopen»qstart»qend»sstart»send»evalue»bitscore ACLA_022040»TBB»32.71»431»258»8»39»468»24»423»2.00E-76»240 ACLA_024600»TBB»80»435»87»0»1»435»1»435»0»729 ACLA_031860»TBB»39.74»453»251»3»1»447»1»437»1.00E-121»357 ACLA_046030»TBB»75.81»434»105»0»1»434»1»434»0»704 ACLA_072490»TBB»41.7»446»245»3»4»447»3»435»2.00E-120»353 ACLA_010400»EF1A»27.31»249»127»8»69»286»9»234»3.00E-13»61.6 ACLA_015630»EF1A»22»491»255»17»186»602»3»439»8.00E-19»78.2 ACLA_016510»EF1A»26.23»122»61»4»21»127»9»116»2.00E-08»46.2 ACLA_023300»EF1A»29.31»447»249»12»48»437»3»439»2.00E-45»155 ACLA_028450»EF1A»85.55»443»63»1»1»443»1»442»0»801 ACLA_074730»CALM»23.13»147»101»4»6»143»2»145»7.00E-08»41.2 ACLA_096170»CALM»29.33»150»96»4»34»179»2»145»1.00E-13»55.1 ACLA_016630»CALM»23.9»159»106»5»58»216»4»147»5.00E-12»51.2 ACLA_031930»RPB2»36.87»1226»633»24»121»1237»26»1219»0»734 ACLA_065630»RPB2»65.79»1257»386»14»1»1252»4»1221»0»1691 ACLA_082370»RPB2»27.69»1228»667»37»31»1132»35»1167»7.00E-110»365 ACLA_061960»ACT»28.57»147»95»5»146»284»69»213»3.00E-12»57.4 ACLA_068200»ACT»28.73»463»231»13»16»471»4»374»1.00E-53»176 ACLA_069960»ACT»24.11»141»97»4»581»718»242»375»9.00E-09»46.2 ACLA_095800»ACT»91.73»375»31»0»1»375»1»375»0»732 ``` And here's a little more readable version of the table's contents: ```none 0 1 2 3 4 5 6 7 8 9 10 11 qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore ACLA_022040 TBB 32.71 431 258 8 39 468 24 423 2.00E-76 240 ACLA_024600 TBB 80 435 87 0 1 435 1 435 0 729 ACLA_031860 TBB 39.74 453 251 3 1 447 1 437 1.00E-121 357 ACLA_046030 TBB 75.81 434 105 0 1 434 1 434 0 704 ACLA_072490 TBB 41.7 446 245 3 4 447 3 435 2.00E-120 353 ACLA_010400 EF1A 27.31 249 127 8 69 286 9 234 3.00E-13 61.6 ACLA_015630 EF1A 22 491 255 17 186 602 3 439 8.00E-19 78.2 ACLA_016510 EF1A 26.23 122 61 4 21 127 9 116 2.00E-08 46.2 ACLA_023300 EF1A 29.31 447 249 12 48 437 
3 439 2.00E-45 155 ACLA_028450 EF1A 85.55 443 63 1 1 443 1 442 0 801 ACLA_074730 CALM 23.13 147 101 4 6 143 2 145 7.00E-08 41.2 ACLA_096170 CALM 29.33 150 96 4 34 179 2 145 1.00E-13 55.1 ACLA_016630 CALM 23.9 159 106 5 58 216 4 147 5.00E-12 51.2 ACLA_031930 RPB2 36.87 1226 633 24 121 1237 26 1219 0 734 ACLA_065630 RPB2 65.79 1257 386 14 1 1252 4 1221 0 1691 ACLA_082370 RPB2 27.69 1228 667 37 31 1132 35 1167 7.00E-110 365 ACLA_061960 ACT 28.57 147 95 5 146 284 69 213 3.00E-12 57.4 ACLA_068200 ACT 28.73 463 231 13 16 471 4 374 1.00E-53 176 ACLA_069960 ACT 24.11 141 97 4 581 718 242 375 9.00E-09 46.2 ACLA_095800 ACT 91.73 375 31 0 1 375 1 375 0 732 ```
2015/01/10
[ "https://Stackoverflow.com/questions/27872305", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1918515/" ]
While not nearly as elegant and concise as using the `pandas` library, it's quite possible to do what you want without resorting to third-party modules. The following uses the `collections.defaultdict` class to facilitate creation of dictionaries of variable-length lists of records. The use of the `AttrDict` class is optional, but it makes accessing the fields of each dictionary-based record easier and is less awkward-looking than the usual `dict['fieldname']` syntax otherwise required. ``` import csv from collections import defaultdict, namedtuple from itertools import imap from operator import itemgetter data_file_name = 'data.txt' DELIMITER = '\t' ssqeid_dict = defaultdict(list) # from http://stackoverflow.com/a/1144405/355230 def multikeysort(items, columns): comparers = [((itemgetter(col[1:].strip()), -1) if col.startswith('-') else (itemgetter(col.strip()), 1)) for col in columns] def comparer(left, right): for fn, mult in comparers: result = cmp(fn(left), fn(right)) if result: return mult * result else: return 0 return sorted(items, cmp=comparer) # from http://stackoverflow.com/a/15109345/355230 class AttrDict(dict): def __init__(self, *args, **kwargs): super(AttrDict, self).__init__(*args, **kwargs) self.__dict__ = self with open(data_file_name, 'rb') as data_file: reader = csv.DictReader(data_file, delimiter=DELIMITER) format_spec = '\t'.join([('{%s}' % field) for field in reader.fieldnames]) for rec in (AttrDict(r) for r in reader): # Convert the two sort fields to numeric values for proper ordering. rec.evalue, rec.bitscore = map(float, (rec.evalue, rec.bitscore)) ssqeid_dict[rec.sseqid].append(rec) for ssqeid in sorted(ssqeid_dict): # Sort each group of recs with same ssqeid. The first record after sorting # will be the one sought that has the lowest evalue and highest bitscore. 
selected = multikeysort(ssqeid_dict[ssqeid], ['evalue', '-bitscore'])[0] print format_spec.format(**selected) ``` Output (`»` represents tabs): ```none ACLA_095800» ACT» 91.73» 375» 31» 0» 1» 375» 1» 375» 0.0» 732.0 ACLA_096170» CALM» 29.33» 150» 96» 4» 34» 179» 2» 145» 1e-13» 55.1 ACLA_028450» EF1A» 85.55» 443» 63» 1» 1» 443» 1» 442» 0.0» 801.0 ACLA_065630» RPB2» 65.79» 1257» 386» 14» 1» 1252» 4» 1221» 0.0» 1691.0 ACLA_024600» TBB» 80» 435» 87» 0» 1» 435» 1» 435» 0.0» 729.0 ```
``` filename = 'data.txt' readfile = open(filename,'r') d = dict() sseqid=[] lines=[] for i in readfile.readlines(): sseqid.append(i.rsplit()[1]) lines.append(i.rsplit()) sorted_sseqid = sorted(set(sseqid)) sdqDict={} key =None for sorted_ssqd in sorted_sseqid: key=sorted_ssqd evalue=[] bitscore=[] qseid=[] for line in lines: if key in line: evalue.append(line[10]) bitscore.append(line[11]) qseid.append(line[0]) sdqDict[key]=[qseid,evalue,bitscore] print sdqDict print 'TBB LOWEST EVALUE' + '---->' + min(sdqDict['TBB'][1]) ##I think you can do the list manipulation below to find out the qseqid readfile.close() ```
44,063,297
In Python, I'm trying to extract 4 characters before and after the '©' symbol. This code extracts the characters after ©; can anyone help with printing the characters before ©? (I don't want the entire string to get printed, only a few characters.) ``` import re html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" if "©" in html: symbol=re.findall(r"(?<=©).+$",html,re.M) print(symbol[0][0:100]) ```
2017/05/19
[ "https://Stackoverflow.com/questions/44063297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6228540/" ]
``` html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" html = html.split("©") print(html[0][-4:]) print(html[1][:4]) ``` Output : ``` ight Bro ```
Try doing it this way: ``` if "©" in html: pos_c = html.find("©") symbol = html[pos_c-4:pos_c] print(symbol) ```
44,063,297
In Python, I'm trying to extract 4 characters before and after the '©' symbol. This code extracts the characters after ©; can anyone help with printing the characters before ©? (I don't want the entire string to get printed, only a few characters.) ``` import re html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" if "©" in html: symbol=re.findall(r"(?<=©).+$",html,re.M) print(symbol[0][0:100]) ```
2017/05/19
[ "https://Stackoverflow.com/questions/44063297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6228540/" ]
Here's a **regex only** solution to get the 4 characters before and after the © ``` import re text = "This is all test and try things that's going on bro Copyright© Bro Code Bro" print(re.findall(".{4}©.{4}", text)) ``` Output: ``` ['ight© Bro'] ```
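One edge case worth noting alongside the pattern above (my addition, not part of the original answer): the fixed-width pattern finds no match at all when © sits within four characters of either end of the string. Allowing *up to* four characters with `{0,4}` handles that gracefully:

```python
import re

text = "©start and end©"

# .{4}©.{4} finds nothing here, because neither © has 4 characters on both sides.
print(re.findall(".{4}©.{4}", text))      # []

# {0,4} tolerates the symbol sitting near either end of the string.
print(re.findall(".{0,4}©.{0,4}", text))  # ['©star', ' end©']
```

With the original example string both patterns behave the same, since its © has at least four characters on each side.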
Try doing it this way: ``` if "©" in html: pos_c = html.find("©") symbol = html[pos_c-4:pos_c] print(symbol) ```
44,063,297
In Python, I'm trying to extract 4 characters before and after the '©' symbol. This code extracts the characters after ©; can anyone help with printing the characters before ©? (I don't want the entire string to get printed, only a few characters.) ``` import re html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" if "©" in html: symbol=re.findall(r"(?<=©).+$",html,re.M) print(symbol[0][0:100]) ```
2017/05/19
[ "https://Stackoverflow.com/questions/44063297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6228540/" ]
``` html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" html = html.split("©") print(html[0][-4:]) print(html[1][:4]) ``` Output : ``` ight Bro ```
You are almost there! Use `search` to get the index and then slice the string as you like: ``` symbol=re.search(r"(?<=©).+$",html).start() ``` The above line gives you the index where the match starts, in this case 63. Use ``` html[symbol:symbol+4] for the 4 characters after, and html[symbol-5:symbol-1] for the 4 characters before (index symbol-1 is the © itself, since the match starts just after it). ```
44,063,297
In Python, I'm trying to extract 4 characters before and after the '©' symbol. This code extracts the characters after ©; can anyone help with printing the characters before ©? (I don't want the entire string to get printed, only a few characters.) ``` import re html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" if "©" in html: symbol=re.findall(r"(?<=©).+$",html,re.M) print(symbol[0][0:100]) ```
2017/05/19
[ "https://Stackoverflow.com/questions/44063297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6228540/" ]
Here's a **regex only** solution to get the 4 characters before and after the © ``` import re text = "This is all test and try things that's going on bro Copyright© Bro Code Bro" print(re.findall(".{4}©.{4}", text)) ``` Output: ``` ['ight© Bro'] ```
You are almost there! Use `search` to get the index and then slice the string as you like: ``` symbol=re.search(r"(?<=©).+$",html).start() ``` The above line gives you the index where the match starts, in this case 63. Use ``` html[symbol:symbol+4] for the 4 characters after, and html[symbol-5:symbol-1] for the 4 characters before (index symbol-1 is the © itself, since the match starts just after it). ```
44,063,297
In Python, I'm trying to extract 4 characters before and after the '©' symbol. This code extracts the characters after ©; can anyone help with printing the characters before ©? (I don't want the entire string to get printed, only a few characters.) ``` import re html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" if "©" in html: symbol=re.findall(r"(?<=©).+$",html,re.M) print(symbol[0][0:100]) ```
2017/05/19
[ "https://Stackoverflow.com/questions/44063297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6228540/" ]
``` html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" html = html.split("©") print(html[0][-4:]) print(html[1][:4]) ``` Output : ``` ight Bro ```
Please use Python's built-in `split()` function to solve the problem. `html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" html = html.split('©')`
44,063,297
In Python, I'm trying to extract 4 characters before and after the '©' symbol. This code extracts the characters after ©; can anyone help with printing the characters before ©? (I don't want the entire string to get printed, only a few characters.) ``` import re html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" if "©" in html: symbol=re.findall(r"(?<=©).+$",html,re.M) print(symbol[0][0:100]) ```
2017/05/19
[ "https://Stackoverflow.com/questions/44063297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6228540/" ]
Here's a **regex only** solution to get the 4 characters before and after the © ``` import re text = "This is all test and try things that's going on bro Copyright© Bro Code Bro" print(re.findall(".{4}©.{4}", text)) ``` Output: ``` ['ight© Bro'] ```
``` html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" html = html.split("©") print(html[0][-4:]) print(html[1][:4]) ``` Output : ``` ight Bro ```
44,063,297
In Python, I'm trying to extract 4 characters before and after the '©' symbol. This code extracts the characters after ©; can anyone help with printing the characters before ©? (I don't want the entire string to get printed, only a few characters.) ``` import re html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" if "©" in html: symbol=re.findall(r"(?<=©).+$",html,re.M) print(symbol[0][0:100]) ```
2017/05/19
[ "https://Stackoverflow.com/questions/44063297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6228540/" ]
Here's a **regex only** solution to get the 4 characters before and after the © ``` import re text = "This is all test and try things that's going on bro Copyright© Bro Code Bro" print(re.findall(".{4}©.{4}", text)) ``` Output: ``` ['ight© Bro'] ```
Please use Python's built-in `split()` function to solve the problem. `html = "This is all test and try things that's going on bro Copyright© Bro Code Bro" html = html.split('©')`
65,635,575
I get the following error when I attempt to load a saved `sklearn.preprocessing.MinMaxScaler` ``` /shared/env/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) [2021-01-08 19:40:28,805 INFO train.py:1317 - main ] EXCEPTION WORKER 100: Traceback (most recent call last): ... File "/shared/core/simulate.py", line 129, in process_obs obs = scaler.transform(obs) File "/shared/env/lib/python3.6/site-packages/sklearn/preprocessing/_data.py", line 439, in transform if self.clip: AttributeError: 'MinMaxScaler' object has no attribute 'clip' ``` I trained the scaler on one machine, saved it, and pushed it to a second machine where it was loaded and used to transform input. ``` # loading and transforming import joblib from sklearn.preprocessing import MinMaxScaler scaler = joblib.load('scaler') assert isinstance(scaler, MinMaxScaler) data = scaler.transform(data) # throws exception ```
2021/01/08
[ "https://Stackoverflow.com/questions/65635575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7253453/" ]
The issue is that you trained the scaler on a machine with an older version of sklearn than the machine you're using to load it. Notice the `UserWarning`: `UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning)` The solution is to fix the version mismatch, either by upgrading sklearn to `0.24.0` on the training machine or downgrading it to `0.23.2` on the loading machine.
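A general way to fail fast on such mismatches is to persist the producing library's version next to the artifact and compare it at load time. This is a sketch of the convention only, not sklearn API — the helper names here are made up:

```python
import os
import pickle
import tempfile

def dump_with_version(obj, path, version):
    # Store the producing library's version alongside the object itself.
    with open(path, "wb") as f:
        pickle.dump({"version": version, "obj": obj}, f)

def load_checked(path, expected_version):
    with open(path, "rb") as f:
        payload = pickle.load(f)
    if payload["version"] != expected_version:
        # Fail loudly here instead of hitting an AttributeError deep inside transform().
        raise RuntimeError(
            "artifact built with %s, running %s"
            % (payload["version"], expected_version)
        )
    return payload["obj"]

path = os.path.join(tempfile.gettempdir(), "scaler_demo.pkl")
dump_with_version([1, 2, 3], path, "0.23.2")
print(load_checked(path, "0.23.2"))  # [1, 2, 3]
```

In the real workflow the pickled object would be the fitted scaler and the version string would come from `sklearn.__version__` on the training machine.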
I solved this issue with `pip install scikit-learn==0.23.2` in my conda environment or cmd prompt. Essentially, downgrading the scikit-learn module helped.
65,635,575
I get the following error when I attempt to load a saved `sklearn.preprocessing.MinMaxScaler` ``` /shared/env/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) [2021-01-08 19:40:28,805 INFO train.py:1317 - main ] EXCEPTION WORKER 100: Traceback (most recent call last): ... File "/shared/core/simulate.py", line 129, in process_obs obs = scaler.transform(obs) File "/shared/env/lib/python3.6/site-packages/sklearn/preprocessing/_data.py", line 439, in transform if self.clip: AttributeError: 'MinMaxScaler' object has no attribute 'clip' ``` I trained the scaler on one machine, saved it, and pushed it to a second machine where it was loaded and used to transform input. ``` # loading and transforming import joblib from sklearn.preprocessing import MinMaxScaler scaler = joblib.load('scaler') assert isinstance(scaler, MinMaxScaler) data = scaler.transform(data) # throws exception ```
2021/01/08
[ "https://Stackoverflow.com/questions/65635575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7253453/" ]
The issue is that you trained the scaler on a machine with an older version of sklearn than the machine you're using to load it. Notice the `UserWarning`: `UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning)` The solution is to fix the version mismatch, either by upgrading sklearn to `0.24.0` on the training machine or downgrading it to `0.23.2` on the loading machine.
A new property, `clip`, was added to `MinMaxScaler` in a later version (0.24). ``` # loading and transforming import joblib from sklearn.preprocessing import MinMaxScaler scaler = joblib.load('scaler') assert isinstance(scaler, MinMaxScaler) scaler.clip = False # add this line data = scaler.transform(data) # no longer throws ``` **Explanation:** Because `clip` is assigned in the `__init__` method, it is part of the instance's `__dict__`. When an object is recreated from a pickle, `__setattr__` is used to restore all of its attributes, but `clip` did not exist in the older version, so it is missing from your unpickled `MinMaxScaler` instance. Simply add: ``` scaler.clip = False ``` and it should work fine.
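The backfill pattern above generalizes: after unpickling an object saved by an older version of a library, check for attributes the newer code expects and patch in a sensible default before use. A stand-in sketch (the `Scaler` class here is a toy, not sklearn's):

```python
class Scaler:
    def __init__(self):
        self.clip = False  # attribute introduced in the "newer" version

    def transform(self, x):
        if self.clip:  # newer code path that assumes the attribute exists
            return [min(max(v, 0), 1) for v in x]
        return x

# __new__ skips __init__, simulating an instance unpickled from an older
# version that never set the clip attribute.
old = Scaler.__new__(Scaler)
if not hasattr(old, "clip"):
    old.clip = False  # backfill the missing attribute before use
print(old.transform([1, 2]))  # [1, 2]
```

Without the `hasattr` backfill, calling `transform` on `old` would raise the same kind of `AttributeError` as in the question.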
65,635,575
I get the following error when I attempt to load a saved `sklearn.preprocessing.MinMaxScaler` ``` /shared/env/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) [2021-01-08 19:40:28,805 INFO train.py:1317 - main ] EXCEPTION WORKER 100: Traceback (most recent call last): ... File "/shared/core/simulate.py", line 129, in process_obs obs = scaler.transform(obs) File "/shared/env/lib/python3.6/site-packages/sklearn/preprocessing/_data.py", line 439, in transform if self.clip: AttributeError: 'MinMaxScaler' object has no attribute 'clip' ``` I trained the scaler on one machine, saved it, and pushed it to a second machine where it was loaded and used to transform input. ``` # loading and transforming import joblib from sklearn.preprocessing import MinMaxScaler scaler = joblib.load('scaler') assert isinstance(scaler, MinMaxScaler) data = scaler.transform(data) # throws exception ```
2021/01/08
[ "https://Stackoverflow.com/questions/65635575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7253453/" ]
The issue is that you trained the scaler on a machine with an older version of sklearn than the machine you're using to load it. Notice the `UserWarning`: `UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning)` The solution is to fix the version mismatch, either by upgrading sklearn to `0.24.0` on the training machine or downgrading it to `0.23.2` on the loading machine.
This is a version issue with **sklearn**. On Windows you can install the matching version with `pip install scikit-learn==0.24.0` I solved my problem using this command.
65,635,575
I get the following error when I attempt to load a saved `sklearn.preprocessing.MinMaxScaler` ``` /shared/env/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) [2021-01-08 19:40:28,805 INFO train.py:1317 - main ] EXCEPTION WORKER 100: Traceback (most recent call last): ... File "/shared/core/simulate.py", line 129, in process_obs obs = scaler.transform(obs) File "/shared/env/lib/python3.6/site-packages/sklearn/preprocessing/_data.py", line 439, in transform if self.clip: AttributeError: 'MinMaxScaler' object has no attribute 'clip' ``` I trained the scaler on one machine, saved it, and pushed it to a second machine where it was loaded and used to transform input. ``` # loading and transforming import joblib from sklearn.preprocessing import MinMaxScaler scaler = joblib.load('scaler') assert isinstance(scaler, MinMaxScaler) data = scaler.transform(data) # throws exception ```
2021/01/08
[ "https://Stackoverflow.com/questions/65635575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7253453/" ]
I solved this issue with `pip install scikit-learn==0.23.2` in my conda environment or cmd prompt. Essentially, downgrading the scikit-learn module helped.
This is a version issue with **sklearn**. On Windows you can install the matching version with `pip install scikit-learn==0.24.0` I solved my problem using this command.
65,635,575
I get the following error when I attempt to load a saved `sklearn.preprocessing.MinMaxScaler` ``` /shared/env/lib/python3.6/site-packages/sklearn/base.py:315: UserWarning: Trying to unpickle estimator MinMaxScaler from version 0.23.2 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk. UserWarning) [2021-01-08 19:40:28,805 INFO train.py:1317 - main ] EXCEPTION WORKER 100: Traceback (most recent call last): ... File "/shared/core/simulate.py", line 129, in process_obs obs = scaler.transform(obs) File "/shared/env/lib/python3.6/site-packages/sklearn/preprocessing/_data.py", line 439, in transform if self.clip: AttributeError: 'MinMaxScaler' object has no attribute 'clip' ``` I trained the scaler on one machine, saved it, and pushed it to a second machine where it was loaded and used to transform input. ``` # loading and transforming import joblib from sklearn.preprocessing import MinMaxScaler scaler = joblib.load('scaler') assert isinstance(scaler, MinMaxScaler) data = scaler.transform(data) # throws exception ```
2021/01/08
[ "https://Stackoverflow.com/questions/65635575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7253453/" ]
A new property, `clip`, was added to `MinMaxScaler` in a later version (0.24). ``` # loading and transforming import joblib from sklearn.preprocessing import MinMaxScaler scaler = joblib.load('scaler') assert isinstance(scaler, MinMaxScaler) scaler.clip = False # add this line data = scaler.transform(data) # no longer throws ``` **Explanation:** Because `clip` is assigned in the `__init__` method, it is part of the instance's `__dict__`. When an object is recreated from a pickle, `__setattr__` is used to restore all of its attributes, but `clip` did not exist in the older version, so it is missing from your unpickled `MinMaxScaler` instance. Simply add: ``` scaler.clip = False ``` and it should work fine.
This is a version issue with **sklearn**. On Windows you can install the matching version with `pip install scikit-learn==0.24.0` I solved my problem using this command.
40,511,177
I am actually new to Python. While learning it I came across this piece of code. The official Python documentation says that when a `continue` statement is encountered, control shifts to the beginning of the loop, but in this case it shifts to the `finally` statement and executes from there onward. Is this a bug in Python, or what? Can somebody please explain this to me? Thanks. ``` def askint(): while True: try: val =int(input("pleas enter an integer ")) except: print ("it seems like you did'n enter an integer ") continue else: print ("yep that's an integer thank you") break finally: print ('control is now on finally me') print ('i am also getting executed ') askint() ```
2016/11/09
[ "https://Stackoverflow.com/questions/40511177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6250108/" ]
The `finally` code is always executed in a `try`/`except` block. The `continue` doesn't skip it (or that would be a bug in Python).
The `finally` clause **must be executed** no matter what happens, and so it is. This is why it's called `finally`: it doesn't matter whether what you have *tried* succeeded or raised an *except*ion, it is *always* executed.
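A minimal, self-contained trace makes the ordering explicit (a sketch mirroring the structure of the code in the question):

```python
order = []
for i in range(2):
    try:
        if i == 0:
            raise ValueError("not an integer")
    except ValueError:
        order.append(f"except {i}")
        continue          # jumps to the next iteration...
    else:
        order.append(f"else {i}")
        break             # ...or leaves the loop...
    finally:
        order.append(f"finally {i}")   # ...but finally runs first, always

print(order)  # ['except 0', 'finally 0', 'else 1', 'finally 1']
```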
40,511,177
I am actually new to Python. While learning it I came across this piece of code. The official Python documentation says that when a `continue` statement is encountered, control shifts to the beginning of the loop, but in this case it shifts to the `finally` statement and executes from there onward. Is this a bug in Python, or what? Can somebody please explain this to me? Thanks. ``` def askint(): while True: try: val = int(input("please enter an integer ")) except: print("it seems like you didn't enter an integer ") continue else: print("yep that's an integer, thank you") break finally: print('control is now on finally me') print('i am also getting executed ') askint() ```
2016/11/09
[ "https://Stackoverflow.com/questions/40511177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6250108/" ]
The `finally` code is always executed in a `try`/`except` block. The `continue` doesn't skip it (or that would be a bug in Python).
`finally` will always be executed in a `try`/`except`, no matter what the exception is. I think this material will help you: <https://docs.python.org/2.5/whatsnew/pep-341.html>
40,511,177
I am actually new to Python. While learning it I came across this piece of code. The official Python documentation says that when a `continue` statement is encountered, control shifts to the beginning of the loop, but in this case it shifts to the `finally` statement and executes from there onward. Is this a bug in Python, or what? Can somebody please explain this to me? Thanks. ``` def askint(): while True: try: val = int(input("please enter an integer ")) except: print("it seems like you didn't enter an integer ") continue else: print("yep that's an integer, thank you") break finally: print('control is now on finally me') print('i am also getting executed ') askint() ```
2016/11/09
[ "https://Stackoverflow.com/questions/40511177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6250108/" ]
This is stated in the [documentation for the `continue` statement](https://docs.python.org/3/reference/simple_stmts.html#the-continue-statement): > > When continue passes control out of a try statement with a finally clause, *that finally clause is executed before really starting the next loop cycle*. > > > (Emphasis mine) Yes, without the `while` loop present, you'd have a `SyntaxError` popping up. Since `continue` is inside the `while`, and `finally` always gets a chance at finalizing things, it gets executed on the way out.
The `finally` clause **must be executed** no matter what happens, and so it is. This is why it's called `finally`: it doesn't matter whether what you have *tried* succeeded or raised an *except*ion, it is *always* executed.
40,511,177
I am actually new to Python. While learning it I came across this piece of code. The official Python documentation says that when a `continue` statement is encountered, control shifts to the beginning of the loop, but in this case it shifts to the `finally` statement and executes from there onward. Is this a bug in Python, or what? Can somebody please explain this to me? Thanks. ``` def askint(): while True: try: val = int(input("please enter an integer ")) except: print("it seems like you didn't enter an integer ") continue else: print("yep that's an integer, thank you") break finally: print('control is now on finally me') print('i am also getting executed ') askint() ```
2016/11/09
[ "https://Stackoverflow.com/questions/40511177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6250108/" ]
The `finally` clause **must be executed** no matter what happens, and so it is. This is why it's called `finally`: it doesn't matter whether what you have *tried* succeeded or raised an *except*ion, it is *always* executed.
`finally` will always be executed in a `try`/`except`, no matter what the exception is. I think this material will help you: <https://docs.python.org/2.5/whatsnew/pep-341.html>
40,511,177
I am actually new to Python. While learning it I came across this piece of code. The official Python documentation says that when a `continue` statement is encountered, control shifts to the beginning of the loop, but in this case it shifts to the `finally` statement and executes from there onward. Is this a bug in Python, or what? Can somebody please explain this to me? Thanks. ``` def askint(): while True: try: val = int(input("please enter an integer ")) except: print("it seems like you didn't enter an integer ") continue else: print("yep that's an integer, thank you") break finally: print('control is now on finally me') print('i am also getting executed ') askint() ```
2016/11/09
[ "https://Stackoverflow.com/questions/40511177", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6250108/" ]
This is stated in the [documentation for the `continue` statement](https://docs.python.org/3/reference/simple_stmts.html#the-continue-statement): > > When continue passes control out of a try statement with a finally clause, *that finally clause is executed before really starting the next loop cycle*. > > > (Emphasis mine) Yes, without the `while` loop present, you'd have a `SyntaxError` popping up. Since `continue` is inside the `while`, and `finally` always gets a chance at finalizing things, it gets executed on the way out.
`finally` will always be executed in a `try`/`except`, no matter what the exception is. I think this material will help you: <https://docs.python.org/2.5/whatsnew/pep-341.html>
30,080,491
I am trying to reproduce the algorithm described in the Isolation Forest paper in python. <http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation> This is my current code: ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd from sklearn.decomposition import PCA def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(dict_scores, n_samples): score = np.array([np.mean(dict_scores[k]) for k in dict_scores.keys()]) score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 feature_id = np.random.randint(low=0, high=n_features-1) feature = X[:, feature_id] split_value = np.random.choice(feature) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of index to the matrix X. This is required and add_index can be set to False only if the last column of X already contains indices. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.all_anomaly_score_ = {k:None for k in range(n_samples)} for k in self.all_anomaly_score_.keys(): self.all_anomaly_score_[k] = np.array([tree[k] for tree in trees if k in tree]) self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) return self ``` The main part of the code is the `iTree` function, which returns a dictionary with the number of steps required to isolate each sample. A column of indices is attached to the input matrix `X` to make it easier to see which samples are in each node. When I compare the results obtained with my code with the ones obtained from the isolation forest available for R, I get different results.
Consider for example the stackloss dataset: ``` data = pd.read_csv("stackloss.csv") X = data.as_matrix()[:, 1:] max_depth = 100 itree = iTree(X, add_index=True, max_depth=max_depth) #example of isolation tree iforest = iForest(n_estimators=1, max_depth=max_depth, sample_size=21) # isolation forest iforest.fit(X) sol = np.argsort(iforest.anomaly_score_) #correct sol = [10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0] ``` `sol` often differs from the correct solution obtained with the R software. <https://r-forge.r-project.org/projects/iforest/> The correct solution in R has been obtained with: ``` > tr = IsolationTrees(stackloss,ntree = 100000,hlim = 100, rFactor = 1) > as = AnomalyScore(stackloss, tr) > order(as$outF) [1] 11 6 5 9 13 10 12 18 7 20 8 15 14 16 19 4 21 17 3 2 1 > order(as$outF)-1 [1] 10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0 > ``` Where is the mistake?
2015/05/06
[ "https://Stackoverflow.com/questions/30080491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2411173/" ]
``` self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) ``` You're calculating `_anomaly_score` with `n_samples`, which is the total number of samples. However, you are building the trees with subsamples. Therefore, when you calculate the average search length `_c(n)` you should use `sample_size` instead of `n_samples`, as the trees are built with subsamples. So, I believe your code should be: ``` self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, self.sample_size) ```
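As a quick numeric illustration of why the normalizing constant matters (a sketch reusing the `_c` and score formulas from the question's code; the mean path length of 4 and the sizes are made-up example values): normalizing the same path length by the subsample size versus a much larger total sample size gives noticeably different anomaly scores.

```python
import numpy as np

def _c(n):
    # average path length of an unsuccessful BST search, as in the question
    if n > 2:
        return 2 * (np.log(n - 1) + 0.5772156649) - 2 * (n - 1) / n
    return 1.0 if n == 2 else 0.0

def anomaly_score(mean_path_length, n):
    return 2 ** (-mean_path_length / _c(n))

# same mean path length, normalized by a 128-point subsample vs. a
# full sample of 10000 points:
print(round(anomaly_score(4.0, 128), 3))    # ~ 0.731
print(round(anomaly_score(4.0, 10000), 3))  # ~ 0.854
```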
There is a pull-request in scikit-learn: <https://github.com/scikit-learn/scikit-learn/pull/4163>
30,080,491
I am trying to reproduce the algorithm described in the Isolation Forest paper in python. <http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation> This is my current code: ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd from sklearn.decomposition import PCA def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(dict_scores, n_samples): score = np.array([np.mean(dict_scores[k]) for k in dict_scores.keys()]) score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 feature_id = np.random.randint(low=0, high=n_features-1) feature = X[:, feature_id] split_value = np.random.choice(feature) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of index to the matrix X. This is required and add_index can be set to False only if the last column of X already contains indices. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.all_anomaly_score_ = {k:None for k in range(n_samples)} for k in self.all_anomaly_score_.keys(): self.all_anomaly_score_[k] = np.array([tree[k] for tree in trees if k in tree]) self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) return self ``` The main part of the code is the `iTree` function, which returns a dictionary with the number of steps required to isolate each sample. A column of indices is attached to the input matrix `X` to make it easier to see which samples are in each node. When I compare the results obtained with my code with the ones obtained from the isolation forest available for R, I get different results.
Consider for example the stackloss dataset: ``` data = pd.read_csv("stackloss.csv") X = data.as_matrix()[:, 1:] max_depth = 100 itree = iTree(X, add_index=True, max_depth=max_depth) #example of isolation tree iforest = iForest(n_estimators=1, max_depth=max_depth, sample_size=21) # isolation forest iforest.fit(X) sol = np.argsort(iforest.anomaly_score_) #correct sol = [10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0] ``` `sol` often differs from the correct solution obtained with the R software. <https://r-forge.r-project.org/projects/iforest/> The correct solution in R has been obtained with: ``` > tr = IsolationTrees(stackloss,ntree = 100000,hlim = 100, rFactor = 1) > as = AnomalyScore(stackloss, tr) > order(as$outF) [1] 11 6 5 9 13 10 12 18 7 20 8 15 14 16 19 4 21 17 3 2 1 > order(as$outF)-1 [1] 10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0 > ``` Where is the mistake?
2015/05/06
[ "https://Stackoverflow.com/questions/30080491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2411173/" ]
I have finally been able to solve the problem. The code is still slow due to the continuous copy operation performed in each split on the data. This is the working version of the algorithm. ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(score, n_samples): score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 m = M = 0 while m == M: feature_id = np.random.randint(low=0, high=n_features) feature = X[:, feature_id] m = feature.min() M = feature.max() #print(m, M, feature_id, X.shape) split_value = np.random.uniform(m, M, 1) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of index to the matrix X. This is required and add_index can be set to False only if the last column of X contains already indeces. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.path_length_ = {k:None for k in range(n_samples)} for k in self.path_length_.keys(): self.path_length_[k] = np.array([tree[k] for tree in trees if k in tree]) self.path_length_ = np.array([self.path_length_[k].mean() for k in self.path_length_.keys()]) self.anomaly_score_ = _anomaly_score(self.path_length_, self.sample_size) return self ```
There is a pull-request in scikit-learn: <https://github.com/scikit-learn/scikit-learn/pull/4163>
30,080,491
I am trying to reproduce the algorithm described in the Isolation Forest paper in python. <http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation> This is my current code: ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd from sklearn.decomposition import PCA def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(dict_scores, n_samples): score = np.array([np.mean(dict_scores[k]) for k in dict_scores.keys()]) score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 feature_id = np.random.randint(low=0, high=n_features-1) feature = X[:, feature_id] split_value = np.random.choice(feature) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of index to the matrix X. This is required and add_index can be set to False only if the last column of X already contains indices. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.all_anomaly_score_ = {k:None for k in range(n_samples)} for k in self.all_anomaly_score_.keys(): self.all_anomaly_score_[k] = np.array([tree[k] for tree in trees if k in tree]) self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) return self ``` The main part of the code is the `iTree` function, which returns a dictionary with the number of steps required to isolate each sample. A column of indices is attached to the input matrix `X` to make it easier to see which samples are in each node. When I compare the results obtained with my code with the ones obtained from the isolation forest available for R, I get different results.
Consider for example the stackloss dataset: ``` data = pd.read_csv("stackloss.csv") X = data.as_matrix()[:, 1:] max_depth = 100 itree = iTree(X, add_index=True, max_depth=max_depth) #example of isolation tree iforest = iForest(n_estimators=1, max_depth=max_depth, sample_size=21) # isolation forest iforest.fit(X) sol = np.argsort(iforest.anomaly_score_) #correct sol = [10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0] ``` `sol` often differs from the correct solution obtained with the R software. <https://r-forge.r-project.org/projects/iforest/> The correct solution in R has been obtained with: ``` > tr = IsolationTrees(stackloss,ntree = 100000,hlim = 100, rFactor = 1) > as = AnomalyScore(stackloss, tr) > order(as$outF) [1] 11 6 5 9 13 10 12 18 7 20 8 15 14 16 19 4 21 17 3 2 1 > order(as$outF)-1 [1] 10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0 > ``` Where is the mistake?
2015/05/06
[ "https://Stackoverflow.com/questions/30080491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2411173/" ]
I have finally been able to solve the problem. The code is still slow due to the continuous copy operation performed in each split on the data. This is the working version of the algorithm. ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(score, n_samples): score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 m = M = 0 while m == M: feature_id = np.random.randint(low=0, high=n_features) feature = X[:, feature_id] m = feature.min() M = feature.max() #print(m, M, feature_id, X.shape) split_value = np.random.uniform(m, M, 1) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of index to the matrix X. This is required and add_index can be set to False only if the last column of X contains already indeces. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.path_length_ = {k:None for k in range(n_samples)} for k in self.path_length_.keys(): self.path_length_[k] = np.array([tree[k] for tree in trees if k in tree]) self.path_length_ = np.array([self.path_length_[k].mean() for k in self.path_length_.keys()]) self.anomaly_score_ = _anomaly_score(self.path_length_, self.sample_size) return self ```
``` self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) ``` You're calculating `_anomaly_score` with `n_samples`, which is the total number of samples. However, you are building the trees with subsamples. Therefore, when you calculate the average search length `_c(n)` you should use `sample_size` instead of `n_samples`, as the trees are built with subsamples. So, I believe your code should be: ``` self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, self.sample_size) ```
30,080,491
I am trying to reproduce the algorithm described in the Isolation Forest paper in python. <http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation> This is my current code: ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd from sklearn.decomposition import PCA def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(dict_scores, n_samples): score = np.array([np.mean(dict_scores[k]) for k in dict_scores.keys()]) score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 feature_id = np.random.randint(low=0, high=n_features-1) feature = X[:, feature_id] split_value = np.random.choice(feature) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of index to the matrix X. This is required and add_index can be set to False only if the last column of X already contains indices. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.all_anomaly_score_ = {k:None for k in range(n_samples)} for k in self.all_anomaly_score_.keys(): self.all_anomaly_score_[k] = np.array([tree[k] for tree in trees if k in tree]) self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) return self ``` The main part of the code is the `iTree` function, which returns a dictionary with the number of steps required to isolate each sample. A column of indices is attached to the input matrix `X` to make it easier to see which samples are in each node. When I compare the results obtained with my code with the ones obtained from the isolation forest available for R, I get different results.
Consider for example the stackloss dataset: ``` data = pd.read_csv("stackloss.csv") X = data.as_matrix()[:, 1:] max_depth = 100 itree = iTree(X, add_index=True, max_depth=max_depth) #example of isolation tree iforest = iForest(n_estimators=1, max_depth=max_depth, sample_size=21) # isolation forest iforest.fit(X) sol = np.argsort(iforest.anomaly_score_) #correct sol = [10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0] ``` `sol` often differs from the correct solution obtained with the R software. <https://r-forge.r-project.org/projects/iforest/> The correct solution in R has been obtained with: ``` > tr = IsolationTrees(stackloss,ntree = 100000,hlim = 100, rFactor = 1) > as = AnomalyScore(stackloss, tr) > order(as$outF) [1] 11 6 5 9 13 10 12 18 7 20 8 15 14 16 19 4 21 17 3 2 1 > order(as$outF)-1 [1] 10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0 > ``` Where is the mistake?
2015/05/06
[ "https://Stackoverflow.com/questions/30080491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2411173/" ]
``` self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) ``` You're calculating `_anomaly_score` with `n_samples`, which is the total number of samples. However, you are building the trees with subsamples. Therefore, when you calculate the average search length `_c(n)` you should use `sample_size` instead of `n_samples`, as the trees are built with subsamples. So, I believe your code should be: ``` self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, self.sample_size) ```
Donbeo, your code works pretty well with just a few minor adjustments; the main problem it had is that you missed one of the base cases (end conditions) of the recursive algorithm, so it hangs in a loop when that condition comes up. You need something to this effect in the \_split\_data function (shown in the code below) and also to handle this case in the iterate function (not shown): ``` minv = maxv = 0 inspected = set() # this set tracks the candidate features that we already inspected while minv == maxv: # check whether we ran out of features to try and none of them has different values; # if that is the case we need to break the loop, otherwise this loops forever if len(inspected) == n_features: # if we ran out of features to try and none of them has different values, # return -1 to signal the caller that we can't split X anymore. return X, X, -1, None feature_id = np.random.randint(low=0, high=n_features) if feature_id not in inspected: inspected.add(feature_id) split_feature = X[:, feature_id] minv = split_feature.min() maxv = split_feature.max() ```
30,080,491
I am trying to reproduce the algorithm described in the Isolation Forest paper in python. <http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf?q=isolation> This is my current code: ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd from sklearn.decomposition import PCA def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(dict_scores, n_samples): score = np.array([np.mean(dict_scores[k]) for k in dict_scores.keys()]) score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 feature_id = np.random.randint(low=0, high=n_features-1) feature = X[:, feature_id] split_value = np.random.choice(feature) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of index to the matrix X. This is required and add_index can be set to False only if the last column of X already contains indices. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.all_anomaly_score_ = {k:None for k in range(n_samples)} for k in self.all_anomaly_score_.keys(): self.all_anomaly_score_[k] = np.array([tree[k] for tree in trees if k in tree]) self.anomaly_score_ = _anomaly_score(self.all_anomaly_score_, n_samples) return self ``` The main part of the code is the iTree function, which returns a dictionary with the number of steps required to isolate each sample. A column of indices is attached to the input matrix `X` in order to make it easier to understand which samples are in each node. When I compare the results obtained with my code with the ones obtained with the isolation forest available for R, I get different results.
Consider for example the stackloss dataset: ``` data = pd.read_csv("stackloss.csv") X = data.as_matrix()[:, 1:] max_depth = 100 itree = iTree(X, add_index=True, max_depth=max_depth) #example of isolation tree iforest = iForest(n_estimators=1, max_depth=max_depth, sample_size=21) # isolation forest iforest.fit(X) sol = np.argsort(iforest.anomaly_score_) #correct sol = [10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0] ``` `sol` often differs from the correct solution obtained with the R software. <https://r-forge.r-project.org/projects/iforest/> The correct solution in R has been obtained with: ``` > tr = IsolationTrees(stackloss,ntree = 100000,hlim = 100, rFactor = 1) > as = AnomalyScore(stackloss, tr) > order(as$outF) [1] 11 6 5 9 13 10 12 18 7 20 8 15 14 16 19 4 21 17 3 2 1 > order(as$outF)-1 [1] 10 5 4 8 12 9 11 17 6 19 7 14 13 15 18 3 20 16 2 1 0 > ``` Where is the mistake?
2015/05/06
[ "https://Stackoverflow.com/questions/30080491", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2411173/" ]
I have finally been able to solve the problem. The code is still slow due to the continuous copy operation performed in each split on the data. This is the working version of the algorithm. ``` import numpy as np import sklearn as sk import matplotlib.pyplot as plt import pandas as pd def _h(i): return np.log(i) + 0.5772156649 def _c(n): if n > 2: h = _h(n-1) return 2*h - 2*(n - 1)/n if n == 2: return 1 else: return 0 def _anomaly_score(score, n_samples): score = -score/_c(n_samples) return 2**score def _split_data(X): ''' split the data in the left and right nodes ''' n_samples, n_columns = X.shape n_features = n_columns - 1 m = M = 0 while m == M: feature_id = np.random.randint(low=0, high=n_features) feature = X[:, feature_id] m = feature.min() M = feature.max() #print(m, M, feature_id, X.shape) split_value = np.random.uniform(m, M, 1) left_X = X[feature <= split_value] right_X = X[feature > split_value] return left_X, right_X, feature_id, split_value def iTree(X, add_index=False, max_depth = np.inf): ''' construct an isolation tree and returns the number of step required to isolate an element. A column of index is added to the input matrix X if add_index=True. This column is required in the algorithm. ''' n_split = {} def iterate(X, count = 0): n_samples, n_columns = X.shape n_features = n_columns - 1 if count > max_depth: for index in X[:,-1]: n_split[index] = count return if n_samples == 1: index = X[0, n_columns-1] n_split[index] = count return else: lX, rX, feature_id, split_value = _split_data(X) # Uncomment the print to visualize a draft of # the construction of the tree #print(lX[:,-1], rX[:,-1], feature_id, split_value, n_split) n_samples_lX, _ = lX.shape n_samples_rX, _ = rX.shape if n_samples_lX > 0: iterate(lX, count+1) if n_samples_rX >0: iterate(rX, count+1) if add_index: n_samples, _ = X.shape X = np.c_[X, range(n_samples)] iterate(X) return n_split class iForest(): ''' Class to construct the isolation forest. 
-n_estimators: is the number of trees in the forest, -sample_size: is the bootstrap parameter used during the construction of the forest, -add_index: adds a column of index to the matrix X. This is required and add_index can be set to False only if the last column of X already contains indices. -max_depth: is the maximum depth of each tree ''' def __init__(self, n_estimators=20, sample_size=None, add_index = True, max_depth = 100): self.n_estimators = n_estimators self.sample_size = sample_size self.add_index = add_index self.max_depth = max_depth return def fit(self, X): n_samples, n_features = X.shape if self.sample_size == None: self.sample_size = int(n_samples/2) if self.add_index: X = np.c_[X, range(n_samples)] trees = [iTree(X[np.random.choice(n_samples, self.sample_size, replace=False)], max_depth=self.max_depth) for i in range(self.n_estimators)] self.path_length_ = {k:None for k in range(n_samples)} for k in self.path_length_.keys(): self.path_length_[k] = np.array([tree[k] for tree in trees if k in tree]) self.path_length_ = np.array([self.path_length_[k].mean() for k in self.path_length_.keys()]) self.anomaly_score_ = _anomaly_score(self.path_length_, self.sample_size) return self ```
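The decisive change over the original `_split_data` is that the split value is now drawn uniformly between the feature's minimum and maximum (instead of being picked from the observed values), and a constant feature is rejected. A pure-Python sketch of that rule on a single column — the function name here is illustrative, not the author's numpy code:

```python
import random

def split_column(values):
    """Draw a uniform split point between min and max of `values`.

    Mirrors the corrected rule: a constant column cannot be split,
    so the caller must pick another feature (here we simply raise).
    """
    lo, hi = min(values), max(values)
    if lo == hi:
        raise ValueError("constant column cannot be split")
    split_value = random.uniform(lo, hi)
    left = [v for v in values if v <= split_value]
    right = [v for v in values if v > split_value]
    return left, right, split_value

left, right, s = split_column([3.0, 1.0, 4.0, 1.5, 9.0])
print(len(left), len(right))  # every sample lands on exactly one side
```

Drawing from the continuous range rather than from the observed values is what makes the ranking match the R implementation's behaviour more closely.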
Donbeo, your code works pretty well with just a few minor adjustments; the main problem it had is that you missed one of the base cases (end conditions) of the recursive algorithm, so it hangs in a loop when that condition comes up. You need something to this effect in the \_split\_data function (shown in the code below) and also to handle this case in the iterate function (not shown): ``` minv = maxv = 0 inspected = set() # this set tracks the candidate features that we already inspected while minv == maxv: # check whether we ran out of features to try and none of them has different values; # if that is the case we need to break the loop, otherwise this loops forever if len(inspected) == n_features: # if we ran out of features to try and none of them has different values, # return -1 to signal the caller that we can't split X anymore. return X, X, -1, None feature_id = np.random.randint(low=0, high=n_features) if feature_id not in inspected: inspected.add(feature_id) split_feature = X[:, feature_id] minv = split_feature.min() maxv = split_feature.max() ```
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
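For the upgrade itself, a terminal one-liner is usually enough; with an Anaconda install, `conda` is the safer choice. A sketch (assuming `conda`/`pip` are on your PATH):

```shell
# preferred with an Anaconda installation
conda update requests

# or with pip, asking for at least the version requests-oauthlib expects
pip install --upgrade "requests>=2.0.0"
```

After upgrading, restart Spyder (or its console) so the new version is imported.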
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
**1)** To the best of my knowledge as of early Nov, 2017 you are correct: there is no commonly available way to compile Swift to WebAssembly. Maybe some enterprising hacker somewhere has made it happen but if so she hasn't shared her code with us yet. **2)** In order to enable Wasm support you will probably need to hack on a few different parts. I think you could do it without knowing much of anything about the internals of the compiler (e.g. the parser & optimizers), but you'd need to learn about how the toolchain works and how it integrates with the platform at runtime. You can learn a *ton* about what you'd need to do by studying how Swift was ported to Android. Luckily, Brian Gesiak posted a *really* detailed blog post about exactly how that port worked (warning: small Patreon donation required): <https://modocache.io/how-to-port-the-swift-runtime-to-android> Seriously, you would be nuts to embark on this project without reading that article. Though I'm *NOT* an expert, based on that port and my (basic) understanding of Swift, I think the rough overview of where you'd need to hack would be: * The Swift compiler + You'll need to teach it about the Wasm "triple" used by LLVM, so it knows how to integrate with the rest of its toolchain + You'll need to set up a WebAssembly platform so that people can write `#if os(WebAssembly)` in places that require conditional compilation + You'll also need to set up similar build-time macros. The Android article explains this sort of thing really well. * The Swift runtime + This is written in C++ and it needs to run on Wasm + Since Wasm is an unusual platform there will probably be some work here. You might need to provide compatibility shims for various system calls and the like. + Projects like Emscripten have demonstrated lots of success compiling C++ to Wasm. * The Swift standard library + In theory you can write & run Swift code that doesn't use the standard library, but who would want to? 
+ Also in theory this should "just work" if the runtime works, but you will likely need to use your `#if os(WebAssembly)` feature here to work around platform irregularities * Bonus: The Foundation and Dispatch libraries + If you want to use existing Swift code these two libraries will be essential. ### Links: * Brian Gesiak's awesome blog post: <https://modocache.io/how-to-port-the-swift-runtime-to-android> * Link to the Android port's pull request: <https://github.com/apple/swift/pull/1442> * Interesting article about the challenges and rewards of cross-platform Swift: <https://medium.com/@ephemer/how-we-put-an-app-in-the-android-play-store-using-swift-67bd99573e3c>
A WebAssembly target would be like a generic Unix target for LLVM, so I think someone needs to develop that port. Please note that Swift -> Wasm in the browser would be pretty much useless, because Wasm has no DOM or DOM API access, so you still need JavaScript to do anything meaningful; thus the question: why would anyone bother to make the port? It looks like JavaScript remains the only web language. If you don't like JavaScript, you had better forget about web development. Chances are that Swift will run on Android before it runs on the web, so stick with Swift/iOS and then port to Android whenever that becomes possible. People don't use the web/browser that much anyway.
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
It looks like there is a commercial offering that supports compilation of Swift to WebAssembly. RemObjects, the developer tooling company, has [just announced support for WebAssembly with their Elements compiler](https://blogs.remobjects.com/2018/01/12/webassembly-swift-c-java-and-oxygene-in-the-browser/), which can compile Java, Swift, C# and Oxygene.
A WebAssembly target would be like a generic Unix target for LLVM, so I think someone needs to develop that port. Please note that Swift -> Wasm in the browser would be pretty much useless, because Wasm has no DOM or DOM API access, so you still need JavaScript to do anything meaningful; thus the question: why would anyone bother to make the port? It looks like JavaScript remains the only web language. If you don't like JavaScript, you had better forget about web development. Chances are that Swift will run on Android before it runs on the web, so stick with Swift/iOS and then port to Android whenever that becomes possible. People don't use the web/browser that much anyway.
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
As of May 2019 there's an open-source project available called [SwiftWasm](https://swiftwasm.org/) that allows you to compile Swift code to WebAssembly targeting [WASI SDK](https://wasi.dev). This means that binaries produced by SwiftWasm can be executed either in browsers with [WASI polyfill](https://wasi.dev/polyfill/) or standalone WebAssembly runtimes supporting WASI such as [wasmtime](https://github.com/CraneStation/wasmtime/), [lucet](https://github.com/fastly/lucet/) or [wasmer](https://wasmer.io).
A WebAssembly target would be like a generic Unix target for LLVM, so I think someone needs to develop that port. Please note that Swift -> Wasm in the browser would be pretty much useless, because Wasm has no DOM or DOM API access, so you still need JavaScript to do anything meaningful; thus the question: why would anyone bother to make the port? It looks like JavaScript remains the only web language. If you don't like JavaScript, you had better forget about web development. Chances are that Swift will run on Android before it runs on the web, so stick with Swift/iOS and then port to Android whenever that becomes possible. People don't use the web/browser that much anyway.
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
I was looking for a way to convert Swift code to WebAssembly, and I found this: <https://swiftwasm.org/> I do not know how mature this platform is (October 2022) or whether it can flourish, but having the capability is exciting. Also, it provides a means of writing JavaScript in Swift directly.
A WebAssembly target would be like a generic Unix target for LLVM, so I think someone needs to develop that port. Please note that Swift -> Wasm in the browser would be pretty much useless, because Wasm has no DOM or DOM API access, so you still need JavaScript to do anything meaningful; thus the question: why would anyone bother to make the port? It looks like JavaScript remains the only web language. If you don't like JavaScript, you had better forget about web development. Chances are that Swift will run on Android before it runs on the web, so stick with Swift/iOS and then port to Android whenever that becomes possible. People don't use the web/browser that much anyway.
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
**1)** To the best of my knowledge as of early Nov, 2017 you are correct: there is no commonly available way to compile Swift to WebAssembly. Maybe some enterprising hacker somewhere has made it happen but if so she hasn't shared her code with us yet. **2)** In order to enable Wasm support you will probably need to hack on a few different parts. I think you could do it without knowing much of anything about the internals of the compiler (e.g. the parser & optimizers), but you'd need to learn about how the toolchain works and how it integrates with the platform at runtime. You can learn a *ton* about what you'd need to do by studying how Swift was ported to Android. Luckily, Brian Gesiak posted a *really* detailed blog post about exactly how that port worked (warning: small Patreon donation required): <https://modocache.io/how-to-port-the-swift-runtime-to-android> Seriously, you would be nuts to embark on this project without reading that article. Though I'm *NOT* an expert, based on that port and my (basic) understanding of Swift, I think the rough overview of where you'd need to hack would be: * The Swift compiler + You'll need to teach it about the Wasm "triple" used by LLVM, so it knows how to integrate with the rest of its toolchain + You'll need to set up a WebAssembly platform so that people can write `#if os(WebAssembly)` in places that require conditional compilation + You'll also need to set up similar build-time macros. The Android article explains this sort of thing really well. * The Swift runtime + This is written in C++ and it needs to run on Wasm + Since Wasm is an unusual platform there will probably be some work here. You might need to provide compatibility shims for various system calls and the like. + Projects like Emscripten have demonstrated lots of success compiling C++ to Wasm. * The Swift standard library + In theory you can write & run Swift code that doesn't use the standard library, but who would want to? 
+ Also in theory this should "just work" if the runtime works, but you will likely need to use your `#if os(WebAssembly)` feature here to work around platform irregularities * Bonus: The Foundation and Dispatch libraries + If you want to use existing Swift code these two libraries will be essential. ### Links: * Brian Gesiak's awesome blog post: <https://modocache.io/how-to-port-the-swift-runtime-to-android> * Link to the Android port's pull request: <https://github.com/apple/swift/pull/1442> * Interesting article about the challenges and rewards of cross-platform Swift: <https://medium.com/@ephemer/how-we-put-an-app-in-the-android-play-store-using-swift-67bd99573e3c>
It looks like there is a commercial offering that supports compilation of Swift to WebAssembly. RemObjects, the developer tooling company, has [just announced support for WebAssembly with their Elements compiler](https://blogs.remobjects.com/2018/01/12/webassembly-swift-c-java-and-oxygene-in-the-browser/), which can compile Java, Swift, C# and Oxygene.
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
**1)** To the best of my knowledge as of early Nov, 2017 you are correct: there is no commonly available way to compile Swift to WebAssembly. Maybe some enterprising hacker somewhere has made it happen but if so she hasn't shared her code with us yet. **2)** In order to enable Wasm support you will probably need to hack on a few different parts. I think you could do it without knowing much of anything about the internals of the compiler (e.g. the parser & optimizers), but you'd need to learn about how the toolchain works and how it integrates with the platform at runtime. You can learn a *ton* about what you'd need to do by studying how Swift was ported to Android. Luckily, Brian Gesiak posted a *really* detailed blog post about exactly how that port worked (warning: small Patreon donation required): <https://modocache.io/how-to-port-the-swift-runtime-to-android> Seriously, you would be nuts to embark on this project without reading that article. Though I'm *NOT* an expert, based on that port and my (basic) understanding of Swift, I think the rough overview of where you'd need to hack would be: * The Swift compiler + You'll need to teach it about the Wasm "triple" used by LLVM, so it knows how to integrate with the rest of its toolchain + You'll need to set up a WebAssembly platform so that people can write `#if os(WebAssembly)` in places that require conditional compilation + You'll also need to set up similar build-time macros. The Android article explains this sort of thing really well. * The Swift runtime + This is written in C++ and it needs to run on Wasm + Since Wasm is an unusual platform there will probably be some work here. You might need to provide compatibility shims for various system calls and the like. + Projects like Emscripten have demonstrated lots of success compiling C++ to Wasm. * The Swift standard library + In theory you can write & run Swift code that doesn't use the standard library, but who would want to? 
+ Also in theory this should "just work" if the runtime works, but you will likely need to use your `#if os(WebAssembly)` feature here to work around platform irregularities * Bonus: The Foundation and Dispatch libraries + If you want to use existing Swift code these two libraries will be essential. ### Links: * Brian Gesiak's awesome blog post: <https://modocache.io/how-to-port-the-swift-runtime-to-android> * Link to the Android port's pull request: <https://github.com/apple/swift/pull/1442> * Interesting article about the challenges and rewards of cross-platform Swift: <https://medium.com/@ephemer/how-we-put-an-app-in-the-android-play-store-using-swift-67bd99573e3c>
As of May 2019 there's an open-source project available called [SwiftWasm](https://swiftwasm.org/) that allows you to compile Swift code to WebAssembly targeting [WASI SDK](https://wasi.dev). This means that binaries produced by SwiftWasm can be executed either in browsers with [WASI polyfill](https://wasi.dev/polyfill/) or standalone WebAssembly runtimes supporting WASI such as [wasmtime](https://github.com/CraneStation/wasmtime/), [lucet](https://github.com/fastly/lucet/) or [wasmer](https://wasmer.io).
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
**1)** To the best of my knowledge as of early Nov, 2017 you are correct: there is no commonly available way to compile Swift to WebAssembly. Maybe some enterprising hacker somewhere has made it happen but if so she hasn't shared her code with us yet. **2)** In order to enable Wasm support you will probably need to hack on a few different parts. I think you could do it without knowing much of anything about the internals of the compiler (e.g. the parser & optimizers), but you'd need to learn about how the toolchain works and how it integrates with the platform at runtime. You can learn a *ton* about what you'd need to do by studying how Swift was ported to Android. Luckily, Brian Gesiak posted a *really* detailed blog post about exactly how that port worked (warning: small Patreon donation required): <https://modocache.io/how-to-port-the-swift-runtime-to-android> Seriously, you would be nuts to embark on this project without reading that article. Though I'm *NOT* an expert, based on that port and my (basic) understanding of Swift, I think the rough overview of where you'd need to hack would be: * The Swift compiler + You'll need to teach it about the Wasm "triple" used by LLVM, so it knows how to integrate with the rest of its toolchain + You'll need to set up a WebAssembly platform so that people can write `#if os(WebAssembly)` in places that require conditional compilation + You'll also need to set up similar build-time macros. The Android article explains this sort of thing really well. * The Swift runtime + This is written in C++ and it needs to run on Wasm + Since Wasm is an unusual platform there will probably be some work here. You might need to provide compatibility shims for various system calls and the like. + Projects like Emscripten have demonstrated lots of success compiling C++ to Wasm. * The Swift standard library + In theory you can write & run Swift code that doesn't use the standard library, but who would want to? 
+ Also in theory this should "just work" if the runtime works, but you will likely need to use your `#if os(WebAssembly)` feature here to work around platform irregularities * Bonus: The Foundation and Dispatch libraries + If you want to use existing Swift code these two libraries will be essential. ### Links: * Brian Gesiak's awesome blog post: <https://modocache.io/how-to-port-the-swift-runtime-to-android> * Link to the Android port's pull request: <https://github.com/apple/swift/pull/1442> * Interesting article about the challenges and rewards of cross-platform Swift: <https://medium.com/@ephemer/how-we-put-an-app-in-the-android-play-store-using-swift-67bd99573e3c>
I was looking for a way to convert Swift code to WebAssembly, and I found this: <https://swiftwasm.org/> I do not know how mature this platform is (October 2022) or whether it can flourish, but having the capability is exciting. Also, it provides a means of writing JavaScript in Swift directly.
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
It looks like there is a commercial offering that supports compilation of Swift to WebAssembly. RemObjects, the developer tooling company, has [just announced support for WebAssembly with their Elements compiler](https://blogs.remobjects.com/2018/01/12/webassembly-swift-c-java-and-oxygene-in-the-browser/), which can compile Java, Swift, C# and Oxygene.
As of May 2019 there's an open-source project available called [SwiftWasm](https://swiftwasm.org/) that allows you to compile Swift code to WebAssembly targeting [WASI SDK](https://wasi.dev). This means that binaries produced by SwiftWasm can be executed either in browsers with [WASI polyfill](https://wasi.dev/polyfill/) or standalone WebAssembly runtimes supporting WASI such as [wasmtime](https://github.com/CraneStation/wasmtime/), [lucet](https://github.com/fastly/lucet/) or [wasmer](https://wasmer.io).
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
It looks like there is a commercial offering that supports compilation of Swift to WebAssembly. RemObjects, the developer tooling company, has [just announced support for WebAssembly with their Elements compiler](https://blogs.remobjects.com/2018/01/12/webassembly-swift-c-java-and-oxygene-in-the-browser/), which can compile Java, Swift, C# and Oxygene.
I was looking for a way to convert Swift code to WebAssembly, and I found this: <https://swiftwasm.org/> I do not know how mature this platform is (October 2022) or whether it can flourish, but having the capability is exciting. Also, it provides a means of writing JavaScript in Swift directly.
46,572,148
I am receiving the following error message in Spyder: "Warning: You are using requests version , which is older than requests-oauthlib expects, please upgrade to 2.0.0 or later." I am not sure how I upgrade requests. I am using Python 2.7 as part of an Anaconda installation.
2017/10/04
[ "https://Stackoverflow.com/questions/46572148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7426556/" ]
As of May 2019 there's an open-source project available called [SwiftWasm](https://swiftwasm.org/) that allows you to compile Swift code to WebAssembly targeting [WASI SDK](https://wasi.dev). This means that binaries produced by SwiftWasm can be executed either in browsers with [WASI polyfill](https://wasi.dev/polyfill/) or standalone WebAssembly runtimes supporting WASI such as [wasmtime](https://github.com/CraneStation/wasmtime/), [lucet](https://github.com/fastly/lucet/) or [wasmer](https://wasmer.io).
I was looking for a way to convert Swift code to WebAssembly, and I found this: <https://swiftwasm.org/> I do not know how mature this platform is (October 2022) or whether it can flourish, but having the capability is exciting. Also, it provides a means of writing JavaScript in Swift directly.
30,229,231
I have a problem when using Python to save an image from a URL, either via a urllib2 request or urllib.urlretrieve. That is, the URL of the image is valid; I can download it manually using the browser. However, when I use Python to download the image, the file cannot be opened. I use Mac OS Preview to view the image. Thank you! UPDATE: The code is as follows ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser, but the Python code downloads a file that cannot be opened. Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compared the image that I download with Python and the one that I download manually through the browser. The size of the former is several bytes smaller. So it seems that the file is incomplete, but I don't know why Python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
The simplest way to download and save an image from the internet is the **urllib.request** package. Pass the image URL (the source to download from) and a destination path (where to save the image locally, including a file name ending in .jpg or .png); replace "local-filename.jpg" below with your own name. **Python 3** ``` import urllib.request imgURL = "http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg" urllib.request.urlretrieve(imgURL, "D:/abc/image/local-filename.jpg") ``` You can download multiple images as well if you have all their URLs: just iterate over them in a for loop and the code downloads each image.
Download and save an image to a directory ==================================== ``` import requests headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.9" } img_data = requests.get(url=image_url, headers=headers).content with open(create_dir() + "/" + 'image_name' + '.png', 'wb') as handler: handler.write(img_data) ``` To create the directory (note the added `import os`, which the snippet needs): ``` import os def create_dir(): # Directory dir_ = "CountryFlags" # Parent Directory path parent_dir = os.path.dirname(os.path.realpath(__file__)) # Path path = os.path.join(parent_dir, dir_) os.mkdir(path) return path ```
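A note on the helper above: `os.mkdir` raises `FileExistsError` on a second run. A minimal sketch using `os.makedirs(exist_ok=True)` instead, so the function is safe to call repeatedly (the directory name and signature are illustrative, not the original poster's):

```python
import os

def create_dir(parent_dir=None, dir_name="CountryFlags"):
    # Default to the current working directory; makedirs with
    # exist_ok=True is a no-op if the folder already exists.
    parent_dir = parent_dir or os.getcwd()
    path = os.path.join(parent_dir, dir_name)
    os.makedirs(path, exist_ok=True)
    return path
```

Calling this twice in a row returns the same path without raising.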
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
``` import requests img_data = requests.get(image_url).content with open('image_name.jpg', 'wb') as handler: handler.write(img_data) ```
Python code snippet to download a file from a URL and save it under its original name: ``` import requests url = 'http://google.com/favicon.ico' filename = url.split('/')[-1] r = requests.get(url, allow_redirects=True) open(filename, 'wb').write(r.content) ```
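One caveat with `url.split('/')[-1]`: it keeps any query string and yields an empty name for URLs ending in `/`. A sketch of a slightly more robust helper (the function name and default are illustrative):

```python
import os
from urllib.parse import urlparse

def filename_from_url(url, default="download.bin"):
    # urlparse drops the query string; basename takes the last path part.
    name = os.path.basename(urlparse(url).path)
    return name or default

print(filename_from_url("http://google.com/favicon.ico"))  # → favicon.ico
```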
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
``` import requests img_data = requests.get(image_url).content with open('image_name.jpg', 'wb') as handler: handler.write(img_data) ```
On Linux, you can use the `wget` command: ``` import os url1 = 'YOUR_URL_WHATEVER' os.system('wget {}'.format(url1)) ```
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
``` import requests img_data = requests.get(image_url).content with open('image_name.jpg', 'wb') as handler: handler.write(img_data) ```
```py import random import urllib.request def download_image(url): name = random.randrange(1,100) fullname = str(name)+".jpg" urllib.request.urlretrieve(url,fullname) download_image("http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg") ```
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
A sample code that works for me on Windows: ``` import requests with open('pic1.jpg', 'wb') as handle: response = requests.get(pic_url, stream=True) if not response.ok: print(response) for block in response.iter_content(1024): if not block: break handle.write(block) ```
On Linux, you can use the `wget` command: ``` import os url1 = 'YOUR_URL_WHATEVER' os.system('wget {}'.format(url1)) ```
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
A sample code that works for me on Windows: ``` import requests with open('pic1.jpg', 'wb') as handle: response = requests.get(pic_url, stream=True) if not response.ok: print(response) for block in response.iter_content(1024): if not block: break handle.write(block) ```
Download and save an image to a directory ==================================== ``` import requests headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.9" } img_data = requests.get(url=image_url, headers=headers).content with open(create_dir() + "/" + 'image_name' + '.png', 'wb') as handler: handler.write(img_data) ``` To create the directory (note the added `import os`, which the snippet needs): ``` import os def create_dir(): # Directory dir_ = "CountryFlags" # Parent Directory path parent_dir = os.path.dirname(os.path.realpath(__file__)) # Path path = os.path.join(parent_dir, dir_) os.mkdir(path) return path ```
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
```py import random import urllib.request def download_image(url): name = random.randrange(1,100) fullname = str(name)+".jpg" urllib.request.urlretrieve(url,fullname) download_image("http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg") ```
On Linux, you can use the `wget` command: ``` import os url1 = 'YOUR_URL_WHATEVER' os.system('wget {}'.format(url1)) ```
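Note that `os.system('wget {}'.format(url1))` passes the URL through the shell, so metacharacters (`;`, `&`, spaces) in the URL can break the command or be abused. A sketch of the same idea with `subprocess` and an argument list (helper names are illustrative):

```python
import subprocess

def wget_argv(url, dest):
    # An argument list is passed to wget verbatim -- nothing in the
    # URL is interpreted by a shell.
    return ["wget", "-O", dest, url]

def wget_download(url, dest):
    # Returns True when wget exits with status 0.
    return subprocess.run(wget_argv(url, dest)).returncode == 0
```

`wget_download` still requires `wget` to be installed, just like the `os.system` version.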
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
On Linux, you can use the `wget` command: ``` import os url1 = 'YOUR_URL_WHATEVER' os.system('wget {}'.format(url1)) ```
Download and save an image to a directory ==================================== ``` import requests headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-US,en;q=0.9" } img_data = requests.get(url=image_url, headers=headers).content with open(create_dir() + "/" + 'image_name' + '.png', 'wb') as handler: handler.write(img_data) ``` To create the directory (note the added `import os`, which the snippet needs): ``` import os def create_dir(): # Directory dir_ = "CountryFlags" # Parent Directory path parent_dir = os.path.dirname(os.path.realpath(__file__)) # Path path = os.path.join(parent_dir, dir_) os.mkdir(path) return path ```
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
You can pick any arbitrary image from Google Images, copy the URL, and use the following approach to download the image. Note that the extension isn't always included in the URL, as some of the other answers seem to assume. You can automatically detect the correct extension with the standard-library imghdr module. ```py import requests, imghdr gif_url = 'https://media.tenor.com/images/eff22afc2220e9df92a7aa2f53948f9f/tenor.gif' img_url = 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQwXRq7zbWry0MyqWq1Rbq12g_oL-uOoxo4Yw&usqp=CAU' for url, save_basename in [ (gif_url, 'gif_download_test'), (img_url, 'img_download_test') ]: response = requests.get(url) if response.status_code != 200: raise RuntimeError(f"download failed: {response.status_code}") extension = imghdr.what(file=None, h=response.content) save_path = f"{save_basename}.{extension}" with open(save_path, 'wb') as f: f.write(response.content) ```
If you want to stick to two lines: ``` with open(os.path.join(dir_path, os.path.basename(url)), 'wb') as f: f.write(requests.get(url).content) ```
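One caveat for current interpreters: `imghdr`, used in one of the answers above, was deprecated and removed from the standard library in Python 3.13. A dependency-free sketch that sniffs only the magic bytes of the formats discussed here (not a full replacement for `imghdr.what`):

```python
def sniff_extension(data, default="bin"):
    # Check well-known magic numbers at the start of the payload.
    if data[:3] == b"\xff\xd8\xff":          # JPEG
        return "jpg"
    if data[:8] == b"\x89PNG\r\n\x1a\n":     # PNG
        return "png"
    if data[:6] in (b"GIF87a", b"GIF89a"):   # GIF
        return "gif"
    return default
```

You would call it as `sniff_extension(response.content)` before choosing the save path.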
30,229,231
I got a problem when I am using python to save an image from url either by urllib2 request or urllib.urlretrieve. That is the url of the image is valid. I could download it manually using the explorer. However, when I use python to download the image, the file cannot be opened. I use Mac OS preview to view the image. Thank you! UPDATE: The code is as follow ``` def downloadImage(self): request = urllib2.Request(self.url) pic = urllib2.urlopen(request) print "downloading: " + self.url print self.fileName filePath = localSaveRoot + self.catalog + self.fileName + Picture.postfix # urllib.urlretrieve(self.url, filePath) with open(filePath, 'wb') as localFile: localFile.write(pic.read()) ``` The image URL that I want to download is <http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg> This URL is valid and I can save it through the browser but the python code would download a file that cannot be opened. The Preview says "It may be damaged or use a file format that Preview doesn't recognize." I compare the image that I download by Python and the one that I download manually through the browser. The size of the former one is several byte smaller. So it seems that the file is uncompleted, but I don't know why python cannot completely download it.
2015/05/14
[ "https://Stackoverflow.com/questions/30229231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4898319/" ]
The simplest way to download and save an image from the internet is the **urllib.request** package. Pass the image URL (the source to download from) and a destination path (where to save the image locally, including a file name ending in .jpg or .png); replace "local-filename.jpg" below with your own name. **Python 3** ``` import urllib.request imgURL = "http://site.meishij.net/r/58/25/3568808/a3568808_142682562777944.jpg" urllib.request.urlretrieve(imgURL, "D:/abc/image/local-filename.jpg") ``` You can download multiple images as well if you have all their URLs: just iterate over them in a for loop and the code downloads each image.
On Linux, you can use the `wget` command: ``` import os url1 = 'YOUR_URL_WHATEVER' os.system('wget {}'.format(url1)) ```
31,550,622
I can call MATLAB from my system python: ``` >>> import matlab.engine >>> ``` but when I load a virtual environment, I now get a segfault: ``` >>> import matlab.engine Segmentation fault: 11 ``` I've run the [setup.py install instructions](http://www.mathworks.com/help/matlab/matlab_external/install-the-matlab-engine-for-python.html) for both system python and my virtual environment. I expected questions [like this one](http://www.mathworks.com/matlabcentral/answers/180518-why-does-matlab-engine-for-python-crash-when-using-a-non-system-default-version-of-python-on-mac), in which I have to set the `DYLD_LIBRARY_PATH` explicitly would fix things, but I don't have that environment variable set when I run my system python. What could be different between the two python implementations that would cause this segfault? EDIT: I'm using OS X Yosemite on a Late 2013 13" Macbook Pro. I'm using Python 2.7 from a freshly installed virtual environment (NOT a virtual machine).
2015/07/21
[ "https://Stackoverflow.com/questions/31550622", "https://Stackoverflow.com", "https://Stackoverflow.com/users/362703/" ]
I did this: ``` cd "matlabroot\extern\engines\python" python setup.py install --prefix="installdir" ``` Here `installdir` is my virtualenv and `matlabroot` is the directory of the MATLAB install. Seems to work on my Windows server; so far, so good. Reference here: <https://www.mathworks.com/help/matlab/matlab_external/install-matlab-engine-api-for-python-in-nondefault-locations.html>
I ran `python setup.py install` from `matlabroot\extern\engines\python` with my virtual environment active. Note that I did use `venv`.
31,550,622
I can call MATLAB from my system python: ``` >>> import matlab.engine >>> ``` but when I load a virtual environment, I now get a segfault: ``` >>> import matlab.engine Segmentation fault: 11 ``` I've run the [setup.py install instructions](http://www.mathworks.com/help/matlab/matlab_external/install-the-matlab-engine-for-python.html) for both system python and my virtual environment. I expected questions [like this one](http://www.mathworks.com/matlabcentral/answers/180518-why-does-matlab-engine-for-python-crash-when-using-a-non-system-default-version-of-python-on-mac), in which I have to set the `DYLD_LIBRARY_PATH` explicitly would fix things, but I don't have that environment variable set when I run my system python. What could be different between the two python implementations that would cause this segfault? EDIT: I'm using OS X Yosemite on a Late 2013 13" Macbook Pro. I'm using Python 2.7 from a freshly installed virtual environment (NOT a virtual machine).
2015/07/21
[ "https://Stackoverflow.com/questions/31550622", "https://Stackoverflow.com", "https://Stackoverflow.com/users/362703/" ]
I did this: ``` cd "matlabroot\extern\engines\python" python setup.py install --prefix="installdir" ``` Here `installdir` is my virtualenv and `matlabroot` is the directory of the MATLAB install. Seems to work on my Windows server; so far, so good. Reference here: <https://www.mathworks.com/help/matlab/matlab_external/install-matlab-engine-api-for-python-in-nondefault-locations.html>
I have successfully run MATLAB 2019b through venv. The command that I used: `sudo python3.7 setup.py install --prefix="/home/ubuntu/alexandros/vitrualEnv/py37"` You have to give the path to the full environment. In my case, it is `/home/ubuntu/alexandros/vitrualEnv/py37/`. Inside the virtual environment, you will see ![image](https://i.stack.imgur.com/UwDgB.png) Then you have to go inside `lib` and choose the version that you have installed. For instance, mine was 3.7, and that's why I have python3.7 instead of python3. You might have several python folders inside the lib folder, like: ![image](https://i.stack.imgur.com/mBYyO.png) However, you have to use the main one. Inside this folder you will see something similar to this: ![image](https://i.stack.imgur.com/0HrLa.png)
37,465,816
The Getting Started docs for aiohttp give the following client example: ``` import asyncio import aiohttp async def fetch_page(session, url): with aiohttp.Timeout(10): async with session.get(url) as response: assert response.status == 200 return await response.read() loop = asyncio.get_event_loop() with aiohttp.ClientSession(loop=loop) as session: content = loop.run_until_complete( fetch_page(session, 'http://python.org')) print(content) ``` And they give the following note for Python 3.4 users: > > If you are using Python 3.4, please replace await with yield from and > async def with a @coroutine decorator. > > > If I follow these instructions I get: ``` import aiohttp import asyncio @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(10): async with session.get(url) as response: return (yield from response.text()) if __name__ == '__main__': loop = asyncio.get_event_loop() with aiohttp.ClientSession(loop=loop) as session: html = loop.run_until_complete( fetch(session, 'http://python.org')) print(html) ``` However, this will not run, because `async with` is not supported in Python 3.4: ``` $ python3 client.py File "client.py", line 7 async with session.get(url) as response: ^ SyntaxError: invalid syntax ``` How can I translate the `async with` statement to work with Python 3.4?
2016/05/26
[ "https://Stackoverflow.com/questions/37465816", "https://Stackoverflow.com", "https://Stackoverflow.com/users/58866/" ]
Just don't use the result of `session.get()` as a context manager; use it as a coroutine directly instead. The request context manager that `session.get()` produces would normally [*release* the request](http://pythonhosted.org/aiohttp/client_reference.html#aiohttp.ClientResponse.release) on exit, but [so does using `response.text()`](http://pythonhosted.org/aiohttp/client_reference.html#aiohttp.ClientResponse.text), so you could ignore that here: ``` @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(10): response = yield from session.get(url) return (yield from response.text()) ``` The request wrapper returned here doesn't have the required asynchronous methods (`__aenter__` and `__aexit__`); they are omitted entirely when not using Python 3.5 (see the [relevant source code](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/client.py#L491-L560)). If you have more statements between the `session.get()` call and accessing the `response.text()` awaitable, you probably want to use a `try:..finally:` anyway to release the connection; the Python 3.5 release context manager also *closes* the response if an exception occurred. Because a `yield from response.release()` is needed here, this can't be encapsulated in a regular context manager on Python 3.4: ``` import sys @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(10): response = yield from session.get(url) try: # other statements return (yield from response.text()) finally: if sys.exc_info()[0] is not None: # on exceptions, close the connection altogether response.close() else: yield from response.release() ```
`aiohttp`'s [examples](https://github.com/KeepSafe/aiohttp/tree/master/examples) are implemented using 3.4 syntax. Based on the [json client example](https://github.com/KeepSafe/aiohttp/blob/master/examples/client_json.py), your function would be: ``` @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(10): resp = yield from session.get(url) try: return (yield from resp.text()) finally: yield from resp.release() ``` **Upd:** Note that Martijn's solution would work for simple cases, but may lead to unwanted behavior in specific cases: ``` @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(5): response = yield from session.get(url) # Any actions that may lead to error: 1/0 return (yield from response.text()) # exception + warning "Unclosed response" ``` Besides the exception, you'll also get the warning "Unclosed response". This may lead to connection leaks in a complex app. You will avoid this problem if you manually call `resp.release()`/`resp.close()`: ``` @asyncio.coroutine def fetch(session, url): with aiohttp.Timeout(5): resp = yield from session.get(url) try: # Any actions that may lead to error: 1/0 return (yield from resp.text()) except Exception as e: # .close() on exception. resp.close() raise e finally: # .release() otherwise to return connection into free connection pool. # It's ok to release closed response: # https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/client_reqrep.py#L664 yield from resp.release() ``` I think it's better to follow the official examples (and the `__aexit__` [implementation](https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/client.py#L555)) and call `resp.release()`/`resp.close()` explicitly.
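To see why the 3.4 translation can't simply keep `async with`: that statement is sugar for `__aenter__`/`__aexit__` calls, which aiohttp's pre-3.5 request wrapper did not define. A minimal illustration with a dummy context manager (not aiohttp's real classes):

```python
import asyncio

class Resource:
    # Stand-in for an async context manager such as session.get(url).
    async def __aenter__(self):
        self.released = False
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.released = True

async def desugared():
    # Roughly what `async with Resource() as r: ...` expands to; on 3.4
    # these awaits would be `yield from` -- if the methods existed.
    r = Resource()
    await r.__aenter__()
    try:
        pass  # body of the with-block
    finally:
        await r.__aexit__(None, None, None)
    return r.released

print(asyncio.run(desugared()))  # → True
```

The `finally` clause mirrors the manual `resp.release()` pattern shown in the answers: cleanup runs whether or not the body raises.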
52,211,917
Trying to create a batch script for windows that runs a program with python3 if available else python2. I know the script can be executed with `$py -2 script.py` and py3 with `$py -3 script.py`. and if I run `py -0`, it returns all the python versions. How do I build this script? I do not want to check if the python directory is available or not, I'd prefer to check in a way that is python location agnostic.
2018/09/06
[ "https://Stackoverflow.com/questions/52211917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5252492/" ]
Not a full solution, but a method to detect which version of Python is installed: You can check if Python 3 is installed by running `py -3 --version` and then checking the `%ERRORLEVEL%` variable in the batch script. If it is 0, then `py -3 --version` was successful, i.e. Python 3 is installed on the system. If it is nonzero, then Python 3 is not installed.
You can do it without a temporary file: ``` set PYTHON_MAJOR_VERSION=0 for /f %%i in ('python -c "import sys; print(sys.version_info[0])"') do set PYTHON_MAJOR_VERSION=%%i ```
52,211,917
Trying to create a batch script for windows that runs a program with python3 if available else python2. I know the script can be executed with `$py -2 script.py` and py3 with `$py -3 script.py`. and if I run `py -0`, it returns all the python versions. How do I build this script? I do not want to check if the python directory is available or not, I'd prefer to check in a way that is python location agnostic.
2018/09/06
[ "https://Stackoverflow.com/questions/52211917", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5252492/" ]
Piping `python.exe --version` directly to `find` or `findstr` does not work (with Python 2.7, the version string is printed to stderr). Dynamically building and running a small Python script that prints the version makes the piping work! The solution: ``` @echo off set "$py=0" call:construct for /f "delims=" %%a in ('python #.py ^| findstr "2"') do set "$py=2" for /f "delims=" %%a in ('python #.py ^| findstr "3"') do set "$py=3" del #.py goto:%$py% echo python is not installed or python's path Path is not in the %%$path%% env. var exit/b :2 echo running with PY 2 exit/b :3 echo running with PY 3 exit/b :construct echo import sys; print('{0[0]}.{0[1]}'.format(sys.version_info^)^) >#.py ```
You can do it without a temporary file: ``` set PYTHON_MAJOR_VERSION=0 for /f %%i in ('python -c "import sys; print(sys.version_info[0])"') do set PYTHON_MAJOR_VERSION=%%i ```
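The same `sys.version_info` probe used in the batch detection above can also branch inside the script itself, so one entry point runs on either interpreter. A minimal sketch (the `text_type` shim is just an example of what you might branch on):

```python
import sys

# True on Python 3, False on Python 2.
PY3 = sys.version_info[0] >= 3

if PY3:
    text_type = str
else:
    text_type = unicode  # noqa: F821 -- only evaluated on Python 2

print(sys.version_info[0])
```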
67,644,959
Since the code was cryptic I decided to reformulate it. This code is trying to remove the second element from a linked list (the node with number 2 on "int data"). The first parameter of remove\_node is the address of the first pointer of the list, so if I have to remove the first pointer of the list I am able to start the list from the next pointer. The problem is inside the second while loop, inside the if clause, more specifically with the free(previous->next) function, it is changing the address pointed by address\_of\_ptr (aka \*address\_of\_ptr) and I can't understand why this is happening. Could someone enlighten me? ``` #include <stdlib.h> typedef struct node { struct node *next; int data; } node; void remove_node(node **address_of_ptr, int int_to_remove) { node *previous; while (*address_of_ptr != NULL && (*address_of_ptr)->data == int_to_remove) { previous = *address_of_ptr; *address_of_ptr = (*address_of_ptr)->next; free(previous); } previous = *address_of_ptr; address_of_ptr = &((*address_of_ptr)->next); while (*address_of_ptr != NULL) { if ((*address_of_ptr)->data == int_to_remove) { address_of_ptr = &((*address_of_ptr)->next); free(previous->next); previous->next = *address_of_ptr; /*or previous->next = (*address_of_pointer)->next; free(*address_of_pointer); address_of_pointer = &previous->next; */ } else { previous = *address_of_ptr; address_of_ptr = &((*address_of_ptr)->next); } } } int main(void) { node *head = malloc(sizeof(node)); node *a = malloc(sizeof(node)); node *b = malloc(sizeof(node)); head->next = a; a->next = b; b->next = NULL; head->data = 1; a->data = 2; b->data = 1; remove_node(&head, 2); return 0; } ``` I figured it out using pythontutor.com. 
Here is an image of the execution about to replace *address\_of\_ptr* with the address of the next pointer in the list: [about to replace *address\_of\_ptr*](https://i.stack.imgur.com/NqMAr.png) In my mind what would happen is that *address\_of\_ptr* would move from *head* to *a*: [like this](https://i.stack.imgur.com/c2r9l.png) But what actually happens is this: [*address\_of\_ptr* moves from head to the first node's *next* variable](https://i.stack.imgur.com/56UPn.png), which is also the address of the next node (such as *a*). Inside the if clause: [*address\_of\_ptr* is set to the second node's *next* variable](https://i.stack.imgur.com/OjsIq.png) What I was expecting was it being set to *b*: [Like this](https://i.stack.imgur.com/TFtRv.png) Since it becomes equivalent to *previous->next*, it causes the program to free *address\_of\_ptr*.
2021/05/22
[ "https://Stackoverflow.com/questions/67644959", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15996314/" ]
First of all, if the first `while` loop stops due to `*address_of_ptr == NULL`, then the line `address_of_ptr = &((*address_of_ptr)->next);` will cause undefined behavior, due to dereferencing a `NULL` pointer. However, this will not happen with the sample input provided by the function `main`, so it is not the cause of your problem. --- Your problem is the following: When first executing inside the `if` block of the second `while` loop (which will be in the first iteration of the second `while` loop with your sample input), `address_of_ptr` will point to the `next` member of the first node in the linked list, so the pointer will have the value of `&head->next`. This is because you have the line `address_of_ptr = &((*address_of_ptr)->next);` in several places in your program, and it will execute one of these lines exactly once by the time you enter the `if` block. The first line of that `if` block is also the line `address_of_ptr = &((*address_of_ptr)->next);` so that after executing that line, `address_of_ptr` will point to the `next` member of the second node in the linked list, so the pointer will have the value of `&a->next`. At that time, the value of `previous` will be `head`, because at the time `previous = *address_of_ptr;` was executed, `address_of_ptr` had the value of `&head`. Therefore, when the next line `free(previous->next);` is executed, `previous->next` will have the value of `a`, which ends the lifetime of that node. As already stated, at this time `address_of_ptr` will have the value `&a->next`, which means that `address_of_ptr` is now a dangling pointer, because it is now pointing to a freed memory location. Therefore, it is not surprising that `*address_of_ptr` changes after the `free` statement, because freed memory can change at any time. 
--- Note that the function `remove_node` can be implemented in a much simpler way: ``` void remove_node( node **pp_head, int int_to_remove ) { node **pp = pp_head, *p; //process one node per loop iteration while ( (p = *pp) != NULL ) { //check if node should be removed if ( p->data == int_to_remove ) { //unlink the node from the linked list *pp = p->next; //free the unlinked node free( p ); } else { //go to next node in the list pp = &p->next; } } } ```
This line: ``` begin_list = &previous->next; ``` You might be misunderstanding how pointers to pointers work. Since `begin_list` is a `t_list **`, you generally want to assign to `*begin_list`. Also, since `previous->next` is already an address, you don't want to assign the address to `*begin_list`, just the value, so this should work instead: ``` *begin_list = previous->next; ``` Same for the similar line below it.
I prepared an answer last night several minutes after you asked the question at first, and put a lot of thought into it, but didn't come back until just now. I'd let my version go, but it's a more detailed look into the problem anyway, so I thought I'd include it for posterity. --- The problem lay on line 36: ``` begin_list = &(*begin_list)->next; ``` In the first `while` loop in the function `ft_list_remove_if`, I can see that you wanted to change the beginning of the list to point to some list item potentially later in the list; so for example, if you wanted to remove all items in the list which contained the string `"a"`, this function should indeed change the value of `list1` in the `main` function; this way the new start of the list would actually be `list2`. However, by the time the second `while` loop is entered, there's no need to change the pointer to the start of the list anymore; that was taken care of in the first `while` loop. As such, `begin_list` should really not be used anymore; it is a pointer to a pointer to a list item, so it references the variable which points to a list item. This is not desirable. Let's step through the program line-by-line and I'll show you what I mean. --- Upon entering the function `ft_list_remove_if`, we run past the first `while` loop since the first element of the list does not contain the data `"b"`: ``` while (*begin_list != 0 && (*cmp)((*begin_list)->data, data_ref) == 0) { [...] } ``` We're still on the first list item when we enter the second `while` loop, and the first item does not match. We therefore go to the `else` branch of the `if` statement on line 25: ``` if ((*cmp)((*begin_list)->data, data_ref) == 0) { [...] } else { previous = *begin_list; begin_list = &(*begin_list)->next; } ``` In the `else` branch, `previous` is set to the first item in the linked list. Fair enough. Then `begin_list` is set to *the address of the `next` member of the first list item*. 
This means that **when the first list item's `next` member changes, so does the value of `*begin_list`**. Let that point sink in. Now we go on to the next iteration of the `while` loop. This time `*begin_list` points to the second list item, which does indeed match the search criteria, so we enter the main `if` branch instead of the `else`: ``` if ((*cmp)((*begin_list)->data, data_ref) == 0) { previous->next = (*begin_list)->next; free(*begin_list); begin_list = &previous->next; } ``` OK. So let's remember here: `previous` is set to the first item in the linked list. `*begin_list` is set to the second item in the linked list. But what is `begin_list` set to? In short, the address of `previous->next`. Remember the following from the last cycle of the loop? ``` previous = *begin_list; begin_list = &(*begin_list)->next; ``` So now when `previous->next` is changed on line 28 to the third item in the list, `*begin_list` is *also* changed to the third item in the list! That means that line 29 frees the memory allocated to the *third* item in the list, while the second item is no longer referenced by any pointers! By sheer luck, the program might continue to *look* like it functions properly for the next few steps, but it doesn't; the third element has been freed, and the contents of the third element get scrambled up a little bit and the third element's `next` and `data` members are filled with invalid data, resulting in the segfault. 
**However**, the procedure worked when you copied `*begin_list` to `to_remove` first, and then freed the data pointed to by `to_remove`: ``` to_remove = *begin_list; previous->next = (*begin_list)->next; free(to_remove); begin_list = &previous->next; ``` This is because `to_remove` continues to point to the second element, even after `*begin_list` has been changed by the `previous->next = (*begin_list)->next;` statement; thus the second element is freed as it should be, and on line 31 `begin_list` is set to the address of the third element, just like it should be. --- That doesn't mean that the latter is a good solution. Really, I think `begin_list` shouldn't be used at all beyond the first `while` loop, since it doesn't have to be. My suggestion is to use another pointer named `current` with which you can step through the list: ``` #include <stdlib.h> #include <stdio.h> typedef struct s_list { struct s_list *next; void *data; } t_list; void ft_list_remove_if(t_list **begin_list, void *data_ref, int (*cmp)()) { t_list *previous; t_list *current; if (begin_list == 0) return; while (*begin_list != 0 && (*cmp)((*begin_list)->data, data_ref) == 0) { previous = *begin_list; *begin_list = (*begin_list)->next; free(previous); } current = *begin_list; while (current != 0) { if ((*cmp)(current->data, data_ref) == 0) { previous->next = current->next; free(current); current = previous->next; } else { previous = current; current = current->next; } } } int ft_strcmp(char *s1, char *s2) { int i; i = 0; while (s1[i] == s2[i] && s1[i] != '\0') ++i; return (s1[i] - s2[i]); } int main(void) { t_list *list1 = malloc(sizeof(*list1)); t_list *list2 = malloc(sizeof(*list2)); t_list *list3 = malloc(sizeof(*list3)); list1->data = "a"; list2->data = "b"; list3->data = "a"; list1->next = list2; list2->next = list3; list3->next = 0; ft_list_remove_if(&list1, "b", &ft_strcmp); return (0); } ```
I would like to thank everyone who helped me sort out this issue; I truly appreciate your effort in making the explanation as clear as possible. After a lot of thinking, a solution that I believe is much simpler than my first code came up. @David C. Rankin mentioned ["Linus on Understanding Pointers"](https://grisha.org/blog/2013/04/02/linus-on-understanding-pointers/), and I think I might have grasped what Linus was referring to: ``` void remove_node(node **address_of_ptr, int int_to_remove) { node *to_remove; while (*address_of_ptr != NULL) { if ((*address_of_ptr)->data == int_to_remove) { to_remove = *address_of_ptr; *address_of_ptr = (*address_of_ptr)->next; free(to_remove); } else { address_of_ptr = &(*address_of_ptr)->next; } } } ```
50,187,041
I need help with my python program. I'm doing a calculator. The numbers must be formed, but for some reason they do not add up. It seems that I did everything right, but the program does not work. Please help me. [Picture](https://i.stack.imgur.com/777SB.png) Code: ``` a = input('Enter number A \n'); d = input('Enter sign operations \n') b = input('Enter number B \n') c = a + b if str(d) == "+": int(c) == "a + b" print('Answer: ' + c) ```
2018/05/05
[ "https://Stackoverflow.com/questions/50187041", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9744650/" ]
Please don't post screenshots. Copy and paste text and use the {} CODE markdown. What data type is returned by input()? It's always a string. It doesn't matter what you type. Where is the variable c actually calculated in this program? Line 4. What types of data are used to compute c? Two strings. What happens when you use the "+" operation on two strings instead of two numbers? Try running your program and when it prompts you to "enter number A", type "Joe". When it prompts you to "enter number B", type "Bob". What does your program do? You need to create numerical objects from each of the strings you entered if you want to do arithmetic. I think that you tried what you thought would do that on line 7. It doesn't work though. "==" is used to test for equality, not to assign a value. The single "=" is used to bind values to variable names. You do that correctly on lines 1 through 4. Notice that the plain variable name is always by itself on the left of the "=" sign. You do all the fancy stuff on the right side of the "=". You can actually delete lines 6 and 7 and the output of the program will not change.
`a` and `b` are strings. `a + b` concatenates strings `a` and `b`. You need to convert the strings to int: `c = int(a) + int(b)` And remove the lines: ``` if str(d) == "+": int(c) == "a + b" ```
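The concatenation-versus-addition point made above can be seen directly; the values here stand in for what a user might type (a quick, illustrative sketch):

```python
# input() hands back strings, even when the user types digits:
a = "15"
b = "27"

# "+" on two strings concatenates them:
print(a + b)            # 1527

# Converting to int first gives real arithmetic:
print(int(a) + int(b))  # 42

# And "==" only compares; binding a value uses a single "=":
c = int(a) + int(b)
print(c == 42)          # True
```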
Here is the complete code, which should do what you want: ``` a = input('Enter number A \n') operation = input('Enter sign operations \n') b = input('Enter number B \n') if operation == "+": c = int(a) + int(b) print('Answer:', c) ```
Since it looks like you also want to enter an operation sign, you might also try `eval`:

```
a = input('Enter number A \n')
d = raw_input('Enter sign operations \n')
b = input('Enter number B \n')
eval_string = str(a) + d + str(b)
print ( eval(eval_string) )
```

Note that this is Python 2 code: `input` evaluates whatever you type (so numbers come back as numbers), while `raw_input` always returns a string, even if you type an integer.
9,776,332
I wrote a python program that needs to call the aircrack program to do some tasks, but I ran into trouble with privileges. Initially the aircrack program was called on the command line, and it requires "sudo" at the beginning. After that I checked the location of the executable and found that it is under `/usr/sbin/`. Right now my program is driven by apache and the default user is www-data. If I execute the program as user www-data without "sudo", it won't give any result. I googled this problem a little, and found that we can change the permissions of the aircrack program by executing `sudo chmod 4755 /usr/sbin/airodump-ng`. I tried it and the permissions changed from `-rwxr-xr-x` to `-rwsr-xr-x`. I executed the command once again without "sudo" but it doesn't make any difference. My question is: how can I change the permissions of a program so that it can be executed by any user without "sudo"? If I run the program with "sudo", then the output files are owned by "root", which adds more complexity to my program. Please help, thanks.
2012/03/19
[ "https://Stackoverflow.com/questions/9776332", "https://Stackoverflow.com", "https://Stackoverflow.com/users/636625/" ]
first, none of us can replace the chmod man page (so i'll start by quoting it):

```
A numeric mode is from one to four octal digits (0-7), derived by adding up the bits with values 4, 2, and 1. Omitted digits are assumed to be leading zeros. The first digit selects the set user ID (4) and set group ID (2) and restricted deletion or sticky (1) attributes. The second digit selects permissions for the user who owns the file: read (4), write (2), and execute (1); the third selects permissions for other users in the file's group, with the same values; and the fourth for other users not in the file's group, with the same values. ```

so, if you want the program to be executable by all users, you 'chmod 111 file'. however, iirc you need read permissions as well for most things, so 'chmod 555 file'. also, the reason i'm making the bits the same is because you said you wanted 'everyone' to be able to do so.
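The octal arithmetic quoted above can be checked against Python's `stat` constants — a quick sketch of how the mode `4755` decomposes, not anything aircrack-specific:

```python
import stat

# 4755 = setuid (4), rwx for owner (7), r-x for group (5), r-x for others (5)
mode = 0o4755
assert (mode & stat.S_ISUID) != 0          # leading 4 is the set-user-ID bit
assert (mode & stat.S_IRWXU) == 0o700      # owner: read + write + execute
assert (mode & stat.S_IRGRP) and (mode & stat.S_IXGRP)  # group: read + execute
assert not (mode & stat.S_IWGRP)           # group cannot write
```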
You have to set the suid flag on the executable:

```
sudo chmod ogu+sxr /usr/sbin/airodump-ng
```
7,377,494
I'm trying to scrape a page using python. The problem is, I keep getting Errno 54, Connection reset by peer. The error comes when I run this code:

```
urllib2.urlopen("http://www.bkstr.com/webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse")
```

This happens for all the URLs on this page - what is the issue?
2011/09/11
[ "https://Stackoverflow.com/questions/7377494", "https://Stackoverflow.com", "https://Stackoverflow.com/users/915672/" ]
``` $> telnet www.bkstr.com 80 Trying 64.37.224.85... Connected to www.bkstr.com. Escape character is '^]'. GET /webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse HTTP/1.0 Connection closed by foreign host. ``` You're not going to have any joy fetching that URL from python, or anywhere else. If it works in your browser then there must be something else going on, like cookies or authentication or some such. Or, possibly, the server's broken or they've changed their configuration. Try opening it in a browser that you've never accessed that site in before to check. Then log in and try it again. Edit: It was cookies after all: ``` import cookielib, urllib2 cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) #Need to set a cookie opener.open("http://www.bkstr.com/") #Now open the page we want data = opener.open("http://www.bkstr.com/webapp/wcs/stores/servlet/CourseMaterialsResultsView?catalogId=10001&categoryId=9604&storeId=10161&langId=-1&programId=562&termId=100020629&divisionDisplayName=Stanford&departmentDisplayName=ILAC&courseDisplayName=126&sectionDisplayName=01&demoKey=d&purpose=browse").read() ``` The output looks ok, but you'll have to check that it does what you want :)
I came across a similar error just recently. The connection was dropping out and being reset. I tried cookiejars, extended delays and different headers/useragents, but nothing worked. In the end the fix was simple. I went from urllib2 to requests. The old; ``` import urllib2 opener = urllib2.build_opener() buf = opener.open(url).read() ``` The new; ``` import requests buf = requests.get(url).text ``` After that everything worked perfectly.
50,983,646
I have a dataframe like this: ``` df = pd.DataFrame({'timestamp':pd.date_range('2018-01-01', '2018-01-02', freq='2h', closed='right'),'col1':[np.nan, np.nan, np.nan, 1,2,3,4,5,6,7,8,np.nan], 'col2':[np.nan, np.nan, 0, 1,2,3,4,5,np.nan,np.nan,np.nan,np.nan], 'col3':[np.nan, -1, 0, 1,2,3,4,5,6,7,8,9], 'col4':[-2, -1, 0, 1,2,3,4,np.nan,np.nan,np.nan,np.nan,np.nan] })[['timestamp', 'col1', 'col2', 'col3', 'col4']] ``` which looks like this: ``` timestamp col1 col2 col3 col4 0 2018-01-01 02:00:00 NaN NaN NaN -2.0 1 2018-01-01 04:00:00 NaN NaN -1.0 -1.0 2 2018-01-01 06:00:00 NaN 0.0 NaN 0.0 3 2018-01-01 08:00:00 1.0 1.0 1.0 1.0 4 2018-01-01 10:00:00 2.0 NaN 2.0 2.0 5 2018-01-01 12:00:00 3.0 3.0 NaN 3.0 6 2018-01-01 14:00:00 NaN 4.0 4.0 4.0 7 2018-01-01 16:00:00 5.0 NaN 5.0 NaN 8 2018-01-01 18:00:00 6.0 NaN 6.0 NaN 9 2018-01-01 20:00:00 7.0 NaN 7.0 NaN 10 2018-01-01 22:00:00 8.0 NaN 8.0 NaN 11 2018-01-02 00:00:00 NaN NaN 9.0 NaN ``` Now, I want to find an efficient and pythonic way of chopping off (for each column! Not counting timestamp) before the first valid index and after the last valid index. In this example I have 4 columns, but in reality I have a lot more, 600 or so. I am looking for a way of chop of all the NaN values before the first valid index and all the NaN values after the last valid index. One way would be to loop through I guess.. But is there a better way? This way has to be efficient. I tried to "unpivot" the dataframe using melt, but then this didn't help. An obvious point is that each column would have a different number of rows after the chopping. So I would like the result to be a list of data frames (one for each column) having timestamp and the column in question. 
For instance: ``` timestamp col1 3 2018-01-01 08:00:00 1.0 4 2018-01-01 10:00:00 2.0 5 2018-01-01 12:00:00 3.0 6 2018-01-01 14:00:00 NaN 7 2018-01-01 16:00:00 5.0 8 2018-01-01 18:00:00 6.0 9 2018-01-01 20:00:00 7.0 10 2018-01-01 22:00:00 8.0 ``` **My try** I tried like this: ``` final = [] columns = [c for c in df if c !='timestamp'] for col in columns: first = df.loc[:, col].first_valid_index() last = df.loc[:, col].last_valid_index() final.append(df.loc[:, ['timestamp', col]].iloc[first:last+1, :]) ```
2018/06/22
[ "https://Stackoverflow.com/questions/50983646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6435921/" ]
Since it's a response header i assume you mean this: ``` ctx.Response.Header.Set("Access-Control-Allow-Origin", "*") ```
Another option if you are not using `Context`: ``` func setResponseHeader(h http.HandlerFunc) http.HandlerFunc { return func(w http.ResponseWriter, r *http.Request) { w.Header().Set("Access-Control-Allow-Origin", "*") h.ServeHTTP(w, r) } } ``` `setResponseHeader` is essentially a decorator of the argument `HandlerFunc` `h`. When you assemble your routes, you can do something like this: ``` http.HandleFunc("/api/endpoint", setResponseHeader(myHandlerFunc)) http.ListenAndServe(":8000", nil) ```
50,983,646
I have a dataframe like this: ``` df = pd.DataFrame({'timestamp':pd.date_range('2018-01-01', '2018-01-02', freq='2h', closed='right'),'col1':[np.nan, np.nan, np.nan, 1,2,3,4,5,6,7,8,np.nan], 'col2':[np.nan, np.nan, 0, 1,2,3,4,5,np.nan,np.nan,np.nan,np.nan], 'col3':[np.nan, -1, 0, 1,2,3,4,5,6,7,8,9], 'col4':[-2, -1, 0, 1,2,3,4,np.nan,np.nan,np.nan,np.nan,np.nan] })[['timestamp', 'col1', 'col2', 'col3', 'col4']] ``` which looks like this: ``` timestamp col1 col2 col3 col4 0 2018-01-01 02:00:00 NaN NaN NaN -2.0 1 2018-01-01 04:00:00 NaN NaN -1.0 -1.0 2 2018-01-01 06:00:00 NaN 0.0 NaN 0.0 3 2018-01-01 08:00:00 1.0 1.0 1.0 1.0 4 2018-01-01 10:00:00 2.0 NaN 2.0 2.0 5 2018-01-01 12:00:00 3.0 3.0 NaN 3.0 6 2018-01-01 14:00:00 NaN 4.0 4.0 4.0 7 2018-01-01 16:00:00 5.0 NaN 5.0 NaN 8 2018-01-01 18:00:00 6.0 NaN 6.0 NaN 9 2018-01-01 20:00:00 7.0 NaN 7.0 NaN 10 2018-01-01 22:00:00 8.0 NaN 8.0 NaN 11 2018-01-02 00:00:00 NaN NaN 9.0 NaN ``` Now, I want to find an efficient and pythonic way of chopping off (for each column! Not counting timestamp) before the first valid index and after the last valid index. In this example I have 4 columns, but in reality I have a lot more, 600 or so. I am looking for a way of chop of all the NaN values before the first valid index and all the NaN values after the last valid index. One way would be to loop through I guess.. But is there a better way? This way has to be efficient. I tried to "unpivot" the dataframe using melt, but then this didn't help. An obvious point is that each column would have a different number of rows after the chopping. So I would like the result to be a list of data frames (one for each column) having timestamp and the column in question. 
For instance: ``` timestamp col1 3 2018-01-01 08:00:00 1.0 4 2018-01-01 10:00:00 2.0 5 2018-01-01 12:00:00 3.0 6 2018-01-01 14:00:00 NaN 7 2018-01-01 16:00:00 5.0 8 2018-01-01 18:00:00 6.0 9 2018-01-01 20:00:00 7.0 10 2018-01-01 22:00:00 8.0 ``` **My try** I tried like this: ``` final = [] columns = [c for c in df if c !='timestamp'] for col in columns: first = df.loc[:, col].first_valid_index() last = df.loc[:, col].last_valid_index() final.append(df.loc[:, ['timestamp', col]].iloc[first:last+1, :]) ```
2018/06/22
[ "https://Stackoverflow.com/questions/50983646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6435921/" ]
Since it's a response header i assume you mean this: ``` ctx.Response.Header.Set("Access-Control-Allow-Origin", "*") ```
To enable CORS support on fasthttp, it is better to use the [fasthttpcors](https://github.com/AdhityaRamadhanus/fasthttpcors) package:

```
import (
    ...
    cors "github.com/AdhityaRamadhanus/fasthttpcors"
    ...
)

func main() {
    ...
    withCors := cors.NewCorsHandler(cors.Options{
        AllowMaxAge: math.MaxInt32,
    })
    log.Fatal(fasthttp.ListenAndServe(":8080", withCors.CorsMiddleware(router.HandleRequest)))
}
```
41,846,085
I am working on this Python program and am stuck on an issue. The program cannot use the value of n\_zero when I put it in the conditional statement. Here's the program:

```
import numpy as np
n_zero=int(input('Insert the amount of 0: '))
n_one =int(input('Insert the amount of 1: '))
n_two =int(input('Insert the amount of 2: '))
n_three = int(input('Insert the amount of 3: '))
data = [0]*n_zero + [1]*n_one + [2]*n_two + [3]*n_three
print len(data)
if len(data)==(2(n_zero)-1):
    np.random.shuffle(data)
    datastring = ''.join(map(str, data))
    print ("Data string is : %s " % datastring )
else:
    print(error)
```

this is the error:

```
if len(data)==(2(n_zero)-1):
TypeError: 'int' object is not callable
```

Thank you
2017/01/25
[ "https://Stackoverflow.com/questions/41846085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7456346/" ]
In Python you don't cast the variable on the left-hand side, because you never declare its type:

```
n_zero=int(input('Insert the amount of 0: '))
```

**Regarding your edit:** What exactly are you trying to achieve? If you meant multiplication, use the operator `*`:

```
if len(data)==(2*(n_zero)-1): ...
```
```
if len(data) == (2(n_zero) - 1):
```

should become:

```
if len(data) == (2 * (n_zero) - 1):
```

Note the `*` in the second example. You must explicitly provide the operator; Python will not assume that you want to multiply those 2 numbers if you don't tell it.
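The difference can be demonstrated directly — a quick sketch of why `2(n_zero)` fails while `2 * n_zero` works:

```python
n_zero = 5

# Writing 2(n_zero) tries to *call* the integer 2 as if it were a function:
try:
    2(n_zero)
except TypeError as exc:
    assert "not callable" in str(exc)

# With the explicit operator it is ordinary arithmetic:
assert 2 * n_zero - 1 == 9
```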
62,191,477
Can somebody explain why Docker won't run the Django server? This is my project structure:

```
app
bankproject
env
docker-compose.yml
Dockerfile
manage.py
requirements.txt
```

Here is my Dockerfile:

```
# pull official base image
FROM python:3.8.0-alpine

# set work directory
WORKDIR /app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt requirements.txt
RUN pip install -r requirements.txt

# copy project
COPY . /app
```

and my docker-compose.yml:

```
version: '3.7'

services:
  web:
    build: ./
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/app
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
```

The folder env contains the file .env.dev, which consists of:

```
DEBUG=1
SECRET_KEY=foo
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
```

The error I got: web\_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory app\_web\_1 exited with code 2
2020/06/04
[ "https://Stackoverflow.com/questions/62191477", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13678793/" ]
You haven't declared a `ref` and passed it to `HeaderComponent`, see [`useRef` hook](https://reactjs.org/docs/hooks-reference.html#useref). ``` const App = () => { const calendarRef = useRef(); React.useEffect(() => { console.log(calendarRef.current); }, []); return ( <div className="App"> <HeaderComponent calendarRef={calendarRef} /> <Calendar ref={calendarRef} height="100vh" /> </div> ); }; // HeaderComponent function HeaderComponent({ calendarRef }) { const handleNext = () => { const calendarInstance = calendarRef.current.getInstance(); calendarInstance.next(); }; return ... } ``` [![Edit Q-62191456-RefExample](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/purple-hooks-kwgd3?fontsize=14&hidenavigation=1&theme=dark)
You should define a ref in the App component and pass it on to the HeaderComponent ``` const App = () => { let calendarRef = useRef(); return ( <div className="App"> <HeaderComponent calendarRef={calendarRef} /> <Calendar ref={calendarRef} height="100vh" /> </div> ); }; export default App; ``` --- ``` function HeaderComponent({ calendarRef }) { let handleNext = () => { let calendarInstance = calendarRef.current.getInstance(); calendarInstance.next(); }; return ( <div> <button>Prev</button> <button onClick={handleNext}>Next</button> </div> ); } export default HeaderComponent; ``` [Codesandbox](https://codesandbox.io/s/affectionate-mcclintock-v9lgw?file=/src/App.tsx)
32,080,635
I am looking into Python's **Least Recently Used (LRU) cache** implementation [here](http://code.activestate.com/recipes/578078-py26-and-py30-backport-of-python-33s-lru-cache/). Can someone please explain what the `_make_key` function is doing? ``` def _make_key(args, kwds, typed, kwd_mark = (object(),), fasttypes = {int, str, frozenset, type(None)}, sorted=sorted, tuple=tuple, type=type, len=len): 'Make a cache key from optionally typed positional and keyword arguments' key = args if kwds: sorted_items = sorted(kwds.items()) key += kwd_mark for item in sorted_items: key += item if typed: key += tuple(type(v) for v in args) if kwds: key += tuple(type(v) for k, v in sorted_items) elif len(key) == 1 and type(key[0]) in fasttypes: return key[0] return _HashedSeq(key) ```
2015/08/18
[ "https://Stackoverflow.com/questions/32080635", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4677617/" ]
The function is converting a set of function arguments to a sequence that (a) can be hashed and (b) returns the same hash value for the same arguments, if a function is invoked multiple times with the same arguments. Obviously, `args` and `kwargs` are intended to be the positional and keyword arguments of a generic function definition: ``` def foo(*args, **kwargs): pass ``` `args` is a tuple, which is hashable as long as all its elements are hashable; `kwargs` however is a `dict`, which is not a hashable type. So the function uses `.items()` to convert it to a list of key-value pairs; that list is then sorted because the hash value of a tuple is sensitive to the order of its elements, and a `dict` can list its items in arbitrary order. The purpose of `kwd_mark` is to ensure that keyword arguments cannot be confused with positional arguments that might happen to consist of the same key-value pair as an item of `kwargs`. As a default argument, its value is assigned when the function is defined; this guarantees that its sentinel object will never appear as a function argument.
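A stripped-down sketch of the same idea (ignoring the `typed` and fast-path branches of the real `_make_key`) shows why the sorting and the sentinel matter:

```python
kwd_mark = (object(),)  # sentinel created once, like the real default argument

def make_key(args, kwargs):
    key = tuple(args)
    if kwargs:
        key += kwd_mark
        for item in sorted(kwargs.items()):
            key += item
    return key

# Keyword order no longer matters once the items are sorted:
assert make_key((10, 20), {'x': 30, 'y': 40}) == make_key((10, 20), {'y': 40, 'x': 30})

# The sentinel keeps keyword ('x', 30) from colliding with positionals 'x', 30:
assert make_key(('x', 30), {}) != make_key((), {'x': 30})
```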
Summary ------- The *\_make\_key()* function flattens the arguments into a compact tuple that can be used to determine whether the arguments for two calls are the same. Generated keys -------------- The call `f(10, 20, x=30, y=40)` and `f(10, 20, y=40, x=30)` both have the same key: ``` (10, 20, <object object at 0x7fae2bb25040>, 'x', 30, 'y', 40) ``` The *10* and *20* positional arguments are record in the order specified. The *kwd\_mark* separates the keyword arguments from the positional arguments. The keyword arguments are sorted so that `x=30, y=40` is recognized as being the same as `y=40, x=30`. If *typed* is true, then the key also records the argument types: ``` (10, 20, # positional args <object object at 0x7fae2bb25040>, # kwd_mark 'x', 30, 'y', 40, # keyword args <class 'int'>, <class 'int'>, # types of the args <class 'int'>, <class 'int'>) ```
51,361,441
I want to rewrite a `python` code for calculating the accumulated result of `max`. I referred to the numpy [documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html) Input: `[7200,7050,7300,7500,7440,7200,7300,7280,7400]` Output: `[7200, 7200, 7300, 7500, 7500, 7500, 7500, 7500, 7500]` I understand that I can do it in a loop, but I'm looking for a compact, one-line solution if possible
2018/07/16
[ "https://Stackoverflow.com/questions/51361441", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3911211/" ]
Something like this is inline, more or less: ```js var max = 0; var input = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]; var output = input.map(function(e) { return max > e ? max : max = e; }); console.log(output); ``` Inspired by [Pierre's answer](https://stackoverflow.com/a/51361720/2959522), here's another one line solution with map ```js var input = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]; var output = input.map(function(e, i) { return Math.max(...input.slice(0, i + 1)) }); console.log(output); ```
One line solution with `Array.from()` : ```js var arr = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]; var output = Array.from({length:arr.length}, (el,i) => Math.max(...arr.slice(0,i+1))) console.log(output); ```
51,361,441
I want to rewrite a `python` code for calculating the accumulated result of `max`. I referred to the numpy [documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html) Input: `[7200,7050,7300,7500,7440,7200,7300,7280,7400]` Output: `[7200, 7200, 7300, 7500, 7500, 7500, 7500, 7500, 7500]` I understand that I can do it in a loop, but I'm looking for a compact, one-line solution if possible
2018/07/16
[ "https://Stackoverflow.com/questions/51361441", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3911211/" ]
Something like this is inline, more or less: ```js var max = 0; var input = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]; var output = input.map(function(e) { return max > e ? max : max = e; }); console.log(output); ``` Inspired by [Pierre's answer](https://stackoverflow.com/a/51361720/2959522), here's another one line solution with map ```js var input = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]; var output = input.map(function(e, i) { return Math.max(...input.slice(0, i + 1)) }); console.log(output); ```
Here is a one-liner alternative:

```
a = [7200,7050,7300,7500,7440,7200,7300,7280,7400]
b = [max(a[:i+1]) for i in range(len(a))]
```
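For completeness, Python's standard library can express a running maximum directly with `itertools.accumulate`, which does one pass instead of re-scanning a growing slice — the stdlib counterpart of the `np.maximum.accumulate` the question links to:

```python
from itertools import accumulate

a = [7200, 7050, 7300, 7500, 7440, 7200, 7300, 7280, 7400]
b = list(accumulate(a, max))  # running maximum in a single pass
assert b == [7200, 7200, 7300, 7500, 7500, 7500, 7500, 7500, 7500]
```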
53,057,646
Here is my code for uploading a file to an S3 bucket using boto3 in python:

```
import boto3

def upload_to_s3(backupFile, s3Bucket, bucket_directory, file_format):
    s3 = boto3.resource('s3')
    s3.meta.client.upload_file(backupFile, s3Bucket, bucket_directory.format(file_format))

upload_to_s3('/tmp/backup.py', 'bsfbackup', 'pfsense/{}', 'hello.py')
```

My question is: I want to print "Upload success" once the upload succeeds, and print "Upload is failed" plus the error stack if the upload fails. Any help? Thanks.
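The reporting asked for here boils down to a try/except around the upload call. A minimal sketch with a stand-in upload function — in real code, `s3.meta.client.upload_file(...)` would go where `fake_upload` is, and you would typically catch boto3's more specific `S3UploadFailedError` rather than a bare `Exception`:

```python
import traceback

def upload_with_report(upload_fn, *args):
    try:
        upload_fn(*args)            # the real boto3 upload_file call goes here
        print("Upload success")
        return True
    except Exception:
        print("Upload is failed")
        traceback.print_exc()       # the error stack
        return False

# Hypothetical stand-in for s3.meta.client.upload_file:
def fake_upload(path, bucket, key):
    if not path:
        raise IOError("no such file")

assert upload_with_report(fake_upload, "/tmp/backup.py", "bsfbackup", "pfsense/hello.py") is True
assert upload_with_report(fake_upload, "", "bsfbackup", "pfsense/hello.py") is False
```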
2018/10/30
[ "https://Stackoverflow.com/questions/53057646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3440631/" ]
On a basic level, a CAN transceiver is just a level converter between the controller's logic pins and the differential bus lines. The CAN protocol works over a variety of voltage ranges. The MCP2551 is a CAN transceiver suitable for 12V and 24V systems, with added features to help with the physical layer such as `externally-controlled slope` for reduced RFI emissions, `detection of ground fault`, `voltage brown-out protection`, etc. It has no dependency on the CAN logic; it only handles the bare physical layer. To answer your question: As RishabhHardas recommended, use the HAL library provided by STM32 through CubeMX. **Using CubeMX** This is software provided by ST-Micro to help you set up the boilerplate code for any peripheral application. You can also check out the example projects provided by STM in the Cube. This will give you a kick-start in understanding CAN on STM32: > > STM32Cube\_FW\_F4\_V1.9.0\Projects\STM324xG\_EVAL\Examples\CAN\CAN\_Networking > > > After setting it up, you'll be able to call `HAL_CAN_Transmit()` and `HAL_CAN_Receive()` by including the header. Check out [this](https://community.st.com/s/question/0D50X00009XkWCJSA3/can-bus-cubemx-settings-and-transmit-f091cc) discussion on STM32-Community.
For software, look for the CANtact open source project on Github. It is an implementation for the STM32F042. I had to adapt the project to build it under Atollic but it was not too hard and it works. It provides a SLCAN type of interface over a virtual COM port over USB, which is very fast and convenient. There is also CAN code for the STM32F103 (Bluepill) (Google "lawicel-slcan") but that chip is not as convenient because you cannot use both CAN and USB at the same time (they share RAM buffers) so if you want CAN, you will not have USB, and routing the CAN messages over a UART will severely limit the bandwidth. That may be OK if your entire application runs on the STM32.
59,433,681
I have the following data in terms of dataframe ``` data = pd.DataFrame({'colA': ['a', 'c', 'a', 'e', 'c', 'c'], 'colB': ['b', 'd', 'b', 'f', 'd', 'd'], 'colC':['SD100', 'SD200', 'SD300', 'SD400', 'SD500', 'SD600']}) ``` I want the output as attached [enter image description here][2] I want to achieve this using pandas dataframe in python Can somebody help me?
2019/12/21
[ "https://Stackoverflow.com/questions/59433681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12054665/" ]
You can try: ``` Column A Column B Column C 0 a b SD100 1 c d SD200 2 a b SD300 3 e f SD400 4 c d SD500 5 c d SD600 ``` --- ``` >>> df.groupby(['Column A', 'Column B']).agg(list) Column C Column A Column B a b [SD100, SD300] c d [SD200, SD500, SD600] e f [SD400] ```
I don't know why you want to make a MultiIndex, but you can simply use `sort_values` or `groupby`.

```py
import pandas as pd

df = pd.DataFrame({"ColumnA":['a','c','a','e','c','c'],
                   "ColumnB":['b','d','b','f','d','d'],
                   "ColumnC":['SD100','SD200','SD300','SD400','SD500','SD600']})
print(df)
```

```
  ColumnA ColumnB ColumnC
0       a       b   SD100
1       c       d   SD200
2       a       b   SD300
3       e       f   SD400
4       c       d   SD500
5       c       d   SD600
```

```py
df = df.sort_values(by=['ColumnA','ColumnB'])
df.set_index(['ColumnA', 'ColumnB','ColumnC'], inplace=True)
df
```
59,433,681
I have the following data in terms of dataframe ``` data = pd.DataFrame({'colA': ['a', 'c', 'a', 'e', 'c', 'c'], 'colB': ['b', 'd', 'b', 'f', 'd', 'd'], 'colC':['SD100', 'SD200', 'SD300', 'SD400', 'SD500', 'SD600']}) ``` I want the output as attached [enter image description here][2] I want to achieve this using pandas dataframe in python Can somebody help me?
2019/12/21
[ "https://Stackoverflow.com/questions/59433681", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12054665/" ]
You can try: ``` Column A Column B Column C 0 a b SD100 1 c d SD200 2 a b SD300 3 e f SD400 4 c d SD500 5 c d SD600 ``` --- ``` >>> df.groupby(['Column A', 'Column B']).agg(list) Column C Column A Column B a b [SD100, SD300] c d [SD200, SD500, SD600] e f [SD400] ```
This will turn your data into what you wished: `data = data.groupby(['colA','colB']).agg(list)`