Dataset columns:
qid: int64 (values 46k to 74.7M)
question: string (length 54 to 37.8k)
date: string (length 10)
metadata: list (length 3)
response_j: string (length 17 to 26k)
response_k: string (length 26 to 26k)
10,354,000
I am using Python 2.7 and OpenCV 2.3.1 (Win 7). I am trying to open a video file: ``` stream = cv.VideoCapture("test1.avi") if stream.isOpened() == False: print "Cannot open input video!" exit() ``` But I get this warning: ``` warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl_v2.hpp:394) ``` If I use the video camera (`stream = cv.VideoCapture(0)`), this code works. Any ideas as to what I'm doing wrong? Thanks a lot to all!
2012/04/27
[ "https://Stackoverflow.com/questions/10354000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1361499/" ]
Try using `cv.CaptureFromFile()` instead. Copy this code if you must: [Watch Video in Python with OpenCV](http://web.michaelchughes.com/how-to/watch-video-in-python-with-opencv).
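A minimal sketch of that legacy interface, assuming OpenCV 2.x's old `cv` bindings (`cv.CaptureFromFile` / `cv.QueryFrame`) and the same `test1.avi` file as in the question; depending on the 2.x build the module is imported as `cv` or `cv2.cv`:

```python
import cv2.cv as cv  # on some 2.x builds this is simply "import cv"

capture = cv.CaptureFromFile("test1.avi")   # open the video file
frame = cv.QueryFrame(capture)              # first frame, or None on failure
while frame is not None:
    cv.ShowImage("video", frame)
    if cv.WaitKey(33) == 27:                # ~30 fps; press Esc to stop
        break
    frame = cv.QueryFrame(capture)
```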
You can use the new interface of OpenCV (cv2), the object-oriented one, which is bound from C++. I find it easier and more readable. Note: if you open a picture with this, the FPS doesn't mean anything, so the picture stays still. ``` import cv2 import sys try: vidFile = cv2.VideoCapture(sys.argv[1]) except: print "problem opening input stream" sys.exit(1) if not vidFile.isOpened(): print "capture stream not open" sys.exit(1) nFrames = int(vidFile.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT)) # one good way of namespacing legacy openCV: cv2.cv.* print "frame number: %s" %nFrames fps = vidFile.get(cv2.cv.CV_CAP_PROP_FPS) print "FPS value: %s" %fps ret, frame = vidFile.read() # read first frame, and the return code of the function. while ret: # note that we don't have to use frame number here, we could read from a live written file. print "yes" cv2.imshow("frameWindow", frame) cv2.waitKey(int(1/fps*1000)) # time to wait between frames, in mSec ret, frame = vidFile.read() # read next frame, get next return code ```
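A possible cleanup once the read loop above finishes (an assumption on my part, not part of the original answer): releasing the capture and closing the window frees the underlying resources.

```python
# after the while loop ends
vidFile.release()           # release the capture device/file
cv2.destroyAllWindows()     # close the "frameWindow" display window
```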
10,354,000
I am using Python 2.7 and OpenCV 2.3.1 (Win 7). I am trying to open a video file: ``` stream = cv.VideoCapture("test1.avi") if stream.isOpened() == False: print "Cannot open input video!" exit() ``` But I get this warning: ``` warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl_v2.hpp:394) ``` If I use the video camera (`stream = cv.VideoCapture(0)`), this code works. Any ideas as to what I'm doing wrong? Thanks a lot to all!
2012/04/27
[ "https://Stackoverflow.com/questions/10354000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1361499/" ]
Try using `cv.CaptureFromFile()` instead. Copy this code if you must: [Watch Video in Python with OpenCV](http://web.michaelchughes.com/how-to/watch-video-in-python-with-opencv).
As suggested in [this answer](https://stackoverflow.com/a/11703998/623999), try copying all the `.dll` files from your OpenCV installation into `C:\Python27`.
10,354,000
I am using Python 2.7 and OpenCV 2.3.1 (Win 7). I am trying to open a video file: ``` stream = cv.VideoCapture("test1.avi") if stream.isOpened() == False: print "Cannot open input video!" exit() ``` But I get this warning: ``` warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl_v2.hpp:394) ``` If I use the video camera (`stream = cv.VideoCapture(0)`), this code works. Any ideas as to what I'm doing wrong? Thanks a lot to all!
2012/04/27
[ "https://Stackoverflow.com/questions/10354000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1361499/" ]
You can use the new interface of OpenCV (cv2), the object-oriented one, which is bound from C++. I find it easier and more readable. Note: if you open a picture with this, the FPS doesn't mean anything, so the picture stays still. ``` import cv2 import sys try: vidFile = cv2.VideoCapture(sys.argv[1]) except: print "problem opening input stream" sys.exit(1) if not vidFile.isOpened(): print "capture stream not open" sys.exit(1) nFrames = int(vidFile.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT)) # one good way of namespacing legacy openCV: cv2.cv.* print "frame number: %s" %nFrames fps = vidFile.get(cv2.cv.CV_CAP_PROP_FPS) print "FPS value: %s" %fps ret, frame = vidFile.read() # read first frame, and the return code of the function. while ret: # note that we don't have to use frame number here, we could read from a live written file. print "yes" cv2.imshow("frameWindow", frame) cv2.waitKey(int(1/fps*1000)) # time to wait between frames, in mSec ret, frame = vidFile.read() # read next frame, get next return code ```
As suggested in [this answer](https://stackoverflow.com/a/11703998/623999), try copying all the `.dll` files from your OpenCV installation into `C:\Python27`.
43,044,060
There is this similar [question](https://stackoverflow.com/questions/22214086/python-a-program-to-find-the-length-of-the-longest-run-in-a-given-list), but it is not quite what I am asking. Let's say I have a list of ones and zeroes: ``` # i.e. [1, 0, 0, 0, 1, 1, 1, 1, 0, 1] sample = np.random.randint(0, 2, (10,)).tolist() ``` I am trying to find the index of subsequences of the same value, sorted by their length. So here, we would have the following sublists: ``` [1, 1, 1, 1] [0, 0, 0] [1] [0] [1] ``` So their indices would be `[4, 1, 0, 8, 9]`. I can get the sorted subsequences by doing this: ``` sorted([list(l) for n, l in itertools.groupby(sample)], key=lambda l: -len(l)) ``` However, if I get repeated subsequences I won't be able to find the indices right away (I would have to use another loop). I feel like there is a more straightforward and Pythonic way of doing what I'm after, just like the answer to the previous question suggests; that is what I'm looking for.
2017/03/27
[ "https://Stackoverflow.com/questions/43044060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3120489/" ]
You can first create tuples of indices and values with `enumerate(..)`. Next you `groupby`, but on the second element of the tuple, and finally you map each group back to the index of its first element. Like: ``` **map(lambda x:x[0][0],** # obtain the index of the first element sorted([list(l) for _,l in itertools.groupby(**enumerate(**sample**)**, # create tuples with their indices **key=lambda x:x[1]**)], # group on value, not on index key=lambda l: -len(l))) ``` When running (the compressed command) in the console, it produces: ``` >>> map(lambda x:x[0][0],sorted([list(l) for _,l in itertools.groupby(enumerate(sample),key=lambda x:x[1])],key=lambda l: -len(l))) [4, 1, 0, 8, 9] ``` > > **N.B. 1**: instead of using `lambda l: -len(l)` as `key` when you sort, you can use `reverse=True` (and `key = len`), which is more > declarative, like: > > > > ``` > map(lambda x:x[0][0], > sorted([list(l) for _,l in itertools.groupby(enumerate(sample), > key=lambda x:x[1])], > **key=len, reverse=True**)) > ``` > > **N.B. 2**: In [python-3.x](/questions/tagged/python-3.x "show questions tagged 'python-3.x'") `map` will produce an **iterator** and not a list. You can *materialize* the result by calling `list(..)` on > the result. > > >
You can use `groupby` and the `sorted` function with a generator function to do this efficiently. ``` from itertools import groupby from operator import itemgetter data = [1, 0, 0, 0, 1, 1, 1, 1, 0, 1] def gen(items): for _, elements in groupby(enumerate(items), key=itemgetter(1)): indexes, values = zip(*elements) yield indexes[0], values result = sorted(list(gen(data)), key=lambda x: len(x[1]), reverse=True) ``` Printing `result` yields: ``` [(4, (1, 1, 1, 1)), (1, (0, 0, 0)), (0, (1,)), (8, (0,)), (9, (1,))] ```
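If only the starting indices are wanted (matching the `[4, 1, 0, 8, 9]` shown in the question), they can be projected out of `result`; this assumes the `(index, values)` tuple layout used above.

```python
indices = [start for start, _ in result]
print(indices)  # -> [4, 1, 0, 8, 9]
```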
60,182,910
Several PHP websites are running on my droplet; recently I tried to deploy a Django website that I built, but it doesn't work properly. I will explain the steps that I took. 1. Pointed a domain name to my droplet. 2. Added the domain name using Plesk's Add Domain option. 3. Uploaded the Django files to httpdocs with the Plesk file manager. 4. Connected to the server through SSH and typed `python manage.py runserver 0:8000`. 5. My Django website is now running successfully. Here is where the real issue occurs: we need to type the exact port number to view the website every time, e.g. **xyz.com:8000**. Also, the **Django webserver** goes down after some time. I am a newbie to Django; all my experience is in deploying PHP websites. If my procedure is wrong, please guide me to the correct procedure. Thanks in advance.
2020/02/12
[ "https://Stackoverflow.com/questions/60182910", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9892045/" ]
[The Django runserver is not a production server](https://vsupalov.com/django-runserver-in-production/); it should be used only for development. That's why you need to explicitly type in the port, and it sometimes goes down because of code reloads or other triggers. Check [Gunicorn](https://gunicorn.org/), for example, as a production server for Django applications. (There are other options as well.)
I suggest following this [guide](https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04). It covers the initial setup of Django with Gunicorn and Nginx, which is essential for deployment, and you won't have to add the port to access the site. It doesn't cover how to add the domain, but it seems you already know how to do that.
3,631,510
How do I do a zoom in/out with wxPython? What are the very basics for this purpose? I googled this but could not find much. Thanks!
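For what it's worth, a minimal sketch of one common wxPython approach is to redraw through a device context scaled with `SetUserScale`; the class name and the circle drawn here are only placeholders, not code from any answer below.

```python
import wx

class ZoomPanel(wx.Panel):
    def __init__(self, parent):
        super(ZoomPanel, self).__init__(parent)
        self.zoom = 1.0
        self.Bind(wx.EVT_PAINT, self.on_paint)
        self.Bind(wx.EVT_MOUSEWHEEL, self.on_wheel)

    def on_wheel(self, event):
        # wheel up zooms in, wheel down zooms out
        self.zoom *= 1.1 if event.GetWheelRotation() > 0 else 1 / 1.1
        self.Refresh()

    def on_paint(self, event):
        dc = wx.PaintDC(self)
        dc.SetUserScale(self.zoom, self.zoom)
        dc.DrawCircle(100, 100, 50)  # everything drawn after SetUserScale is scaled

if __name__ == "__main__":
    app = wx.App(False)
    frame = wx.Frame(None, title="zoom demo")
    ZoomPanel(frame)
    frame.Show()
    app.MainLoop()
```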
2010/09/02
[ "https://Stackoverflow.com/questions/3631510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/427183/" ]
JavaScript is single threaded. So this wouldn't apply to JavaScript. However, it is possible to spawn multiple threads through a very **limited** [Worker](http://www.w3.org/TR/workers/#dedicated-workers-and-the-worker-interface) interface introduced in HTML5, which is already available on some browsers. From an [MDC article](https://developer.mozilla.org/en/Using_web_workers), > > The Worker interface **spawns real OS-level threads**, and concurrency can cause interesting effects in your code if you aren't careful. However, in the case of web workers, the carefully controlled communication points with other threads means that it's actually very hard to cause concurrency problems. There's no access to non-thread safe components or the DOM and you have to pass specific data in and out of a thread through serialized objects. So you have to work really hard to cause problems in your code. > > > What do you need this for?
For most things in JavaScript there's one thread, so there's no method for this, since it'd invariably be "1" if you could access such information. There are more threads in the background for events and queuing (handled by the browser), but as far as your code's concerned, there's a main thread. Java != JavaScript; they only share 4 letters :)
3,631,510
How do I do a zoom in/out with wxPython? What are the very basics for this purpose? I googled this but could not find much. Thanks!
2010/09/02
[ "https://Stackoverflow.com/questions/3631510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/427183/" ]
Aside from the name, JavaScript is totally unrelated to Java. JavaScript does not have threads that you can access.
For most things in JavaScript there's one thread, so there's no method for this, since it'd invariably be "1" if you could access such information. There are more threads in the background for events and queuing (handled by the browser), but as far as your code's concerned, there's a main thread. Java != JavaScript; they only share 4 letters :)
3,631,510
How do I do a zoom in/out with wxPython? What are the very basics for this purpose? I googled this but could not find much. Thanks!
2010/09/02
[ "https://Stackoverflow.com/questions/3631510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/427183/" ]
JavaScript is single threaded. So this wouldn't apply to JavaScript. However, it is possible to spawn multiple threads through a very **limited** [Worker](http://www.w3.org/TR/workers/#dedicated-workers-and-the-worker-interface) interface introduced in HTML5, which is already available on some browsers. From an [MDC article](https://developer.mozilla.org/en/Using_web_workers), > > The Worker interface **spawns real OS-level threads**, and concurrency can cause interesting effects in your code if you aren't careful. However, in the case of web workers, the carefully controlled communication points with other threads means that it's actually very hard to cause concurrency problems. There's no access to non-thread safe components or the DOM and you have to pass specific data in and out of a thread through serialized objects. So you have to work really hard to cause problems in your code. > > > What do you need this for?
Aside from the name, JavaScript is totally unrelated to Java. JavaScript does not have threads that you can access.
3,631,510
How do I do a zoom in/out with wxPython? What are the very basics for this purpose? I googled this but could not find much. Thanks!
2010/09/02
[ "https://Stackoverflow.com/questions/3631510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/427183/" ]
JavaScript is single threaded. So this wouldn't apply to JavaScript. However, it is possible to spawn multiple threads through a very **limited** [Worker](http://www.w3.org/TR/workers/#dedicated-workers-and-the-worker-interface) interface introduced in HTML5, which is already available on some browsers. From an [MDC article](https://developer.mozilla.org/en/Using_web_workers), > > The Worker interface **spawns real OS-level threads**, and concurrency can cause interesting effects in your code if you aren't careful. However, in the case of web workers, the carefully controlled communication points with other threads means that it's actually very hard to cause concurrency problems. There's no access to non-thread safe components or the DOM and you have to pass specific data in and out of a thread through serialized objects. So you have to work really hard to cause problems in your code. > > > What do you need this for?
In JavaScript the scripts run in a browser thread, and your code has no access to that info; actually, your code has no idea whatsoever how it's being run. So no, there's no such thing in JavaScript.
3,631,510
How do I do a zoom in/out with wxPython? What are the very basics for this purpose? I googled this but could not find much. Thanks!
2010/09/02
[ "https://Stackoverflow.com/questions/3631510", "https://Stackoverflow.com", "https://Stackoverflow.com/users/427183/" ]
Aside from the name, JavaScript is totally unrelated to Java. JavaScript does not have threads that you can access.
In JavaScript the scripts run in a browser thread, and your code has no access to that info; actually, your code has no idea whatsoever how it's being run. So no, there's no such thing in JavaScript.
3,664,124
I posted this basic question before, but didn't get an answer I could work with. I've been writing applications on my Mac, and have been physically making them into .app bundles (i.e., making the directories and plist files by hand). But when I open a file in the application by right clicking on the file in finder and specifying my app, how do I then reference that file? I mostly use python, but I'm looking for a way that is fairly universal. My first guess was as an argument, as were the answers to my previous post, but that is not the case. Py: ``` >>> print(sys.argv[1:]) '-psn_0_#######' ``` Where is the file reference? Thanks in advance,
2010/09/08
[ "https://Stackoverflow.com/questions/3664124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/428582/" ]
The file is passed by the Apple Event, see [this Apple document](http://developer.apple.com/mac/library/documentation/cocoa/conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html#//apple_ref/doc/uid/20001239-BBCBCIJE). You need to receive that from inside your Python script. If it's a PyObjC script, there should be a standard way to translate what's explained in that Apple document in Objective-C to Python. If your script is not a GUI app, but if you just want to pass a file to a Python script by clicking it, the easiest way would be to use Automator. There's an action called "Run Shell Script", to which you can specify the interpreter and the code. You can choose whether you receive the file names via `stdin` or the arguments. Automator packages the script into the app for you.
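For the PyObjC route, a rough, untested sketch would be to implement the `application:openFile:` delegate method, which Cocoa calls with the path carried by the Apple Event; the delegate class name and the output file are placeholders of mine.

```python
from Cocoa import NSApplication, NSObject
from PyObjCTools import AppHelper

class AppDelegate(NSObject):
    def application_openFile_(self, app, filename):
        # Cocoa hands us the path of the file chosen in Finder
        with open("/tmp/opened.txt", "a") as log:
            log.write(str(filename) + "\n")
        return True

if __name__ == "__main__":
    app = NSApplication.sharedApplication()
    delegate = AppDelegate.alloc().init()
    app.setDelegate_(delegate)
    AppHelper.runEventLoop()
```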
Are we referring to the file where per-user bindings of file types/extensions are set to point to certain applications? ``` ~/Library/Preferences/com.apple.LaunchServices.plist ``` The framework is [launchservices](http://developer.apple.com/library/mac/#documentation/Carbon/Conceptual/LaunchServicesConcepts/LSCIntro/LSCIntro.html), which received a good amount of scrutiny due to 'murkiness' early in 10.6, and (like all property list files) it can be altered via the bridges to Objective-C made for Python and Ruby. [Here's](http://wranglingmacs.blogspot.com/2010/05/using-plists-from-python.html) a link with Python code examples for how to associate a given file type with an app.
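To merely inspect that per-user plist from Python without the Objective-C bridge, a small sketch with the standard library could look like the following; `LSHandlers` is the key I would expect to hold the bindings, but treat that as an assumption, and note that `plistlib.load` is the Python 3 spelling (Python 2 used `plistlib.readPlist`).

```python
import os
import plistlib

path = os.path.expanduser(
    "~/Library/Preferences/com.apple.LaunchServices.plist")
with open(path, "rb") as f:      # the file is typically a binary plist
    prefs = plistlib.load(f)

# assumed layout: a list of handler dicts keyed by content type / extension
for handler in prefs.get("LSHandlers", []):
    print(handler)
```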
3,664,124
I posted this basic question before, but didn't get an answer I could work with. I've been writing applications on my Mac, and have been physically making them into .app bundles (i.e., making the directories and plist files by hand). But when I open a file in the application by right clicking on the file in finder and specifying my app, how do I then reference that file? I mostly use python, but I'm looking for a way that is fairly universal. My first guess was as an argument, as were the answers to my previous post, but that is not the case. Py: ``` >>> print(sys.argv[1:]) '-psn_0_#######' ``` Where is the file reference? Thanks in advance,
2010/09/08
[ "https://Stackoverflow.com/questions/3664124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/428582/" ]
The file is passed by the Apple Event, see [this Apple document](http://developer.apple.com/mac/library/documentation/cocoa/conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html#//apple_ref/doc/uid/20001239-BBCBCIJE). You need to receive that from inside your Python script. If it's a PyObjC script, there should be a standard way to translate what's explained in that Apple document in Objective-C to Python. If your script is not a GUI app, but if you just want to pass a file to a Python script by clicking it, the easiest way would be to use Automator. There's an action called "Run Shell Script", to which you can specify the interpreter and the code. You can choose whether you receive the file names via `stdin` or the arguments. Automator packages the script into the app for you.
This is not an answer but it wouldn't fit in the comments. To respond to @Sacrilicious and to give everyone else insight on this: @Sacrilicious You're talking about something different. [Download this sample application](http://www.filedropper.com/myscript1), it's a Python script wrapped as an "App". Look inside and find a 4-line Python script: `myscript.app/Contents/MacOS/myscript` - which will print the arguments using ``` file = open("/tmp/test.txt", "w") file.writelines(sys.argv[1:]) ``` Stick it in your Applications folder. Then right click some file and choose "Open With" and select this `myscript.app`. Now take a look at `/tmp/test.txt` and you'll see that something like `-psn_0_#######` is there and not the name of the file you had selected "Open With". This is because the file is passed using [Apple Events](http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html) and not as a filename argument. **So this question is asking how you can access the filename of the thing that was passed to the Python script wrapped in an OS X `.app` application wrapper**, and if someone can let me know, they'll get the bounty :)
3,664,124
I posted this basic question before, but didn't get an answer I could work with. I've been writing applications on my Mac, and have been physically making them into .app bundles (i.e., making the directories and plist files by hand). But when I open a file in the application by right clicking on the file in finder and specifying my app, how do I then reference that file? I mostly use python, but I'm looking for a way that is fairly universal. My first guess was as an argument, as were the answers to my previous post, but that is not the case. Py: ``` >>> print(sys.argv[1:]) '-psn_0_#######' ``` Where is the file reference? Thanks in advance,
2010/09/08
[ "https://Stackoverflow.com/questions/3664124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/428582/" ]
The file is passed by the Apple Event, see [this Apple document](http://developer.apple.com/mac/library/documentation/cocoa/conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html#//apple_ref/doc/uid/20001239-BBCBCIJE). You need to receive that from inside your Python script. If it's a PyObjC script, there should be a standard way to translate what's explained in that Apple document in Objective-C to Python. If your script is not a GUI app, but if you just want to pass a file to a Python script by clicking it, the easiest way would be to use Automator. There's an action called "Run Shell Script", to which you can specify the interpreter and the code. You can choose whether you receive the file names via `stdin` or the arguments. Automator packages the script into the app for you.
I've never heard of it being done without a Cocoa / Carbon wrapper.
3,664,124
I posted this basic question before, but didn't get an answer I could work with. I've been writing applications on my Mac, and have been physically making them into .app bundles (i.e., making the directories and plist files by hand). But when I open a file in the application by right clicking on the file in finder and specifying my app, how do I then reference that file? I mostly use python, but I'm looking for a way that is fairly universal. My first guess was as an argument, as were the answers to my previous post, but that is not the case. Py: ``` >>> print(sys.argv[1:]) '-psn_0_#######' ``` Where is the file reference? Thanks in advance,
2010/09/08
[ "https://Stackoverflow.com/questions/3664124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/428582/" ]
The file is passed by the Apple Event, see [this Apple document](http://developer.apple.com/mac/library/documentation/cocoa/conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html#//apple_ref/doc/uid/20001239-BBCBCIJE). You need to receive that from inside your Python script. If it's a PyObjC script, there should be a standard way to translate what's explained in that Apple document in Objective-C to Python. If your script is not a GUI app, but if you just want to pass a file to a Python script by clicking it, the easiest way would be to use Automator. There's an action called "Run Shell Script", to which you can specify the interpreter and the code. You can choose whether you receive the file names via `stdin` or the arguments. Automator packages the script into the app for you.
I described how to link certain filetypes to py2app-bundled Python applications at <https://moosystems.com/articles/8-double-click-on-files-in-finder-to-open-them-in-your-python-and-tk-application.html>
3,664,124
I posted this basic question before, but didn't get an answer I could work with. I've been writing applications on my Mac, and have been physically making them into .app bundles (i.e., making the directories and plist files by hand). But when I open a file in the application by right clicking on the file in finder and specifying my app, how do I then reference that file? I mostly use python, but I'm looking for a way that is fairly universal. My first guess was as an argument, as were the answers to my previous post, but that is not the case. Py: ``` >>> print(sys.argv[1:]) '-psn_0_#######' ``` Where is the file reference? Thanks in advance,
2010/09/08
[ "https://Stackoverflow.com/questions/3664124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/428582/" ]
This is not an answer but it wouldn't fit in the comments. To respond to @Sacrilicious and to give everyone else insight on this: @Sacrilicious You're talking about something different. [Download this sample application](http://www.filedropper.com/myscript1), it's a Python script wrapped as an "App". Look inside and find a 4-line Python script: `myscript.app/Contents/MacOS/myscript` - which will print the arguments using ``` file = open("/tmp/test.txt", "w") file.writelines(sys.argv[1:]) ``` Stick it in your Applications folder. Then right click some file and choose "Open With" and select this `myscript.app`. Now take a look at `/tmp/test.txt` and you'll see that something like `-psn_0_#######` is there and not the name of the file you had selected "Open With". This is because the file is passed using [Apple Events](http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html) and not as a filename argument. **So this question is asking how you can access the filename of the thing that was passed to the Python script wrapped in an OS X `.app` application wrapper**, and if someone can let me know, they'll get the bounty :)
Are we referring to the file where per-user bindings of file types/extensions are set to point to certain applications? ``` ~/Library/Preferences/com.apple.LaunchServices.plist ``` The framework is [launchservices](http://developer.apple.com/library/mac/#documentation/Carbon/Conceptual/LaunchServicesConcepts/LSCIntro/LSCIntro.html), which received a good amount of scrutiny due to 'murkiness' early in 10.6, and (like all property list files) it can be altered via the bridges to Objective-C made for Python and Ruby. [Here's](http://wranglingmacs.blogspot.com/2010/05/using-plists-from-python.html) a link with Python code examples for how to associate a given file type with an app.
3,664,124
I posted this basic question before, but didn't get an answer I could work with. I've been writing applications on my Mac, and have been physically making them into .app bundles (i.e., making the directories and plist files by hand). But when I open a file in the application by right clicking on the file in finder and specifying my app, how do I then reference that file? I mostly use python, but I'm looking for a way that is fairly universal. My first guess was as an argument, as were the answers to my previous post, but that is not the case. Py: ``` >>> print(sys.argv[1:]) '-psn_0_#######' ``` Where is the file reference? Thanks in advance,
2010/09/08
[ "https://Stackoverflow.com/questions/3664124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/428582/" ]
This is not an answer but it wouldn't fit in the comments. To respond to @Sacrilicious and to give everyone else insight on this: @Sacrilicious You're talking about something different. [Download this sample application](http://www.filedropper.com/myscript1), it's a Python script wrapped as an "App". Look inside and find a 4-line Python script: `myscript.app/Contents/MacOS/myscript` - which will print the arguments using ``` file = open("/tmp/test.txt", "w") file.writelines(sys.argv[1:]) ``` Stick it in your Applications folder. Then right click some file and choose "Open With" and select this `myscript.app`. Now take a look at `/tmp/test.txt` and you'll see that something like `-psn_0_#######` is there and not the name of the file you had selected "Open With". This is because the file is passed using [Apple Events](http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html) and not as a filename argument. **So this question is asking how you can access the filename of the thing that was passed to the Python script wrapped in an OS X `.app` application wrapper**, and if someone can let me know, they'll get the bounty :)
I've never heard of it being done without a Cocoa / Carbon wrapper.
3,664,124
I posted this basic question before, but didn't get an answer I could work with. I've been writing applications on my Mac, and have been physically making them into .app bundles (i.e., making the directories and plist files by hand). But when I open a file in the application by right clicking on the file in finder and specifying my app, how do I then reference that file? I mostly use python, but I'm looking for a way that is fairly universal. My first guess was as an argument, as were the answers to my previous post, but that is not the case. Py: ``` >>> print(sys.argv[1:]) '-psn_0_#######' ``` Where is the file reference? Thanks in advance,
2010/09/08
[ "https://Stackoverflow.com/questions/3664124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/428582/" ]
This is not an answer but it wouldn't fit in the comments. To respond to @Sacrilicious and to give everyone else insight on this: @Sacrilicious You're talking about something different. [Download this sample application](http://www.filedropper.com/myscript1), it's a Python script wrapped as an "App". Look inside and find a 4-line Python script: `myscript.app/Contents/MacOS/myscript` - which will print the arguments using ``` file = open("/tmp/test.txt", "w") file.writelines(sys.argv[1:]) ``` Stick it in your Applications folder. Then right click some file and choose "Open With" and select this `myscript.app`. Now take a look at `/tmp/test.txt` and you'll see that something like `-psn_0_#######` is there and not the name of the file you had selected "Open With". This is because the file is passed using [Apple Events](http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html) and not as a filename argument. **So this question is asking how you can access the filename of the thing that was passed to the Python script wrapped in an OS X `.app` application wrapper**, and if someone can let me know, they'll get the bounty :)
I described how to link certain filetypes to py2app-bundled Python applications at <https://moosystems.com/articles/8-double-click-on-files-in-finder-to-open-them-in-your-python-and-tk-application.html>
52,079,637
I have been trying to scroll the output of a script run via IPython in a separate window/session created by tmux (off notebook, meaning that I am not using the IPython notebook as usual: I am just using IPython). I see that, while the program is producing output, I can't scroll the window to see what was printed before, and can only see the latest results produced. Honestly, this problem is totally unexpected because I usually use a notebook and have never encountered a similar issue. Could you help me please? Thanks in advance
2018/08/29
[ "https://Stackoverflow.com/questions/52079637", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10285919/" ]
I finally found the answer to my problem right here: <https://superuser.com/questions/209437/how-do-i-scroll-in-tmux>. It did not depend on IPython but on tmux, as I was suspecting. To scroll in tmux, press Ctrl+b and then [ to enter copy mode, then use the arrows or Page Up/Down to scroll through the window; press Ctrl+c or Esc to quit copy mode when you have finished scrolling.
Use threads for data processing, visualization, and GUI interaction; that is how you can avoid the freeze.
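As a minimal illustration of that idea (nothing here is specific to the original script), the heavy work can run on a worker thread while the main thread stays free for the GUI or event loop:

```python
import threading
import queue

results = queue.Queue()

def crunch(data):
    # long-running processing happens off the main thread
    results.put(sum(x * x for x in data))

worker = threading.Thread(target=crunch, args=(range(10**6),), daemon=True)
worker.start()

# the main thread remains responsive; collect the result when it is ready
print(results.get())
```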
38,079,862
Sometimes this code works just fine and runs through, but other times it throws the 'int' object is not callable error. I am not really sure why it does so. ``` for ship in ships: vert_or_horz = randint(0,100) % 2 for size in range(ship.size): if size == 0: ship.location.append((random_row(board),random_col(board))) else: # This is the horizontal placing if vert_or_horz != 0 and ship.size > 1: ship.location.append((ship.location[0][0], \ ship.location[0][1] + size)) while(ship.location[size][1] > len(board[0])) or \ (ship.location[size][1] < 0): if ship.location[size][1] > len(board[0]): ship.location[size][1]((ship.location[0][0], \ ship.location[0][1] - size)) if ship.location[size][1] < 0: ship.location[size][1]((ship.location[0][0], \ ship.location[0][1] + size)) # This is the vertical placing if vert_or_horz == 0 and ship.size > 1: ship.location.append((ship.location[0][0] + size, \ ship.location[0][1])) while(ship.location[size][1] > len(board[0])) or \ (ship.location[size][1] < 0): if ship.location[size][1] > len(board[0]): ship.location[size][1] \ ((ship.location[0][0] - size, \ ship.location[0][1])) if ship.location[size][1] < 0: ship.location[size][1] \ ((ship.location[0][0] + size, \ ship.location[0][1])) ``` Here is the traceback: ``` Traceback (most recent call last): File "python", line 217, in <module> File "python", line 124, in create_med_game ship.location[size][1]((ship.location[0][0], \ TypeError: 'int' object is not callable ```
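As a side note on the error message itself, here is a two-line illustration of what `'int' object is not callable` means: `ship.location[size][1]` evaluates to a plain integer, and putting `(...)` directly after it turns the expression into a function call.

```python
t = (4, 7)
t[1]((1, 2))  # TypeError: 'int' object is not callable -- t[1] is just the integer 7
```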
2016/06/28
[ "https://Stackoverflow.com/questions/38079862", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2759574/" ]
You might be able to directly call [TransferHandler#exportAsDrag(...)](http://docs.oracle.com/javase/8/docs/api/javax/swing/TransferHandler.html#exportAsDrag-javax.swing.JComponent-java.awt.event.InputEvent-int-) method in `MouseMotionListener#mouseDragged(...)`: ``` import java.awt.*; import java.awt.event.*; import java.util.Optional; import javax.swing.*; import javax.swing.table.*; public class Main2 { public JComponent makeUI() { Object columnNames[] = {"Column 1", "Column 2", "Column 3"}; Object data[][] = { { "a", "a", "a" }, { "a", "a", "a" }, { "a", "a", "a" }, { "a", "a", "a" }, { "a", "a", "a" }, }; JTable table = new JTable(new DefaultTableModel(data, columnNames)); table.setDragEnabled(true); table.setDropMode(DropMode.INSERT_ROWS); table.addMouseMotionListener(new MouseAdapter() { @Override public void mouseDragged(MouseEvent e) { JComponent c = (JComponent) e.getComponent(); Optional.ofNullable(c.getTransferHandler()) .ifPresent(th -> th.exportAsDrag(c, e, TransferHandler.COPY)); } }); table.setSelectionMode(ListSelectionModel.MULTIPLE_INTERVAL_SELECTION); table.setColumnSelectionAllowed(true); table.setRowSelectionAllowed(true); JPanel p = new JPanel(new BorderLayout()); p.add(new JScrollPane(table)); p.add(new JTextField(), BorderLayout.SOUTH); return p; } public static void main(String... args) { EventQueue.invokeLater(() -> { JFrame f = new JFrame(); f.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE); f.getContentPane().add(new Main2().makeUI()); f.setSize(320, 240); f.setLocationRelativeTo(null); f.setVisible(true); }); } } ```
> > Selection mode should only be possible via click and should be extendable via shift + click, not via dragging > > > You may be able to alter the behavior by overriding JTable's `processMouseEvent` method with some additional logic. When a button down event occurs - and shift is not down - alter the selection of the JTable to the current clicked row before calling the parent method. ``` JTable table = new JTable( rowData, columnNames){ @Override public void processMouseEvent(MouseEvent e){ Component c = (Component)e.getSource(); if (e.getModifiersEx() == MouseEvent.BUTTON1_DOWN_MASK && ( e.getModifiers() & MouseEvent.SHIFT_DOWN_MASK) == 0 ){ int row = rowAtPoint(e.getPoint()); int col = columnAtPoint(e.getPoint()); if ( !isCellSelected(row, col) ){ clearSelection(); changeSelection(row, col, false, false); } } MouseEvent e1 = new MouseEvent(c, e.getID(), e.getWhen(), e.getModifiers() | InputEvent.CTRL_MASK, e.getX(), e.getY(), e.getClickCount(), e.isPopupTrigger() , e.getButton()); super.processMouseEvent(e1); } }; ```
49,280,016
I'm trying to create a dataset from a CSV file with 784-bit long rows. Here's my code: ``` import tensorflow as tf f = open("test.csv", "r") csvreader = csv.reader(f) gen = (row for row in csvreader) ds = tf.data.Dataset() ds.from_generator(gen, [tf.uint8]*28**2) ``` I get the following error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-22-4b244ea66c1d> in <module>() 12 gen = (row for row in csvreader_pat_trn) 13 ds = tf.data.Dataset() ---> 14 ds.from_generator(gen, [tf.uint8]*28**2) ~/Documents/Programming/ANN/labs/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py in from_generator(generator, output_types, output_shapes) 317 """ 318 if not callable(generator): --> 319 raise TypeError("`generator` must be callable.") 320 if output_shapes is None: 321 output_shapes = nest.map_structure( TypeError: `generator` must be callable. ``` The [docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) said that I should have a generator passed to `from_generator()`, so that's what I did, `gen` is a generator. But now it's complaining that my generator isn't **callable**. How can I make the generator callable so I can get this to work? **EDIT:** I'd like to add that I'm using python 3.6.4. Is this the reason for the error?
2018/03/14
[ "https://Stackoverflow.com/questions/49280016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3128156/" ]
The `generator` argument (perhaps confusingly) should not actually be a generator, but a callable returning an iterable (for example, a generator function). Probably the easiest option here is to use a `lambda`. Also, a couple of errors: 1) [`tf.data.Dataset.from_generator`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) is meant to be called as a class factory method, not from an instance; 2) the function (like a few others in TensorFlow) is weirdly picky about parameters, and it wants you to give the sequence of dtypes and each data row as `tuple`s (instead of the `list`s returned by the CSV reader); you can use for example `map` for that: ``` import csv import tensorflow as tf with open("test.csv", "r") as f: csvreader = csv.reader(f) ds = tf.data.Dataset.from_generator(lambda: map(tuple, csvreader), (tf.uint8,) * (28 ** 2)) ```
[From the docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator), which you linked: > > The `generator` argument must be a callable object that returns an > object that support the `iter()` protocol (e.g. a generator function) > > > This means you should be able to do something like this: ``` import tensorflow as tf import csv with open("test.csv", "r") as f: csvreader = csv.reader(f) gen = lambda: (row for row in csvreader) ds = tf.data.Dataset.from_generator(gen, [tf.uint8]*28**2) ``` In other words, the function you pass must produce a generator when called. This is easy to achieve when making it an anonymous function (a `lambda`). Alternatively try this, which is closer to how it is done in the docs: ``` import tensorflow as tf import csv def read_csv(file_name="test.csv"): with open(file_name) as f: reader = csv.reader(f) for row in reader: yield row ds = tf.data.Dataset.from_generator(read_csv, [tf.uint8]*28**2) ``` (If you need a different file name than whatever default you set, you can use `functools.partial(read_csv, file_name="whatever.csv")`.) The difference is that the `read_csv` function returns the generator object when called, whereas what you constructed is already the generator object and equivalent to doing: ``` gen = read_csv() ds = tf.data.Dataset.from_generator(gen, [tf.uint8]*28**2) # does not work ```
49,280,016
I'm trying to create a dataset from a CSV file with 784-bit long rows. Here's my code: ``` import tensorflow as tf f = open("test.csv", "r") csvreader = csv.reader(f) gen = (row for row in csvreader) ds = tf.data.Dataset() ds.from_generator(gen, [tf.uint8]*28**2) ``` I get the following error: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-22-4b244ea66c1d> in <module>() 12 gen = (row for row in csvreader_pat_trn) 13 ds = tf.data.Dataset() ---> 14 ds.from_generator(gen, [tf.uint8]*28**2) ~/Documents/Programming/ANN/labs/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py in from_generator(generator, output_types, output_shapes) 317 """ 318 if not callable(generator): --> 319 raise TypeError("`generator` must be callable.") 320 if output_shapes is None: 321 output_shapes = nest.map_structure( TypeError: `generator` must be callable. ``` The [docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) said that I should have a generator passed to `from_generator()`, so that's what I did, `gen` is a generator. But now it's complaining that my generator isn't **callable**. How can I make the generator callable so I can get this to work? **EDIT:** I'd like to add that I'm using python 3.6.4. Is this the reason for the error?
2018/03/14
[ "https://Stackoverflow.com/questions/49280016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3128156/" ]
The `generator` argument (perhaps confusingly) should not actually be a generator, but a callable returning an iterable (for example, a generator function). Probably the easiest option here is to use a `lambda`. Also, a couple of errors: 1) [`tf.data.Dataset.from_generator`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) is meant to be called as a class factory method, not from an instance; 2) the function (like a few others in TensorFlow) is weirdly picky about parameters, and it wants you to give the sequence of dtypes and each data row as `tuple`s (instead of the `list`s returned by the CSV reader); you can use for example `map` for that: ``` import csv import tensorflow as tf with open("test.csv", "r") as f: csvreader = csv.reader(f) ds = tf.data.Dataset.from_generator(lambda: map(tuple, csvreader), (tf.uint8,) * (28 ** 2)) ```
Yuck, two years later... But hey! Another solution! :D This might not be the cleanest answer but for generators that are more complicated, you can use a decorator. I made a generator that yields two dictionaries, for example: ```py >>> train,val = dataloader("path/to/dataset") >>> x,y = next(train) >>> print(x) {"data": [...], "filename": "image.png"} >>> print(y) {"category": "Dog", "category_id": 1, "background": "park"} ``` When I tried using the `from_generator`, it gave me the error: ``` >>> ds_tf = tf.data.Dataset.from_generator( iter(mm), ({"data":tf.float32, "filename":tf.string}, {"category":tf.string, "category_id":tf.int32, "background":tf.string}) ) TypeError: `generator` must be callable. ``` But then I wrote a decorating function ``` >>> def make_gen_callable(_gen): def gen(): for x,y in _gen: yield x,y return gen >>> train_ = make_gen_callable(train) ``` ``` >>> train_ds = tf.data.Dataset.from_generator( train_, ({"data":tf.float32, "filename":tf.string}, {"category":tf.string, "category_id":tf.int32, "background":tf.string}) ) >>> for x,y in train_ds: break >>> print(x) {'data': <tf.Tensor: shape=(320, 480), dtype=float32, ... >, 'filename': <tf.Tensor: shape=(), dtype=string, ...> } >>> print(y) {'category': <tf.Tensor: shape=(), dtype=string, numpy=b'Dog'>, 'category_id': <tf.Tensor: shape=(), dtype=int32, numpy=1>, 'background': <tf.Tensor: shape=(), dtype=string, numpy=b'Living Room'> } ``` But now, note that in order to iterate `train_`, one has to call it ``` >>> for x,y in train_(): do_stuff(x,y) ... ```
49,244,935
I am trying to execute a Python program as a background process inside a container with `kubectl` as below (`kubectl` issued on local machine): `kubectl exec -it <container_id> -- bash -c "cd some-dir && (python xxx.py --arg1 abc &)"` When I log in to the container and check `ps -ef` I do not see this process running. Also, there is no output from `kubectl` command itself. * Is the `kubectl` command issued correctly? * Is there a better way to achieve the same? * How can I see the output/logs printed off the background process being run? * If I need to stop this background process after some duration, what is the best way to do this?
2018/03/12
[ "https://Stackoverflow.com/questions/49244935", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1650281/" ]
The [nohup](https://en.wikipedia.org/wiki/Nohup#Overcoming_hanging) Wikipedia page can help; you need to redirect all three IO streams (stdout, stdin and stderr) - an example with `yes`: ``` kubectl exec pod -- bash -c "yes > /dev/null 2> /dev/null &" ``` `nohup` is not required in the above case because I did not allocate a pseudo terminal (no `-t` flag) and the shell was not interactive (no `-i` flag) so no `HUP` signal is sent to the `yes` process on session termination. See [this](https://unix.stackexchange.com/questions/84737/in-which-cases-is-sighup-not-sent-to-a-job-when-you-log-out#answer-85296) answer for more details. Redirecting `/dev/null` to stdin is not required in the above case since stdin already refers to `/dev/null` (you can see this by running `ls -l /proc/YES_PID/fd` in another shell). To see the output you can instead redirect stdout to a file. To stop the process you'd need to identify the PID of the process you want to stop ([pgrep](https://linux.die.net/man/1/pgrep) could be useful for this purpose) and send a fatal signal to it (`kill PID` for example). If you want to stop the process after a fixed duration, [timeout](https://linux.die.net/man/1/timeout) might be a better option.
Actually, the best way to do this kind of thing is to add an entrypoint to your container and execute the commands there. Like: `entrypoint.sh`: ``` #!/bin/bash set -e cd some-dir && (python xxx.py --arg1 abc &) ./somethingelse.sh exec "$@" ``` That way you wouldn't need to go into every single container manually and run the command.
21,255,168
I am trying to understand why the recursive function returns 1003 instead of 1005. ``` l = [1,2,3] def sum(l): x, *y = l return x + sum(y) if y else 1000 sum(l) ``` According to [pythontutor](http://pythontutor.com/visualize.html), the last value of the `y` list is 5, and that would make the return value `1000 + sum([2,3])` equal 1005; am I correct? ![enter image description here](https://i.stack.imgur.com/OztQt.jpg)
2014/01/21
[ "https://Stackoverflow.com/questions/21255168", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1984680/" ]
> > According to pythontutor the last value of y list is 5 and that would make return value `1000 + sum([2,3])` 1005, am I correct? > > > No, the last value of `y` is `[]`. It's never anything but a list, and besides, there are no `5`s for it to ever be. On top of that, the recursive return value is always on the right of the `+`, only the `x` is ever on the left. Let's step through it: ``` sum([1, 2, 3]) = 1 + sum([2, 3]) sum([2, 3]) = 2 + sum([3]) sum([3]) = 1000 ``` So, substituting back: ``` sum([2, 3]) = 2 + 1000 = 1002 sum([1, 2, 3] = 1 + 1002 = 1003 ``` The problem is that when `y` is empty, you're returning `1000`, not `x + 1000`. Your confusion may just be a matter of precedence. Maybe you expected this: ``` return x + sum(y) if y else 1000 ``` … to mean this: ``` return x + (sum(y) if y else 1000) ``` … but actually, it means this: ``` return (x + sum(y)) if y else 1000 ```
Recursion step by step 1) `x = 1` `y = [2,3]` 2) `x = 2` `y = [3]` 3) `x = 3` `y = []` Note that step 3) returns `1000` since `not y`. This is because your return statement is equivalent to ``` (x + sum(y)) if y else 1000 ``` Thus we have 3) `1000` 2) `1000 + 2` 1) `1002 + 1` The result is `1003`. So perhaps what you are looking for is: ``` return x + sum(y) if y else 1000 + x ``` or (copied from ndpu's answer): ``` return x + (sum(y) if y else 1000) ``` (take `x` into account in last step)
21,255,168
I am trying to understand why the recursive function returns 1003 instead of 1005. ``` l = [1,2,3] def sum(l): x, *y = l return x + sum(y) if y else 1000 sum(l) ``` According to [pythontutor](http://pythontutor.com/visualize.html), the last value of the `y` list is 5, and that would make the return value `1000 + sum([2,3])` equal 1005; am I correct? ![enter image description here](https://i.stack.imgur.com/OztQt.jpg)
2014/01/21
[ "https://Stackoverflow.com/questions/21255168", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1984680/" ]
Recursion step by step 1) `x = 1` `y = [2,3]` 2) `x = 2` `y = [3]` 3) `x = 3` `y = []` Note that step 3) returns `1000` since `not y`. This is because your return statement is equivalent to ``` (x + sum(y)) if y else 1000 ``` Thus we have 3) `1000` 2) `1000 + 2` 1) `1002 + 1` The result is `1003`. So perhaps what you are looking for is: ``` return x + sum(y) if y else 1000 + x ``` or (copied from ndpu's answer): ``` return x + (sum(y) if y else 1000) ``` (take `x` into account in last step)
You should try to use a debugger or actually print things inside the function. Without executing the code, I guess that it should be something like this: ``` l = [1,2,3] def sum(l): x, *y = l return x + sum(y) if y else 1000 sum(l) ``` It will be called as such: ``` -> sum([1,2,3]) x : 1 y : [2, 3] -> sum([2, 3]) x: 2 y: [3] -> sum([3]) x: 3 y: [] returns 1000 returns 2 + 1000 returns 1 + 1002 ```
21,255,168
I am trying to understand why the recursive function returns 1003 instead of 1005. ``` l = [1,2,3] def sum(l): x, *y = l return x + sum(y) if y else 1000 sum(l) ``` According to [pythontutor](http://pythontutor.com/visualize.html), the last value of the `y` list is 5, and that would make the return value `1000 + sum([2,3])` equal 1005; am I correct? ![enter image description here](https://i.stack.imgur.com/OztQt.jpg)
2014/01/21
[ "https://Stackoverflow.com/questions/21255168", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1984680/" ]
> > According to pythontutor the last value of y list is 5 and that would make return value `1000 + sum([2,3])` 1005, am I correct? > > > No, the last value of `y` is `[]`. It's never anything but a list, and besides, there are no `5`s for it to ever be. On top of that, the recursive return value is always on the right of the `+`, only the `x` is ever on the left. Let's step through it: ``` sum([1, 2, 3]) = 1 + sum([2, 3]) sum([2, 3]) = 2 + sum([3]) sum([3]) = 1000 ``` So, substituting back: ``` sum([2, 3]) = 2 + 1000 = 1002 sum([1, 2, 3] = 1 + 1002 = 1003 ``` The problem is that when `y` is empty, you're returning `1000`, not `x + 1000`. Your confusion may just be a matter of precedence. Maybe you expected this: ``` return x + sum(y) if y else 1000 ``` … to mean this: ``` return x + (sum(y) if y else 1000) ``` … but actually, it means this: ``` return (x + sum(y)) if y else 1000 ```
You should try to use a debugger or actually print things inside the function. Without executing the code, I guess that it should be something like this: ``` l = [1,2,3] def sum(l): x, *y = l return x + sum(y) if y else 1000 sum(l) ``` It will be called as such: ``` -> sum([1,2,3]) x : 1 y : [2, 3] -> sum([2, 3]) x: 2 y: [3] -> sum([3]) x: 3 y: [] returns 1000 returns 2 + 1000 returns 1 + 1002 ```
21,255,168
I am trying to understand why the recursive function returns 1003 instead of 1005. ``` l = [1,2,3] def sum(l): x, *y = l return x + sum(y) if y else 1000 sum(l) ``` According to [pythontutor](http://pythontutor.com/visualize.html), the last value of the `y` list is 5, and that would make the return value `1000 + sum([2,3])` equal 1005; am I correct? ![enter image description here](https://i.stack.imgur.com/OztQt.jpg)
2014/01/21
[ "https://Stackoverflow.com/questions/21255168", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1984680/" ]
> > According to pythontutor the last value of y list is 5 and that would make return value `1000 + sum([2,3])` 1005, am I correct? > > > No, the last value of `y` is `[]`. It's never anything but a list, and besides, there are no `5`s for it to ever be. On top of that, the recursive return value is always on the right of the `+`, only the `x` is ever on the left. Let's step through it: ``` sum([1, 2, 3]) = 1 + sum([2, 3]) sum([2, 3]) = 2 + sum([3]) sum([3]) = 1000 ``` So, substituting back: ``` sum([2, 3]) = 2 + 1000 = 1002 sum([1, 2, 3] = 1 + 1002 = 1003 ``` The problem is that when `y` is empty, you're returning `1000`, not `x + 1000`. Your confusion may just be a matter of precedence. Maybe you expected this: ``` return x + sum(y) if y else 1000 ``` … to mean this: ``` return x + (sum(y) if y else 1000) ``` … but actually, it means this: ``` return (x + sum(y)) if y else 1000 ```
You should add parentheses: ``` l = [1,2,3] def sum(l): x, *y = l return x + (sum(y) if y else 1000) ```
21,255,168
I am trying to understand why the recursive function returns 1003 instead of 1005. ``` l = [1,2,3] def sum(l): x, *y = l return x + sum(y) if y else 1000 sum(l) ``` According to [pythontutor](http://pythontutor.com/visualize.html), the last value of the `y` list is 5, and that would make the return value `1000 + sum([2,3])` equal 1005; am I correct? ![enter image description here](https://i.stack.imgur.com/OztQt.jpg)
2014/01/21
[ "https://Stackoverflow.com/questions/21255168", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1984680/" ]
You should add parentheses: ``` l = [1,2,3] def sum(l): x, *y = l return x + (sum(y) if y else 1000) ```
You should try to use a debugger or actually print things inside the function. Without executing the code, I guess that it should be something like this: ``` l = [1,2,3] def sum(l): x, *y = l return x + sum(y) if y else 1000 sum(l) ``` It will be called as such: ``` -> sum([1,2,3]) x : 1 y : [2, 3] -> sum([2, 3]) x: 2 y: [3] -> sum([3]) x: 3 y: [] returns 1000 returns 2 + 1000 returns 1 + 1002 ```
72,173,762
I am trying to speed up my code by splitting the job among several python processes. In the single-threaded version of the code, I am looping through a code that accumulates the result in several matrices of different dimensions. Since there's no data sharing between each iteration, I can divide the task among several processes, each one having its own local set of matrices to accumulate the result. When all the processes are done, I combine the matrices of all the processes. My idea of solving the issue is to pass a list of the same matrices to each process such that each process writes to this matrix when it's done. My question is, how do I pass this list of numpy array matrices to the processes? This seems like a straightforward thing to do except that it seems I can only pass a 1D array to the processes. Although a temporary solution would be to flatten all the numpy arrays and keep track of where each one begins and ends, is there a way where I simply pass a list of the matrices to the processes?
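For reference, a rough sketch of the accumulation scheme described above; the shapes, chunking, and update rule are placeholders of mine. A plain Python list of NumPy arrays can be returned from each worker because `multiprocessing` pickles it, and the partial results can be combined afterwards.

```python
import numpy as np
from multiprocessing import Pool

SHAPES = [(3, 4), (2, 2), (5,)]          # hypothetical matrix dimensions

def worker(chunk):
    # each process accumulates into its own local set of matrices
    local = [np.zeros(shape) for shape in SHAPES]
    for value in chunk:
        for m in local:
            m += value                   # placeholder for the real update
    return local                         # a list of arrays pickles fine

if __name__ == "__main__":
    chunks = [range(0, 50), range(50, 100)]
    with Pool(processes=2) as pool:
        partials = pool.map(worker, chunks)
    # combine: element-wise sum of the corresponding matrices from every process
    combined = [sum(arrays) for arrays in zip(*partials)]
    print([a.shape for a in combined])
```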
2022/05/09
[ "https://Stackoverflow.com/questions/72173762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11342618/" ]
Sort a Range ------------ ```vb Sub SortData() 'Dim wb As Workbook: Set wb = ThisWorkbook 'Dim ws As Worksheet: Set ws = wb.Worksheets("Sheet1") Dim ws As Worksheet: Set ws = ActiveSheet With ws.Range("A1").CurrentRegion .Sort _ Key1:=.Columns(1), Order1:=xlAscending, _ Key2:=.Columns(2), Order2:=xlAscending, _ Key3:=.Columns(8), Order3:=xlAscending, _ Header:=xlYes End With End Sub ```
You aren't quite using the `With` statement correctly. Try it like this: ``` Sub Sort() With ActiveSheet.Sort Cells.Select .SortFields.Clear .SortFields.Add2 key:=Range("A2:A35" _ ), SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal .SortFields.Add2 key:=Range("B2:B35" _ ), SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal .SortFields.Add2 key:=Range("H2:H35" _ ), SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal .SetRange Range("A1:I35") .Header = xlYes .MatchCase = False .Orientation = xlTopToBottom .SortMethod = xlPinYin .Apply End With End Sub ```
20,213,981
I have the following list of items (key-value pairs): ``` items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)] ``` What I want to get: ``` { 'A' : 1, 'B' : [1,2], 'C' : 3 } ``` My naive solution: ``` res = {} for (k,v) in items: if k in res: res[k].append(v) else: res[k] = [v] ``` I'm looking for a more optimised, more Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
You could use `defaultdict` here. ``` from collections import defaultdict res = defaultdict(list) for (k,v) in items: res[k].append(v) # Use as dict(res) ``` **EDIT:** This is using groupby, but please note, **the above is far cleaner and neater to the eyes**: ``` >>> data = [('A', 1), ('B', 1), ('B', 2), ('C', 3)] >>> dict([(key,list(v[1] for v in group)) for (key,group) in groupby(data, lambda x: x[0])]) {'A': [1], 'C': [3], 'B': [1, 2]} ``` **Downside**: Every element is a list. Change lists to generators as needed. --- To convert all single item lists to individual items: ``` >>> res = # Array of tuples, not dict >>> res = [(key,(value[0] if len(value) == 1 else value)) for key,value in res] >>> res [('A', 1), ('B', [1, 2]), ('C', 3)] ```
This is very easy to do with `defaultdict`, which can be imported from `collections`. ``` >>> from collections import defaultdict >>> items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)] >>> d = defaultdict(list) >>> for k, v in items: d[k].append(v) >>> d defaultdict(<class 'list'>, {'A': [1], 'C': [3], 'B': [1, 2]}) ``` You *can* also use a dictionary comprehension: ``` >>> d = {l: [var for key, var in items if key == l] for l in {v[0] for v in items}} >>> d {'A': [1], 'C': [3], 'B': [1, 2]} ```
20,213,981
I have the following list of items (key-value pairs):

```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```

What I want to get:

```
{
 'A' : 1,
 'B' : [1,2],
 'C' : 3
}
```

My naive solution:

```
res = {}
for (k,v) in items:
    if k in res:
        res[k].append(v)
    else:
        res[k] = [v]
```

I'm looking for a more optimised, Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
You could use `defaultdict` here.

```
from collections import defaultdict
res = defaultdict(list)
for (k,v) in items:
    res[k].append(v)

# Use as dict(res)
```

**EDIT:** This is using `groupby` (note that `groupby` only groups consecutive items, so the data must already be sorted by key), but please note, **the above is far cleaner and neater to the eyes**:

```
>>> from itertools import groupby
>>> data = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
>>> dict([(key,list(v[1] for v in group)) for (key,group) in groupby(data, lambda x: x[0])])
{'A': [1], 'C': [3], 'B': [1, 2]}
```

**Downside**: Every element is a list. Change lists to generators as needed.

---

To convert all single item lists to individual items:

```
>>> # res is a list of (key, values) tuples, not a dict
>>> res = [(key,(value[0] if len(value) == 1 else value)) for key,value in res]
>>> res
[('A', 1), ('B', [1, 2]), ('C', 3)]
```
You can use the WebOb multidict implementation: ``` >>> from webob import multidict >>> a = multidict.MultiDict(items) >>> a.getall('B') [1,2] ```
20,213,981
I have the following list of items (key-value pairs):

```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```

What I want to get:

```
{
 'A' : 1,
 'B' : [1,2],
 'C' : 3
}
```

My naive solution:

```
res = {}
for (k,v) in items:
    if k in res:
        res[k].append(v)
    else:
        res[k] = [v]
```

I'm looking for a more optimised, Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
You could use `defaultdict` here.

```
from collections import defaultdict
res = defaultdict(list)
for (k,v) in items:
    res[k].append(v)

# Use as dict(res)
```

**EDIT:** This is using `groupby` (note that `groupby` only groups consecutive items, so the data must already be sorted by key), but please note, **the above is far cleaner and neater to the eyes**:

```
>>> from itertools import groupby
>>> data = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
>>> dict([(key,list(v[1] for v in group)) for (key,group) in groupby(data, lambda x: x[0])])
{'A': [1], 'C': [3], 'B': [1, 2]}
```

**Downside**: Every element is a list. Change lists to generators as needed.

---

To convert all single item lists to individual items:

```
>>> # res is a list of (key, values) tuples, not a dict
>>> res = [(key,(value[0] if len(value) == 1 else value)) for key,value in res]
>>> res
[('A', 1), ('B', [1, 2]), ('C', 3)]
```
If you don't want to use `defaultdict`/`groupby`, the following works: ``` d = {} for k,v in items: d.setdefault(k, []).append(v) ```
20,213,981
I have the following list of items (key-value pairs):

```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```

What I want to get:

```
{
 'A' : 1,
 'B' : [1,2],
 'C' : 3
}
```

My naive solution:

```
res = {}
for (k,v) in items:
    if k in res:
        res[k].append(v)
    else:
        res[k] = [v]
```

I'm looking for a more optimised, Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
You could use `defaultdict` here.

```
from collections import defaultdict
res = defaultdict(list)
for (k,v) in items:
    res[k].append(v)

# Use as dict(res)
```

**EDIT:** This is using `groupby` (note that `groupby` only groups consecutive items, so the data must already be sorted by key), but please note, **the above is far cleaner and neater to the eyes**:

```
>>> from itertools import groupby
>>> data = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
>>> dict([(key,list(v[1] for v in group)) for (key,group) in groupby(data, lambda x: x[0])])
{'A': [1], 'C': [3], 'B': [1, 2]}
```

**Downside**: Every element is a list. Change lists to generators as needed.

---

To convert all single item lists to individual items:

```
>>> # res is a list of (key, values) tuples, not a dict
>>> res = [(key,(value[0] if len(value) == 1 else value)) for key,value in res]
>>> res
[('A', 1), ('B', [1, 2]), ('C', 3)]
```
Maybe it looks ugly, but it works. ``` In [1]: items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)] In [2]: d = {} In [3]: map(lambda i: d.update({i[0]: i[1] if d.get(i[0], i[1]) == i[1] else [d[i[0]], i[1]]}), items) Out[3]: [None, None, None, None] In [4]: print d {'A': 1, 'C': 3, 'B': [1, 2]} ``` In `else` branch we can check if `d[i[0]]` returns a list.
20,213,981
I have the following list of items (key-value pairs):

```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```

What I want to get:

```
{
 'A' : 1,
 'B' : [1,2],
 'C' : 3
}
```

My naive solution:

```
res = {}
for (k,v) in items:
    if k in res:
        res[k].append(v)
    else:
        res[k] = [v]
```

I'm looking for a more optimised, Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
You could use `defaultdict` here.

```
from collections import defaultdict
res = defaultdict(list)
for (k,v) in items:
    res[k].append(v)

# Use as dict(res)
```

**EDIT:** This is using `groupby` (note that `groupby` only groups consecutive items, so the data must already be sorted by key), but please note, **the above is far cleaner and neater to the eyes**:

```
>>> from itertools import groupby
>>> data = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
>>> dict([(key,list(v[1] for v in group)) for (key,group) in groupby(data, lambda x: x[0])])
{'A': [1], 'C': [3], 'B': [1, 2]}
```

**Downside**: Every element is a list. Change lists to generators as needed.

---

To convert all single item lists to individual items:

```
>>> # res is a list of (key, values) tuples, not a dict
>>> res = [(key,(value[0] if len(value) == 1 else value)) for key,value in res]
>>> res
[('A', 1), ('B', [1, 2]), ('C', 3)]
```
To convert dictionary to list of items ``` dict_items = list(dict_1.items()) ``` To convert the list of items back to the dictionary ``` dict2 = dict(dict_items) ```
20,213,981
I have the following list of items (key-value pairs):

```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```

What I want to get:

```
{
 'A' : 1,
 'B' : [1,2],
 'C' : 3
}
```

My naive solution:

```
res = {}
for (k,v) in items:
    if k in res:
        res[k].append(v)
    else:
        res[k] = [v]
```

I'm looking for a more optimised, Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
If you don't want to use `defaultdict`/`groupby`, the following works: ``` d = {} for k,v in items: d.setdefault(k, []).append(v) ```
This is very easy to do with `defaultdict`, which can be imported from `collections`. ``` >>> from collections import defaultdict >>> items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)] >>> d = defaultdict(list) >>> for k, v in items: d[k].append(v) >>> d defaultdict(<class 'list'>, {'A': [1], 'C': [3], 'B': [1, 2]}) ``` You *can* also use a dictionary comprehension: ``` >>> d = {l: [var for key, var in items if key == l] for l in {v[0] for v in items}} >>> d {'A': [1], 'C': [3], 'B': [1, 2]} ```
20,213,981
I have the following list of items (key-value pairs):

```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```

What I want to get:

```
{
 'A' : 1,
 'B' : [1,2],
 'C' : 3
}
```

My naive solution:

```
res = {}
for (k,v) in items:
    if k in res:
        res[k].append(v)
    else:
        res[k] = [v]
```

I'm looking for a more optimised, Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
If you don't want to use `defaultdict`/`groupby`, the following works: ``` d = {} for k,v in items: d.setdefault(k, []).append(v) ```
You can use the WebOb multidict implementation: ``` >>> from webob import multidict >>> a = multidict.MultiDict(items) >>> a.getall('B') [1,2] ```
20,213,981
I have the following list of items (key-value pairs):

```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```

What I want to get:

```
{
 'A' : 1,
 'B' : [1,2],
 'C' : 3
}
```

My naive solution:

```
res = {}
for (k,v) in items:
    if k in res:
        res[k].append(v)
    else:
        res[k] = [v]
```

I'm looking for a more optimised, Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
If you don't want to use `defaultdict`/`groupby`, the following works: ``` d = {} for k,v in items: d.setdefault(k, []).append(v) ```
Maybe it looks ugly, but it works. ``` In [1]: items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)] In [2]: d = {} In [3]: map(lambda i: d.update({i[0]: i[1] if d.get(i[0], i[1]) == i[1] else [d[i[0]], i[1]]}), items) Out[3]: [None, None, None, None] In [4]: print d {'A': 1, 'C': 3, 'B': [1, 2]} ``` In `else` branch we can check if `d[i[0]]` returns a list.
20,213,981
I have the following list of items (key-value pairs):

```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```

What I want to get:

```
{
 'A' : 1,
 'B' : [1,2],
 'C' : 3
}
```

My naive solution:

```
res = {}
for (k,v) in items:
    if k in res:
        res[k].append(v)
    else:
        res[k] = [v]
```

I'm looking for a more optimised, Pythonic solution. Anyone?
2013/11/26
[ "https://Stackoverflow.com/questions/20213981", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
If you don't want to use `defaultdict`/`groupby`, the following works: ``` d = {} for k,v in items: d.setdefault(k, []).append(v) ```
To convert dictionary to list of items ``` dict_items = list(dict_1.items()) ``` To convert the list of items back to the dictionary ``` dict2 = dict(dict_items) ```
12,924,287
I have a text file like this:- ``` V1xx AB1 V2xx AC34 V3xx AB1 ``` Can we add `;` at each end of line through python script? ``` V1xx AB1; V2xx AC34; V3xx AB1; ```
2012/10/16
[ "https://Stackoverflow.com/questions/12924287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1750896/" ]
Here's what you can try. Note that I have overwritten the same file; you can also try creating a new one (I leave that to you). You'll need to modify your `with` statement a little:

```
lines = ""
with open('D:\File.txt') as file:
    for line in file:
        lines += line.strip() + ";\n"

file = open('D:\File.txt', "w+")
file.writelines(lines)
file.flush()
file.close()
```

**UPDATE**: For **in-place modification** of the file, you can use the `fileinput` module:

```
import fileinput

for line in fileinput.input('D:\File.txt', inplace = True):
    print line.strip() + ";"
```
``` #Open the original file, and create a blank file in write mode File = open("D:\myfilepath\myfile.txt") FileCopy = open("D:\myfilepath\myfile_Copy.txt","w") #For each line in the file, remove the end line character, #insert a semicolon, and then add a new end line character. #copy these lines into the blank file for line in File: CleanLine=line.strip("\n") FileCopy.write(CleanLine+";\n") FileCopy.close() File.close() #Replace the original file with the copied file File = open("D:\myfilepath\myfile.txt","w") FileCopy = open("D:\myfilepath\myfile_Copy.txt") for line in FileCopy: File.write(line) FileCopy.close() File.close() ``` Notes: I have left the "copy file" in there as a back up. You can manually delete it or use os.remove() (if you do that don't forget to import the os module)
12,924,287
I have a text file like this:- ``` V1xx AB1 V2xx AC34 V3xx AB1 ``` Can we add `;` at each end of line through python script? ``` V1xx AB1; V2xx AC34; V3xx AB1; ```
2012/10/16
[ "https://Stackoverflow.com/questions/12924287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1750896/" ]
``` input_file_name = 'input.txt' output_file_name = 'output.txt' with open(input_file_name, 'rt') as input, open(output_file_name, 'wt') as output: for line in input: output.write(line[:-1]+';\n') ```
``` #Open the original file, and create a blank file in write mode File = open("D:\myfilepath\myfile.txt") FileCopy = open("D:\myfilepath\myfile_Copy.txt","w") #For each line in the file, remove the end line character, #insert a semicolon, and then add a new end line character. #copy these lines into the blank file for line in File: CleanLine=line.strip("\n") FileCopy.write(CleanLine+";\n") FileCopy.close() File.close() #Replace the original file with the copied file File = open("D:\myfilepath\myfile.txt","w") FileCopy = open("D:\myfilepath\myfile_Copy.txt") for line in FileCopy: File.write(line) FileCopy.close() File.close() ``` Notes: I have left the "copy file" in there as a back up. You can manually delete it or use os.remove() (if you do that don't forget to import the os module)
12,924,287
I have a text file like this:- ``` V1xx AB1 V2xx AC34 V3xx AB1 ``` Can we add `;` at each end of line through python script? ``` V1xx AB1; V2xx AC34; V3xx AB1; ```
2012/10/16
[ "https://Stackoverflow.com/questions/12924287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1750896/" ]
Here's what you can try. Note that I have overwritten the same file; you can also try creating a new one (I leave that to you). You'll need to modify your `with` statement a little:

```
lines = ""
with open('D:\File.txt') as file:
    for line in file:
        lines += line.strip() + ";\n"

file = open('D:\File.txt', "w+")
file.writelines(lines)
file.flush()
file.close()
```

**UPDATE**: For **in-place modification** of the file, you can use the `fileinput` module:

```
import fileinput

for line in fileinput.input('D:\File.txt', inplace = True):
    print line.strip() + ";"
```
``` input_file_name = 'input.txt' output_file_name = 'output.txt' with open(input_file_name, 'rt') as input, open(output_file_name, 'wt') as output: for line in input: output.write(line[:-1]+';\n') ```
34,839,184
Which approach is best when we want to deploy two websites on the same AWS EC2 instance?

* two separate Docker containers, each containing one Django project
* one Docker container containing two separate Django projects

if the two of them are basic django-cms projects and we know they won't expand in the future (neither in Python package dependencies nor in vertical or horizontal scaling). My purpose is to deploy 10 low-traffic django-cms websites on the same AWS EC2 instance...

P.S. I'm using Elastic Beanstalk
2016/01/17
[ "https://Stackoverflow.com/questions/34839184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4566737/" ]
I am sure the answer is "It depends", but I believe the whole reason for using Docker is to isolate your environment and gain flexibility. So what happens if one project uses a bunch of Python packages, and the other uses a bunch of other packages? Worse, what if some of them conflict with each other? Not to mention the case where a package may require some changes to the OS. And what happens if you need to scale and maybe move to two separate EC2 instances? I also believe you want to isolate changes to one project when fixing the other, rather than being forced to always re-deploy both projects together. In both of these cases, **using two separate Docker containers** is safer and more flexible/powerful.
What is easier to maintain for you?

* Handling software updates?
* Redeploying sources?
* Reconfiguring each app?

I like separation, and therefore I would define two separate Docker containers, but then you're in need of a load balancer in front, because each container will need a separate port. The load balancer itself will need either domains or subdomains in order to separate requests for each container. Examples:

* www.app1.com -> container1
* www.app2.com -> container2

Example with subdomains:

* app1.example.com -> container1
* app2.example.com -> container2

Possible load balancers:

* Apache on the same EC2 instance.
* Nginx on the same EC2 instance. I have an example container here: [blacklabelops/nginx](https://hub.docker.com/r/blacklabelops/nginx/)
* AmazonWS Load Balancer included in EC2.

Benefits of a load balancer in front:

* Minimize downtime -> Either application can have an update without any downtime for the other.
* Enables high availability -> Possible blue/white deployments of one app. Depends on your app.

You can circumvent this complexity by using your example tutorial where everything is inside the same container. You will also have all the configuration details in the same place.
34,839,184
Which approach is best when we want to deploy two websites on the same AWS EC2 instance?

* two separate Docker containers, each containing one Django project
* one Docker container containing two separate Django projects

if the two of them are basic django-cms projects and we know they won't expand in the future (neither in Python package dependencies nor in vertical or horizontal scaling). My purpose is to deploy 10 low-traffic django-cms websites on the same AWS EC2 instance...

P.S. I'm using Elastic Beanstalk
2016/01/17
[ "https://Stackoverflow.com/questions/34839184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4566737/" ]
I am sure the answer is "It depends", but I believe the whole reason for using Docker is to isolate your environment and gain flexibility. So what happens if one project uses a bunch of Python packages, and the other uses a bunch of other packages? Worse, what if some of them conflict with each other? Not to mention the case where a package may require some changes to the OS. And what happens if you need to scale and maybe move to two separate EC2 instances? I also believe you want to isolate changes to one project when fixing the other, rather than being forced to always re-deploy both projects together. In both of these cases, **using two separate Docker containers** is safer and more flexible/powerful.
I don't know why you don't choose the simpler option: 1. 2 x Python virtual environments; one for each django application. 2. 2 x uwsgi master processes, one for each django application 3. 1 x supervisor process to manage the uwsgi threads 4. 1 x nginx mapped correctly for both applications. This is how things were setup and have been working for years before docker became everyone's favorite toy.
34,839,184
Which approach is best when we want to deploy two websites on the same AWS EC2 instance?

* two separate Docker containers, each containing one Django project
* one Docker container containing two separate Django projects

if the two of them are basic django-cms projects and we know they won't expand in the future (neither in Python package dependencies nor in vertical or horizontal scaling). My purpose is to deploy 10 low-traffic django-cms websites on the same AWS EC2 instance...

P.S. I'm using Elastic Beanstalk
2016/01/17
[ "https://Stackoverflow.com/questions/34839184", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4566737/" ]
What is easier to maintain for you?

* Handling software updates?
* Redeploying sources?
* Reconfiguring each app?

I like separation, and therefore I would define two separate Docker containers, but then you're in need of a load balancer in front, because each container will need a separate port. The load balancer itself will need either domains or subdomains in order to separate requests for each container. Examples:

* www.app1.com -> container1
* www.app2.com -> container2

Example with subdomains:

* app1.example.com -> container1
* app2.example.com -> container2

Possible load balancers:

* Apache on the same EC2 instance.
* Nginx on the same EC2 instance. I have an example container here: [blacklabelops/nginx](https://hub.docker.com/r/blacklabelops/nginx/)
* AmazonWS Load Balancer included in EC2.

Benefits of a load balancer in front:

* Minimize downtime -> Either application can have an update without any downtime for the other.
* Enables high availability -> Possible blue/white deployments of one app. Depends on your app.

You can circumvent this complexity by using your example tutorial where everything is inside the same container. You will also have all the configuration details in the same place.
I don't know why you don't choose the simpler option: 1. 2 x Python virtual environments; one for each django application. 2. 2 x uwsgi master processes, one for each django application 3. 1 x supervisor process to manage the uwsgi threads 4. 1 x nginx mapped correctly for both applications. This is how things were setup and have been working for years before docker became everyone's favorite toy.
67,404,079
I want to read a file and return both the words and the spaces in the file with Python, and I don't want to go character by character. I already used:

```
def openfile(name_file) :
    with open(name_file) as f :
        l = re.split(' ',re.sub('\n',' ',f.read()))
        sentence = []
        for i in l :
            sentence.append(i)
        print(sentence)
```

input:

```
Clustalo O(1.2.4) multiple sequence alignement
id_ref ATGFDFVREF--SFERFSRSFVSRVSVSVRVSFDFVEGREHEHZ
id_iso ADEFZRVSDFVSSVDFSVSEFVDCSZF--ZEVVDSVZRVEFDFV
-------------- ------- ------------- - ---
```

output on my script:

```
['clustal','O(1.2.4)','multiple','sequence','alignement', etc...]
```

expected output:

```
['clustal','','O(1.2.4)','','multiple','','sequence','','alignement', etc...]
```
2021/05/05
[ "https://Stackoverflow.com/questions/67404079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15844030/" ]
This is not the best solution for this, but you can do something like this: ``` import re def openfile(name_file): with open(name_file) as f: original_list = [] lines = f.readlines() for line in lines: li = line.split(' ') for item in li: if item != '': original_list.append(item.strip('\n')) original_list.append('') print(original_list) ``` Output: ``` ['clustal', '', 'O(1.2.4)', '', 'multiple', '', 'sequence', '', 'alignement10', ''] ``` If you don't need the extra `''` at the end you can simply remove it with ``` original_list.pop() ```
Add a parameter on the end of print ```py def openfile(name_file) : with open(name_file) as f : l = re.split(' ',re.sub('\n',' ',f.read())) for i in l : print('i :', i, '\ni : ') ```
67,404,079
I want to read a file and return both the words and the spaces in the file with Python, and I don't want to go character by character. I already used:

```
def openfile(name_file) :
    with open(name_file) as f :
        l = re.split(' ',re.sub('\n',' ',f.read()))
        sentence = []
        for i in l :
            sentence.append(i)
        print(sentence)
```

input:

```
Clustalo O(1.2.4) multiple sequence alignement
id_ref ATGFDFVREF--SFERFSRSFVSRVSVSVRVSFDFVEGREHEHZ
id_iso ADEFZRVSDFVSSVDFSVSEFVDCSZF--ZEVVDSVZRVEFDFV
-------------- ------- ------------- - ---
```

output on my script:

```
['clustal','O(1.2.4)','multiple','sequence','alignement', etc...]
```

expected output:

```
['clustal','','O(1.2.4)','','multiple','','sequence','','alignement', etc...]
```
2021/05/05
[ "https://Stackoverflow.com/questions/67404079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15844030/" ]
Your code is OK, but there are a couple of things you should pay attention to.

First, the txt file that contains your characters: in operating systems, each line ends with `\n` when you go to the next line, but here you don't have any newline character because it is a single line.

Second, `re.split` makes a list of strings based on the pattern you give it. You gave it a pattern consisting of a space character, so the string is split on spaces and the output list will not include the spaces!

So you have two options to make your code work.

**Option 1**: if all of your characters in the txt file are on one line, replace every space with two spaces; that creates an empty string between each word. Txt file like this:

```
Clustalo O(1.2.4) multiple sequence alignement
```

code.py:

```
import re

def openfile(name_file) :
    with open(name_file, "r") as f :
        l = re.split(' ',re.sub(' ','  ',f.read()))
        for i in l :
            print('i :', i)
```

output:

```
i : Clustalo
i :
i : O(1.2.4)
i :
i : multiple
i :
i : sequence
i :
i : alignement
```

**Option 2**: if your txt file is like this:

```
Clustalo
O(1.2.4)
multiple
sequence
alignement
```

replace every `\n` character with two spaces (note: don't include a space after each word, or make the replacement a single space instead). code.py:

```
import re

def openfile(name_file) :
    with open(name_file, "r") as f :
        l = re.split(' ',re.sub('\n','  ',f.read()))
        for i in l :
            print('i :', i)
```

output:

```
i : Clustalo
i :
i : O(1.2.4)
i :
i : multiple
i :
i : sequence
i :
i : alignement
```
Add a parameter on the end of print ```py def openfile(name_file) : with open(name_file) as f : l = re.split(' ',re.sub('\n',' ',f.read())) for i in l : print('i :', i, '\ni : ') ```
67,404,079
I want to read a file and return both the words and the spaces in the file with Python, and I don't want to go character by character. I already used:

```
def openfile(name_file) :
    with open(name_file) as f :
        l = re.split(' ',re.sub('\n',' ',f.read()))
        sentence = []
        for i in l :
            sentence.append(i)
        print(sentence)
```

input:

```
Clustalo O(1.2.4) multiple sequence alignement
id_ref ATGFDFVREF--SFERFSRSFVSRVSVSVRVSFDFVEGREHEHZ
id_iso ADEFZRVSDFVSSVDFSVSEFVDCSZF--ZEVVDSVZRVEFDFV
-------------- ------- ------------- - ---
```

output on my script:

```
['clustal','O(1.2.4)','multiple','sequence','alignement', etc...]
```

expected output:

```
['clustal','','O(1.2.4)','','multiple','','sequence','','alignement', etc...]
```
2021/05/05
[ "https://Stackoverflow.com/questions/67404079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15844030/" ]
This is not the best solution for this, but you can do something like this: ``` import re def openfile(name_file): with open(name_file) as f: original_list = [] lines = f.readlines() for line in lines: li = line.split(' ') for item in li: if item != '': original_list.append(item.strip('\n')) original_list.append('') print(original_list) ``` Output: ``` ['clustal', '', 'O(1.2.4)', '', 'multiple', '', 'sequence', '', 'alignement10', ''] ``` If you don't need the extra `''` at the end you can simply remove it with ``` original_list.pop() ```
Your code is OK, but there are a couple of things you should pay attention to.

First, the txt file that contains your characters: in operating systems, each line ends with `\n` when you go to the next line, but here you don't have any newline character because it is a single line.

Second, `re.split` makes a list of strings based on the pattern you give it. You gave it a pattern consisting of a space character, so the string is split on spaces and the output list will not include the spaces!

So you have two options to make your code work.

**Option 1**: if all of your characters in the txt file are on one line, replace every space with two spaces; that creates an empty string between each word. Txt file like this:

```
Clustalo O(1.2.4) multiple sequence alignement
```

code.py:

```
import re

def openfile(name_file) :
    with open(name_file, "r") as f :
        l = re.split(' ',re.sub(' ','  ',f.read()))
        for i in l :
            print('i :', i)
```

output:

```
i : Clustalo
i :
i : O(1.2.4)
i :
i : multiple
i :
i : sequence
i :
i : alignement
```

**Option 2**: if your txt file is like this:

```
Clustalo
O(1.2.4)
multiple
sequence
alignement
```

replace every `\n` character with two spaces (note: don't include a space after each word, or make the replacement a single space instead). code.py:

```
import re

def openfile(name_file) :
    with open(name_file, "r") as f :
        l = re.split(' ',re.sub('\n','  ',f.read()))
        for i in l :
            print('i :', i)
```

output:

```
i : Clustalo
i :
i : O(1.2.4)
i :
i : multiple
i :
i : sequence
i :
i : alignement
```
54,879,916
I've the following string in python, example: ``` "Peter North / John West" ``` Note that there are two spaces before and after the forward slash. What should I do such that I can clean it to become ``` "Peter North_John West" ``` I tried using regex but I am not exactly sure how. Should I use re.sub or pandas.replace?
2019/02/26
[ "https://Stackoverflow.com/questions/54879916", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11117700/" ]
You can use

```
import re

a = "Peter North  /  John West"
a = re.sub(' +/ +','_',a)
```

A slash preceded and followed by any number of spaces (at least one on each side) will be replaced by an underscore with this pattern.
In case of varying number of white spaces before and after `/`: ``` import re re.sub("\s+/\s+", "_", "Peter North / John West") # Peter North_John West ```
10,184,476
I'm making a Django page that has a sidebar with some info that is loaded from external websites(e.g. bus arrival times). I'm new to web development and I recognize this as a bottleneck. As it is, the page hangs for a fraction of a second as it loads the data from the other sites. It doesn't display anything until it gets this info because it runs python scripts to get the data before baking it into the html. Ideally, it would display the majority of the page loaded directly off my web server and then have a little "loading" gif or something until it actually manages to grab the data before displaying that. How can I achieve this? I presume javascript will be useful? How can I get it to integrate with my existing poller scripts?
2012/04/17
[ "https://Stackoverflow.com/questions/10184476", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1337686/" ]
You probably don't need up-to-the-second information, so have [another process](http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/) load the data into a cache, and have your website read it from the local cache.
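A rough sketch of that split, assuming Django's cache framework is configured; the poller import, cache key, and project name below are hypothetical:

```
# poll_sidebar.py -- standalone script run by cron or another process,
# so web requests never wait on the external sites.
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')  # hypothetical project

from django.core.cache import cache
from myapp.pollers import fetch_bus_arrivals  # hypothetical existing poller function

def refresh_sidebar_data():
    data = fetch_bus_arrivals()                        # the slow external call
    cache.set('sidebar_bus_times', data, timeout=120)  # keep it for 2 minutes

if __name__ == '__main__':
    refresh_sidebar_data()
```

In the view you would then just read `cache.get('sidebar_bus_times', [])`, so the page renders immediately from whatever was fetched last.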
The easiest but not most beautiful way to integrate something like this would be with iframes. Just make iframes for the secondary stuff, and they will load themselves in due time. No javascript required.
48,282,074
I'm trying to learn python for data science application and signed up for a course. I am doing the exercises and got stuck even though my answer is the same as the one in the answer key. I'm basically trying to add two new items to a dictionary with the following piece of code: ``` # Create a new key-value pair for 'Inner London' location_dict['Inner London'] = location_dict['Camden'] + location_dict['Southwark'] # Create a new key-value pair for 'Outer London' location_dict['Outer London'] = location_dict['Brent'] + location_dict['Redbridge'] ``` but when I run it I am getting a `TypeError`.
2018/01/16
[ "https://Stackoverflow.com/questions/48282074", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9224360/" ]
I had a list in there along with the sets; I think that was what was throwing the error. I just converted the list to a set and joined them with `.union`. Thank you @Adelin, @Bilkokuya, @Piinthesky, and @Kurast for your quick responses and input!
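A minimal sketch of that fix, with made-up values, assuming the dictionary values are meant to be sets:

```
# Hypothetical data: one value accidentally stored as a list instead of a set
location_dict = {
    'Camden': {'NW1', 'WC1'},
    'Southwark': ['SE1', 'SE15'],   # the list that caused the TypeError on +
}

# Convert any lists to sets, then combine with set.union instead of +
location_dict = {k: set(v) for k, v in location_dict.items()}
location_dict['Inner London'] = location_dict['Camden'].union(location_dict['Southwark'])
```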
for union of sets you can use `|` ``` location_dict['Camden'] | location_dict['Southwark'] ```
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
There is XMODEM module on PyPi. It handles both sending and receiving of data with XModem. Below is sample of its usage: ``` import serial try: from cStringIO import StringIO except: from StringIO import StringIO from xmodem import XMODEM, NAK from time import sleep def readUntil(char = None): def serialPortReader(): while True: tmp = port.read(1) if not tmp or (char and char == tmp): break yield tmp return ''.join(serialPortReader()) def getc(size, timeout=1): return port.read(size) def putc(data, timeout=1): port.write(data) sleep(0.001) # give device time to prepare new buffer and start sending it port = serial.Serial(port='COM5',parity=serial.PARITY_NONE,bytesize=serial.EIGHTBITS,stopbits=serial.STOPBITS_ONE,timeout=0,xonxoff=0,rtscts=0,dsrdtr=0,baudrate=115200) port.write("command that initiates xmodem send from device\r\n") sleep(0.02) # give device time to handle command and start sending response readUntil(NAK) buffer = StringIO() XMODEM(getc, putc).recv(buffer, crc_mode = 0, quiet = 1) contents = buffer.getvalue() buffer.close() readUntil() ```
I think you’re stuck with rolling your own. You might be able to use [sz](http://linux.about.com/library/cmd/blcmdl1_sz.htm), which implements X/Y/ZMODEM. You could call out to the binary, or port the necessary code to Python.
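A rough sketch of the "call out to the binary" route, assuming the lrzsz `rz` binary and its XMODEM mode (`-X`) are available and that it accepts the output filename as an argument; the device path and file name are placeholders, and the port is assumed to be configured already (baud rate etc., e.g. with `stty`):

```
import subprocess

SERIAL_DEVICE = '/dev/ttyS0'      # placeholder: the sensor's serial port
OUTPUT_FILE = 'sensor_dump.bin'   # XMODEM carries no filename, so one must be supplied

port = open(SERIAL_DEVICE, 'r+b', 0)   # unbuffered binary access to the port
try:
    # rz speaks the protocol on stdin/stdout, so point both at the port
    status = subprocess.call(['rz', '-X', OUTPUT_FILE], stdin=port, stdout=port)
finally:
    port.close()

if status != 0:
    raise RuntimeError('rz exited with status %d' % status)
```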
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Here is a link to [XMODEM](http://www.programmersheaven.com/download/2167/download.aspx) documentation that will be useful if you have to write your own. It has detailed description of the original XMODEM, XMODEM-CRC and XMODEM-1K. You might also find this [c-code](http://www.menie.org/georges/embedded/index.html) of interest.
You can try using [SWIG](http://www.swig.org/) to create Python bindings for the C libraries linked above (or any other C/C++ libraries you find online). That will allow you to use the same C API directly from Python. The actual implementation will of course still be in C/C++, since SWIG merely creates bindings to the functions of interest.
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
```
# XMODEM control bytes (128-byte blocks with a simple 8-bit checksum)
SOH = chr(0x01)  # start of a 128-byte block
EOT = chr(0x04)  # end of transmission
ACK = chr(0x06)  # receiver accepted the block
NAK = chr(0x15)  # receiver requests (re)transmission

def xmodem_send(serial, file):
    t, anim = 0, '|/-\\'
    serial.setTimeout(1)
    # wait for the receiver to start the transfer by sending NAK (give up after ~60 reads)
    while 1:
        if serial.read(1) != NAK:
            t = t + 1
            print anim[t%len(anim)],'\r',
            if t == 60 : return False
        else:
            break
    p = 1
    s = file.read(128)
    while s:
        # pad the final block to 128 bytes and compute the 8-bit checksum
        s = s + '\xFF'*(128 - len(s))
        chk = 0
        for c in s:
            chk += ord(c)
        while 1:
            serial.write(SOH)
            serial.write(chr(p))
            serial.write(chr(255 - p))
            serial.write(s)
            serial.write(chr(chk%256))
            serial.flush()
            answer = serial.read(1)
            if answer == NAK: continue   # receiver asked for a resend
            if answer == ACK: break      # block accepted, move on
            return False                 # unexpected reply: abort
        s = file.read(128)
        p = (p + 1)%256
        print '.',
    serial.write(EOT)
    return True
```
There is a python module that you can use -> <https://pypi.python.org/pypi/xmodem> You can see the transfer protocol in <http://pythonhosted.org//xmodem/xmodem.html>
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
```
# XMODEM control bytes (128-byte blocks with a simple 8-bit checksum)
SOH = chr(0x01)  # start of a 128-byte block
EOT = chr(0x04)  # end of transmission
ACK = chr(0x06)  # receiver accepted the block
NAK = chr(0x15)  # receiver requests (re)transmission

def xmodem_send(serial, file):
    t, anim = 0, '|/-\\'
    serial.setTimeout(1)
    # wait for the receiver to start the transfer by sending NAK (give up after ~60 reads)
    while 1:
        if serial.read(1) != NAK:
            t = t + 1
            print anim[t%len(anim)],'\r',
            if t == 60 : return False
        else:
            break
    p = 1
    s = file.read(128)
    while s:
        # pad the final block to 128 bytes and compute the 8-bit checksum
        s = s + '\xFF'*(128 - len(s))
        chk = 0
        for c in s:
            chk += ord(c)
        while 1:
            serial.write(SOH)
            serial.write(chr(p))
            serial.write(chr(255 - p))
            serial.write(s)
            serial.write(chr(chk%256))
            serial.flush()
            answer = serial.read(1)
            if answer == NAK: continue   # receiver asked for a resend
            if answer == ACK: break      # block accepted, move on
            return False                 # unexpected reply: abort
        s = file.read(128)
        p = (p + 1)%256
        print '.',
    serial.write(EOT)
    return True
```
Here is a link to [XMODEM](http://www.programmersheaven.com/download/2167/download.aspx) documentation that will be useful if you have to write your own. It has detailed description of the original XMODEM, XMODEM-CRC and XMODEM-1K. You might also find this [c-code](http://www.menie.org/georges/embedded/index.html) of interest.
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I think you’re stuck with rolling your own. You might be able to use [sz](http://linux.about.com/library/cmd/blcmdl1_sz.htm), which implements X/Y/ZMODEM. You could call out to the binary, or port the necessary code to Python.
You can try using [SWIG](http://www.swig.org/) to create Python bindings for the C libraries linked above (or any other C/C++ libraries you find online). That will allow you to use the same C API directly from Python. The actual implementation will of course still be in C/C++, since SWIG merely creates bindings to the functions of interest.
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
```
# XMODEM control bytes (128-byte blocks with a simple 8-bit checksum)
SOH = chr(0x01)  # start of a 128-byte block
EOT = chr(0x04)  # end of transmission
ACK = chr(0x06)  # receiver accepted the block
NAK = chr(0x15)  # receiver requests (re)transmission

def xmodem_send(serial, file):
    t, anim = 0, '|/-\\'
    serial.setTimeout(1)
    # wait for the receiver to start the transfer by sending NAK (give up after ~60 reads)
    while 1:
        if serial.read(1) != NAK:
            t = t + 1
            print anim[t%len(anim)],'\r',
            if t == 60 : return False
        else:
            break
    p = 1
    s = file.read(128)
    while s:
        # pad the final block to 128 bytes and compute the 8-bit checksum
        s = s + '\xFF'*(128 - len(s))
        chk = 0
        for c in s:
            chk += ord(c)
        while 1:
            serial.write(SOH)
            serial.write(chr(p))
            serial.write(chr(255 - p))
            serial.write(s)
            serial.write(chr(chk%256))
            serial.flush()
            answer = serial.read(1)
            if answer == NAK: continue   # receiver asked for a resend
            if answer == ACK: break      # block accepted, move on
            return False                 # unexpected reply: abort
        s = file.read(128)
        p = (p + 1)%256
        print '.',
    serial.write(EOT)
    return True
```
You can try using [SWIG](http://www.swig.org/) to create Python bindings for the C libraries linked above (or any other C/C++ libraries you find online). That will allow you to use the same C API directly from Python. The actual implementation will of course still be in C/C++, since SWIG merely creates bindings to the functions of interest.
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
There is XMODEM module on PyPi. It handles both sending and receiving of data with XModem. Below is sample of its usage: ``` import serial try: from cStringIO import StringIO except: from StringIO import StringIO from xmodem import XMODEM, NAK from time import sleep def readUntil(char = None): def serialPortReader(): while True: tmp = port.read(1) if not tmp or (char and char == tmp): break yield tmp return ''.join(serialPortReader()) def getc(size, timeout=1): return port.read(size) def putc(data, timeout=1): port.write(data) sleep(0.001) # give device time to prepare new buffer and start sending it port = serial.Serial(port='COM5',parity=serial.PARITY_NONE,bytesize=serial.EIGHTBITS,stopbits=serial.STOPBITS_ONE,timeout=0,xonxoff=0,rtscts=0,dsrdtr=0,baudrate=115200) port.write("command that initiates xmodem send from device\r\n") sleep(0.02) # give device time to handle command and start sending response readUntil(NAK) buffer = StringIO() XMODEM(getc, putc).recv(buffer, crc_mode = 0, quiet = 1) contents = buffer.getvalue() buffer.close() readUntil() ```
There is a python module that you can use -> <https://pypi.python.org/pypi/xmodem> You can see the transfer protocol in <http://pythonhosted.org//xmodem/xmodem.html>
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
```
# XMODEM control bytes (128-byte blocks with a simple 8-bit checksum)
SOH = chr(0x01)  # start of a 128-byte block
EOT = chr(0x04)  # end of transmission
ACK = chr(0x06)  # receiver accepted the block
NAK = chr(0x15)  # receiver requests (re)transmission

def xmodem_send(serial, file):
    t, anim = 0, '|/-\\'
    serial.setTimeout(1)
    # wait for the receiver to start the transfer by sending NAK (give up after ~60 reads)
    while 1:
        if serial.read(1) != NAK:
            t = t + 1
            print anim[t%len(anim)],'\r',
            if t == 60 : return False
        else:
            break
    p = 1
    s = file.read(128)
    while s:
        # pad the final block to 128 bytes and compute the 8-bit checksum
        s = s + '\xFF'*(128 - len(s))
        chk = 0
        for c in s:
            chk += ord(c)
        while 1:
            serial.write(SOH)
            serial.write(chr(p))
            serial.write(chr(255 - p))
            serial.write(s)
            serial.write(chr(chk%256))
            serial.flush()
            answer = serial.read(1)
            if answer == NAK: continue   # receiver asked for a resend
            if answer == ACK: break      # block accepted, move on
            return False                 # unexpected reply: abort
        s = file.read(128)
        p = (p + 1)%256
        print '.',
    serial.write(EOT)
    return True
```
I think you’re stuck with rolling your own. You might be able to use [sz](http://linux.about.com/library/cmd/blcmdl1_sz.htm), which implements X/Y/ZMODEM. You could call out to the binary, or port the necessary code to Python.
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
There is XMODEM module on PyPi. It handles both sending and receiving of data with XModem. Below is sample of its usage: ``` import serial try: from cStringIO import StringIO except: from StringIO import StringIO from xmodem import XMODEM, NAK from time import sleep def readUntil(char = None): def serialPortReader(): while True: tmp = port.read(1) if not tmp or (char and char == tmp): break yield tmp return ''.join(serialPortReader()) def getc(size, timeout=1): return port.read(size) def putc(data, timeout=1): port.write(data) sleep(0.001) # give device time to prepare new buffer and start sending it port = serial.Serial(port='COM5',parity=serial.PARITY_NONE,bytesize=serial.EIGHTBITS,stopbits=serial.STOPBITS_ONE,timeout=0,xonxoff=0,rtscts=0,dsrdtr=0,baudrate=115200) port.write("command that initiates xmodem send from device\r\n") sleep(0.02) # give device time to handle command and start sending response readUntil(NAK) buffer = StringIO() XMODEM(getc, putc).recv(buffer, crc_mode = 0, quiet = 1) contents = buffer.getvalue() buffer.close() readUntil() ```
Here is a link to [XMODEM](http://www.programmersheaven.com/download/2167/download.aspx) documentation that will be useful if you have to write your own. It has detailed description of the original XMODEM, XMODEM-CRC and XMODEM-1K. You might also find this [c-code](http://www.menie.org/georges/embedded/index.html) of interest.
358,471
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
2008/12/11
[ "https://Stackoverflow.com/questions/358471", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Here is a link to [XMODEM](http://www.programmersheaven.com/download/2167/download.aspx) documentation that will be useful if you have to write your own. It has detailed description of the original XMODEM, XMODEM-CRC and XMODEM-1K. You might also find this [c-code](http://www.menie.org/georges/embedded/index.html) of interest.
There is a python module that you can use -> <https://pypi.python.org/pypi/xmodem> You can see the transfer protocol in <http://pythonhosted.org//xmodem/xmodem.html>
12,243,129
For a research project I am trying to boot as many VM's as possible, using python libvirt bindings, in KVM under Ubuntu server 12.04. All the VM's are set to idle after boot, and to use a minimum amount of memory. At the most I was able to boot 1000 VM's on a single host, at which point the kernel (Linux 3x) became unresponsive, even if both CPU- and memory usage is nowhere near the limits (48 cores AMD, 128GB mem.) Before this, the booting process became successively slower, after a couple of hundred VM's. I assume this must be related to the KVM/Qemu driver, as the linux kernel itself should have no problem handling this few processes. However, I did read that the Qemu driver was now multi-threaded. Any ideas of what the cause of this slowness may be - or at least where I should start looking?
2012/09/03
[ "https://Stackoverflow.com/questions/12243129", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1642856/" ]
You are booting all the VMs using qemu-kvm, right? And after hundreds of VMs you find that booting becomes successively slower. So when you feel it slowing down, stop using KVM and just boot using plain QEMU; I expect you will see the same slowness. My guess is that after that many VMs, KVM (the hardware support) gets exhausted, because KVM is nothing but a software layer over a few added hardware registers. So KVM might be the culprit here. Also, what is the purpose of this experiment?
The following virtual hardware limits for guests have been tested. We ensure the host and VMs install and work successfully, even when reaching the limits, and there are no major performance regressions (CPU, memory, disk, network) since the last release (SUSE Linux Enterprise Server 11 SP1).

* Max. Guest RAM Size --- 512 GB
* Max. Virtual CPUs per Guest --- 64
* Max. Virtual Network Devices per Guest --- 8
* Max. Block Devices per Guest --- 4 emulated (IDE), 20 para-virtual (using virtio-blk)
* Max. Number of VM Guests per VM Host Server --- the limit is defined as the total number of virtual CPUs in all guests being no greater than 8 times the number of CPU cores in the host

For more limitations of KVM, please refer to [this](http://doc.opensuse.org/products/draft/SLES/SLES-kvm_sd_draft/cha.kvm.limits.html) document.
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
I know this is an old question, but since the "right" answer has changed thanks to Heroku offering support for `nltk`, I thought it might be worthwhile to answer. Heroku now supports `nltk`. If you need to download something for `nltk` (wordnet in this example, or perhaps stopwords or a corpora), you can do so by simply including an `nltk.txt` file in the same root directory where you have your `Procfile` and `requirements.txt`. In your `nltk.txt` file you list each item you would like to download. For a project I just deployed I needed stopwords and wordnet, so my `nltk.txt` looks like this: ``` stopwords wordnet ``` Pretty straightforward. And, of course, make sure you have the appropriate version of `nltk` specified in your `Pipfile` or `requirements.txt`. For the ground truth, visit <https://devcenter.heroku.com/articles/python-nltk>.
I was also facing this issue when I tried to run `lemmatizer.lemmatize('goes')`. It actually happens because the required packages have not been downloaded, so try downloading them with `nltk.download('wordnet')` and `nltk.download('omw-1.4')`; that may solve many problems like this one. Thank you.
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
This one works: For Mac OS users. ``` python -m nltk.downloader -d /usr/local/share/nltk_data wordnet ```
Heroku now officially supports NLTK data, built-in! <https://devcenter.heroku.com/articles/python-nltk>
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
This one works: For Mac OS users. ``` python -m nltk.downloader -d /usr/local/share/nltk_data wordnet ```
I faced the exact same problem while deploying a chatbot on the Heroku platform. Although the answer from follyroof is a fool-proof solution, in many cases it increases the size of the repository drastically. So I used `nltk.download('PACKAGE')` in my app.py file. This way, whenever app.py is run, the dependencies are automatically downloaded.
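A small sketch of that approach, assuming wordnet is the corpus you need; the `LookupError` check is just one way to avoid re-downloading on every start:

```
# app.py (sketch): make sure the WordNet corpus is present before the app uses it
import nltk

def ensure_nltk_data():
    try:
        nltk.data.find('corpora/wordnet')   # raises LookupError if not installed yet
    except LookupError:
        nltk.download('wordnet')

ensure_nltk_data()
```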
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
Heroku now officially supports NLTK data, built-in! <https://devcenter.heroku.com/articles/python-nltk>
On Mac: I still needed to download the `omw-1.4` data. The code was running from a Python file, and the `nltk_data/` directory is in the same directory as the Python file.

`nltk.download('wordnet', "nltk_data/")`

`nltk.download('omw-1.4', "nltk_data/")`

`nltk.data.path.append('nltk_data/')`
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
I was getting this issue. Those who are not working in a virtual environment will need to download to the following directory on Ubuntu:

```
/usr/share/nltk_data/corpora/wordnet
```

Instead of wordnet it could be brown or whatever corpus you need. You can run this command directly in your terminal if you want to download the corpus:

```
$ sudo python -m nltk.downloader -d /usr/share/nltk_data wordnet
```

Again, instead of wordnet it could be brown.
I faced the same error. This workaround by *Fred Foo* helped me to fix the issue. The following works for me:

```
# 1) execute the code written below
# 2) an NLTK Download window will open
# 3) select the "Corpora" tab and scroll down to "wordnet"
# 4) double-click to install

nltk.download()

from nltk.corpus import wordnet
```

[Import WordNet In NLTK](https://stackoverflow.com/questions/6661108/import-wordnet-in-nltk)
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
I just had this same problem. What ended up working for me is creating an 'nltk\_data' directory in the application's folder itself, downloading the corpus to that directory and adding a line to my code that lets the nltk know to look in that directory. You can do this all locally and then push the changes to Heroku. So, supposing my python application is in a directory called "myapp/" **Step 1: Create the directory** ``` cd myapp/ mkdir nltk_data ``` **Step 2: Download Corpus to New Directory** ``` python -m nltk.downloader ``` This'll pop up the `nltk` downloader. Set your *Download Directory* to `whatever_the_absolute_path_to_myapp_is/nltk_data/`. If you're using the GUI downloader, the download directory is set through a text field on the bottom of the UI. If you're using the command line one, you set it in the config menu. Once the downloader knows to point to your newly created `nltk_data` directory, download your corpus. Or in one step from Python code: ``` nltk.download("wordnet", "whatever_the_absolute_path_to_myapp_is/nltk_data/") ``` **Step 3: Let nltk Know Where to Look** `ntlk` looks for data,resources,etc. in the locations specified in the `nltk.data.path` variable. All you need to do is add `nltk.data.path.append('./nltk_data/')` to the python file actually using nltk, and it will look for corpora, tokenizers, and such in there in addition to the default paths. **Step 4: Send it to Heroku** ``` git add nltk_data/ git commit -m 'super useful commit message' git push heroku master ``` That should work! It did for me anyway. One thing worth noting is that the path from the python file executing nltk stuff to the nltk\_data directory may be different depending on how you've structured your application, so just account for that when you do `nltk.data.path.append('path_to_nltk_data')`
Heroku now officially supports NLTK data, built-in! <https://devcenter.heroku.com/articles/python-nltk>
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
For Mac OS users only. With `python -m nltk.downloader -d /usr/share/nltk_data wordnet`, the corpus data can't be downloaded directly to the `/usr/share/nltk_data` folder; the error reports "no permission". There are two solutions:

1. Add the additional permission change to the Mac system; for details refer to [Operation Not Permitted when on root El capitan (rootless disabled)](https://stackoverflow.com/questions/32659348/operation-not-permitted-when-on-root-el-capitan-rootless-disabled). However, I don't want to change the Mac default settings just for this corpus, so I went for the second solution.
2. * Download the corpus to any directory you have access to: `python -m nltk.downloader -d some_user_accessable_directory wordnet`. Note that this way you only download the required corpora, e.g. wordnet or reuters, instead of the whole corpora collection from nltk.
   * Add the path to the nltk path. In your py file, add the following lines: `import nltk` and `nltk.data.path.append('nltk_data')`
This one works: For Mac OS users. ``` python -m nltk.downloader -d /usr/local/share/nltk_data wordnet ```
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
I faced the same **problem**, tried this solution, and it works. I just put these lines:

```
import nltk
nltk.download('wordnet')
```

at the top of the code, and it ran without a problem, so give it a try; it may help you.
I was also facing this issue when I tried to run `lemmatizer.lemmatize('goes')`. It actually happens because the required packages have not been downloaded, so try downloading them with `nltk.download('wordnet')` and `nltk.download('omw-1.4')`; that may solve many problems of this kind. Thank you.
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
Heroku now officially supports NLTK data, built-in! <https://devcenter.heroku.com/articles/python-nltk>
I was also facing this issue when I tried to run `lemmatizer.lemmatize('goes')`. It actually happens because the required packages have not been downloaded, so try downloading them with `nltk.download('wordnet')` and `nltk.download('omw-1.4')`; that may solve many problems of this kind. Thank you.
13,965,823
I'm trying to get NLTK and wordnet working on Heroku. I've already done ``` heroku run python nltk.download() wordnet pip install -r requirements.txt ``` But I get this error: ``` Resource 'corpora/wordnet' not found. Please use the NLTK Downloader to obtain the resource: >>> nltk.download() Searched in: - '/app/nltk_data' - '/usr/share/nltk_data' - '/usr/local/share/nltk_data' - '/usr/lib/nltk_data' - '/usr/local/lib/nltk_data' ``` Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
2012/12/20
[ "https://Stackoverflow.com/questions/13965823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1881006/" ]
I faced the same **problem**, tried this solution, and it works. I just put these lines:

```
import nltk
nltk.download('wordnet')
```

at the top of the code, and it ran without a problem, so give it a try; it may help you.
I faced the same error. This workaround by *Fred Foo* helped me fix the issue. The following works for me:

```
# 1) execute the code below
# 2) an NLTK Download window will open
# 3) select the "Corpora" tab and scroll down to "wordnet"
# 4) double-click to install
nltk.download()
from nltk.corpus import wordnet
```

[Import WordNet In NLTK](https://stackoverflow.com/questions/6661108/import-wordnet-in-nltk)
64,734,118
I'm trying to make a discord bot, and when I try to load a .env with load\_dotenv() it doesn't work because it says ``` Traceback (most recent call last): File "/home/fanjin/Documents/Python Projects/Discord Bot/bot.py", line 15, in <module> client.run(TOKEN) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 708, in run return future.result() File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 687, in runner await self.start(*args, **kwargs) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 650, in start await self.login(*args, bot=bot) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 499, in login await self.http.static_login(token.strip(), bot=bot) AttributeError: 'NoneType' object has no attribute 'strip ``` Here's my code for the bot: ``` import os import discord from dotenv import load_dotenv load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') client = discord.Client() @client.event async def on_ready(): print(f'{client.user} has connected to Discord!') client.run(TOKEN) ``` And the save.env file: (It's a fake token) ``` # .env DISCORD_TOKEN={Bzc0NjfUH8fEWFjg2NDMyMjY2.X6coqw.JyiOR89JIH7fFFoyOMufK_1A} ``` Both files are in the same directory, and I even tried to explicitly specify the .env's path with ``` env_path = Path('path/to/file') / '.env' load_dotenv(dotenv_path=env_path) ``` but that also didn't work
2020/11/08
[ "https://Stackoverflow.com/questions/64734118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9509783/" ]
I had the same error trying to load my environment configuration on Ubuntu 20.04 with python-dotenv 0.15.0. I was able to track it down using the Python interpreter, which will log any error encountered while trying to load your environment. When your environment variables are loaded successfully, `load_dotenv()` returns `True`. For me it was an issue with my configuration file (a syntax error) that broke the loading process; all I needed to do was go to my config file and fix the broken syntax. Try passing `verbose=True` when loading your environment variables (from the Python interpreter) to get more info from `load_dotenv`.
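As a quick illustration of that debugging step (just a sketch, not code from the original answer):

```
from dotenv import load_dotenv

# verbose=True makes python-dotenv warn about files it cannot find or parse.
ok = load_dotenv(verbose=True)
print("loaded:", ok)  # False usually points to a missing file or a syntax problem
```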
So this took me a while. My load\_dotenv() was returning True. I had commas after some records which is not correct. Once I removed the commas the variables were working.
64,734,118
I'm trying to make a discord bot, and when I try to load a .env with load\_dotenv() it doesn't work because it says ``` Traceback (most recent call last): File "/home/fanjin/Documents/Python Projects/Discord Bot/bot.py", line 15, in <module> client.run(TOKEN) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 708, in run return future.result() File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 687, in runner await self.start(*args, **kwargs) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 650, in start await self.login(*args, bot=bot) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 499, in login await self.http.static_login(token.strip(), bot=bot) AttributeError: 'NoneType' object has no attribute 'strip ``` Here's my code for the bot: ``` import os import discord from dotenv import load_dotenv load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') client = discord.Client() @client.event async def on_ready(): print(f'{client.user} has connected to Discord!') client.run(TOKEN) ``` And the save.env file: (It's a fake token) ``` # .env DISCORD_TOKEN={Bzc0NjfUH8fEWFjg2NDMyMjY2.X6coqw.JyiOR89JIH7fFFoyOMufK_1A} ``` Both files are in the same directory, and I even tried to explicitly specify the .env's path with ``` env_path = Path('path/to/file') / '.env' load_dotenv(dotenv_path=env_path) ``` but that also didn't work
2020/11/08
[ "https://Stackoverflow.com/questions/64734118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9509783/" ]
##### I was facing a similar issue and found out these three possible solutions/reasons: 1. Check if the syntax in your .env file is correct or not, the original documentation will be the best source - [Python Dotenv](https://pypi.org/project/python-dotenv/) (sample below) ``` DOMAIN=example.org ADMIN_EMAIL=admin@${DOMAIN} ROOT_URL=${DOMAIN}/app ``` 2. The solution which worked for me, was using `find_dotenv()` instead of file path inside `load_dotenv()`, the reason is `load_dotenv()` doesn't load the .env file properly. `find_dotenv()` is a function that automatically finds .env file if it's located in the same folder as your code file. ``` from dotenv import load_dotenv, find_dotenv load_dotenv(find_dotenv()) ``` 3. You can limit your search to the current project folder using `sys.path[1]` to make sure you're reading the intended file. ``` import sys from dotenv import load_dotenv load_dotenv(sys.path[1]) #try .path[0] if 1 doesn't work ``` Since, I moved my `.env` file's inside another subfolder `config`, then I had to provide the full path to `load_dotenv()` to make it work. ``` import sys from dotenv import load_dotenv path = sys.path[1]+'/config/.env' #try .path[0] if 1 doesn't work load_dotenv(path) ``` [edited]
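If the file still does not seem to load, one extra debugging step (my own sketch, not part of the answer above) is to inspect what python-dotenv actually found and parsed:

```
from dotenv import dotenv_values, find_dotenv

path = find_dotenv()  # returns '' if no .env file was found
print("found:", repr(path))

if path:
    config = dotenv_values(path)  # dict of the parsed key/value pairs
    print(sorted(config))         # DISCORD_TOKEN should appear here if it parsed
```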
So this took me a while. My load\_dotenv() was returning True. I had commas after some records which is not correct. Once I removed the commas the variables were working.
64,734,118
I'm trying to make a discord bot, and when I try to load a .env with load\_dotenv() it doesn't work because it says ``` Traceback (most recent call last): File "/home/fanjin/Documents/Python Projects/Discord Bot/bot.py", line 15, in <module> client.run(TOKEN) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 708, in run return future.result() File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 687, in runner await self.start(*args, **kwargs) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 650, in start await self.login(*args, bot=bot) File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 499, in login await self.http.static_login(token.strip(), bot=bot) AttributeError: 'NoneType' object has no attribute 'strip ``` Here's my code for the bot: ``` import os import discord from dotenv import load_dotenv load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') client = discord.Client() @client.event async def on_ready(): print(f'{client.user} has connected to Discord!') client.run(TOKEN) ``` And the save.env file: (It's a fake token) ``` # .env DISCORD_TOKEN={Bzc0NjfUH8fEWFjg2NDMyMjY2.X6coqw.JyiOR89JIH7fFFoyOMufK_1A} ``` Both files are in the same directory, and I even tried to explicitly specify the .env's path with ``` env_path = Path('path/to/file') / '.env' load_dotenv(dotenv_path=env_path) ``` but that also didn't work
2020/11/08
[ "https://Stackoverflow.com/questions/64734118", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9509783/" ]
You need to put the full path. Use

* either `os.path.expanduser('~/Documents/MY_PROJECT/.env')`
* or: `load_dotenv('/home/MY_USER/Documents/MY_PROJECT/.env')`

and it will work.

Or change your current working directory in your code editor to where the ".env" file is (which should be the project folder). Or open the project folder in the menu of your code editor, which should make the project folder the current working directory. On Linux, you can also go to the project folder in the terminal and start the code editor from there, typing for example `codium` or whatever editor you use at the command prompt.

### TL:DR

#### Quote from the other answer

> Since, I moved my .env file's inside another subfolder config, then I had to provide the full path to load_dotenv() to make it work.

This gave me the idea of checking the working directory.

#### Current working directory

`os.getcwd()` gave me a folder further up the tree. Then I copied the ".env" file into that working directory and it worked. Changing the working directory depends on your code editor. I use codium, the open source version of vscode; you may follow for example [Python in VSCode: Set working directory to python file's path everytime](https://stackoverflow.com/questions/56776521/python-in-vscode-set-working-directory-to-python-files-path-everytime)

#### Full path

You can also put the full path. Funny enough, I had checked that before coming here, but I copied the path you get from the terminal, starting with `'~/Documents/MY_PROJECT'`, which does not find the file yet does not raise any alert either: every environment variable I tried was just empty, because the ".env" file itself was never read.
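A robust alternative, as a small sketch of my own (not from the original answer), is to build the path relative to the script file itself so it no longer depends on the working directory:

```
from pathlib import Path
from dotenv import load_dotenv

# Resolve .env next to this script instead of relying on the process's working directory.
env_path = Path(__file__).resolve().parent / ".env"
loaded = load_dotenv(dotenv_path=env_path)
print("loaded:", loaded)  # False means the file was never read
```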
So this took me a while. My load\_dotenv() was returning True. I had commas after some records which is not correct. Once I removed the commas the variables were working.
63,756,673
I'm facing weird issue in my Jupyter-notebook. In my first cell: ``` import sys !{sys.executable} -m pip install numpy !{sys.executable} -m pip install Pillow ``` In the second cell: ``` import numpy as np from PIL import Image ``` But it says : **ModuleNotFoundError: No module named 'numpy'** [![ModuleNotFoundError: No module named 'numpy'](https://i.stack.imgur.com/UZvdk.png)](https://i.stack.imgur.com/UZvdk.png) I have used this command to install Jupyter notebook : ``` sudo apt install python3-notebook jupyter jupyter-core python-ipykernel ``` Additional information : ``` pip --version pip 20.2.2 from /home/maifee/.local/lib/python3.7/site-packages/pip (python 3.7) python --version Python 3.7.5 ```
2020/09/05
[ "https://Stackoverflow.com/questions/63756673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10305444/" ]
Thanks to @suuuehgi. When Jupyter Notebook isn't opened as root: ``` import sys !{sys.executable} -m pip install --user numpy ```
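As an extra sanity check (my own addition, not part of the quoted fix), you can confirm which interpreter the kernel is running and where numpy was imported from once the install has finished:

```
import sys
print(sys.executable)  # the interpreter running this kernel; pip must target this one

import numpy as np
print(np.__version__, np.__file__)  # confirms which installation was picked up
```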
I've had occasional weird install issues with Jupyter Notebooks as well when I'm running a particular virtual environment. Generally, installing with pip directly in the notebook in this form: `!pip install numpy` fixes it. Let me know how it goes.
63,756,673
I'm facing weird issue in my Jupyter-notebook. In my first cell: ``` import sys !{sys.executable} -m pip install numpy !{sys.executable} -m pip install Pillow ``` In the second cell: ``` import numpy as np from PIL import Image ``` But it says : **ModuleNotFoundError: No module named 'numpy'** [![ModuleNotFoundError: No module named 'numpy'](https://i.stack.imgur.com/UZvdk.png)](https://i.stack.imgur.com/UZvdk.png) I have used this command to install Jupyter notebook : ``` sudo apt install python3-notebook jupyter jupyter-core python-ipykernel ``` Additional information : ``` pip --version pip 20.2.2 from /home/maifee/.local/lib/python3.7/site-packages/pip (python 3.7) python --version Python 3.7.5 ```
2020/09/05
[ "https://Stackoverflow.com/questions/63756673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10305444/" ]
I've had occasional weird install issues with Jupyter Notebooks as well when I'm running a particular virtual environment. Generally, installing with pip directly in the notebook in this form: `!pip install numpy` fixes it. Let me know how it goes.
I had a similar issue. Turns out I renamed an upstream path. And I hadn't deactivated my conda env first. When I deactivated the env. ``` conda deactivate ``` Then when I activated it again, everything was as it should have been. ``` conda activate sample ``` Now I am seeing other issues with jupyter themes... but its not impacting my numpy code. So, at least I fixed the "ModuleNotFoundError: No module named 'numpy'" error
63,756,673
I'm facing weird issue in my Jupyter-notebook. In my first cell: ``` import sys !{sys.executable} -m pip install numpy !{sys.executable} -m pip install Pillow ``` In the second cell: ``` import numpy as np from PIL import Image ``` But it says : **ModuleNotFoundError: No module named 'numpy'** [![ModuleNotFoundError: No module named 'numpy'](https://i.stack.imgur.com/UZvdk.png)](https://i.stack.imgur.com/UZvdk.png) I have used this command to install Jupyter notebook : ``` sudo apt install python3-notebook jupyter jupyter-core python-ipykernel ``` Additional information : ``` pip --version pip 20.2.2 from /home/maifee/.local/lib/python3.7/site-packages/pip (python 3.7) python --version Python 3.7.5 ```
2020/09/05
[ "https://Stackoverflow.com/questions/63756673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10305444/" ]
I've had occasional weird install issues with Jupyter Notebooks as well when I'm running a particular virtual environment. Generally, installing with pip directly in the notebook in this form: `!pip install numpy` fixes it. Let me know how it goes.
I have the same problem. My numpy is installed, I am using the same folder as usual. If I try 'conda deactivate', I get the message: ValueError: The python kernel does not appear to be a conda environment. Please use `%pip install` instead. [I added a print of the 'pip install numpy' result and the 'Module not found error' after](https://i.stack.imgur.com/Kf7il.png)
63,756,673
I'm facing weird issue in my Jupyter-notebook. In my first cell: ``` import sys !{sys.executable} -m pip install numpy !{sys.executable} -m pip install Pillow ``` In the second cell: ``` import numpy as np from PIL import Image ``` But it says : **ModuleNotFoundError: No module named 'numpy'** [![ModuleNotFoundError: No module named 'numpy'](https://i.stack.imgur.com/UZvdk.png)](https://i.stack.imgur.com/UZvdk.png) I have used this command to install Jupyter notebook : ``` sudo apt install python3-notebook jupyter jupyter-core python-ipykernel ``` Additional information : ``` pip --version pip 20.2.2 from /home/maifee/.local/lib/python3.7/site-packages/pip (python 3.7) python --version Python 3.7.5 ```
2020/09/05
[ "https://Stackoverflow.com/questions/63756673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10305444/" ]
I've had occasional weird install issues with Jupyter Notebooks as well when I'm running a particular virtual environment. Generally, installing with pip directly in the notebook in this form: `!pip install numpy` fixes it. Let me know how it goes.
Here is a solution which worked for me: ``` lib_path="c:\\users\\user\\python_39\\lib\\site-packages\\" MODULE_NAME = "module_to_import" MODULE_PATH = lib_path+MODULE_NAME+"\\__init__.py" import importlib import sys spec = importlib.util.spec_from_file_location(MODULE_NAME, MODULE_PATH) module = importlib.util.module_from_spec(spec) sys.modules[spec.name] = module spec.loader.exec_module(module) import module_to_import ```
63,756,673
I'm facing weird issue in my Jupyter-notebook. In my first cell: ``` import sys !{sys.executable} -m pip install numpy !{sys.executable} -m pip install Pillow ``` In the second cell: ``` import numpy as np from PIL import Image ``` But it says : **ModuleNotFoundError: No module named 'numpy'** [![ModuleNotFoundError: No module named 'numpy'](https://i.stack.imgur.com/UZvdk.png)](https://i.stack.imgur.com/UZvdk.png) I have used this command to install Jupyter notebook : ``` sudo apt install python3-notebook jupyter jupyter-core python-ipykernel ``` Additional information : ``` pip --version pip 20.2.2 from /home/maifee/.local/lib/python3.7/site-packages/pip (python 3.7) python --version Python 3.7.5 ```
2020/09/05
[ "https://Stackoverflow.com/questions/63756673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10305444/" ]
Thanks to @suuuehgi. When Jupyter Notebook isn't opened as root: ``` import sys !{sys.executable} -m pip install --user numpy ```
I had a similar issue. Turns out I renamed an upstream path. And I hadn't deactivated my conda env first. When I deactivated the env. ``` conda deactivate ``` Then when I activated it again, everything was as it should have been. ``` conda activate sample ``` Now I am seeing other issues with jupyter themes... but its not impacting my numpy code. So, at least I fixed the "ModuleNotFoundError: No module named 'numpy'" error
63,756,673
I'm facing weird issue in my Jupyter-notebook. In my first cell: ``` import sys !{sys.executable} -m pip install numpy !{sys.executable} -m pip install Pillow ``` In the second cell: ``` import numpy as np from PIL import Image ``` But it says : **ModuleNotFoundError: No module named 'numpy'** [![ModuleNotFoundError: No module named 'numpy'](https://i.stack.imgur.com/UZvdk.png)](https://i.stack.imgur.com/UZvdk.png) I have used this command to install Jupyter notebook : ``` sudo apt install python3-notebook jupyter jupyter-core python-ipykernel ``` Additional information : ``` pip --version pip 20.2.2 from /home/maifee/.local/lib/python3.7/site-packages/pip (python 3.7) python --version Python 3.7.5 ```
2020/09/05
[ "https://Stackoverflow.com/questions/63756673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10305444/" ]
Thanks to @suuuehgi. When Jupyter Notebook isn't opened as root: ``` import sys !{sys.executable} -m pip install --user numpy ```
I have the same problem. My numpy is installed, I am using the same folder as usual. If I try 'conda deactivate', I get the message: ValueError: The python kernel does not appear to be a conda environment. Please use `%pip install` instead. [I added a print of the 'pip install numpy' result and the 'Module not found error' after](https://i.stack.imgur.com/Kf7il.png)
63,756,673
I'm facing weird issue in my Jupyter-notebook. In my first cell: ``` import sys !{sys.executable} -m pip install numpy !{sys.executable} -m pip install Pillow ``` In the second cell: ``` import numpy as np from PIL import Image ``` But it says : **ModuleNotFoundError: No module named 'numpy'** [![ModuleNotFoundError: No module named 'numpy'](https://i.stack.imgur.com/UZvdk.png)](https://i.stack.imgur.com/UZvdk.png) I have used this command to install Jupyter notebook : ``` sudo apt install python3-notebook jupyter jupyter-core python-ipykernel ``` Additional information : ``` pip --version pip 20.2.2 from /home/maifee/.local/lib/python3.7/site-packages/pip (python 3.7) python --version Python 3.7.5 ```
2020/09/05
[ "https://Stackoverflow.com/questions/63756673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10305444/" ]
Thanks to @suuuehgi. When Jupyter Notebook isn't opened as root: ``` import sys !{sys.executable} -m pip install --user numpy ```
Here is a solution which worked for me: ``` lib_path="c:\\users\\user\\python_39\\lib\\site-packages\\" MODULE_NAME = "module_to_import" MODULE_PATH = lib_path+MODULE_NAME+"\\__init__.py" import importlib import sys spec = importlib.util.spec_from_file_location(MODULE_NAME, MODULE_PATH) module = importlib.util.module_from_spec(spec) sys.modules[spec.name] = module spec.loader.exec_module(module) import module_to_import ```
36,481,891
Is there a way to call Excel add-ins from python? In my company there are several excel add-ins that are available, they usually provide direct access to some database and make additional calculations. What is the best way to call those functions directly from python? To clarify, I'm NOT interested in accessing python from excel. I'm interested in accessing excel-addins from python.
2016/04/07
[ "https://Stackoverflow.com/questions/36481891", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5181181/" ]
There are at least 3 possible ways to call an Excel add-in, call the COM add-in directly or through automation. Microsoft provide online documentation for it's Excel interop (<https://learn.microsoft.com/en-us/dotnet/api/microsoft.office.interop.excel>). Whilst it's for .NET, it highlights the main limitations. You cannot directly call a custom ribbon (which contains the add-in). This is intended to protect one add-in from another. (*Although you can install/ uninstall add-in*). **COM add-in** You can call the COM add-in directly using win32com.client. However you can only run methods that are visible to COM. This means you may have to alter the add-in. See a C# tutorial <https://learn.microsoft.com/en-us/visualstudio/vsto/walkthrough-calling-code-in-a-vsto-add-in-from-vba?view=vs-2019>. Once the method is exposed, then it can be called in Python using [win32com.client](https://pypi.org/project/pywin32/). For example: ``` import win32com.client as win32 def excel(): # This may error if Excel is open. return win32.gencache.EnsureDispatch('Excel.Application') xl = excel() helloWorldAddIn = xl.COMAddIns("HelloWorld") # HelloWorld is the name of my AddIn. #app = helloWorldAddIn.Application obj = helloWorldAddIn.Object # Note: This will be None (null) if add-in doesn't expose anything. obj.processData() # processData is the name of my method ``` **Web scraping** If you're able to upload your add-in to an office 365 account. Then you could preform web scraping with a package like Selenium. I've not attempted it. Also you'll most likely encounter issues calling external resources like connection strings & http calls. **Automation** You can run the add-in using an automation package. For example [pyautogui](https://pyautogui.readthedocs.io/en/latest/). You can write code to control the mouse & keyboard to simulate a user running the add-in. This solution will mean you shouldn't need to update existing add-ins. A very basic example of calling add-in through automation: ``` import os import pyautogui import time def openFile(): path = "C:/Dev/test.xlsx" path = os.path.realpath(path) os.startfile(path) time.sleep(1) def fullScreen(): pyautogui.hotkey('win', 'up') time.sleep(1) def findAndClickRibbon(): pyautogui.moveTo(550, 50) pyautogui.click() time.sleep(1) def runAddIn(): pyautogui.moveTo(15, 100) pyautogui.click() time.sleep(1) def saveFile(): pyautogui.hotkey('ctrl', 's') time.sleep(1) def closeFile(): pyautogui.hotkey('alt', 'f4') time.sleep(1) openFile() fullScreen() findAndClickRibbon() runAddIn() saveFile() closeFile() ```
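As a side note (my own addition, not from the answer above): if you are unsure which name to pass to `xl.COMAddIns(...)`, you can enumerate the registered COM add-ins first. This sketch assumes `pywin32` is installed and relies only on the collection's `Count` plus each add-in's `Description` and `Connect` members:

```
import win32com.client as win32

xl = win32.gencache.EnsureDispatch("Excel.Application")
addins = xl.COMAddIns
for i in range(1, addins.Count + 1):  # COM collections are 1-based
    addin = addins(i)
    print(addin.Description, "- connected:", bool(addin.Connect))
xl.Quit()
```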
This [link](https://docs.continuum.io/anaconda/excel) lists some of the available packages for working with Excel and Excel files. You might find the answer to your question there. As a summary, here are the names of some of the listed packages:

> 1. openpyxl - Read/Write Excel 2007 xlsx/xlsm files
> 2. xlrd - Extract data from Excel spreadsheets (.xls and .xlsx, versions 2.0 onwards) on any platform
> 3. xlsxwriter - Write files in the Excel 2007+ XLSX file format
> 4. xlwt - Generate spreadsheet files that are compatible with Excel 97/2000/XP/2003, OpenOffice.org Calc, and Gnumeric.

And especially this might be interesting for you:

> ExPy - ExPy is freely available demonstration software that is simple to install. Once installed, Excel users have access to built-in Excel functions that wrap Python code. Documentation and examples are provided at the site.
74,111,833
I want to add a new field to a PostgreSQL database. It's a not null and unique CharField, like ``` dyn = models.CharField(max_length=31, null=False, unique=True) ``` The database already has relevant records, so it's not an option to * delete the database * reset the migrations * wipe the data * set a default static value. How to proceed? --- Edit ---- Tried to add a `default=uuid.uuid4` ``` dyn = models.CharField(max_length=31, null=False, unique=True, default=uuid.uuid4) ``` but then I get > > Ensure this value has at most 31 characters (it has 36). > > > --- Edit 2 ------ If I create a function with [.hex](https://docs.python.org/3/library/uuid.html#uuid.UUID.hex) ([as found here](https://stackoverflow.com/a/48438640/5675325)) ``` def hex_uuid(): """ The UUID as a 32-character lowercase hexadecimal string """ return uuid.uuid4().hex ``` and use it in the default ``` dyn = models.CharField(max_length=31, null=False, unique=True, default=hex_uuid) ``` I'll get > > Ensure this value has at most 31 characters (it has 32). > > > ***Note***: I don't want to simply get a substring of the result, like adjusting the `hex_uuid()` to have `return str(uuid.uuid4())[:30]`, since that'll increase the collision chances.
2022/10/18
[ "https://Stackoverflow.com/questions/74111833", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5675325/" ]
I ended up using [one of the methods](https://stackoverflow.com/a/56398787/5675325) shown by Oleg ``` def dynamic_default_value(): """ This function will be called when a default value is needed. It'll return a 31 length string with a-z, 0-9. """ alphabet = string.ascii_lowercase + string.digits return ''.join(random.choices(alphabet, k=31)) # 31 is the length of the string ``` with ``` dyn = models.CharField(max_length=31, null=False, unique=True, default=dynamic_default_value) ``` --- If the field was max\_length of 32 characters then I'd have used the `hex_uuid()` present in the question. --- If I wanted to make the dynamic field the same as the `id` of another existing unique field in the same model, then [I'd go through the following steps](https://stackoverflow.com/a/66785579/5675325).
You can reset the migrations, edit them, or create a new one like:

```
python manage.py makemigrations name_you_want
```

after that:

```
python manage.py migrate same_name
```

Edit: an example of a default function:

```
from datetime import datetime
from django.db import models

def generate_default_data():
    return datetime.now()

class MyModel(models.Model):
    field = models.DateTimeField(default=generate_default_data)
```
70,133,541
I have a list of 4 binary numbers and i want to check if they are divisible by 5, and if it's the case, i print them. I've tried something but i'm stuck with an error, showing you the error and the code i made. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-8c92562788a5> in <module>() 1 bin_liste = ['0100','0110','1010','1001'] 2 for element in bin_liste: ----> 3 if element%5 != 0: 4 print(element) TypeError: not all arguments converted during string formatting ``` my code: ``` bin_liste = ['0100','0110','1010','1001'] for element in bin_liste: if element%5 != 0: print(element ```
2021/11/27
[ "https://Stackoverflow.com/questions/70133541", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17522853/" ]
EDIT: Originally answered [here](https://discussions.apple.com/thread/253425286?answerId=256408798022#256408798022). If you are using any package manager (e.g. homebrew), then you need the command line tools. Here is my current SDK for macOS Monterey 12.0.1: [![enter image description here](https://i.stack.imgur.com/h3MGt.png)](https://i.stack.imgur.com/h3MGt.png)
If you uninstall the command line tools, and some software you are using is dependent on them, then they (or the latest version) can just be re-installed.
62,562,064
One Population Proportion

Research Question: In previous years 52% of parents believed that electronics and social media were the cause of their teenager's lack of sleep. Do more parents today believe that their teenager's lack of sleep is caused by electronics and social media?

Population: Parents with a teenager (age 13-18)

Parameter of Interest: p

Null Hypothesis: p = 0.52

Alternative Hypothesis: p > 0.52 (note that this is a one-sided test)

1018 Parents, 56% believe that their teenager's lack of sleep is caused by electronics and social media.

This is a one-tailed test, and according to the professor the p-value should be 0.0053, but when I calculate the p-value for z-statistic = 2.5545334262132955 in Python:

`p_value=stats.distributions.norm.cdf(1-z_statistic)`

this code gives 0.06 as output. I know that `stats.distributions.norm.cdf` gives the probability to the left-hand side of the statistic, but the above code is giving the wrong p-value. However, when I type:

`stats.distributions.norm.cdf(-z_statistic)`

it gives the output 0.0053. How is this possible? Please help!!!
2020/06/24
[ "https://Stackoverflow.com/questions/62562064", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13807934/" ]
You approximate the binomial distribution with the normal since n\*p > 30 and the zscore for a [proportion test](https://online.stat.psu.edu/statprogram/reviews/statistical-concepts/proportions) is: [![enter image description here](https://i.stack.imgur.com/Iue0em.png)](https://i.stack.imgur.com/Iue0em.png) So the calculation is: ``` import numpy as np from scipy import stats p0 = 0.52 p = 0.56 n = 1018 Z = (p-p0)/np.sqrt(p0*(1-p0)/n) Z 2.5545334262132955 ``` Your Z is correct, `stats.norm.cdf(Z)` gives you the cumulative probability up till Z, and since you need the probability of observing something more extreme than this, it is: ``` 1-stats.norm.cdf(Z) 0.0053165109918223985 ``` The probability density function of the normal distribution is symmetric, so `1-stats.norm.cdf(Z)` is the same as `stats.norm.cdf(-Z)`
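If you want a library cross-check instead of the manual formula (this is an addition, not part of the answer above), statsmodels has the same one-sample proportion z-test; `prop_var=0.52` makes it use the null proportion in the standard error, matching the textbook calculation. Note that rounding 56% to a whole count of 570 makes the numbers differ very slightly from the question's z:

```
from statsmodels.stats.proportion import proportions_ztest

count = int(round(0.56 * 1018))  # 570 parents answering "yes"
z, pval = proportions_ztest(count, nobs=1018, value=0.52,
                            alternative='larger', prop_var=0.52)
print(z, pval)  # z close to 2.55, p close to 0.005
```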
The question is formulated as a binomial problem: 1018 people make a yes/no decision with constant probability. In your case 570 out of 1018 people hold that belief, and that proportion is to be compared to 52 %. I do not know about Python, but I can confirm your teacher's result in R:

```
> binom.test(570, 1018, p = .52, alternative = "greater")

    Exact binomial test

data:  570 and 1018
number of successes = 570, number of trials = 1018, p-value = 0.005843
alternative hypothesis: true probability of success is greater than 0.52
95 percent confidence interval:
 0.533735 1.000000
sample estimates:
probability of success 
             0.5599214 
```

The fact that you handle z-values leads me to believe that you had no Python problem but were using the wrong test, which is why I believe I can answer using R. You can find a binomial test implemented in Python here: <https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom_test.html>
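The same exact test can also be run from Python; this sketch mirrors the R call above using SciPy (recent SciPy versions provide `binomtest`, older ones the now-deprecated `binom_test`):

```
from scipy.stats import binomtest

# 570 of 1018 parents hold the belief; null proportion 0.52, one-sided "greater".
result = binomtest(570, 1018, p=0.52, alternative='greater')
print(result.pvalue)  # about 0.0058, matching the R output above
```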
18,487,171
I am having a direcotry structure as : ``` D:\testfolder\folder_to_tar: |---folder1 |--- file1.txt |---folder2 |--- file2.txt |---file3.txt ``` I want to create a tarball using Python at the same directory level. However, I am observing that in the tarball python is including the parent directory as well i.e. `testfolder` in my example. ``` Expected Output : D:\testfolder: |---folder_to_tar.tar |---folder_to_tar |--folder1 ..... Actual Output : D:\testfolder: |---folder_to_tar.tar |---testfolder |---folder_to_tar |--folder1 ..... ``` Code : ``` import tarfile tarname = "D:\\testfolder\\folder_to_tar" tarfile1 = "D:\\testfolder\\folder_to_tar.tar" tarout = tarfile.open(tarfile1,mode="w") try: tarout.add(tarname,arcname=tarname) finally: tarout.close() ``` Can some one please help me on how to achieve it.
2013/08/28
[ "https://Stackoverflow.com/questions/18487171", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1514773/" ]
Try replacing the tarout.add line with: ``` tarout.add(tarname,arcname=os.path.basename(tarname)) ``` Note: you also need to `import os`
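Putting it together, a self-contained version of the fix might look like this (same paths as in the question; wrapping the archive in a `with` block is just a preference, not required by the fix):

```
import os
import tarfile

folder_to_tar = "D:\\testfolder\\folder_to_tar"
output_tar = "D:\\testfolder\\folder_to_tar.tar"

with tarfile.open(output_tar, mode="w") as tarout:
    # arcname controls the path stored inside the archive; using only the folder's
    # basename keeps the leading D:\testfolder component out of the tarball.
    tarout.add(folder_to_tar, arcname=os.path.basename(folder_to_tar))
```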
Have you tried adding `\` at the end of `tarname`?
21,820,104
I am attempting to use the Python [requests](http://requests.readthedocs.org/en/latest/) library to login to a website called surfline.com, then `get` a webpage once logged in (presumably within a persisting [session](http://docs.python-requests.org:8000/en/latest/user/advanced/)). The login form that surfline.com uses throughout its pages uses `onclick()` to map to a JS function called `verifyLogin()`, which in turn posts to a `.cfm` file. On `success`, it refreshes the page, and the user is now kept logged in so long as the cookies persist. I am using the [requests](http://requests.readthedocs.org/en/latest/) library for the first time, and am unsure how to: 1. login successfully, 2. stay logged-in throughout the session, 3. then print out the homepage as a logged-in user to the terminal. Here is the login form (HTML): ``` <form id="loginForm"> <label for="name">Email Address:</label><br> <input type="text" name="username" id="username"> <label for="mail">Password:</label><br> <input type="password" name="password" id="password"> <input type="hidden" name="top_login" id="top_login" value="true"> <button class="surfline-button blue1" type="button" onclick="verifyLogin();">Log In</button> <button class="surfline-button grey1" type="button" onclick="jQuery('#dialog-login').dialog('close');">Cancel</button> <div id="forgot">Forgot Password? <a href="/myaccount/?action=forgot_password">Click Here</a></div> <div class="clear"></div> <div><input style="float:left; width:18px; height:18px; margin-right:6px; border:none" type="checkbox" name="rememberMe" id="rememberMe" value="true" checked="checked"><div id="remember-me-text" style="float:left; width:200px; text-align:left; margin-top:1px;">Remember me on this computer?</div></div> <div class="clear"></div> <p>Not a Premium member? 
<a href="https://www.surfline.com/subscribe_vindicia/index.cfm?mkt=login&amp;slintcid=LOGIN&amp;slcmpname=LOGIN-MODAL">TRY PREMIUM FREE NOW</a></p> </form> ``` Here is the `verifyLogin()` function in the [/05222013\_slmenu.js](http://www.surfline.com/global_includes/scripts/05222013_slmenu.js) file: ``` function verifyLogin(){ //var username = jQuery("#username").val(); //var password = jQuery("#password").val(); var usernameVal = jQuery("#username").val(); var passwordVal = jQuery("#password").val(); var rememberMeVal = jQuery("#rememberMe").val(); var top_loginVal = jQuery("#top_login").val(); if(usernameVal.length === 0 || passwordVal.length === 0 ){ jQuery("#login-note").addClass("warning"); if(usernameVal.length === 0){ jQuery("#username").addClass('warning'); jQuery("#login-note").html("Email Field is Blank"); }else{ jQuery("#username").removeClass('warning');} if(passwordVal.length === 0){ jQuery("#password").addClass('warning'); jQuery("#login-note").html("Password Field is Blank"); }else{ jQuery("#password").removeClass('warning'); } if(usernameVal.length === 0 && passwordVal.length === 0){jQuery("#login-note").html("Email and Password Fields are Blank"); } }else{ jQuery("#inner-dialog").fadeOut("slow",function(){ var htmlData = "<center><div style='padding-top:60px;'><h1>Verifying Login</h1><img src='/global_includes/images/ajax-loader-snake-295284.gif' style='margin-top:24px;'></div></center>"; jQuery("#verifying").html(htmlData).fadeIn("slow", function(){ //var loginData = jQuery('#loginForm').serialize(); var loginData = 'username=' + escape(usernameVal) + '&password=' + escape(passwordVal) + '&rememberMe=' + rememberMeVal + '&top_login=' + top_loginVal; jQuery.ajax({ type:'POST', url: '/myaccount/inc_login_handler.cfm', data:loginData, cache:false, success: function(response){ var responseTrimmed = response.replace(/^\s+||\s+$/g,''); if(responseTrimmed != true){ jQuery("#verifying").fadeOut("slow",function(){ jQuery("#inner-dialog").fadeIn("slow"); jQuery("#login-note").html("Unable to find your login information. Please Try Again...").addClass("warning"); jQuery("#username").addClass('warning'); jQuery("#password").addClass('warning'); }); }else{ var successData = "<center><div style='padding-top:60px;'><h1>Success!</h1><img src='/global_includes/images/checkmark.png' style='margin-top:24px; margin-bottom:20px;'><br /> Please Wait, Site Reloading...</div></center>"; jQuery("#verifying").fadeOut("slow",function(){ jQuery('#verifying').html(successData).fadeIn("slow"); }); setTimeout(function(){ window.location.reload(); }, 1200 ); } }, error:function (xhr, ajaxOptions, thrownError){ jQuery("#verifying").fadeOut("slow",function(){ jQuery("#inner-dialog").fadeIn("slow"); jQuery("#login-note").html("There was an error finding your login infomation. Please Try Again...").addClass("warning"); }); } }); }); }) } } ``` I've tried this code, but can't seem to get the homepage as a logged-in user; it still returns as it would to someone who isn't logged in (my full name should appear in the header, etc.): ``` >>> import requests, json >>> s = requests.Session() >>> page_signed_out = s.get('http://www.surfline.com/home/index.cfm') >>> form_data = {'type':'POST', ... 'url':'/myaccount/inc_login_handler.cfm', ... 'data':"username=myemail@example.com&password=mypassword&rememberMe=true&top_login=true", ... 'cache':False} >>> s.post(url, ... data=json.dumps(form_data), ... 
headers= {'content-type': 'application/json'}) <Response [200]> >>> page_signed_in = s.get('http://www.surfline.com/home/index.cfm') ``` Basically, how can I log into surfline.com from a python file, and get a page as a logged in user? I don't mind using a different library, if it is not possible with the `requests` library. Thank you. **EDIT:** Here are the cookies after a POST has been made, as suggested by @André <[Cookie(version=0, name='CRYPTOPASS', value='%296%25%2A%2521B%3FO%2A%3C%28', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CRYPTOUSER', value='23%252600%2E3Z%5FM%5EEZV1IK%27TFIW%3E', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='LOGGED\_OUT', value='true', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='USER\_ID', value='259829', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CFID', value='437349255', port=None, port\_specified=False, domain='www.surfline.com', domain\_specified=False, domain\_initial\_dot=False, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CFTOKEN', value='1082d1233da237c-3E4C2D43-FFC6-4ECC-1E80D9B505E495CE', port=None, port\_specified=False, domain='www.surfline.com', domain\_specified=False, domain\_initial\_dot=False, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False)]>
2014/02/17
[ "https://Stackoverflow.com/questions/21820104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1337422/" ]
Consider using WPF's built in validation techniques. See this MSDN documentation on the [`ValidationRule`](http://msdn.microsoft.com/en-us/library/system.windows.controls.validationrule%28v=vs.110%29.aspx) class, and this [how-to](http://msdn.microsoft.com/en-us/library/ms753962%28v=vs.110%29.aspx).
Based on your clarification, you want to limit user input to be a number with decimal points. You also mentioned you are creating the TextBox programmatically. Use the TextBox.PreviewTextInput event to determine the type of characters and validate the string inside the TextBox, and then use e.Handled to cancel the user input where appropriate. This will do the trick: ``` public MainWindow() { InitializeComponent(); TextBox textBox = new TextBox(); textBox.PreviewTextInput += TextBox_PreviewTextInput; this.SomeCanvas.Children.Add(textBox); } ``` Meat and potatoes that does the validation: ``` void TextBox_PreviewTextInput(object sender, TextCompositionEventArgs e) { // change this for more decimal places after the period const int maxDecimalLength = 2; // Let's first make sure the new letter is not illegal char newChar = char.Parse(e.Text); if (newChar != '.' && !Char.IsNumber(newChar)) { e.Handled = true; return; } // combine TextBox current Text with the new character being added // and split by the period string text = (sender as TextBox).Text + e.Text; string[] textParts = text.Split(new char[] { '.' }); // If more than one period, the number is invalid if (textParts.Length > 2) e.Handled = true; // validate if period has more than two digits after it if (textParts.Length == 2 && textParts[1].Length > maxDecimalLength) e.Handled = true; } ```
21,820,104
I am attempting to use the Python [requests](http://requests.readthedocs.org/en/latest/) library to login to a website called surfline.com, then `get` a webpage once logged in (presumably within a persisting [session](http://docs.python-requests.org:8000/en/latest/user/advanced/)). The login form that surfline.com uses throughout its pages uses `onclick()` to map to a JS function called `verifyLogin()`, which in turn posts to a `.cfm` file. On `success`, it refreshes the page, and the user is now kept logged in so long as the cookies persist. I am using the [requests](http://requests.readthedocs.org/en/latest/) library for the first time, and am unsure how to: 1. login successfully, 2. stay logged-in throughout the session, 3. then print out the homepage as a logged-in user to the terminal. Here is the login form (HTML): ``` <form id="loginForm"> <label for="name">Email Address:</label><br> <input type="text" name="username" id="username"> <label for="mail">Password:</label><br> <input type="password" name="password" id="password"> <input type="hidden" name="top_login" id="top_login" value="true"> <button class="surfline-button blue1" type="button" onclick="verifyLogin();">Log In</button> <button class="surfline-button grey1" type="button" onclick="jQuery('#dialog-login').dialog('close');">Cancel</button> <div id="forgot">Forgot Password? <a href="/myaccount/?action=forgot_password">Click Here</a></div> <div class="clear"></div> <div><input style="float:left; width:18px; height:18px; margin-right:6px; border:none" type="checkbox" name="rememberMe" id="rememberMe" value="true" checked="checked"><div id="remember-me-text" style="float:left; width:200px; text-align:left; margin-top:1px;">Remember me on this computer?</div></div> <div class="clear"></div> <p>Not a Premium member? 
<a href="https://www.surfline.com/subscribe_vindicia/index.cfm?mkt=login&amp;slintcid=LOGIN&amp;slcmpname=LOGIN-MODAL">TRY PREMIUM FREE NOW</a></p> </form> ``` Here is the `verifyLogin()` function in the [/05222013\_slmenu.js](http://www.surfline.com/global_includes/scripts/05222013_slmenu.js) file: ``` function verifyLogin(){ //var username = jQuery("#username").val(); //var password = jQuery("#password").val(); var usernameVal = jQuery("#username").val(); var passwordVal = jQuery("#password").val(); var rememberMeVal = jQuery("#rememberMe").val(); var top_loginVal = jQuery("#top_login").val(); if(usernameVal.length === 0 || passwordVal.length === 0 ){ jQuery("#login-note").addClass("warning"); if(usernameVal.length === 0){ jQuery("#username").addClass('warning'); jQuery("#login-note").html("Email Field is Blank"); }else{ jQuery("#username").removeClass('warning');} if(passwordVal.length === 0){ jQuery("#password").addClass('warning'); jQuery("#login-note").html("Password Field is Blank"); }else{ jQuery("#password").removeClass('warning'); } if(usernameVal.length === 0 && passwordVal.length === 0){jQuery("#login-note").html("Email and Password Fields are Blank"); } }else{ jQuery("#inner-dialog").fadeOut("slow",function(){ var htmlData = "<center><div style='padding-top:60px;'><h1>Verifying Login</h1><img src='/global_includes/images/ajax-loader-snake-295284.gif' style='margin-top:24px;'></div></center>"; jQuery("#verifying").html(htmlData).fadeIn("slow", function(){ //var loginData = jQuery('#loginForm').serialize(); var loginData = 'username=' + escape(usernameVal) + '&password=' + escape(passwordVal) + '&rememberMe=' + rememberMeVal + '&top_login=' + top_loginVal; jQuery.ajax({ type:'POST', url: '/myaccount/inc_login_handler.cfm', data:loginData, cache:false, success: function(response){ var responseTrimmed = response.replace(/^\s+||\s+$/g,''); if(responseTrimmed != true){ jQuery("#verifying").fadeOut("slow",function(){ jQuery("#inner-dialog").fadeIn("slow"); jQuery("#login-note").html("Unable to find your login information. Please Try Again...").addClass("warning"); jQuery("#username").addClass('warning'); jQuery("#password").addClass('warning'); }); }else{ var successData = "<center><div style='padding-top:60px;'><h1>Success!</h1><img src='/global_includes/images/checkmark.png' style='margin-top:24px; margin-bottom:20px;'><br /> Please Wait, Site Reloading...</div></center>"; jQuery("#verifying").fadeOut("slow",function(){ jQuery('#verifying').html(successData).fadeIn("slow"); }); setTimeout(function(){ window.location.reload(); }, 1200 ); } }, error:function (xhr, ajaxOptions, thrownError){ jQuery("#verifying").fadeOut("slow",function(){ jQuery("#inner-dialog").fadeIn("slow"); jQuery("#login-note").html("There was an error finding your login infomation. Please Try Again...").addClass("warning"); }); } }); }); }) } } ``` I've tried this code, but can't seem to get the homepage as a logged-in user; it still returns as it would to someone who isn't logged in (my full name should appear in the header, etc.): ``` >>> import requests, json >>> s = requests.Session() >>> page_signed_out = s.get('http://www.surfline.com/home/index.cfm') >>> form_data = {'type':'POST', ... 'url':'/myaccount/inc_login_handler.cfm', ... 'data':"username=myemail@example.com&password=mypassword&rememberMe=true&top_login=true", ... 'cache':False} >>> s.post(url, ... data=json.dumps(form_data), ... 
headers= {'content-type': 'application/json'}) <Response [200]> >>> page_signed_in = s.get('http://www.surfline.com/home/index.cfm') ``` Basically, how can I log into surfline.com from a python file, and get a page as a logged in user? I don't mind using a different library, if it is not possible with the `requests` library. Thank you. **EDIT:** Here are the cookies after a POST has been made, as suggested by @André <[Cookie(version=0, name='CRYPTOPASS', value='%296%25%2A%2521B%3FO%2A%3C%28', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CRYPTOUSER', value='23%252600%2E3Z%5FM%5EEZV1IK%27TFIW%3E', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='LOGGED\_OUT', value='true', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='USER\_ID', value='259829', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CFID', value='437349255', port=None, port\_specified=False, domain='www.surfline.com', domain\_specified=False, domain\_initial\_dot=False, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CFTOKEN', value='1082d1233da237c-3E4C2D43-FFC6-4ECC-1E80D9B505E495CE', port=None, port\_specified=False, domain='www.surfline.com', domain\_specified=False, domain\_initial\_dot=False, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False)]>
2014/02/17
[ "https://Stackoverflow.com/questions/21820104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1337422/" ]
What you probably need is a masked input. WPF doesn't have one, so you can either implement it yourself (by using [validation](http://msdn.microsoft.com/en-us/library/ms753962.aspx), for example), or use one of available third-party controls: * [`FilteredTextBox` from WPFDeveloperTools](http://www.codeplex.com/WPFDeveloperTools) * [`MaskedTextBox` from Extended WPF Toolkit](http://wpftoolkit.codeplex.com/) * [etc.](https://stackoverflow.com/q/481059/293099)
Based on your clarification, you want to limit user input to be a number with decimal points. You also mentioned you are creating the TextBox programmatically. Use the TextBox.PreviewTextInput event to determine the type of characters and validate the string inside the TextBox, and then use e.Handled to cancel the user input where appropriate. This will do the trick: ``` public MainWindow() { InitializeComponent(); TextBox textBox = new TextBox(); textBox.PreviewTextInput += TextBox_PreviewTextInput; this.SomeCanvas.Children.Add(textBox); } ``` Meat and potatoes that does the validation: ``` void TextBox_PreviewTextInput(object sender, TextCompositionEventArgs e) { // change this for more decimal places after the period const int maxDecimalLength = 2; // Let's first make sure the new letter is not illegal char newChar = char.Parse(e.Text); if (newChar != '.' && !Char.IsNumber(newChar)) { e.Handled = true; return; } // combine TextBox current Text with the new character being added // and split by the period string text = (sender as TextBox).Text + e.Text; string[] textParts = text.Split(new char[] { '.' }); // If more than one period, the number is invalid if (textParts.Length > 2) e.Handled = true; // validate if period has more than two digits after it if (textParts.Length == 2 && textParts[1].Length > maxDecimalLength) e.Handled = true; } ```
66,200,173
I have the following question, why is `myf(x)` giving less accurate results than `myf2(x)`. Here is my python code: ``` from math import e, log def Q1(): n = 15 for i in range(1, n): #print(myf(10**(-i))) #print(myf2(10**(-i))) return def myf(x): return ((e**x - 1)/x) def myf2(x): return ((e**x - 1)/(log(e**x))) ``` Here is the output for myf(x): ``` 1.0517091807564771 1.005016708416795 1.0005001667083846 1.000050001667141 1.000005000006965 1.0000004999621837 1.0000000494336803 0.999999993922529 1.000000082740371 1.000000082740371 1.000000082740371 1.000088900582341 0.9992007221626409 0.9992007221626409 ``` myf2(x): ``` 1.0517091807564762 1.0050167084168058 1.0005001667083415 1.0000500016667082 1.0000050000166667 1.0000005000001666 1.0000000500000017 1.000000005 1.0000000005 1.00000000005 1.000000000005 1.0000000000005 1.00000000000005 1.000000000000005 ``` I believe it has something to do with the floating point number system in python in combination with my machine. The natural log of euler's number produces a number with more digits of precision than its equivalent number x as an integer.
2021/02/14
[ "https://Stackoverflow.com/questions/66200173", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14858513/" ]
Let's start with the difference between `x` and `log(exp(x))`, because the rest of the computation is the same.

```py
>>> for i in range(10):
...     x = 10**-i
...     y = exp(x)
...     print(x, log(y))
... 
1 1.0
0.1 0.10000000000000007
0.01 0.009999999999999893
0.001 0.001000000000000043
0.0001 0.00010000000000004326
9.999999999999999e-06 9.999999999902983e-06
1e-06 9.99999999962017e-07
1e-07 9.999999994336786e-08
1e-08 9.999999889225291e-09
1e-09 1.000000082240371e-09
```

If you look closely, you might notice that there are errors creeping in. When i = 0, there's no error. When i = 1, it prints `0.1` for x and `0.10000000000000007` for `log(y)`, which is wrong only after 16 digits. By the time i = 9, `log(y)` is wrong in half the digits from log(e^x).

Since we know what the true answer is (x), we can easily compute what the relative error of the approximation is:

```py
>>> for i in range(10):
...     x = 10**-i
...     y = exp(x)
...     z = log(y)
...     print(i, abs((x - z)/z))
... 
0 0.0
1 6.938893903907223e-16
2 1.0755285551056319e-14
3 4.293440603042413e-14
4 4.325966668215291e-13
5 9.701576564765975e-12
6 3.798286318045685e-11
7 5.663213319457187e-10
8 1.1077471033430869e-08
9 8.224036409872509e-08
```

Each step loses us about a digit of accuracy! Why? Each of the operations `10**-i`, `exp(x)`, and `log(y)` only introduces a tiny relative error into the result, less than 10^-15.

Suppose `exp(x)` introduces a relative error of δ, returning the number e^x · (1 + δ) instead of e^x (which, after all, is a transcendental number that can't be represented by a finite string of digits). We know that |δ| < 10^-15, but what happens when we then try to compute log(e^x · (1 + δ)) as an approximation to log(e^x) = x? We might hope to get x · (1 + δ′) where δ′ is very small. But log(e^x · (1 + δ)) = log(e^x) + log(1 + δ) = x + log(1 + δ) = x · (1 + log(1 + δ)/x), so δ′ = log(1 + δ)/x. And even if δ is small, x ≈ 10^-i gets closer and closer to zero as i increases, so the error δ′ = log(1 + δ)/x ≈ δ/x can get worse and worse as i increases, because 1/x → ∞.

**We say that the logarithm function is *ill-conditioned* near 1: if you evaluate it at an approximation to an input near 1, it can turn a very small input error into an arbitrarily large output error.** In fact, you can only go a few more steps before `exp(x)` is rounded to 1 and so `log(y)` returns zero exactly.

This is not because of anything in particular about floating-point: any kind of approximation would have the same effect with log! The condition number of a function is a property of the mathematical function itself, not of the floating-point arithmetic system. If the inputs came from physical measurements, you could run into the same problem.

---

This is related to why the functions `expm1` and `log1p` exist. Although the function log(y) is ill-conditioned near y = 1, the function log(1 + y) is not, so `log1p(y)` computes it more accurately than evaluating `log(1 + y)` can. Similarly, the subtraction in `exp(x) - 1` is subject to [catastrophic cancellation](https://en.wikipedia.org/wiki/Catastrophic_cancellation) when e^x ≈ 1, so `expm1(x)` computes e^x − 1 more accurately than evaluating `exp(x) - 1` can.

`expm1` and `log1p` are not the same functions as `exp` and `log`, of course, but sometimes you can rewrite subexpressions in terms of them to avoid ill-conditioned domains. In this case, for example, if you rewrite log(e^x) as log(1 + [e^x − 1]), and use `expm1` and `log1p` to compute it, the round-trip is often computed exactly:

```py
>>> for i in range(10):
...     x = 10**-i
...     y = expm1(x)
...     z = log1p(y)
...     print(i, x, z, abs((x - z)/z))
... 
0 1 1.0 0.0
1 0.1 0.1 0.0
2 0.01 0.01 0.0
3 0.001 0.001 0.0
4 0.0001 0.0001 0.0
5 9.999999999999999e-06 9.999999999999999e-06 0.0
6 1e-06 1e-06 0.0
7 1e-07 1e-07 0.0
8 1e-08 1e-08 0.0
9 1e-09 1e-09 0.0
```

**For similar reasons, you might want to rewrite `(exp(x) - 1)/x` as `expm1(x)/x`.** If you don't, then when `exp(x)` returns e^x · (1 + δ) rather than e^x, you will end up with (e^x · (1 + δ) − 1)/x = (e^x − 1 + e^x · δ)/x = (e^x − 1) · [1 + e^x · δ/(e^x − 1)]/x, which again may blow up because the error is e^x · δ/(e^x − 1) ≈ δ/x.

---

**However, it is *not* simply luck that the second definition seems to produce the correct results!** This happens because the compounded errors (δ/x from `exp(x) - 1` in the numerator, δ/x from `log(exp(x))` in the denominator) cancel each other out. The first definition computes the numerator badly and the denominator accurately, but the second definition computes them both *about equally badly*!

In particular, when x ≈ 0, we have log(e^x · (1 + δ)) = x + log(1 + δ) ≈ x + δ and e^x · (1 + δ) − 1 = e^x + e^x · δ − 1 ≈ 1 + x + δ − 1 = x + δ. Note that δ is the *same* in both cases, because it's the error of using `exp(x)` to approximate e^x. You can test this experimentally by comparing against `expm1(x)/x` (which is an expression guaranteed to have low relative error, since division never makes errors much worse):

```py
>>> for i in range(10):
...     x = 10**-i
...     u = (exp(x) - 1)/log(exp(x))
...     v = expm1(x)/x
...     print(u, v, abs((u - v)/v))
... 
1.718281828459045 1.718281828459045 0.0
1.0517091807564762 1.0517091807564762 0.0
1.0050167084168058 1.0050167084168058 0.0
1.0005001667083415 1.0005001667083417 2.2193360112628554e-16
1.0000500016667082 1.0000500016667084 2.220335028798222e-16
1.0000050000166667 1.0000050000166667 0.0
1.0000005000001666 1.0000005000001666 0.0
1.0000000500000017 1.0000000500000017 0.0
1.000000005 1.0000000050000002 2.2204460381480824e-16
1.0000000005 1.0000000005 0.0
```

This approximation x + δ to the numerator and the denominator is best for x closest to zero and worst for x farthest from zero; but as x gets farther from zero, the error magnification in `exp(x) - 1` (from catastrophic cancellation) and in `log(exp(x))` (from the ill-conditioning of logarithm near 1) both diminish anyway, so the answer remains accurate.

---

However, the benign cancellation in the second definition only works until x is so close to zero that `exp(x)` is simply rounded to 1. At that point, `exp(x) - 1` and `log(exp(x))` both give zero, and the quotient ends up trying to compute 0/0, which yields NaN and a floating-point exception.

**So you should use `expm1(x)/x` in practice, but it is a happy accident that *outside* the edge case where `exp(x)` is 1, two wrongs make a right in `(exp(x) - 1)/log(exp(x))`, giving good accuracy even as `(exp(x) - 1)/x` and (for similar reasons) `expm1(x)/log(exp(x))` both give bad accuracy.**
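As a concrete check of that edge case, here is a minimal sketch (the helper names `f_naive` and `f_stable` are made up for illustration) of what happens once `exp(x)` rounds to exactly 1:

```py
from math import exp, expm1, log

def f_naive(x):
    # (e^x - 1) / log(e^x): fine until exp(x) rounds to 1, then it hits 0/0
    return (exp(x) - 1) / log(exp(x))

def f_stable(x):
    # expm1(x)/x, with the removable singularity at x = 0 handled explicitly
    return expm1(x) / x if x != 0 else 1.0

for x in (1e-15, 1e-16, 1e-17):
    try:
        naive = f_naive(x)
    except ZeroDivisionError:  # Python raises here instead of returning NaN
        naive = 'ZeroDivisionError (0/0)'
    print(x, naive, f_stable(x))
```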
> Why is this result more accurate than another result by equivalent functions

Bad luck. Neither function is reliable past about `(17-n)/2` digits. The difference in results comes down to the particular implementations of `exp` and `log`, which the language does not specify.

---

For a given `x` that is a negative `n`th power of 10, the extended-precision math result is `1.(n zeros)5(n-1 zeros)166666....`

With `exp(x) - 1`, both functions lose about half of their significance as `n` grows, due to the subtraction of the nearly equal values `exp(small_value)` and `1`. `exp(x) - 1` retains a semblance of the math value only as far as the typical double, with its 17-ish digits of precision, allows.

When `n == 7`, `exp(x) = 1.00000010000000494...` and `exp(x)-1` is `1.0000000494...e-07`.

`n = 8`: the wheels fell off. When `n == 8`, `exp(x) = 1.00000000999999994...` and `exp(x)-1` is `9.99999993922529029...e-09`. `9.99999993922529029...e-09` is not near enough to the true `1.000000005...e-08`. At that point, both functions suffer a loss of precision in the `1.00000000x` place.

---

Let `n` go up to 16 or so and it all breaks down.
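If you want to check how many digits survive for a given `n`, one minimal sketch is to compare the double computation against a high-precision reference. It assumes only the standard `decimal` module; the helper name `ref` is made up for illustration:

```py
from decimal import Decimal, getcontext
from math import exp

getcontext().prec = 50  # plenty of guard digits for a reference value

def ref(n):
    # (e^x - 1) / x evaluated in 50-digit decimal arithmetic, with x = 10**-n
    x = Decimal(10) ** -n
    return (x.exp() - 1) / x

for n in range(4, 12):
    x = 10.0 ** -n
    double_val = (exp(x) - 1) / x   # plain double computation
    print(n, double_val, ref(n))    # compare how many leading digits agree
```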
5,591,557
I have some python code that does a certain task. I need to call this code from C# without converting the python file as an .exe, since the whole application is built on C#. How can I do this?
2011/04/08
[ "https://Stackoverflow.com/questions/5591557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/692856/" ]
If your Python code can be executed via [IronPython](http://www.ironpython.net/) then this is definitely the way to go - it offers the best interop and means that you will be able to use .NET objects in your scripts. There are many ways to invoke IronPython scripts from C#, ranging from compiling the script as an executable, to executing a single script file, or even dynamically executing expressions as they are typed in by the user - check the documentation and post another question if you are still having problems.

If your script is a CPython script and can't be adapted to work with IronPython then your options are more limited. I believe that some CPython / C# interop exists, but I couldn't find it after a quick Google search. The best thing I can think of would be to just invoke the script directly using `python.exe` and the `Process` class.
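If you do end up shelling out to `python.exe` with the `Process` class, the Python side only needs a predictable command-line contract. This is just a sketch of what such a script might look like (the file name `task.py`, the `do_task` helper, and the JSON output format are all assumptions for illustration); the C# host would parse whatever it writes to stdout:

```py
# task.py - hypothetical entry point invoked from C# via Process.Start("python.exe", "task.py <arg>")
import json
import sys

def do_task(value):
    # Placeholder for whatever the existing Python code actually does.
    return {"input": value, "length": len(value)}

if __name__ == "__main__":
    arg = sys.argv[1] if len(sys.argv) > 1 else ""
    # Print a single JSON document so the C# caller can parse stdout easily.
    print(json.dumps(do_task(arg)))
```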
Have a look at [IronPython](http://www.ironpython.net/). Based on your answer and comments, I believe that the best thing you can do is to embed IronPython in your application. As always, there is a relevant SO [question](https://stackoverflow.com/questions/208393/how-to-embed-ironpython-in-a-net-application) about this. As Kragen said, it is important not to rely on a CPython module.
5,591,557
I have some python code that does a certain task. I need to call this code from C# without converting the python file as an .exe, since the whole application is built on C#. How can I do this?
2011/04/08
[ "https://Stackoverflow.com/questions/5591557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/692856/" ]
Have a look at [IronPython](http://www.ironpython.net/). Based on your answer and comments, I believe that the best thing you can do is to embed IronPython in your application. As always, there is a relevant SO [question](https://stackoverflow.com/questions/208393/how-to-embed-ironpython-in-a-net-application) about this. As Kragen said, it is important not to rely on a CPython module.
[Process.Start](http://visualbasic.about.com/od/usingvbnet/a/prstrt.htm) is what you're after. It allows you to call another program, passing it arguments.
5,591,557
I have some python code that does a certain task. I need to call this code from C# without converting the python file as an .exe, since the whole application is built on C#. How can I do this?
2011/04/08
[ "https://Stackoverflow.com/questions/5591557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/692856/" ]
If your Python code can be executed via [IronPython](http://www.ironpython.net/) then this is definitely the way to go - it offers the best interop and means that you will be able to use .NET objects in your scripts. There are many ways to invoke IronPython scripts from C#, ranging from compiling the script as an executable, to executing a single script file, or even dynamically executing expressions as they are typed in by the user - check the documentation and post another question if you are still having problems.

If your script is a CPython script and can't be adapted to work with IronPython then your options are more limited. I believe that some CPython / C# interop exists, but I couldn't find it after a quick Google search. The best thing I can think of would be to just invoke the script directly using `python.exe` and the `Process` class.
[Process.Start](http://visualbasic.about.com/od/usingvbnet/a/prstrt.htm) is what you're after. It allows you to call another program, passing it arguments.
52,943,850
Example: I have these two CSV files. How can I overwrite (or replace) the value of column `type` in a.csv when the string in column `fruit` matches in both a.csv and b.csv?

```
a.csv
fruit,name,type
apple,anna,A
banana,lisa,A
orange,red,A
pine,tin,A

b.csv
fruit,type
banana,B
apple,B
```

**How to output this** (or how to overwrite a.csv):

```
fruit,name,type
apple,anna,B
banana,lisa,B
orange,red,A
pine,tin,A
```

I'm trying this using pandas but I don't know what's next:

```
df1=pd.read_csv("sha1_vsdt.csv",delimiter=",",error_bad_lines=False,engine = 'python',quoting=3)
df2=pd.read_csv("final.csv",delimiter=",",error_bad_lines=False,engine = 'python',quoting=3)
df = pd.merge(df1, df2, on='SHA-1', how='outer')
```
2018/10/23
[ "https://Stackoverflow.com/questions/52943850", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Based on the input you have given:

```
import pandas as pd

df1=pd.read_csv("a.csv")
df2=pd.read_csv("b.csv")
df = pd.merge(df1, df2, on='fruit', how='outer')
df['type_x'] = df['type_y'].combine_first(df['type_x'])
del df["type_y"]
df = df[pd.notnull(df['name'])]
```

Input df1:

```
    fruit  name type
0   apple  anna    A
1  banana  lisa    A
2  orange   red    A
3    pine   tin    A
```

Input df2:

```
    fruit type
0  banana    B
1   lemon    B
```

Output:

```
    fruit  name type_x
0   apple  anna      A
1  banana  lisa      B
2  orange   red      A
3    pine   tin      A
```

If you have different files with different column names:

```
import pandas as pd

df1=pd.read_csv("a.csv")
df2=pd.read_csv("b.csv")
df = pd.merge(df1, df2, on='fruit', how='outer')
df[df.columns[2]] = df[df.columns[3]].combine_first(df[df.columns[2]])
del df[df.columns[3]]
df = df[pd.notnull(df[df.columns[1]])]
```
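One small follow-up, added as an aside: after `combine_first` the surviving column is still called `type_x`, so if the output should match the question's `fruit,name,type` header, you can rename it and write the file back out. A sketch, assuming the same `a.csv`/`b.csv` names from the question:

```py
import pandas as pd

df1 = pd.read_csv("a.csv")
df2 = pd.read_csv("b.csv")

df = pd.merge(df1, df2, on='fruit', how='outer')
df['type_x'] = df['type_y'].combine_first(df['type_x'])
df = df.drop(columns=['type_y']).rename(columns={'type_x': 'type'})
df = df[pd.notnull(df['name'])]

df.to_csv("a.csv", index=False)  # overwrite the original file
```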
Use [`map`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html) by `Series` created by [`set_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html) and then rewrite missing unmatched values by original column values by [`fillna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html): ``` #if possible duplicated fruit column s = df2.drop_duplicates('fruit').set_index('fruit')['type'] df1['type'] = df1['fruit'].map(s).fillna(df1['type']) print (df1) fruit name type 0 apple anna B 1 banana lisa B 2 orange red A 3 pine tin A ```
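If the end goal is literally to overwrite `a.csv` on disk, one possible way to wrap this mapping approach is sketched below (it assumes the small `a.csv`/`b.csv` examples from the question rather than the real `sha1_vsdt.csv`/`final.csv` inputs):

```py
import pandas as pd

# Read both files, overwrite matching 'type' values, and write a.csv back out.
df1 = pd.read_csv("a.csv")
df2 = pd.read_csv("b.csv")

s = df2.drop_duplicates('fruit').set_index('fruit')['type']
df1['type'] = df1['fruit'].map(s).fillna(df1['type'])

df1.to_csv("a.csv", index=False)  # overwrite the original file
```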
52,943,850
Example: I have these two CSV files. How can I overwrite (or replace) the value of column `type` in a.csv when the string in column `fruit` matches in both a.csv and b.csv?

```
a.csv
fruit,name,type
apple,anna,A
banana,lisa,A
orange,red,A
pine,tin,A

b.csv
fruit,type
banana,B
apple,B
```

**How to output this** (or how to overwrite a.csv):

```
fruit,name,type
apple,anna,B
banana,lisa,B
orange,red,A
pine,tin,A
```

I'm trying this using pandas but I don't know what's next:

```
df1=pd.read_csv("sha1_vsdt.csv",delimiter=",",error_bad_lines=False,engine = 'python',quoting=3)
df2=pd.read_csv("final.csv",delimiter=",",error_bad_lines=False,engine = 'python',quoting=3)
df = pd.merge(df1, df2, on='SHA-1', how='outer')
```
2018/10/23
[ "https://Stackoverflow.com/questions/52943850", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Based on the input you have given:

```
import pandas as pd

df1=pd.read_csv("a.csv")
df2=pd.read_csv("b.csv")
df = pd.merge(df1, df2, on='fruit', how='outer')
df['type_x'] = df['type_y'].combine_first(df['type_x'])
del df["type_y"]
df = df[pd.notnull(df['name'])]
```

Input df1:

```
    fruit  name type
0   apple  anna    A
1  banana  lisa    A
2  orange   red    A
3    pine   tin    A
```

Input df2:

```
    fruit type
0  banana    B
1   lemon    B
```

Output:

```
    fruit  name type_x
0   apple  anna      A
1  banana  lisa      B
2  orange   red      A
3    pine   tin      A
```

If you have different files with different column names:

```
import pandas as pd

df1=pd.read_csv("a.csv")
df2=pd.read_csv("b.csv")
df = pd.merge(df1, df2, on='fruit', how='outer')
df[df.columns[2]] = df[df.columns[3]].combine_first(df[df.columns[2]])
del df[df.columns[3]]
df = df[pd.notnull(df[df.columns[1]])]
```
You don't need `merge`, this can be implemented via a simple `.loc`: ``` df2.set_index('fruit', inplace=True) mask = df1.fruit.isin(df2.index) df1.loc[mask, 'type'] = df2.loc[df1.loc[mask, 'fruit'], 'type'].values fruit name type 0 apple anna B 1 banana lisa B 2 orange red A 3 pine tin A ```
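One caveat worth adding here as an aside: if `b.csv` ever contains the same fruit twice, `df2.loc[...]` will return more rows than the mask selects and the assignment will fail, so you may want to drop duplicates first. A small defensive sketch (the example frames are made up):

```py
import pandas as pd

a = pd.DataFrame({'fruit': ['apple', 'banana', 'orange', 'pine'],
                  'name': ['anna', 'lisa', 'red', 'tin'],
                  'type': ['A', 'A', 'A', 'A']})
b = pd.DataFrame({'fruit': ['banana', 'apple', 'apple'],   # note the duplicate
                  'type': ['B', 'B', 'C']})

# Keep one row per fruit before indexing by it.
b = b.drop_duplicates('fruit', keep='last').set_index('fruit')
mask = a['fruit'].isin(b.index)
a.loc[mask, 'type'] = b.loc[a.loc[mask, 'fruit'], 'type'].values
print(a)
```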
52,943,850
Example: I have these two CSV files. How can I overwrite (or replace) the value of column `type` in a.csv when the string in column `fruit` matches in both a.csv and b.csv?

```
a.csv
fruit,name,type
apple,anna,A
banana,lisa,A
orange,red,A
pine,tin,A

b.csv
fruit,type
banana,B
apple,B
```

**How to output this** (or how to overwrite a.csv):

```
fruit,name,type
apple,anna,B
banana,lisa,B
orange,red,A
pine,tin,A
```

I'm trying this using pandas but I don't know what's next:

```
df1=pd.read_csv("sha1_vsdt.csv",delimiter=",",error_bad_lines=False,engine = 'python',quoting=3)
df2=pd.read_csv("final.csv",delimiter=",",error_bad_lines=False,engine = 'python',quoting=3)
df = pd.merge(df1, df2, on='SHA-1', how='outer')
```
2018/10/23
[ "https://Stackoverflow.com/questions/52943850", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Based on the input you have given:

```
import pandas as pd

df1=pd.read_csv("a.csv")
df2=pd.read_csv("b.csv")
df = pd.merge(df1, df2, on='fruit', how='outer')
df['type_x'] = df['type_y'].combine_first(df['type_x'])
del df["type_y"]
df = df[pd.notnull(df['name'])]
```

Input df1:

```
    fruit  name type
0   apple  anna    A
1  banana  lisa    A
2  orange   red    A
3    pine   tin    A
```

Input df2:

```
    fruit type
0  banana    B
1   lemon    B
```

Output:

```
    fruit  name type_x
0   apple  anna      A
1  banana  lisa      B
2  orange   red      A
3    pine   tin      A
```

If you have different files with different column names:

```
import pandas as pd

df1=pd.read_csv("a.csv")
df2=pd.read_csv("b.csv")
df = pd.merge(df1, df2, on='fruit', how='outer')
df[df.columns[2]] = df[df.columns[3]].combine_first(df[df.columns[2]])
del df[df.columns[3]]
df = df[pd.notnull(df[df.columns[1]])]
```
You can align indices, [`update`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html), then `reset_index`: ``` # align indices df1 = pd.read_csv(s1).set_index('fruit') df2 = pd.read_csv(s2).set_index('fruit') # update df1.update(df2) # reset index res = df1.reset_index() print(res) fruit name type 0 apple anna B 1 banana lisa B 2 orange red A 3 pine tin A ``` **Setup** ``` from io import StringIO s1 = StringIO("""fruit,name,type apple,anna,A banana,lisa,A orange,red,A pine,tin,A""") s2 = StringIO("""fruit,type banana,B apple,B""") ```
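A detail worth noting about `update`, added here as an aside: it modifies `df1` in place and ignores NaN values in the other frame, so a missing `type` in `b.csv` will not wipe out an existing value. A small sketch of that behaviour with made-up frames:

```py
import pandas as pd
import numpy as np

df1 = pd.DataFrame({'type': ['A', 'A', 'A']},
                   index=pd.Index(['apple', 'banana', 'orange'], name='fruit'))
df2 = pd.DataFrame({'type': ['B', np.nan]},
                   index=pd.Index(['apple', 'banana'], name='fruit'))

df1.update(df2)              # in place; the NaN in df2 leaves banana's 'A' untouched
print(df1.reset_index())     # apple -> B, banana -> A, orange -> A
```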
56,923,131
I have a string converted to an MD5 hash using a python script with the following command: ```py admininfo['password'] = hashlib.md5(admininfo['password'].encode("utf-8")).hexdigest() ``` This value is now stored in an online database. Now I'm creating a C++ script to do a login on this database. During the login, I ask for the password and I convert it to an MD5 hash to compare it with the value from the online database. But giving the same string, I obtain a different MD5 hash value every time. How can I fix it? ```cpp cin >> Admin_pwd; cout << endl; unsigned char digest[MD5_DIGEST_LENGTH]; const char* string = Admin_pwd.c_str(); MD5((unsigned char*)&string, strlen(string), (unsigned char*)&digest); char mdString[33]; for(int i = 0; i < 16; i++) sprintf(&mdString[i*2], "%02x", (unsigned int)digest[i]); printf("md5 digest: %s\n", mdString); ``` First try: `md5 digest: dcbb3e6add7fb94b98c56d7f70b7c46e` Second try: `md5 digest: 2870f4de491ad17d53d6d6e9dae19ca9` Third try: `md5 digest: 84656428baf461093e9fca2c8b05a296`
2019/07/07
[ "https://Stackoverflow.com/questions/56923131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10006055/" ]
``` (unsigned char*)&string ``` You're hashing the pointer itself (and unspecified data after it), not the string that it points to. And it changes on every execution (maybe). You meant just `(unsigned char*)string`.
Make it `MD5((unsigned char*)string, ...)`; drop the ampersand. You are not passing the character data to `MD5` - you are passing the value of the `string` pointer itself (namely, the address of the first character of the password), plus whatever garbage happens to be on the stack after it.
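Once the ampersand is dropped, it can also help to cross-check against the Python side that produced the stored hash, since both should now agree for the same input. A quick sketch (the password string here is just a placeholder):

```py
import hashlib

password = "example-password"  # placeholder; use the real admin password here
digest = hashlib.md5(password.encode("utf-8")).hexdigest()
print("md5 digest:", digest)   # should match the fixed C++ program's output for the same input
```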
65,544,809
I'm trying to create several class instances of graphs and initialize each one with an empty set, so that I can add in-neighbors/out-neighbors to each instance: ``` class Graphs: def __init__(self, name, in_neighbors=None, out_neighbors=None): self.name = name if in_neighbors is None: self.in_neighbors = set() if out_neighbors is None: self.out_neighbors = set() def add_in_neighbors(self, neighbor): in_neighbors.add(neighbor) def add_out_neighbors(self, neighbor): out_neighbors.add(neighbor) def print_in_neighbors(self): print(list(in_neighbors)) graph_names = [1,2,3,33] graph_instances = {} for graph in graph_names: graph_instances[graph] = Graphs(graph) ``` However, when I try to add an in-neighbor: ``` graph_instances[1].add_in_neighbors('1') ``` I get the following error: ``` NameError: name 'in_neighbors' is not defined ``` I was following [this](https://stackoverflow.com/questions/4535667/python-list-should-be-empty-on-class-instance-initialisation-but-its-not-why) SO question that has a class instance initialized with a `list`, but I couldn't figure out where I'm mistaken
2021/01/02
[ "https://Stackoverflow.com/questions/65544809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14735451/" ]
```
Dog dog1 = new Dog(1, "Dog1", "Cheese");
Dog dog2 = new Dog(1, "Dog1", "Meat");
Dog dog3 = new Dog(2, "Dog2", "Fish");
Dog dog4 = new Dog(2, "Dog2", "Milk");
List<Dog> dogList = List.of(dog1, dog2, dog3, dog4); // insert dog objects into the dog list

// Creating a HashMap that will have id as the key and dog objects as the values
Map<Integer, List<Dog>> map = new HashMap<>();
for (Dog dog : dogList) {
    map.computeIfAbsent(dog.id, k -> new ArrayList<>())
       .add(dog);
}
map.entrySet().forEach(System.out::println);
```

Prints

```
1=[{1, Dog1, Cheese}, {1, Dog1, Meat}]
2=[{2, Dog2, Fish}, {2, Dog2, Milk}]
```

The Dog class:

```
static class Dog {
    int id;
    String name;
    String foodEaten;

    public Dog(int id, String name, String foodEaten) {
        this.id = id;
        this.name = name;
        this.foodEaten = foodEaten;
    }

    public String toString() {
        return String.format("{%s, %s, %s}", id, name, foodEaten);
    }
}
```
Hi Tosh, and welcome to Stack Overflow. The problem is in your if statement, where you check the IDs of your two objects.

```
for (Dog dog : dogList) {
    if(dog.getId() == dog.getId()){
        crMap.put(cr.getIndividualId(), clientReceivables.);
    }
}
```

Here you compare the ID of the object named "dog" with itself, so you will always get **true** in this if statement. That's why it fills your map with all the values for both IDs.

Another problem you have there is that your dog4 has its ID equal to 1, which is the ID of dog1 and dog2. With that in mind you still won't achieve what you want, so check that too.

Now for the solution. If you want to go through the list and compare every dog to the next one, then you need to write that differently. I am not sure whether you are working with plain Java or with Android, but there is a solution to this and a cleaner version of your code. With Java 8 you get the Stream API, which can help you with this. Check it [here](https://www.oracle.com/technical-resources/articles/java/ma14-java-se-8-streams.html). Also, for Android there is android-linq, which you can check [here](https://github.com/zbra-solutions/android-linq).
65,544,809
I'm trying to create several class instances of graphs and initialize each one with an empty set, so that I can add in-neighbors/out-neighbors to each instance: ``` class Graphs: def __init__(self, name, in_neighbors=None, out_neighbors=None): self.name = name if in_neighbors is None: self.in_neighbors = set() if out_neighbors is None: self.out_neighbors = set() def add_in_neighbors(self, neighbor): in_neighbors.add(neighbor) def add_out_neighbors(self, neighbor): out_neighbors.add(neighbor) def print_in_neighbors(self): print(list(in_neighbors)) graph_names = [1,2,3,33] graph_instances = {} for graph in graph_names: graph_instances[graph] = Graphs(graph) ``` However, when I try to add an in-neighbor: ``` graph_instances[1].add_in_neighbors('1') ``` I get the following error: ``` NameError: name 'in_neighbors' is not defined ``` I was following [this](https://stackoverflow.com/questions/4535667/python-list-should-be-empty-on-class-instance-initialisation-but-its-not-why) SO question that has a class instance initialized with a `list`, but I couldn't figure out where I'm mistaken
2021/01/02
[ "https://Stackoverflow.com/questions/65544809", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14735451/" ]
You can use `Collectors.groupingBy(Dog::getId)` to group the dogs with the same `id`. **Demo:** ``` public class Main { public static void main(String[] args) { List<Dog> list = List.of(new Dog(1, "Dog1", "Cheese"), new Dog(1, "Dog1", "Meat"), new Dog(2, "Dog2", "Fish"), new Dog(2, "Dog2", "Milk")); Map<Integer, List<Dog>> map = list.stream().collect(Collectors.groupingBy(Dog::getId)); System.out.println(map); } } ``` **Output:** ``` {1=[Dog [id=1, name=Dog1, foodEaten=Cheese], Dog [id=1, name=Dog1, foodEaten=Meat]], 2=[Dog [id=2, name=Dog2, foodEaten=Fish], Dog [id=2, name=Dog2, foodEaten=Milk]]} ``` **The `toString` implementation:** ``` public String toString() { return "Dog [id=" + id + ", name=" + name + ", foodEaten=" + foodEaten + "]"; } ```
Hi Tosh, and welcome to Stack Overflow. The problem is in your if statement, where you check the IDs of your two objects.

```
for (Dog dog : dogList) {
    if(dog.getId() == dog.getId()){
        crMap.put(cr.getIndividualId(), clientReceivables.);
    }
}
```

Here you compare the ID of the object named "dog" with itself, so you will always get **true** in this if statement. That's why it fills your map with all the values for both IDs.

Another problem you have there is that your dog4 has its ID equal to 1, which is the ID of dog1 and dog2. With that in mind you still won't achieve what you want, so check that too.

Now for the solution. If you want to go through the list and compare every dog to the next one, then you need to write that differently. I am not sure whether you are working with plain Java or with Android, but there is a solution to this and a cleaner version of your code. With Java 8 you get the Stream API, which can help you with this. Check it [here](https://www.oracle.com/technical-resources/articles/java/ma14-java-se-8-streams.html). Also, for Android there is android-linq, which you can check [here](https://github.com/zbra-solutions/android-linq).
68,319,575
I see that the example python code from Intel offers a way to change the resolution as below: ``` config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30) # Start streaming pipeline.start(config) ``` <https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py> I am trying to do the same thing in MATLAB as below, but get an error: ``` config = realsense.config(); config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30); pipeline = realsense.pipeline(); % Start streaming on an arbitrary camera with default settings profile = pipeline.start(config); ``` Error is below: ``` Error using librealsense_mex Couldn't resolve requests Error in realsense.pipeline/start (line 31) out = realsense.librealsense_mex('rs2::pipeline', 'start', this.objectHandle, varargin{1}.objectHandle); Error in pointcloudRecordCfg (line 15) profile = pipeline.start(config); ``` Any suggestions?
2021/07/09
[ "https://Stackoverflow.com/questions/68319575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1231714/" ]
You were close, but you need to set the revised object to a new variable. Also, you probably want to aggregate arrays since there are multiple 'completed'. This first creates the base object and then populates it using `reduce()` for both actions ``` let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}), revised = todos.reduce((b, todo) => { b[todo.status].push(todo); return b; }, keys); ``` ```js const todos = [{ id: 'a', name: 'Buy dog', action: 'a', status: 'deleted', }, { id: 'b', name: 'Buy food', tooltip: null, status: 'completed', }, { id: 'c', name: 'Heal dog', tooltip: null, status: 'completed', }, { id: 'd', name: 'Todo this', action: 'd', status: 'completed', }, { id: 'e', name: 'Todo that', action: 'e', status: 'todo', }, ]; let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}), revised = todos.reduce((b, todo) => { b[todo.status].push(todo); return b; }, keys); console.log(revised); ```
```
function getByValue(arr, value) {
  // filter always returns an array, so check its length rather than its truthiness
  var result = arr.filter(function(o){ return o.status == value; });
  return result.length ? result[0] : null; // or undefined
}

todo_obj = getByValue(arr, 'todo')
deleted_obj = getByValue(arr, 'deleted')
completed_obj = getByValue(arr, 'completed')
```
68,319,575
I see that the example python code from Intel offers a way to change the resolution as below: ``` config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30) # Start streaming pipeline.start(config) ``` <https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py> I am trying to do the same thing in MATLAB as below, but get an error: ``` config = realsense.config(); config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30); pipeline = realsense.pipeline(); % Start streaming on an arbitrary camera with default settings profile = pipeline.start(config); ``` Error is below: ``` Error using librealsense_mex Couldn't resolve requests Error in realsense.pipeline/start (line 31) out = realsense.librealsense_mex('rs2::pipeline', 'start', this.objectHandle, varargin{1}.objectHandle); Error in pointcloudRecordCfg (line 15) profile = pipeline.start(config); ``` Any suggestions?
2021/07/09
[ "https://Stackoverflow.com/questions/68319575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1231714/" ]
You were close, but you need to set the revised object to a new variable. Also, you probably want to aggregate arrays since there are multiple 'completed'. This first creates the base object and then populates it using `reduce()` for both actions ``` let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}), revised = todos.reduce((b, todo) => { b[todo.status].push(todo); return b; }, keys); ``` ```js const todos = [{ id: 'a', name: 'Buy dog', action: 'a', status: 'deleted', }, { id: 'b', name: 'Buy food', tooltip: null, status: 'completed', }, { id: 'c', name: 'Heal dog', tooltip: null, status: 'completed', }, { id: 'd', name: 'Todo this', action: 'd', status: 'completed', }, { id: 'e', name: 'Todo that', action: 'e', status: 'todo', }, ]; let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}), revised = todos.reduce((b, todo) => { b[todo.status].push(todo); return b; }, keys); console.log(revised); ```
Your code is fine; you just need to use the value that `reduce` returns - it doesn't mutate `todos`, it returns a new object.

```js
const todos = [{id: 'a',name: 'Buy dog',action: 'a',status: 'deleted',},{id: 'b',name: 'Buy food',tooltip: null,status: 'completed',},{id: 'c',name: 'Heal dog',tooltip: null,status: 'completed',},{id: 'd',name: 'Todo this',action: 'd',status: 'completed',},{id: 'e',name: 'Todo that',action: 'e',status: 'todo',},];

const out = todos.reduce((acc, todo) => {
  return { ...acc, [todo.status]: { ...acc[todo.status], ...todo } };
}, {});

console.log(out);
```

```css
.as-console-wrapper {min-height:100%} /* make preview prettier */
```
68,319,575
I see that the example python code from Intel offers a way to change the resolution as below: ``` config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30) # Start streaming pipeline.start(config) ``` <https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py> I am trying to do the same thing in MATLAB as below, but get an error: ``` config = realsense.config(); config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30); pipeline = realsense.pipeline(); % Start streaming on an arbitrary camera with default settings profile = pipeline.start(config); ``` Error is below: ``` Error using librealsense_mex Couldn't resolve requests Error in realsense.pipeline/start (line 31) out = realsense.librealsense_mex('rs2::pipeline', 'start', this.objectHandle, varargin{1}.objectHandle); Error in pointcloudRecordCfg (line 15) profile = pipeline.start(config); ``` Any suggestions?
2021/07/09
[ "https://Stackoverflow.com/questions/68319575", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1231714/" ]
You were close, but you need to set the revised object to a new variable. Also, you probably want to aggregate arrays since there are multiple 'completed'. This first creates the base object and then populates it using `reduce()` for both actions ``` let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}), revised = todos.reduce((b, todo) => { b[todo.status].push(todo); return b; }, keys); ``` ```js const todos = [{ id: 'a', name: 'Buy dog', action: 'a', status: 'deleted', }, { id: 'b', name: 'Buy food', tooltip: null, status: 'completed', }, { id: 'c', name: 'Heal dog', tooltip: null, status: 'completed', }, { id: 'd', name: 'Todo this', action: 'd', status: 'completed', }, { id: 'e', name: 'Todo that', action: 'e', status: 'todo', }, ]; let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}), revised = todos.reduce((b, todo) => { b[todo.status].push(todo); return b; }, keys); console.log(revised); ```
You haven't been very clear in your question, I assume you want an output that looks like `{todo: [], completed: [], deleted: []}`. In that case here is a simple solution. ```js var todos = [{ id: 'a', name: 'Buy dog', action: 'a', status: 'deleted', }, { id: 'b', name: 'Buy food', tooltip: null, status: 'completed', }, { id: 'c', name: 'Heal dog', tooltip: null, status: 'completed', }, { id: 'd', name: 'Todo this', action: 'd', status: 'completed', }, { id: 'e', name: 'Todo that', action: 'e', status: 'todo', }, ]; var result = todos.reduce(function(result, item) { var statusList = result[item.status]; if (!statusList) { statusList = []; result[item.status] = statusList; } statusList.push(item); // or use this to create a deep copy // statusList.push(JSON.parse(JSON.stringify(item))); return result; }, {}); console.log(result); ```